We present an approach to learning cooperative behavior of agents. Our approach is based on classifying situations with the help of the nearest-neighbor rule. In this context, learning amounts to evolving a set of good prototypical situations. With each prototypical situation an action is associated that should be executed in that situation. A set of prototypical situation/action pairs together with the nearest-neighbor rule represents the behavior of an agent. We demonstrate the utility of our approach in the light of variants of the well-known pursuit game. To this end, we present a classification of variants of the pursuit game, and we report on the results of our approach obtained for variants regarding several aspects of the classification. A first implementation of our approach that utilizes a genetic algorithm to conduct the search for a set of suitable prototypical situation/action pairs was able to handle many different variants.
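The core mechanism is compact enough to sketch in code. The following Python fragment (a minimal illustration under assumed names and features; not the implementation described above, which evolves the prototypes with a genetic algorithm) shows how a set of prototypical situation/action pairs together with the nearest-neighbor rule yields an agent's behavior:

```python
import math

# A behavior is a set of (situation, action) prototypes; a situation is a
# feature vector, e.g. positions of the prey relative to the pursuer.
Behavior = list[tuple[tuple[float, ...], str]]

def distance(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    """Euclidean distance between two situation vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_action(behavior: Behavior, situation: tuple[float, ...]) -> str:
    """Nearest-neighbor rule: act as the closest prototype prescribes."""
    _, action = min(behavior, key=lambda proto: distance(proto[0], situation))
    return action

# Two toy prototypes for a pursuer: "prey is north" and "prey is east".
behavior = [((0.0, 1.0), "move_north"), ((2.0, 0.0), "move_east")]
print(select_action(behavior, (0.5, 0.8)))  # -> "move_north"
```

Learning then amounts to searching the space of such prototype sets, e.g. with a genetic algorithm as in the reported implementation.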
The common wisdom that goal orderings can be used to improve planning performance is nearly as old as planning itself. During the last decades of research several approaches emerged that computed goal orderings for different planning paradigms, mostly in the area of state-space planning. For partial-order, plan-space planners, goal orderings have not been investigated in much detail. Mechanisms developed for state-space planning are not directly applicable because partial-order planners do not have a current (world) state. Further, it is not completely clear how plan-space planners should make use of goal orderings. This paper describes an approach to extract goal orderings to be used by the plan-space planner CAPlan. The extraction of goal orderings is based on the analysis of an extended version of operator graphs, which have previously been found useful for the analysis of interactions and recursion of plan-space planners.
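As a small illustration of what a planner might do with extracted orderings (a hypothetical sketch, not CAPlan's actual mechanism), ordering constraints over goals can be turned into a processing order by a topological sort; the constraint below is the classic blocks-world ordering from the Sussman anomaly, where on(B,C) must be achieved before on(A,B):

```python
from graphlib import TopologicalSorter

# Map each goal to the set of goals that must be achieved before it.
constraints = {"on(A,B)": {"on(B,C)"}}

agenda = list(TopologicalSorter(constraints).static_order())
print(agenda)  # -> ['on(B,C)', 'on(A,B)']
```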
Using existing planning approaches to solve real-world application problems usually leads quickly to the insight that a given problem is solvable in principle, but that the exponentially growing search space only permits the treatment of relatively small tasks. Observing human planning experts, however, one finds that for complex problems they are able to reduce the search space decisively through abstraction and through the use of known precedents as heuristics, and thus to arrive at an acceptable solution even for difficult tasks. In this paper we present, using process planning as an example, a system that employs abstraction and case-based techniques to control the inference process of a nonlinear, hierarchical planning system and thereby reduces the complexity of the overall task to be solved.
We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions to be made by special-purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.
We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain-dependent reasoners, a feature-based CAD system and domain-independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem solving knowledge for both optimizing plan execution costs and minimizing search, the case-based control component CAPlan/CbC has been realized that allows incremental acquisition and reuse of strategic problem solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.
In recent years, methods of case-based reasoning have frequently been applied in areas where traditionally symbolic methods are used, for example in classification. This inevitably raises the question of the differences between these learning methods and of their relative power. Jantke [Jantke, 1992] has already investigated commonalities between inductive inference and case-based classification. In this paper we clarify some of the relationships between the case base, the similarity measure, and the concept to be learned. For this purpose, a simple symbolic learning algorithm (the version space of [Mitchell, 1982]) is transformed into an equivalent case-based variant. The results presented confirm the equivalence of symbolic and case-based approaches and show the strong dependency between the measure used in the system and the concept to be learned.
The majority of CBR systems in diagnosis use a numeric similarity measure for case retrieval. This paper presents an approach in which a notion of similarity oriented towards the components of the technical system to be diagnosed not only improves retrieval substantially, but also opens up the possibility of genuine case and solution transformation. This in turn leads to a considerable reduction in the size of the case base. Using this notion of similarity requires the integration of additional knowledge, which is obtained from a qualitative model of the domain (in the sense of model-based diagnosis).
Patdex is an expert system which carries out case-based reasoning for the fault diagnosis of complex machines. It is integrated in the Moltke workbench for technical diagnosis, which was developed at the University of Kaiserslautern over the past years. Moltke also contains other parts, in particular a model-based approach; the heuristic features are essentially located in Patdex. The use of cases also plays an important role for knowledge acquisition. In this paper we describe the principles of Patdex and embed its main concepts into a theoretical framework.
In concurrent systems, the concept of atomicity of operations makes it easier to partition competing accesses into larger, more manageable sections. When we consider specifications written in the formal description technique Estelle, however, it turns out that under certain circumstances it is difficult for implementations to adhere exactly to the atomicity of the so-called transitions, even though this atomicity is a conceptual foundation of the semantics of Estelle. We show how correct as well as efficient concurrent implementations can nevertheless be achieved. Finally, we point out that the actions triggering the problem can often easily be avoided by the specifier from the outset; and this holds beyond the context of Estelle as well.
Determining Similarity in Case-Based Diagnosis with Simulation-Capable Machine Models
(1999)
The component presented here relies on three knowledge sources: a case base of already solved diagnosis problems, knowledge about the structure of the machine, and knowledge about the function of the individual components (concrete and abstract). It builds on the systems developed within the Moltke project: Patdex [Wes91] (case-based diagnosis) and iMake [Sch92] and Make [Reh91] (model-based generation of Moltke knowledge bases).
The feature interaction problem in telecommunications systems increasingly obstructs the evolution of such systems. We develop formal detection criteria which render a necessary (but less than sufficient) condition for feature interactions. It can be checked mechanically and points out all potentially critical spots. These have to be analyzed manually. The resulting resolution decisions are incorporated formally. Some prototype tool support is already available. A prerequisite for formal criteria is a formal definition of the problem. Since the notions of feature and feature interaction are often used in a rather fuzzy way, we attempt a formal definition first and discuss which aspects can be included in a formalization (and therefore in a detection method). This paper describes on-going work.
Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question whether there are any differences concerning the learning power of the two approaches. In this article we will study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
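A minimal sketch of the implicit representation (our own toy formulation, not the construction used in the article): an instance belongs to the concept represented by (CB, sim) exactly when its most similar stored case is a positive one.

```python
# A case is a (feature-vector, label) pair; sim is a simple matching count.
def sim(x: tuple[int, ...], y: tuple[int, ...]) -> int:
    """Hamming-style similarity: number of agreeing attributes."""
    return sum(a == b for a, b in zip(x, y))

def member(cb: list[tuple[tuple[int, ...], bool]], x: tuple[int, ...]) -> bool:
    """x belongs to the concept iff its most similar case is positive."""
    _, label = max(cb, key=lambda case: sim(case[0], x))
    return label

cb = [((1, 0, 1), True), ((0, 0, 0), False)]
print(member(cb, (1, 1, 1)))  # -> True (closest case is the positive one)
```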
Collecting Experience on the Systematic Development of CBR Applications using the INRECA Methodology
(1999)
This paper presents an overview of the INRECA methodology for building and maintaining CBR applications. This methodology supports the collection and reuse of experience on the systematic development of CBR applications. It is based on the experience factory and the software process modeling approach from software engineering. CBR development experience is documented using software process models and stored in different levels of generality in a three-layered experience base. Up to now, experience from 9 industrial projects enacted by all INRECA II partners has been collected.
Automata-Theoretic vs. Property-Oriented Approaches for the Detection of Feature Interactions in IN
(1999)
The feature interaction problem in Intelligent Networks obstructs more and more the rapid introduction of new features. Detecting such feature interactions turns out to be a big problem. The size of the systems and the sheer computational complexity prevents the system developer from checking manually any feature against any other feature. We give an overview of current (verification) approaches and categorize them into property-oriented and automata-theoretic approaches. A comparison shows that each approach complements the other in a certain sense. We propose to apply both approaches together in order to solve the feature interaction problem.
Planning means constructing a course of actions to achieve a specified set of goals when starting from an initial situation. For example, determining a sequence of actions (a plan) for transporting goods from an initial location to some destination is a typical planning problem in the transportation domain. Many planning problems are of practical interest.
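The transportation example can be written down as a tiny STRIPS-style problem; the sketch below (illustrative names, one truck, one package, two locations) finds a plan by breadth-first search over states:

```python
# A state is a set of ground facts; actions map a state to a successor state.
State = frozenset

def actions(state):
    """Enumerate (name, successor-state) pairs applicable in a state."""
    locs = ["A", "B"]
    for src in locs:
        for dst in locs:
            if src != dst and f"truck_at_{src}" in state:
                yield (f"drive_{src}_{dst}",
                       state - {f"truck_at_{src}"} | {f"truck_at_{dst}"})
    for loc in locs:
        if {f"truck_at_{loc}", f"pkg_at_{loc}"} <= state:
            yield (f"load_{loc}", state - {f"pkg_at_{loc}"} | {"pkg_in_truck"})
        if {f"truck_at_{loc}", "pkg_in_truck"} <= state:
            yield (f"unload_{loc}", state - {"pkg_in_truck"} | {f"pkg_at_{loc}"})

def plan(init, goal):
    """Breadth-first search from the initial situation to the goals."""
    frontier, seen = [(init, [])], {init}
    while frontier:
        state, steps = frontier.pop(0)
        if goal <= state:
            return steps
        for name, succ in actions(state):
            if succ not in seen:
                seen.add(succ)
                frontier.append((succ, steps + [name]))

init = State({"truck_at_A", "pkg_at_A"})
print(plan(init, {"pkg_at_B"}))  # -> ['load_A', 'drive_A_B', 'unload_B']
```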
MOLTKE is a research project dealing with a complex technical application. After describing the domain of CNC machining centers and the applied KA methods, we summarize the concrete KA problems which we have to handle. Then we describe a KA mechanism which supports an engineer in developing a diagnosis system. In chapter 6 we introduce learning techniques operating on diagnostic cases and domain knowledge for improving the diagnostic procedure of MOLTKE. In the last section of this chapter we outline some essential aspects of organizational knowledge which is heavily applied by engineers for analysing such technical systems (Qualitative Engineering). Finally we give a short overview of the actual state of realization and our future plans.
Most automated theorem provers suffer from the problem that they can produce proofs only in formalisms difficult to understand even for experienced mathematicians. Efforts have been made to transform such machine generated proofs into natural deduction (ND) proofs. Although the single steps are now easy to understand, the entire proof is usually at a low level of abstraction, containing too many tedious steps. Therefore, it is not adequate as input to natural language generation systems. To overcome these problems, we propose a new intermediate representation, called ND style proofs at the assertion level. After illustrating the notion intuitively, we show that the assertion level steps can be justified by domain-specific inference rules, and that these rules can be represented compactly in a tree structure. Finally, we describe a procedure which substantially shortens ND proofs by abstracting them to the assertion level, and report our experience with further transformation into natural language.
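To give the flavour of the idea (our example, not one from the paper): at the assertion level, an entire application of a definition or lemma counts as a single proof step. Applying the definition of the subset relation, for instance, contracts several ND steps into one:

```latex
% One assertion-level step, justified by the assertion "definition of subset":
\[
  \frac{a \in U \qquad U \subseteq V}{a \in V}
  \quad (\text{Def.\ of } \subseteq)
\]
% At the pure ND level the same step unfolds into a forall-elimination and an
% implication-elimination on  \forall x\,(x \in U \rightarrow x \in V).
```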
In this paper we show that distributing the theorem proving task to several experts is a promising idea. We describe the team work method, which allows the experts to compete for a while and then to cooperate. In the cooperation phase the best results derived in the competition phase are collected and the less important results are forgotten. We describe some useful experts and explain in detail how they work together. We establish fairness criteria and thus prove the distributed system to be both complete and correct. We have implemented our system and show by non-trivial examples that drastic speed-ups are possible for a cooperating team of experts compared to the time needed by the best expert in the team.
Constructing an analogy between a known and already proven theorem (the base case) and another yet to be proven theorem (the target case) often amounts to finding the appropriate representation at which the base and the target are similar. This is a well-known fact in mathematics, and it was corroborated by our empirical study of a mathematical textbook, which showed that a reformulation of the representation of a theorem and its proof is indeed more often than not a necessary prerequisite for an analogical inference. Thus machine supported reformulation becomes an important component of automated analogy-driven theorem proving too. The reformulation component proposed in this paper is embedded into a proof plan methodology based on methods and meta-methods, where the latter are used to change and appropriately adapt the methods. A theorem and its proof are both represented as a method and then reformulated by the set of meta-methods presented in this paper. Our approach supports analogy-driven theorem proving at various levels of abstraction and in principle makes it independent of the given and often accidental representation of the given theorems. Different methods can represent fully instantiated proofs, subproofs, or general proof methods, and hence our approach also supports these three kinds of analogy respectively. By attaching appropriate justifications to meta-methods the analogical inference can often be justified in the sense of Russell. This paper presents a model of analogy-driven proof plan construction and focuses on empirically extracted meta-methods. It classifies and formally describes these meta-methods and shows how to use them for an appropriate reformulation in automated analogy-driven theorem proving.
Following Buchberger's approach to computing a Gröbner basis of a polynomial ideal in polynomial rings, a completion procedure for finitely generated right ideals in Z[H] is given, where H is an ordered monoid presented by a finite, convergent semi-Thue system (Σ, T). Taking a finite set F ⊆ Z[H] we get a (possibly infinite) basis of the right ideal generated by F, such that using this basis we have unique normal forms for all p ∈ Z[H] (especially the normal form is 0 in case p is an element of the right ideal generated by F). As the ordering and multiplication on H need not be compatible, reduction has to be defined carefully in order to make it Noetherian. Further we no longer have p · x →_p 0 for p ∈ Z[H], x ∈ H. Similar to Buchberger's s-polynomials, confluence criteria are developed and a completion procedure is given. In case T = ∅, or (Σ, T) is a convergent, 2-monadic presentation of a group providing inverses of length 1 for the generators, or (Σ, T) is a convergent presentation of a commutative monoid, termination can be shown. So in these cases finitely generated right ideals admit finite Gröbner bases. The connection to the subgroup problem is discussed.
This case study examines in detail the theorems and proofs that are shown by analogy in a mathematical textbook on semigroups and automata, which is widely used as an undergraduate textbook in theoretical computer science at German universities (P. Deussen, Halbgruppen und Automaten, Springer 1971). The study shows the important role of restructuring a proof for finding analogous subproofs, and of reformulating a proof for the analogical transformation. It also emphasizes the importance of the relevant assumptions of a known proof, i.e., of those assumptions actually used in the proof. In this document we show the theorems, the proof structure, the subproblems and the proofs of subproblems and their analogues with the purpose of providing an empirical test set of cases for automated analogy-driven theorem proving. Theorems and their proofs are given in natural language augmented by the usual set of mathematical symbols in the studied textbook. As a first step we encode the theorems in logic and show the actual restructuring. Secondly, we code the proofs in a Natural Deduction calculus such that a formal analysis becomes possible, and mention reformulations that are necessary in order to reveal the analogy.
We provide an overview of UNICOM, an inductive theorem prover for equational logic which is based on refined rewriting and completion techniques. The architecture of the system as well as its functionality are described. Moreover, an insight into the most important aspects of the internal proof process is provided. This knowledge about how the central inductive proof component of the system essentially works is crucial for human users who want to solve non-trivial proof tasks with UNICOM and thoroughly analyse potential failures. The presentation is focussed on practical aspects of understanding and using UNICOM. A brief but complete description of the command interface, an installation guide, an example session, a detailed extended example illustrating various special features and a collection of successfully handled examples are also included.
While most approaches to similarity assessment are oblivious of knowledge and goals, there is ample evidence that these elements of problem solving play an important role in similarity judgements. This paper is concerned with an approach for integrating assessment of similarity into a framework of problem solving that embodies central notions of problem solving like goals, knowledge and learning.
To prove difficult theorems in a mathematical field requires substantial knowledge of that field. In this thesis a frame-based knowledge representation formalism including higher-order sorted logic is presented, which supports a conceptual representation and to a large extent guarantees the consistency of the built-up knowledge bases. In order to operationalize this knowledge, for instance, in an automated theorem proving system, a class of sound morphisms from higher-order into first-order logic is given; in addition a sound and complete translation is presented. The translations are bijective and hence compatible with a later proof presentation. In order to prove certain theorems the comprehension axioms are necessary (but difficult to handle in an automated system); such theorems are called truly higher-order. Many apparently higher-order theorems (i.e. theorems that are stated in higher-order syntax) however are essentially first-order in the sense that they can be proved without the comprehension axioms: for proving these theorems the translation technique as presented in this thesis is well-suited.
We transform a user-friendly formulation of a problem to a machine-friendly one exploiting the variability of first-order logic to express facts. The usefulness of tactics to improve the presentation is shown with several examples. In particular it is shown how tactical and resolution theorem proving can be combined.
There are well known examples of monoids in the literature which do not admit a finite and canonical presentation by a semi-Thue system over a fixed alphabet, not even over an arbitrary alphabet. We introduce conditional Thue and semi-Thue systems similar to conditional term rewriting systems as defined by Kaplan. Using these conditional semi-Thue systems we give finite and canonical presentations of the examples mentioned above. Furthermore we show that each finitely generated monoid with decidable word problem is embeddable in a monoid which has a finite canonical conditional presentation.
Typical examples, that is, examples that are representative for a particular situation or concept, play an important role in human knowledge representation and reasoning. In real life situations more often than not, instead of a lengthy abstract characterization, a typical example is used to describe the situation. This well-known observation has been the motivation for various investigations in experimental psychology, which also motivate our formal characterization of typical examples, based on a partial order for their typicality. Reasoning by typical examples is then developed as a special case of analogical reasoning using the semantic information contained in the corresponding concept structures. We derive new inference rules by replacing the explicit information about connections and similarity, which are normally used to formalize analogical inference rules, by information about the relationship to typical examples. Using these inference rules, analogical reasoning proceeds by checking a related typical example; this is a form of reasoning based on semantic information from cases.
This paper concerns a knowledge structure called method, within a computational model for human oriented deduction. With human oriented theorem proving cast as an interleaving process of planning and verification, the body of all methods reflects the reasoning repertoire of a reasoning system. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This is achieved by postulating some standard types of knowledge we have identified, such as inference rules, assertions, and proof schemata, together with corresponding knowledge interpreters. Our approach in effect changes the way deductive knowledge is encoded: a new compound declarative knowledge structure, the proof schema, takes the place of complicated procedures for modeling specific proof strategies. This change of paradigm not only leads to representations easier to understand, it also enables us to model the even more important activity of formulating meta-methods, that is, operators that adapt existing methods to suit novel situations. In this paper, we first introduce briefly the general framework for describing methods. Then we turn to several types of knowledge with their interpreters. Finally, we briefly illustrate some meta-methods.
We present a framework for the integration of the Knuth-Bendix completion algorithm with narrowing methods, compiled rewrite rules, and a heuristic difference reduction mechanism for paramodulation. The possibility of embedding theory unification algorithms into this framework is outlined. Results are presented and discussed for several examples of equality reasoning problems in the context of an actual implementation of an automated theorem proving system (the Mkrp-system) and a fast C implementation of the completion procedure. The Mkrp-system is based on the clause graph resolution procedure. The thesis shows the indispensability of the constraining effects of completion and rewriting for equality reasoning in general and quantifies the amount of speed-up caused by various enhancements of the basic method. The simplicity of the superposition inference rule makes it possible to construct an abstract machine for completion, which is presented together with computation times for a concrete implementation.
This report presents the main ideas underlying the Ω-MKRP system, an environment for the development of mathematical proofs. The motivation for the development of this system comes from our extensive experience with traditional first-order theorem provers and aims to overcome some of their shortcomings. After comparing the benefits and drawbacks of existing systems, we propose a system architecture that combines the positive features of different types of theorem-proving systems, most notably the advantages of human-oriented systems based on methods (our version of tactics) and the deductive strength of traditional automated theorem provers. In Ω-MKRP a user first states a problem to be solved in a typed and sorted higher-order language (called POST) and then applies natural deduction inference rules in order to prove it. He can also insert a mathematical fact from an integrated database into the current partial proof, he can apply a domain-specific problem-solving method, or he can call an integrated automated theorem prover to solve a subproblem. The user can also pass the control to a planning component that supports and partially automates his long-range planning of a proof. Toward the important goal of user-friendliness, machine-generated proofs are transformed in several steps into much shorter, better-structured proofs that are finally translated into natural language. This work was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (D2, D3).
An important property and also a crucial point of a term rewriting system is its termination. Transformation orderings, developed by Bellegarde & Lescanne strongly based on a work of Bachmair & Dershowitz, represent a general technique for extending orderings. The main characteristics of this method are two rewriting relations, one for transforming terms and the other for ensuring the well-foundedness of the ordering. The central problem of this approach concerns the choice of the two relations such that the termination of a given term rewriting system can be proved. In this communication, we present a heuristic-based algorithm that partially solves this problem. Furthermore, we show how to simulate well-known orderings on strings by transformation orderings.
This report presents a methodology to guide equational reasoning in a goal directed way. Suggested by rippling methods developed in the field of inductive theorem proving, we use attributes of terms and heuristics to determine bridge lemmas, i.e. lemmas which have to be used during the proof of the theorem. Once we have found such a bridge lemma we use the techniques of difference unification and rippling to enable its use.
This paper develops a sound and complete transformation-based algorithm for unification in an extensional order-sorted combinatory logic supporting constant overloading and a higher-order sort concept. Appropriate notions of order-sorted weak equality and extensionality, reflecting order-sorted βη-equality in the corresponding lambda calculus given by Johann and Kohlhase, are defined, and the typed combinator-based higher-order unification techniques of Dougherty are modified to accommodate unification with respect to the theory they generate. The algorithm presented here can thus be viewed as a combinatory logic counterpart to that of Johann and Kohlhase, as well as a refinement of that of Dougherty, and provides evidence that combinatory logic is well-suited to serve as a framework for incorporating order-sorted higher-order reasoning into deduction systems aiming to capitalize on both the expressiveness of extensional higher-order logic and the efficiency of order-sorted calculi.
We consider the problem of verifying confluence and termination of conditional term rewriting systems (TRSs). For unconditional TRSs the critical pair lemma holds, which enables a finite test for confluence of (finite) terminating systems. And for ensuring termination of unconditional TRSs a couple of methods for constructing appropriate well-founded term orderings are known. If however termination is not guaranteed, then proving confluence is much more difficult. Recently we have obtained some interesting results for unconditional TRSs which provide sufficient criteria for termination plus confluence in terms of restricted termination and confluence properties. In particular, we have shown that any innermost terminating and locally confluent overlay system is complete, i.e. terminating and confluent. Here we generalize our approach to the conditional case and show how to solve the additional complications due to the presence of conditions in the rules. Our main result can be stated as follows: any conditional TRS which is an innermost terminating semantical overlay system such that all (conditional) critical pairs are joinable is complete.
We will answer a question posed in [DJK91], and will show that Huet's completion algorithm [Hu81] becomes incomplete, i.e. it may generate a term rewriting system that is not confluent, if it is modified in such a way that the reduction ordering used for completion can be changed during completion, provided that the new ordering is compatible with the actual rules. In particular, we will show that this problem may not only arise if the modified completion algorithm does not terminate: even if the algorithm terminates without failure, the generated finite noetherian term rewriting system may be non-confluent. Most existing implementations of the Knuth-Bendix algorithm provide the user with help in choosing a reduction ordering: if an unorientable equation is encountered, then the user has many options, especially the one to orient the equation manually. The integration of this feature is based on the widespread assumption that, if equations are oriented by hand during completion and the completion process terminates with success, then the generated finite system is a possibly non-terminating but locally confluent system (see e.g. [KZ89]). Our examples will show that this assumption is not true.
Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene has already given a semantic account of partial functions using three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a resolution calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
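For orientation, strong Kleene logic evaluates the connectives over three truth values, where the third value stands for "undefined" (e.g. a function applied outside its domain). A small Python sketch of the propositional part (illustrative only; the paper's contribution is the sorted resolution calculus, not these tables):

```python
# Strong Kleene three-valued connectives; None encodes "undefined".
def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    if a is False or b is False:   # a definite False decides the conjunction
        return False
    if a is None or b is None:     # otherwise undefinedness propagates
        return None
    return True

def k_or(a, b):
    if a is True or b is True:     # a definite True decides the disjunction
        return True
    if a is None or b is None:
        return None
    return False

print(k_or(False, None))   # -> None  (no definite value)
print(k_and(False, None))  # -> False (definite despite the undefined operand)
```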
The team work method is a concept for distributing automated theorem provers and thus to activate several experts to work on a given problem. We have implemented this for pure equational logic using the unfailing Knuth-Bendix completion procedure as basic prover. In this paper we present three classes of experts working in a goal oriented fashion. In general, goal oriented experts perform their job "unfair" and so are often unable to solve a given problem alone. However, as team members in the team work method they perform highly efficiently, even in comparison with such respected provers as Otter 3.0 or REVEAL, as we demonstrate by examples, some of which can only be proved using team work. The reason for these achievements results from the fact that the team work method forces the experts to compete for a while and then to cooperate by exchanging their best results. This allows one to collect "good" intermediate results and to forget "useless" ones. Completion based proof methods are frequently regarded to have the disadvantage of being not goal oriented. We believe that our approach overcomes this disadvantage to a large extent.
In 1978, Klop demonstrated that a rewrite system constructed by adding the untyped lambda calculus, which has the Church-Rosser property, to a Church-Rosser first-order algebraic rewrite system may not be Church-Rosser. In contrast, Breazu-Tannen recently showed that augmenting any Church-Rosser first-order algebraic rewrite system with the simply-typed lambda calculus results in a Church-Rosser rewrite system. In addition, Breazu-Tannen and Gallier have shown that the second-order polymorphic lambda calculus can be added to such rewrite systems without compromising the Church-Rosser property (for terms which can be provably typed). There are other systems for which a Church-Rosser result would be desirable, among them being λ^t+SP+FIX, the simply-typed lambda calculus extended with surjective pairing and fixed points. This paper will show that Klop's untyped counterexample can be lifted to a typed system to demonstrate that λ^t+SP+FIX is not Church-Rosser.
Over the past thirty years there have been significant achievements in the field of automated theorem proving with respect to the reasoning power of the inference engines. Although some effort has also been spent to facilitate more user friendliness of the deduction systems, most of them failed to benefit from more recent developments in the related fields of artificial intelligence (AI), such as natural language generation and user modeling. In particular, no model is available which accounts both for human deductive activities and for human proof presentation. In this thesis, a reconstructive architecture is suggested which substantially abstracts, reorganizes and finally translates machine-found proofs into natural language. Both the procedures and the intermediate representations of our architecture find their basis in computational models for informal mathematical reasoning and for proof presentation. User modeling is not incorporated into the current theory, although we plan to do so later.
This paper presents a new way to use planning in automated theorem proving by means of distribution. To overcome the problem that often subtasks for a proof problem can not be detected a priori (which prevents the use of the known planning and distribution techniques) we use a team of experts that work independently with different heuristics on the problem. After a certain amount of time referees judge their results using the impact of the results on the behaviour of the expert, and a supervisor combines the selected results to a new starting point. This supervisor also selects the experts that can work on the problem in the next round. This selection is a reactive planning task. We outline which information the supervisor can use to fulfill this task and how this information is processed to result in a plan or to revise a plan. We also show that the use of planning for the assignment of experts to the team allows the system to solve many different examples in an acceptable time with the same start configuration and without any consultation of the user.
"Plans are always subject to change." (Shin'a'in proverb)
The background of this paper is the area of case-based reasoning. This is a reasoning technique where one tries to use the solution of some problem which has been solved earlier in order to obtain a solution of a given problem. An example of the types of problems where this kind of reasoning occurs very often is the diagnosis of diseases or faults in technical systems. In abstract terms this reduces to a classification task. A difficulty arises when one has not just one solved problem but very many. These are called "cases" and they are stored in the case base. Then one has to select an appropriate case, which means finding one which is "similar" to the actual problem. The notion of similarity has raised much interest in this context. We will first introduce a mathematical framework and define some basic concepts. Then we will study some abstract phenomena in this area and finally present some methods developed and realized in a system at the University of Kaiserslautern.
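As a hint of the kind of framework meant here (our formulation of the standard basic notions, not necessarily the one developed in the paper), a similarity measure can be introduced as a normalized, reflexive, symmetric function on a universe U of problem descriptions, with retrieval selecting a most similar case:

```latex
\[
  \mathit{sim} \colon U \times U \to [0,1], \qquad
  \mathit{sim}(x,x) = 1, \qquad
  \mathit{sim}(x,y) = \mathit{sim}(y,x),
\]
\[
  \mathrm{retrieve}(CB, p) \in \operatorname{argmax}_{c \in CB} \mathit{sim}(c, p).
\]
```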
The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing the search space. It is therefore promising to treat sorts in higher-order theorem proving as well. In this paper we present a generalization of Huet's Constrained Resolution to an order-sorted type theory ΣT with term declarations. This system builds certain taxonomic axioms into the unification and conducts reasoning with them in a controlled way. We make this notion precise by giving a relativization operator that totally and faithfully encodes ΣT into simple type theory.
In this report we present a case study of employing goal-oriented heuristics when proving equational theorems with the (unfailing) Knuth-Bendix completion procedure. The theorems are taken from the domain of lattice ordered groups. It will be demonstrated that goal-oriented (heuristic) criteria for selecting the next critical pair can in many cases significantly reduce the search effort and hence increase the performance of the proving system considerably. The heuristic, goal-oriented criteria are on the one hand based on so-called "measures" measuring occurrences and nesting of function symbols, and on the other hand based on matching subterms. We also deal with the property of goal-oriented heuristics to be particularly helpful in certain stages of a proof. This fact can be addressed by using them in a framework for distributed (equational) theorem proving, namely the "teamwork method".
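To illustrate what a "measure"-based, goal-oriented selection criterion can look like (a deliberately crude toy in Python; the report's actual criteria are more refined), one can prefer the critical pair whose function-symbol occurrences are closest to those of the goal equation:

```python
from collections import Counter

VARIABLES = ("x", "y", "z")

def occurrences(term: str) -> Counter:
    """Multiset of function symbols in a term written in prefix notation."""
    tokens = term.replace("(", " ").replace(")", " ").replace(",", " ").split()
    return Counter(t for t in tokens if t not in VARIABLES)

def measure(pair, goal):
    """Symmetric difference of symbol occurrences: smaller = more goal-like."""
    cp = occurrences(pair[0]) + occurrences(pair[1])
    g = occurrences(goal[0]) + occurrences(goal[1])
    return sum((cp - g).values()) + sum((g - cp).values())

def select(pairs, goal):
    """Choose the next critical pair to be processed."""
    return min(pairs, key=lambda p: measure(p, goal))

goal = ("i(f(x,y))", "f(i(y),i(x))")
pairs = [("f(x,e)", "x"), ("i(i(x))", "x"), ("i(f(x,e))", "f(i(e),i(x))")]
print(select(pairs, goal))  # -> ('i(f(x,e))', 'f(i(e),i(x))')
```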
Planning Methods
(1999)
We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity. Therefore the orientation must be calculated by integration of the angular velocity over time. The device's output is a voltage proportional to the angular velocity and relative to a reference. The tests were done to find out under which conditions it is possible to use this device for estimation of orientation.
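The integration step itself is straightforward; the sketch below (with an assumed reference voltage and scale factor, chosen for illustration and not taken from the device's datasheet) shows how a heading estimate is accumulated from the sensor voltage:

```python
V_REF = 2.5    # assumed reference voltage [V] for zero angular velocity
SCALE = 60.0   # assumed scale factor [deg/s per V]

def integrate_heading(voltages, dt, heading0=0.0):
    """Rectangular integration of omega = SCALE * (v - V_REF) over time."""
    heading = heading0
    for v in voltages:
        omega = SCALE * (v - V_REF)  # angular velocity in deg/s
        heading += omega * dt        # accumulate over one sample interval
    return heading % 360.0

# One second of 100 Hz samples during a steady 30 deg/s turn:
samples = [V_REF + 30.0 / SCALE] * 100
print(integrate_heading(samples, dt=0.01))  # -> 30.0 degrees
```

Note that any error in the assumed reference voltage accumulates linearly over time, which is exactly the kind of drift such tests have to quantify.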
A map for an autonomous mobile robot (AMR) in an indoor environment for the purpose of continuous position and orientation estimation is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown where the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans to statistical representations of the environment, so that a crosscorrelation of an actual converted scan and this representative results in the actual position and orientation in a global coordinate system. The map itself is built from representative scans for the positions where the AMR has been, so that it is able to find its position and orientation by comparing the actual scan with a scan stored in the map.
One of the problems of autonomous mobile systems is the continuous tracking of position and orientation. In most cases, this problem is solved by dead reckoning, based on measurement of wheel rotations or step counts and step width. Unfortunately dead reckoning leads to accumulation of drift errors and is very sensitive to slippage. In this paper an algorithm for tracking position and orientation is presented that is nearly independent of odometry and its problems with slippage. To achieve this, a rotating range-finder is used, delivering scans of the environmental structure. The properties of this structure are used to match the scans from different locations in order to find their translational and rotational displacement. For this purpose derivatives of range-finder scans are calculated, which can be used to find position and orientation by crosscorrelation.
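The rotational part of the idea can be shown numerically (our own toy example; the paper additionally uses scan derivatives and recovers translation as well): a rotation of the robot shifts a 360° range scan cyclically, so the shift maximizing the circular crosscorrelation of two scans estimates the rotational displacement.

```python
import numpy as np

def rotation_estimate(scan_a: np.ndarray, scan_b: np.ndarray) -> int:
    """Shift (in scan steps) maximizing the circular crosscorrelation."""
    corr = [np.dot(scan_a, np.roll(scan_b, s)) for s in range(len(scan_a))]
    return int(np.argmax(corr))

# Toy scans with one reading per degree; the robot turned by +25 degrees.
rng = np.random.default_rng(0)
scan_a = rng.uniform(1.0, 5.0, 360)
scan_b = np.roll(scan_a, -25)
print(rotation_estimate(scan_a, scan_b))  # -> 25
```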