We present an approach to learning cooperative behavior of agents. Our approach is based on classifying situations with the help of the nearest-neighbor rule. In this context, learning amounts to evolving a set of good prototypical situations. With each prototypical situation an action is associated that should be executed in that situation. A set of prototypical situation/action pairs together with the nearest-neighbor rule represents the behavior of an agent. We demonstrate the utility of our approach in the light of variants of the well-known pursuit game. To this end, we present a classification of variants of the pursuit game, and we report on the results of our approach obtained for variants regarding several aspects of the classification. A first implementation of our approach that utilizes a genetic algorithm to conduct the search for a set of suitable prototypical situation/action pairs was able to handle many different variants.
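The nearest-neighbor rule at the core of this approach is easy to make concrete. Below is a minimal sketch, with an invented situation encoding and Euclidean distance as an assumed similarity measure, of how a set of prototypical situation/action pairs yields an agent's behavior:

```python
import math

def select_action(situation, prototypes):
    """Return the action of the prototype closest to the given situation.

    situation  -- feature vector describing the current situation
    prototypes -- list of (feature_vector, action) pairs
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    _, action = min(prototypes, key=lambda p: distance(p[0], situation))
    return action

# Example: two prototypes in a pursuit-game-like encoding
# (relative x/y offset of the prey); the action labels are hypothetical.
prototypes = [((1.0, 0.0), "move_east"), ((0.0, 1.0), "move_north")]
print(select_action((0.9, 0.2), prototypes))  # -> "move_east"
```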
The common wisdom that goal orderings can be used to improve planning performance is nearly as old as planning itself. During the last decades of research several approaches emerged that computed goal orderings for different planning paradigms, mostly in the area of state-space planning. For partial-order, plan-space planners goal orderings have not been investigated in much detail. Mechanisms developed for state-space planning are not directly applicable because partial-order planners do not have a current (world) state. Further, it is not completely clear how plan-space planners should make use of goal orderings. This paper describes an approach to extract goal orderings to be used by the plan-space planner CAPlan. The extraction of goal orderings is based on the analysis of an extended version of operator graphs which previously have been found useful for the analysis of interactions and recursion of plan-space planners.
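The paper's operator-graph extraction is not reproduced here, but the following hypothetical sketch illustrates one classic goal-ordering criterion that such analyses build on: order goal B before goal A whenever every operator achieving B deletes A. The operator and goal names are invented:

```python
def must_precede(b, a, operators):
    """True if every operator achieving b deletes a, i.e. b should come before a."""
    achievers = [op for op in operators if b in op["add"]]
    return bool(achievers) and all(a in op["del"] for op in achievers)

# Toy blocks-world-style operators, each with add and delete effect sets.
operators = [
    {"name": "stack_C_on_B", "add": {"on(C,B)"}, "del": {"clear(B)"}},
    {"name": "stack_B_on_A", "add": {"on(B,A)"}, "del": {"clear(A)", "on(C,B)"}},
]

# Achieving on(B,A) destroys on(C,B), so on(B,A) should be achieved first.
print(must_precede("on(B,A)", "on(C,B)", operators))  # -> True
```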
We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions to be made by special-purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.
We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain dependent reasoners, a feature-based CAD system and domain independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem solving knowledge for both optimizing plan execution costs and minimizing search, the case-based control component CAPlan/CbC has been realized that allows incremental acquisition and reuse of strategic problem solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.
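As a rough illustration of the retrieval step (not CAPlan/CbC's actual mechanism), the sketch below combines a generic set-overlap similarity with a hypothetical domain-specific feature comparison; all names and weights are invented:

```python
def jaccard(a, b):
    """Domain-independent similarity: overlap of two feature sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def retrieve(problem, case_base, domain_sim, w=0.5):
    """Return the stored case most similar to the new problem."""
    def score(case):
        return (w * jaccard(problem["goals"], case["goals"])
                + (1 - w) * domain_sim(problem, case))
    return max(case_base, key=score)

# Hypothetical domain-specific part: compare workpiece feature sets.
domain_sim = lambda p, c: jaccard(set(p["features"]), set(c["features"]))

case_base = [
    {"goals": {"drill_hole", "thread"}, "features": {"bore"}, "plan": ["op1", "op2"]},
    {"goals": {"mill_slot"}, "features": {"slot"}, "plan": ["op3"]},
]
problem = {"goals": {"drill_hole"}, "features": {"bore"}}
print(retrieve(problem, case_base, domain_sim)["plan"])  # -> ['op1', 'op2']
```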
The feature interaction problem in telecommunications systems increasingly obstructs the evolution of such systems. We develop formal detection criteria which render a necessary (but less than sufficient) condition for feature interactions. It can be checked mechanically and points out all potentially critical spots. These have to be analyzed manually. The resulting resolution decisions are incorporated formally. Some prototype tool support is already available. A prerequisite for formal criteria is a formal definition of the problem. Since the notions of feature and feature interaction are often used in a rather fuzzy way, we attempt a formal definition first and discuss which aspects can be included in a formalization (and therefore in a detection method). This paper describes on-going work.
Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question whether there are any differences concerning the learning power of the two approaches. In this article we will study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
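A toy version of the implicit representation (CB, sim) may help: the sketch below, with invented data, classifies an instance by the label of its most similar stored case:

```python
def sim(a, b):
    """Similarity of two boolean feature vectors: fraction of agreeing features."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def classify(instance, case_base):
    """Decide concept membership by the nearest case in (CB, sim)."""
    _, label = max(case_base, key=lambda case: sim(case[0], instance))
    return label

CB = [((1, 1, 0), True), ((0, 0, 1), False)]
print(classify((1, 0, 0), CB))  # -> True (closer to the positive case)
```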
Collecting Experience on the Systematic Development of CBR Applications using the INRECA Methodology
(1999)
This paper presents an overview of the INRECA methodology for building and maintaining CBR applications. This methodology supports the collection and reuse of experience on the systematic development of CBR applications. It is based on the experience factory and the software process modeling approach from software engineering. CBR development experience is documented using software process models and stored in different levels of generality in a three-layered experience base. Up to now, experience from 9 industrial projects enacted by all INRECA II partners has been collected.
Automata-Theoretic vs. Property-Oriented Approaches for the Detection of Feature Interactions in IN
(1999)
The feature interaction problem in Intelligent Networks obstructs more and more the rapid introduction of new features. Detecting such feature interactions turns out to be a big problem. The size of the systems and the sheer computational complexity prevent the system developer from checking manually any feature against any other feature. We give an overview of current (verification) approaches and categorize them into property-oriented and automata-theoretic approaches. A comparison shows that each approach complements the other in a certain sense. We propose to apply both approaches together in order to solve the feature interaction problem.
Planning means constructing a course of actions to achieve a specified set of goals when starting from an initial situation. For example, determining a sequence of actions (a plan) for transporting goods from an initial location to some destination is a typical planning problem in the transportation domain. Many planning problems are of practical interest.
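For illustration, the transportation example can be encoded in a STRIPS-like style and solved with plain breadth-first search; the encoding below is a hypothetical sketch, not taken from the paper:

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first search for a sequence of actions reaching the goal."""
    queue = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

# Drive a truck from A to B, carrying the goods (invented mini-domain).
actions = [
    ("load",   {"at(goods,A)", "at(truck,A)"},     {"in(goods,truck)"}, {"at(goods,A)"}),
    ("drive",  {"at(truck,A)"},                    {"at(truck,B)"},     {"at(truck,A)"}),
    ("unload", {"in(goods,truck)", "at(truck,B)"}, {"at(goods,B)"},     {"in(goods,truck)"}),
]
print(plan({"at(goods,A)", "at(truck,A)"}, {"at(goods,B)"}, actions))
# -> ['load', 'drive', 'unload']
```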
MOLTKE is a research project dealing with a complex technical application. After describing the domain of CNC machining centers and the applied KA methods, we summarize the concrete KA problems which we have to handle. Then we describe a KA mechanism which supports an engineer in developing a diagnosis system. In chapter 6 we introduce learning techniques operating on diagnostic cases and domain knowledge for improving the diagnostic procedure of MOLTKE. In the last section of this chapter we outline some essential aspects of organizational knowledge which is heavily applied by engineers for analysing such technical systems (Qualitative Engineering). Finally, we give a short overview of the actual state of realization and our future plans.
Most automated theorem provers suffer from the problem that they can produce proofs only in formalisms difficult to understand even for experienced mathematicians. Efforts have been made to transform such machine generated proofs into natural deduction (ND) proofs. Although the single steps are now easy to understand, the entire proof is usually at a low level of abstraction, containing too many tedious steps. Therefore, it is not adequate as input to natural language generation systems. To overcome these problems, we propose a new intermediate representation, called ND style proofs at the assertion level. After illustrating the notion intuitively, we show that the assertion level steps can be justified by domain-specific inference rules, and that these rules can be represented compactly in a tree structure. Finally, we describe a procedure which substantially shortens ND proofs by abstracting them to the assertion level, and report our experience with further transformation into natural language.
In this paper we show that distributing the theorem proving task to several experts is a promising idea. We describe the team work method, which allows the experts to compete for a while and then to cooperate. In the cooperation phase the best results derived in the competition phase are collected and the less important results are forgotten. We describe some useful experts and explain in detail how they work together. We establish fairness criteria and thus prove the distributed system to be both complete and correct. We have implemented our system and show by non-trivial examples that drastic speed-ups are possible for a cooperating team of experts compared to the time needed by the best expert in the team.
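The compete/cooperate cycle can be pictured schematically. The skeleton below is purely illustrative: the expert and referee functions are invented stand-ins, not the paper's actual components:

```python
import random

def run_expert(heuristic, shared, steps=100):
    """Stand-in for one prover expert: derives new 'facts' from the shared ones."""
    rng = random.Random(hash((heuristic, len(shared))))
    return shared | {f"h{heuristic}:fact{i}" for i in range(rng.randint(1, steps))}

def referee_score(results):
    """Toy judgment criterion: fewer, presumably stronger, derived facts rank higher."""
    return len(results)

def team_work(heuristics, rounds=3, keep=2):
    shared = set()                              # results kept after each cooperation phase
    for _ in range(rounds):
        # Competition phase: every expert works independently on the same input.
        outcomes = [run_expert(h, shared) for h in heuristics]
        # Cooperation phase: only the best experts' results survive.
        outcomes.sort(key=referee_score)
        for results in outcomes[:keep]:
            shared |= results
    return shared

print(len(team_work(heuristics=[1, 2, 3])))
```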
Constructing an analogy between a known and already proven theorem (the base case) and another yet to be proven theorem (the target case) often amounts to finding the appropriate representation at which the base and the target are similar. This is a well-known fact in mathematics, and it was corroborated by our empirical study of a mathematical textbook, which showed that a reformulation of the representation of a theorem and its proof is indeed more often than not a necessary prerequisite for an analogical inference. Thus machine supported reformulation becomes an important component of automated analogy-driven theorem proving too. The reformulation component proposed in this paper is embedded into a proof plan methodology based on methods and meta-methods, where the latter are used to change and appropriately adapt the methods. A theorem and its proof are both represented as a method and then reformulated by the set of meta-methods presented in this paper. Our approach supports analogy-driven theorem proving at various levels of abstraction and in principle makes it independent of the given and often accidental representation of the given theorems. Different methods can represent fully instantiated proofs, subproofs, or general proof methods, and hence our approach also supports these three kinds of analogy respectively. By attaching appropriate justifications to meta-methods the analogical inference can often be justified in the sense of Russell. This paper presents a model of analogy-driven proof plan construction and focuses on empirically extracted meta-methods. It classifies and formally describes these meta-methods and shows how to use them for an appropriate reformulation in automated analogy-driven theorem proving.
Following Buchberger's approach to computing a Gröbner basis of a polynomial ideal in polynomial rings, a completion procedure for finitely generated right ideals in Z[H] is given, where H is an ordered monoid presented by a finite, convergent semi-Thue system (Σ, T). Taking a finite set F ⊆ Z[H] we get a (possibly infinite) basis of the right ideal generated by F, such that using this basis we have unique normal forms for all p ∈ Z[H] (especially the normal form is 0 in case p is an element of the right ideal generated by F). As the ordering and multiplication on H need not be compatible, reduction has to be defined carefully in order to make it Noetherian. Further, we no longer have p · x →_p 0 for p ∈ Z[H], x ∈ H. Similar to Buchberger's s-polynomials, confluence criteria are developed and a completion procedure is given. In case T = ∅, or (Σ, T) is a convergent, 2-monadic presentation of a group providing inverses of length 1 for the generators, or (Σ, T) is a convergent presentation of a commutative monoid, termination can be shown. So in these cases finitely generated right ideals admit finite Gröbner bases. The connection to the subgroup problem is discussed.
This case study examines in detail the theorems and proofs that are shown by analogy in a mathematical textbook on semigroups and automata that is widely used as an undergraduate textbook in theoretical computer science at German universities (P. Deussen, Halbgruppen und Automaten, Springer 1971). The study shows the important role of restructuring a proof for finding analogous subproofs, and of reformulating a proof for the analogical transformation. It also emphasizes the importance of the relevant assumptions of a known proof, i.e., of those assumptions actually used in the proof. In this document we show the theorems, the proof structure, the subproblems and the proofs of subproblems and their analogues with the purpose of providing an empirical test set of cases for automated analogy-driven theorem proving. Theorems and their proofs are given in natural language augmented by the usual set of mathematical symbols in the studied textbook. As a first step we encode the theorems in logic and show the actual restructuring. Secondly, we code the proofs in a Natural Deduction calculus such that a formal analysis becomes possible, and mention reformulations that are necessary in order to reveal the analogy.
We provide an overview of UNICOM, an inductive theorem prover for equational logic which is based on refined rewriting and completion techniques. The architecture of the system as well as its functionality are described. Moreover, an insight into the most important aspects of the internal proof process is provided. This knowledge about how the central inductive proof component of the system essentially works is crucial for human users who want to solve non-trivial proof tasks with UNICOM and thoroughly analyse potential failures. The presentation is focussed on practical aspects of understanding and using UNICOM. A brief but complete description of the command interface, an installation guide, an example session, a detailed extended example illustrating various special features, and a collection of successfully handled examples are also included.
While most approaches to similarity assessment are oblivious of knowledge and goals, there is ample evidence that these elements of problem solving play an important role in similarity judgements. This paper is concerned with an approach for integrating assessment of similarity into a framework of problem solving that embodies central notions of problem solving like goals, knowledge and learning.
To prove difficult theorems in a mathematical field requires substantial knowledge of that field. In this thesis a frame-based knowledge representation formalism including higher-order sorted logic is presented, which supports a conceptual representation and to a large extent guarantees the consistency of the built-up knowledge bases. In order to operationalize this knowledge, for instance, in an automated theorem proving system, a class of sound morphisms from higher-order into first-order logic is given; in addition a sound and complete translation is presented. The translations are bijective and hence compatible with a later proof presentation. In order to prove certain theorems the comprehension axioms are necessary (but difficult to handle in an automated system); such theorems are called truly higher-order. Many apparently higher-order theorems (i.e. theorems that are stated in higher-order syntax) however are essentially first-order in the sense that they can be proved without the comprehension axioms: for proving these theorems the translation technique as presented in this thesis is well-suited.
We transform a user-friendly formulation of a problem to a machine-friendly one exploiting the variability of first-order logic to express facts. The usefulness of tactics to improve the presentation is shown with several examples. In particular it is shown how tactical and resolution theorem proving can be combined.
There are well known examples of monoids in the literature which do not admit a finite and canonical presentation by a semi-Thue system over a fixed alphabet, not even over an arbitrary alphabet. We introduce conditional Thue and semi-Thue systems similar to conditional term rewriting systems as defined by Kaplan. Using these conditional semi-Thue systems we give finite and canonical presentations of the examples mentioned above. Furthermore we show that each finitely generated monoid with decidable word problem is embeddable in a monoid which has a finite canonical conditional presentation.
Typical examples, that is, examples that are representative for a particular situation or concept, play an important role in human knowledge representation and reasoning. In real life situations more often than not, instead of a lengthy abstract characterization, a typical example is used to describe the situation. This well-known observation has been the motivation for various investigations in experimental psychology, which also motivate our formal characterization of typical examples, based on a partial order for their typicality. Reasoning by typical examples is then developed as a special case of analogical reasoning using the semantic information contained in the corresponding concept structures. We derive new inference rules by replacing the explicit information about connections and similarity, which are normally used to formalize analogical inference rules, by information about the relationship to typical examples. Using these inference rules analogical reasoning proceeds by checking a related typical example; this is a form of reasoning based on semantic information from cases.
This paper concerns a knowledge structure called method, within a computational model for human oriented deduction. With human oriented theorem proving cast as an interleaving process of planning and verification, the body of all methods reflects the reasoning repertoire of a reasoning system. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This is achieved by postulating some standard types of knowledge we have identified, such as inference rules, assertions, and proof schemata, together with corresponding knowledge interpreters. Our approach in effect changes the way deductive knowledge is encoded: a new compound declarative knowledge structure, the proof schema, takes the place of complicated procedures for modeling specific proof strategies. This change of paradigm not only leads to representations easier to understand, it also enables us to model the even more important activity of formulating meta-methods, that is, operators that adapt existing methods to suit novel situations. In this paper, we first introduce briefly the general framework for describing methods. Then we turn to several types of knowledge with their interpreters. Finally, we briefly illustrate some meta-methods.
We present a framework for the integration of the Knuth-Bendix completion algorithm with narrowing methods, compiled rewrite rules, and a heuristic difference reduction mechanism for paramodulation. The possibility of embedding theory unification algorithms into this framework is outlined. Results are presented and discussed for several examples of equality reasoning problems in the context of an actual implementation of an automated theorem proving system (the Mkrp-system) and a fast C implementation of the completion procedure. The Mkrp-system is based on the clause graph resolution procedure. The thesis shows the indispensability of the constraining effects of completion and rewriting for equality reasoning in general and quantifies the amount of speed-up caused by various enhancements of the basic method. The simplicity of the superposition inference rule makes it possible to construct an abstract machine for completion, which is presented together with computation times for a concrete implementation.
This report presents the main ideas underlying the Ω-MKRP system, an environment for the development of mathematical proofs. The motivation for the development of this system comes from our extensive experience with traditional first-order theorem provers and aims to overcome some of their shortcomings. After comparing the benefits and drawbacks of existing systems, we propose a system architecture that combines the positive features of different types of theorem-proving systems, most notably the advantages of human-oriented systems based on methods (our version of tactics) and the deductive strength of traditional automated theorem provers. In Ω-MKRP a user first states a problem to be solved in a typed and sorted higher-order language (called POST) and then applies natural deduction inference rules in order to prove it. He can also insert a mathematical fact from an integrated database into the current partial proof, he can apply a domain-specific problem-solving method, or he can call an integrated automated theorem prover to solve a subproblem. The user can also pass the control to a planning component that supports and partially automates his long-range planning of a proof. Toward the important goal of user-friendliness, machine-generated proofs are transformed in several steps into much shorter, better-structured proofs that are finally translated into natural language. This work was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (D2, D3).
An important property and also a crucial point of a term rewriting system is its termination. Transformation orderings, developed by Bellegarde & Lescanne and strongly based on a work of Bachmair & Dershowitz, represent a general technique for extending orderings. The main characteristics of this method are two rewriting relations, one for transforming terms and the other for ensuring the well-foundedness of the ordering. The central problem of this approach concerns the choice of the two relations such that the termination of a given term rewriting system can be proved. In this communication, we present a heuristic-based algorithm that partially solves this problem. Furthermore, we show how to simulate well-known orderings on strings by transformation orderings.
This report presents a methodology to guide equational reasoning in a goal directed way. Suggested by rippling methods developed in the field of inductive theorem proving, we use attributes of terms and heuristics to determine bridge lemmas, i.e. lemmas which have to be used during the proof of the theorem. Once we have found such a bridge lemma we use the techniques of difference unification and rippling to enable its use.
This paper develops a sound and complete transformation-based algorithm for unification in an extensional order-sorted combinatory logic supporting constant overloading and a higher-order sort concept. Appropriate notions of order-sorted weak equality and extensionality - reflecting order-sorted βη-equality in the corresponding lambda calculus given by Johann and Kohlhase - are defined, and the typed combinator-based higher-order unification techniques of Dougherty are modified to accommodate unification with respect to the theory they generate. The algorithm presented here can thus be viewed as a combinatory logic counterpart to that of Johann and Kohlhase, as well as a refinement of that of Dougherty, and provides evidence that combinatory logic is well-suited to serve as a framework for incorporating order-sorted higher-order reasoning into deduction systems aiming to capitalize on both the expressiveness of extensional higher-order logic and the efficiency of order-sorted calculi.
We consider the problem of verifying confluence and termination of conditional term rewriting systems (TRSs). For unconditional TRSs the critical pair lemma holds, which enables a finite test for confluence of (finite) terminating systems. And for ensuring termination of unconditional TRSs a couple of methods for constructing appropriate well-founded term orderings are known. If however termination is not guaranteed, then proving confluence is much more difficult. Recently we have obtained some interesting results for unconditional TRSs which provide sufficient criteria for termination plus confluence in terms of restricted termination and confluence properties. In particular, we have shown that any innermost terminating and locally confluent overlay system is complete, i.e. terminating and confluent. Here we generalize our approach to the conditional case and show how to solve the additional complications due to the presence of conditions in the rules. Our main result can be stated as follows: any conditional TRS which is an innermost terminating semantical overlay system such that all (conditional) critical pairs are joinable is complete.
We will answer a question posed in [DJK91], and will show that Huet's completion algorithm [Hu81] becomes incomplete, i.e. it may generate a term rewriting system that is not confluent, if it is modified in such a way that the reduction ordering used for completion can be changed during completion, provided that the new ordering is compatible with the actual rules. In particular, we will show that this problem may not only arise if the modified completion algorithm does not terminate: even if the algorithm terminates without failure, the generated finite noetherian term rewriting system may be non-confluent. Most existing implementations of the Knuth-Bendix algorithm provide the user with help in choosing a reduction ordering: if an unorientable equation is encountered, then the user has many options, especially the one to orient the equation manually. The integration of this feature is based on the widespread assumption that, if equations are oriented by hand during completion and the completion process terminates with success, then the generated finite system is a possibly non-terminating but locally confluent system (see e.g. [KZ89]). Our examples will show that this assumption is not true.
Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene has already given a semantic account of partial functions using three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a resolution calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
The team work method is a concept for distributing automated theorem provers and thus activating several experts to work on a given problem. We have implemented this for pure equational logic using the unfailing Knuth-Bendix completion procedure as basic prover. In this paper we present three classes of experts working in a goal oriented fashion. In general, goal oriented experts perform their job "unfairly" and so are often unable to solve a given problem alone. However, as team members in the team work method they perform highly efficiently, even in comparison with such respected provers as Otter 3.0 or REVEAL, as we demonstrate by examples, some of which can only be proved using team work. The reason for these achievements results from the fact that the team work method forces the experts to compete for a while and then to cooperate by exchanging their best results. This allows one to collect "good" intermediate results and to forget "useless" ones. Completion based proof methods are frequently regarded as having the disadvantage of not being goal oriented. We believe that our approach overcomes this disadvantage to a large extent.
In 1978, Klop demonstrated that a rewrite system constructed by adding the untyped lambda calculus, which has the Church-Rosser property, to a Church-Rosser first-order algebraic rewrite system may not be Church-Rosser. In contrast, Breazu-Tannen recently showed that augmenting any Church-Rosser first-order algebraic rewrite system with the simply-typed lambda calculus results in a Church-Rosser rewrite system. In addition, Breazu-Tannen and Gallier have shown that the second-order polymorphic lambda calculus can be added to such rewrite systems without compromising the Church-Rosser property (for terms which can be provably typed). There are other systems for which a Church-Rosser result would be desirable, among them λ^t+SP+FIX, the simply-typed lambda calculus extended with surjective pairing and fixed points. This paper will show that Klop's untyped counterexample can be lifted to a typed system to demonstrate that λ^t+SP+FIX is not Church-Rosser.
Over the past thirty years there have been significant achievements in the field of automated theorem proving with respect to the reasoning power of the inference engines. Although some effort has also been spent to facilitate more user friendliness of the deduction systems, most of them failed to benefit from more recent developments in the related fields of artificial intelligence (AI), such as natural language generation and user modeling. In particular, no model is available which accounts both for human deductive activities and for human proof presentation. In this thesis, a reconstructive architecture is suggested which substantially abstracts, reorganizes and finally translates machine-found proofs into natural language. Both the procedures and the intermediate representations of our architecture find their basis in computational models for informal mathematical reasoning and for proof presentation. User modeling is not incorporated into the current theory, although we plan to do so later.
This paper presents a new way to use planning in automated theorem proving by means of distribution. To overcome the problem that often subtasks for a proof problem cannot be detected a priori (which prevents the use of the known planning and distribution techniques), we use a team of experts that work independently with different heuristics on the problem. After a certain amount of time, referees judge their results using the impact of the results on the behaviour of the expert, and a supervisor combines the selected results to a new starting point. This supervisor also selects the experts that can work on the problem in the next round. This selection is a reactive planning task. We outline which information the supervisor can use to fulfill this task and how this information is processed to result in a plan or to revise a plan. We also show that the use of planning for the assignment of experts to the team allows the system to solve many different examples in an acceptable time with the same start configuration and without any consultation of the user. "Plans are always subject to change" (Shin'a'in proverb).
The background of this paper is the area of case-based reasoning. This is a reasoning technique where one tries to use the solution of some problem which has been solved earlier in order to obtain a solution of a given problem. An example of the types of problems where this kind of reasoning occurs very often is the diagnosis of diseases or faults in technical systems. In abstract terms this reduces to a classification task. A difficulty arises when one has not just one solved problem but very many. These are called "cases" and they are stored in the case base. Then one has to select an appropriate case, which means to find one which is "similar" to the actual problem. The notion of similarity has raised much interest in this context. We will first introduce a mathematical framework and define some basic concepts. Then we will study some abstract phenomena in this area and finally present some methods developed and realized in a system at the University of Kaiserslautern.
The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing the search space. It is therefore promising to treat sorts in higher order theorem proving as well. In this paper we present a generalization of Huet's Constrained Resolution to an order-sorted type theory ΣT with term declarations. This system builds certain taxonomic axioms into the unification and conducts reasoning with them in a controlled way. We make this notion precise by giving a relativization operator that totally and faithfully encodes ΣT into simple type theory.
In this report we present a case study of employing goal-oriented heuristics when proving equational theorems with the (unfailing) Knuth-Bendix completion procedure. The theorems are taken from the domain of lattice ordered groups. It will be demonstrated that goal-oriented (heuristic) criteria for selecting the next critical pair can in many cases significantly reduce the search effort and hence increase the performance of the proving system considerably. The heuristic, goal-oriented criteria are on the one hand based on so-called "measures" measuring occurrences and nesting of function symbols, and on the other hand based on matching subterms. We also deal with the property of goal-oriented heuristics to be particularly helpful in certain stages of a proof. This fact can be addressed by using them in a framework for distributed (equational) theorem proving, namely the "teamwork method".
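As an illustration of such occurrence-based measures (with invented details, not the report's exact criteria), the sketch below selects the waiting critical pair whose function-symbol profile is closest to the goal's:

```python
from collections import Counter
import re

def symbol_occurrences(term):
    """Count function symbols in a term given as a string like 'f(g(x),x)'."""
    return Counter(re.findall(r"[a-z]\w*(?=\()", term))

def measure(pair, goal):
    """Distance between the symbol occurrences of a critical pair and the goal."""
    goal_occ = symbol_occurrences(goal)
    pair_occ = symbol_occurrences(pair[0]) + symbol_occurrences(pair[1])
    symbols = set(goal_occ) | set(pair_occ)
    return sum(abs(goal_occ[s] - pair_occ[s]) for s in symbols)

def select_next(critical_pairs, goal):
    """Pick the critical pair with the smallest goal distance."""
    return min(critical_pairs, key=lambda p: measure(p, goal))

goal = "i(f(x,y))"
pairs = [("f(i(x),i(y))", "i(f(y,x))"), ("f(x,e)", "x")]
print(select_next(pairs, goal))  # -> ('f(x,e)', 'x'), the closest symbol profile
```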
We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity. Therefore the orientation must be calculated by integration of the angular velocity over time. The device's output is a voltage proportional to the angular velocity and relative to a reference. The tests were done to find out under which conditions it is possible to use this device for estimation of orientation.
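The integration step itself is straightforward; the sketch below assumes a hypothetical scale factor and reference voltage (the real values are device- and setup-specific):

```python
def integrate_orientation(samples, v_ref, scale, dt):
    """Integrate gyro voltages into a heading angle (degrees).

    samples -- sequence of measured output voltages
    v_ref   -- reference voltage corresponding to zero angular velocity
    scale   -- degrees per second per volt (sensor-specific, assumed here)
    dt      -- sampling interval in seconds
    """
    heading = 0.0
    for v in samples:
        omega = (v - v_ref) * scale   # angular velocity in deg/s
        heading += omega * dt         # rectangular integration
    return heading

# 1 s of samples at 100 Hz while turning at a constant 90 deg/s
samples = [2.5 + 90 / 80.0] * 100     # hypothetical: 80 (deg/s)/V, 2.5 V reference
print(integrate_orientation(samples, v_ref=2.5, scale=80.0, dt=0.01))  # ~90.0
```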
A map for an autonomous mobile robot (AMR) in an indoor environment for the purpose of continuous position and orientation estimation is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown where the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans to statistical representations of the environment, so that a crosscorrelation of an actual converted scan and this representative results in the actual position and orientation in a global coordinate system. The map itself is built of representative scans for the positions where the AMR has been, so that it is able to find its position and orientation by comparing the actual scan with a scan stored in the map.
One of the problems of autonomous mobile systems is the continuous tracking of position and orientation. In most cases, this problem is solved by dead reckoning, based on measurement of wheel rotations or step counts and step width. Unfortunately dead reckoning leads to accumulation of drift errors and is very sensitive to slippage. In this paper an algorithm for tracking position and orientation is presented that is nearly independent of odometry and its problems with slippage. To achieve these results, a rotating range-finder is used, delivering scans of the environmental structure. The properties of this structure are used to match the scans from different locations in order to find their translational and rotational displacement. For this purpose derivatives of range-finder scans are calculated which can be used to find position and orientation by crosscorrelation.
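For the rotational component, the crosscorrelation idea can be sketched directly: treating a scan as a 360-bin circular signal, the shift that maximizes the circular crosscorrelation estimates the rotation. The scan data below is synthetic:

```python
import numpy as np

def rotation_estimate(scan_a, scan_b):
    """Estimate the rotation (in bins) between two circular range scans."""
    a = scan_a - scan_a.mean()
    b = scan_b - scan_b.mean()
    # circular crosscorrelation computed via FFT
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    return int(np.argmax(corr))

rng = np.random.default_rng(0)
scan = rng.uniform(1.0, 5.0, 360)        # synthetic range readings, 1 deg/bin
rotated = np.roll(scan, 42)              # robot turned by 42 degrees
print(rotation_estimate(rotated, scan))  # -> 42
```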
Dynamic Lambda Calculus
(1999)
The goal of this paper is to lay a logical foundation for discourse theories by providing an algebraic foundation of compositional formalisms for discourse semantics as an analogon to the simply typed λ-calculus. Just as that can be specialized to type theory by simply providing a special type for truth values and postulating the quantifiers and connectives as constants with fixed semantics, the proposed dynamic λ-calculus DLC can be specialized to λ-DRT by essentially the same measures, yielding a much more principled and modular treatment of λ-DRT than before; DLC is also expected to eventually provide a conceptually simple basis for studying higher-order unification for compositional discourse theories. Over the past few years, there have been a series of attempts [Zee89, GS90, EK95, Mus96, KKP96, Kus96] to combine the Montagovian type theoretic framework [Mon74] with dynamic approaches, such as DRT [Kam81]. The motivation for these developments is to obtain a general logical framework for discourse semantics that combines compositionality and dynamic binding. Let us look at an example of compositional semantics construction in λ-DRT, which is one of the above formalisms [KKP96, Kus96]. By the use of β-reduction we arrive at a first-order DRT representation of the sentence "A_i man sleeps" (i denoting an index for anaphoric binding).
This paper shows how a new approach to theorem proving by analogy is applicable to real maths problems. This approach works at the level of proof-plans and employs reformulation that goes beyond symbol mapping. The Heine-Borel theorem is a widely known result in mathematics. It is usually stated in R¹ and similar versions are also true in R², in topology, and metric spaces. Its analogical transfer was proposed as a challenge example and could not be solved by previous approaches to theorem proving by analogy. We use a proof-plan of the Heine-Borel theorem in R¹ as a guide in automatically producing a proof-plan of the Heine-Borel theorem in R² by analogy-driven proof-plan construction.
This paper addresses a model of analogy-driven theorem proving that is more general and cognitively more adequate than previous approaches. The model works at the level of proof-plans. More precisely, we consider analogy as a control strategy in proof planning that employs a source proof-plan to guide the construction of a proof-plan for the target problem. Our approach includes a reformulation of the source proof-plan. This is in accordance with the well known fact that constructing an analogy in maths often amounts to first finding the appropriate representation which brings out the similarity of two problems, i.e., finding the right concepts and the right level of abstraction. Several well known theorems were processed by our analogy-driven proof-plan construction that could not be proven analogically by previous approaches.
This paper addresses analogy-driven automated theorem proving that employs a source proof-plan to guide the search for a proof-plan of the target problem. The approach presented uses reformulations that go beyond symbol mappings and that incorporate frequently used re-representations and abstractions. Several realistic math examples were successfully processed by our analogy-driven proof-plan construction. One challenge example, a Heine-Borel theorem, is discussed here. For this example the reformulations are shown step by step and the modifying actions are demonstrated.
Analogy in CLAM
(1999)
CLAM is a proof planner, developed by the Dream group in Edinburgh, that mainly operates for inductive proofs. This paper addresses the question how an analogy model that I developed independently of CLAM can be applied to CLAM, and it presents analogy-driven proof plan construction as a control strategy of CLAM. This strategy is realized as a derivational analogy that includes the reformulation of proof plans. The analogical replay checks whether the reformulated justifications of the source plan methods hold in the target as a permission to transfer the method to the target plan. Since CLAM has very efficient heuristic search strategies, the main purpose of the analogy is to suggest lemmas, to replay not commonly loaded methods, to suggest induction variables and induction terms, and to override control rather than to construct a target proof plan that could be built by CLAM itself more efficiently.
Distributed systems are an alternative to shared-memory multiprocessors for the execution of parallel applications. PANDA is a runtime system which provides architectural support for efficient parallel and distributed programming. PANDA supplies means for fast user-level threads, and for a transparent and coordinated sharing of objects across a homogeneous network. The paper motivates the major architectural choices that guided our design. The problem of sharing data in a distributed environment is discussed, and the performance of appropriate mechanisms provided by the PANDA prototype implementation is assessed.
One main purpose for the use of formal description techniques (FDTs) is formal reasoning and verification. This requires a formal calculus and a suitable formal semantics of the FDT. In this paper, we discuss the basic verification requirements for Estelle, and how they can be supported by existing calculi. This leads us to the redefinition of the standard Estelle semantics using Lamport's temporal logic of actions and Dijkstra's predicate transformers.
The increasing use of distributed computer systems leads to an increasing need for distributed applications. Their development in various domains like office automation or computer integrated manufacturing is not sufficiently supported by current techniques. New software engineering concepts are needed in the three areas 'languages', 'tools', and 'environments'. We believe that object-oriented techniques and graphics support are key approaches to major achievements in all three areas. As a consequence, we developed a universal object-oriented graphical editor ODE as one of our basic tools (a tool-building tool). ODE is based on the object-oriented paradigm, with some important extensions like built-in object relations. It has an extensible functional language which allows for customization of the editor. ODE was developed as part of DOCASE, a software production environment for distributed applications. The basic ideas of DOCASE will be presented and the requirements for ODE will be pointed out. Then ODE will be described in detail, followed by a sample customization of ODE: the one for the DOCASE design language.
This paper presents a completely systematic design procedure for asynchronous controllers. The initial step is the construction of a signal transition graph (STG, an interpreted Petri net) of the dialog between data path and controller: a formal representation without reference to time or internal states. To implement concurrently operating control structures, and also to reduce design effort and circuit cost, this STG can be decomposed into overlapping subnets. A universal initial solution is then obtained by algorithmically constructing a primitive flow table from each component net. This step links the procedure to classical asynchronous design, in particular to its proven optimization methods, without restricting the set of solutions. In contrast to other approaches, there is no need to extend the original STG intuitively.
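For readers unfamiliar with the underlying formalism, the sketch below shows the standard Petri-net firing rule on a made-up two-signal handshake; it is not an STG from the paper:

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)            # place -> token count
        self.transitions = {}                   # label -> (input places, output places)

    def add_transition(self, label, inputs, outputs):
        self.transitions[label] = (inputs, outputs)

    def enabled(self, label):
        inputs, _ = self.transitions[label]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, label):
        inputs, outputs = self.transitions[label]
        assert self.enabled(label), f"{label} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# req+/ack+/req-/ack- cycle of a hypothetical handshake between controller
# and data path, with one place between consecutive signal transitions.
net = PetriNet({"p0": 1})
net.add_transition("req+", ["p0"], ["p1"])
net.add_transition("ack+", ["p1"], ["p2"])
net.add_transition("req-", ["p2"], ["p3"])
net.add_transition("ack-", ["p3"], ["p0"])

for t in ["req+", "ack+", "req-", "ack-"]:
    net.fire(t)
print(net.marking)  # back to the initial marking: p0 holds the token again
```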