Kaiserslautern - Fachbereich Informatik
Year of publication
- 1999 (267)
Document Type
- Preprint (206)
- Article (52)
- Report (5)
- Master's Thesis (3)
- Study Thesis (1)
Has Fulltext
- yes (267)
Keywords
- Case-Based Reasoning (11)
- AG-RESY (6)
- Fallbasiertes Schliessen (5)
- HANDFLEX (5)
- PARO (5)
- case-based problem solving (5)
- Abstraction (4)
- Fallbasiertes Schließen (4)
- Knowledge Acquisition (4)
- resolution (4)
Mechanised reasoning systems and computer algebra systems have apparently different objectives. Their integration is, however, highly desirable, since in many formal proofs both of the two different tasks, proving and calculating, have to be performed. Even more importantly, proof and computation are often interwoven and not easily separable. In the context of producing reliable proofs, the question how to ensure correctness when integrating a computer algebra system into a mechanised reasoning system is crucial. In this contribution, we discuss the correctness problems that arise from such an integration and advocate an approach in which the calculations of the computer algebra system are checked at the calculus level of the mechanised reasoning system. This can be achieved by adding a verbose mode to the computer algebra system which produces high-level protocol information that can be processed by an interface to derive proof plans. Such a proof plan in turn can be expanded to proofs at different levels of abstraction, so the approach is well-suited for producing a high-level verbalised explication as well as for a low-level machine-checkable calculus-level proof. We present an implementation of our ideas and exemplify them using an automatically solved extended example.
We propose a specification language for the formalization of data types with partial or non-terminating operations as part of a rewrite-based logical framework for inductive theorem proving. The language requires constructors for designating data items and admits positive/negative conditional equations as axioms in specifications. The (total algebra) semantics for such specifications is based on so-called data models. We present admissibility conditions that guarantee the unique existence of a distinguished data model with properties similar to those of the initial model of a usual equational specification. Since admissibility of a specification requires confluence of the induced rewrite relation, we provide an effectively testable confluence criterion which does not presuppose termination.
To prove difficult theorems in a mathematical field requires substantial knowledge of that field. In this paper a frame-based knowledge representation formalism is presented, which supports a conceptual representation and to a large extent guarantees the consistency of the built-up knowledge bases. We define a semantics of the representation by giving a translation into the underlying logic.
The amount of user interaction is the prime cause of costs in interactive program verification. This paper describes an internal analogy technique that reuses subproofs in the verification of state-based specifications. It identifies common patterns of subproofs and their justifications in order to reuse these subproofs; thus significant savings on the number of user interactions in a verification proof are achievable.
We present an empirical study of mathematical proofs by diagonalization; the aim is their mechanization based on proof planning techniques. We show that these proofs can be constructed according to a strategy that (i) finds an indexing relation, (ii) constructs a diagonal element, and (iii) makes the implicit contradiction of the diagonal element explicit. Moreover, we suggest how diagonal elements can be represented.
Top-down and bottom-up theorem proving approaches each have specific advantages and disadvantages. Bottom-up provers profit from strong redundancy control and suffer from the lack of goal-orientation, whereas top-down provers are goal-oriented but have weak calculi when their proof lengths are considered. In order to integrate both approaches, our method is to achieve cooperation between a top-down and a bottom-up prover: the top-down prover generates subgoal clauses, which are then processed by a bottom-up prover. We discuss theoretical aspects of this methodology and introduce techniques for relevancy-based filtering of generated subgoal clauses. Experiments with a model elimination and a superposition-based prover reveal the high potential of our cooperation approach. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
We examine an approach for demand-driven cooperative theorem proving. We briefly point out the problems arising from the use of common success-driven cooperation methods, and we propose the application of our approach of requirement-based cooperative theorem proving. Compared with conventional cooperation concepts, this approach allows for a better orientation towards the current needs of the provers. We introduce an abstract framework for requirement-based cooperation and describe two instantiations of it: requirement-based exchange of facts, and sub-problem division and transfer via requests. Finally, we report on experimental studies conducted in the areas of superposition and unfailing completion. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
HOT is an automated higher-order theorem prover based on HTE, an extensional higher-order tableaux calculus (Kohlhase 95). The first part of the paper introduces a variant of the calculus which closely corresponds to the proof procedure implemented in HOT. The second part discusses HOT's design that can be characterized as a concurrent Blackboard architecture. We show the usefulness of the implementation by including benchmark results for over one hundred solved problems from logic and set theory.
Orderings on polynomial interpretations of operators represent a powerful technique for proving the termination of rewriting systems. One of the main problems of polynomial orderings concerns the choice of the right interpretation for a given rewriting system. It is very difficult to develop techniques for solving this problem. Here, we present three new heuristic approaches: (i) guidelines for dealing with special classes of rewriting systems, (ii) an algorithm for choosing appropriate special polynomials, as well as (iii) an extension of the original polynomial ordering which supports the generation of suitable interpretations. All these heuristics will be applied to examples in order to illustrate their practical relevance.
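To illustrate the general principle behind polynomial orderings (this worked example is ours, not one of the paper's heuristics): interpret each operator as a monotone polynomial over the naturals and check that every rule strictly decreases the interpreted value.
\[
\mathit{double}(0) \to 0, \qquad \mathit{double}(s(x)) \to s(s(\mathit{double}(x)))
\]
\[
[0] = 1, \qquad [s](X) = X + 1, \qquad [\mathit{double}](X) = 3X
\]
For all $X \geq 1$ we obtain $[\mathit{double}(0)] = 3 > 1 = [0]$ and $[\mathit{double}(s(x))] = 3X + 3 > 3X + 2 = [s(s(\mathit{double}(x)))]$, so every rewrite step strictly decreases the interpretation and the system terminates.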
A large set of criteria to evaluate formal methods for reactive systems is presented. To make this set more comprehensible, it is structured according to a Concept-Model of formal methods. It is made clear that it is necessary to make the catalogue more specific before applying it. Some of the steps needed to do so are explained. As an example the catalogue is applied within the context of the application domain building automation systems to three different formal methods: SDL, statecharts, and a temporal logic.
In the field of expert systems, problem solving on the basis of known cases is currently a highly topical subject. The case-based approach is also gaining more and more importance for diagnostic tasks. This paper presents Patdex/2, the case-based problem solver developed within the Moltke project at the University of Kaiserslautern. A first prototype, Patdex/1, was already developed in 1988.
We present a mathematical knowledge base containing the factual knowledge of the first of three parts of a textbook on semi-groups and automata, namely "P. Deussen: Halbgruppen und Automaten". Like almost all mathematical textbooks, this textbook is not self-contained; there are some algebraic and set-theoretical concepts that are not explained. These concepts are added to the knowledge base. Furthermore, there is knowledge about the natural numbers, which is formalized following the first paragraph of "E. Landau: Grundlagen der Analysis". The database is written in a sorted higher-order logic, a variant of POST, the working language of the proof development environment Omega-Mkrp. We distinguish three different types of knowledge: axioms, definitions, and theorems. Up to now, there are only 2 axioms (natural numbers and cardinality), 149 definitions (like that for a semi-group), and 165 theorems. The consistency of such knowledge bases cannot be proved in general, but inconsistencies may be imported only by the axioms. Definitions and theorems should not lead to any inconsistency since definitions form conservative extensions and theorems are proved to be consequences.
The ART system (ASF RRL Translation) essentially provides an environment in which the modularisability of proofs (induction proofs over equational specifications) can be investigated. The existing specification language ASF (see [BeHeKl89]), which supports modularised specifications, was extended so that proof tasks can be specified as well; in the following, this extended specification language is also called ASF. RRL (see [KaZh89]) was chosen as the prover for the proof tasks of a specification. RRL can both process commands from a file and produce session protocols, from which the proof runs and user inputs of the corresponding RRL session can be reconstructed. In ART, an ASF specification, which may contain proof tasks, can be translated into a file that RRL can process; this is referred to below as 'translation from ASF to RRL'. While processing such a file, RRL creates a session protocol. ART can use this session protocol to determine new results, such as the successful proof of a proof task, and to add these results to the original specification; this is referred to below as 'back-translation from RRL to ASF'. At its core, ART thus consists of a component for translating from ASF to RRL and a component for back-translating from RRL to ASF.
The paper shows that characterizing the causal relationship between significant events is an important but non-trivial aspect for understanding the behavior of distributed programs. An introduction to the notion of causality and its relation to logical time is given; some fundamental results concerning the characterization of causality are presented. Recent work on the detection of causal relationships in distributed computations is surveyed. The relative merits and limitations of the different approaches are discussed, and their general feasibility is analyzed.
The systematic improvement of techniques for developing and maintaining software requires an explicit representation of the activities (processes) taking place in a project. These representations (process models) are obtained through software process modelling. MVP-L is a language for describing such models. Various standard process models already exist. So far, however, there are hardly any documented software development processes that were designed specifically for the development of reactive systems, i.e. that are tailored to the particular requirements of developing reactive systems. Likewise, little experience has been documented concerning the kinds of project contexts in which these processes are valid. SOMT (SDL-oriented Object Modeling Technique) is a software development method that is - with some restrictions - suited to the development of reactive systems. This report describes the experience-based modelling of the SOMT software development processes with MVP-L. First, the essentials of the SOMT development method are described, in particular the techniques employed and their combination. Then possible project contexts are characterised in which the SOMT model is valid as an element of experience. After that, the modelling procedure and the experience gained in the process are documented. A complete representation of the model in graphical MVP-L notation is given in the appendix; the representation of the model in textual notation can be found in the SFB experience database.
In order to improve the quality of software systems and to set up a more effective process for their development, many attempts have been made in the field of software engineering. Reuse of existing knowledge is seen as a promising way to solve the outstanding problems in this field. In previous work we have integrated the design pattern concept with the formal design language SDL, resulting in a certain kind of pattern formalization. For the domain of communication systems we have also developed a pool of SDL patterns with an accompanying process model for pattern application. In this paper we present an extension that combines the SDL pattern approach with the experience base concept. This extension supports a systematic method for empirical evaluation and continuous improvement of the SDL pattern approach. Thereby the experience base serves as a repository necessary for effective reuse of the captured knowledge. A comprehensive usage scenario is described which shows the advantages of the combined approach. To demonstrate its feasibility, first results of a research case study are given.
The goal of this work is to provide a method with which a simulator for building-specific tasks can be modelled. The modelling must be designed so that both simple and very complex simulators for specific buildings can be designed. From the resulting model it is then possible to generate a program automatically with the help of generators. This allows a designer without special knowledge in the field of simulation to develop a building simulator. For the modelling, a domain-specific catalogue of design patterns was created; the individual patterns can be used directly for modelling and code generation.
The purpose of this exposé is to explain the generic design of a customized communication subsystem. The exposé addresses both functional and non-functional aspects. The starting point is a real-time requirement from the application area of building automation. We show how this application requirement and some background information about the application area lead to a system architecture, a communication service, a protocol architecture, and to the selection, adaptation, and composition of protocol functionalities. The reader will probably be surprised how much effort is necessary in order to implement the innocuous, innocent, inconspicuous-looking application requirement. Formal description techniques (FDTs) will be used in all design phases.
Today's communication systems are typically structured into several layers, where each layer realizes a fixed set of protocol functionalities. These functionalities have been carefully chosen such that a wide range of applications can be supported and protocols work in a general environment of networks. However, due to evolving network technologies as well as increased and varying demands of modern applications, general-purpose protocol stacks are not always adequate. To improve this situation, new flexible communication architectures have been developed which enable the configuration of customized communication subsystems by composing a proper set of reusable building blocks. In particular, several approaches to automatic configuration of communication subsystems have been reported in the literature. This report gives an overview of these approaches (F-CCS, Da CaPo, x-Kernel, and ADAPTIVE) and, in particular, defines a framework which identifies common architectural issues and configuration tasks.
A new approach for modelling time that does not rely on the concept of a clock is proposed. In order to establish a notion of time, system behaviour is represented as a joint progression of multiple threads of control, which satisfies a certain set of axioms. We show that the clock-independent time model is related to the well-known concept of a global clock and argue that both approaches establish the same notion of time.
Due to the large variety of modern applications and evolving network technologies, a small number of general-purpose protocol stacks will no longer be sufficient. Rather, customization of communication protocols will play a major role. In this paper, we present an approach that has the potential to substantially reduce the effort for designing customized protocols. Our approach is based on the concept of design patterns, which is well-established in object-oriented software development. We specialize this concept to communication protocols, and - in addition - use formal description techniques (FDTs) to specify protocol design patterns as well as rules for their instantiation and composition. The FDTs of our choice are SDL-92 and MSCs, which offer suitable language support. We propose an SDL pattern description template and relate pattern-based configuring of communication protocols to existing SDL methodologies. Particular SDL patterns and the configuring of a customized resource reservation protocol are presented in detail.
A non-trivial real-time requirement obeying a pattern that can be found in various instantiations in the application domain building automation, and which is therefore called generic, is investigated in detail. The starting point is a description of a real-time problem in natural language augmented by a diagram, in a style often found in requirements documents. Step by step, this description is made more precise and finally transformed into a surprisingly concise formal specification, written in real-time temporal logic with customized operators. We reason why this formal specification precisely captures the original description - as far as this is feasible due to the lack of precision of natural language.
A Tailored Real Time Temporal Logic for Specifying Requirements of Building Automation Systems
(1999)
A tailored real time temporal logic for specifying requirements of building automation systems is introduced and analyzed. The logic features several new real time operators, which are chosen with regard to the application area. The new operators improve the conciseness and readability of requirements as compared to a general-purpose real time temporal logic. In addition, some of the operators also enhance the expressiveness of the logic. A number of properties of the new operators are presented and proven.
A generic approach to the formal specification of system requirements is presented. It is based on a pool of requirement patterns, which are related to design patterns well-known in object-oriented software development. The application of such patterns enhances the reusability and genericity as well as the intelligibility of the formal requirement specification. The approach is instantiated by a tailored real-time temporal logic and by selecting building automation systems as application domain. With respect to this domain, the pattern discovery and reuse tasks are explained and illustrated, and a set of typical requirement patterns is presented. Finally, the results of a case study where the approach has been applied are summarized.
Several activities around the world aim at integrating object-oriented data models with relational ones in order to improve database management systems. As a first result of these activities, object-relational database management systems (ORDBMS) are already commercially available and, simultaneously, are the subject of several research projects. This (position) paper reports on our activities in exploiting object-relational database technology for establishing repository manager functionality supporting software engineering (SE) processes. We argue that some of the key features of ORDBMS can directly be exploited to fulfill many of the needs of SE processes. Thus, we think that ORDBMS are much better suited to support SE applications than any others. Nevertheless, additional functionality, e.g., adequate version management, is required in order to obtain a completely satisfying SE repository. In order to remain flexible, we have developed a generative approach for providing this additional functionality. It remains to be seen whether this approach, in turn, can effectively exploit ORDBMS features. This paper, therefore, wants to show that ORDBMS can substantially contribute to both establishing and running SE repositories.
The background of this paper is the area of case-based reasoning. This is a reasoning technique where one tries to use the solution of some problem which has been solved earlier in order to obtain a solution of a given problem. An example of the type of problem where this kind of reasoning occurs very often is the diagnosis of diseases or of faults in technical systems. In abstract terms this reduces to a classification task. A difficulty arises when one has not just one solved problem but very many. These are called "cases" and they are stored in the case base. Then one has to select an appropriate case, which means finding one which is "similar" to the actual problem. The notion of similarity has raised much interest in this context. We will first introduce a mathematical framework and define some basic concepts. Then we will study some abstract phenomena in this area and finally present some methods developed and realized in a system at the University of Kaiserslautern.
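A minimal sketch of the kind of similarity-based case retrieval described above (the attributes, weights, and similarity measure are invented for illustration and are not taken from the Kaiserslautern system):

    # Illustrative sketch only: weighted nearest-neighbour retrieval from a case base.
    # Attribute names, weights, and the similarity measure are invented.

    def local_sim(a, b):
        """Similarity of two attribute values, in [0, 1]."""
        if a == b:
            return 1.0
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            return 1.0 / (1.0 + abs(a - b))   # closer numbers are more similar
        return 0.0

    def similarity(problem, case, weights):
        """Weighted average of local similarities over the weighted attributes."""
        total = sum(weights.values())
        return sum(w * local_sim(problem.get(a), case["features"].get(a))
                   for a, w in weights.items()) / total

    def retrieve(problem, case_base, weights):
        """Return the stored case most similar to the new problem."""
        return max(case_base, key=lambda c: similarity(problem, c, weights))

    case_base = [
        {"features": {"temperature": 80, "noise": "high"}, "diagnosis": "worn bearing"},
        {"features": {"temperature": 40, "noise": "low"}, "diagnosis": "normal"},
    ]
    weights = {"temperature": 2.0, "noise": 1.0}
    best = retrieve({"temperature": 75, "noise": "high"}, case_base, weights)
    print(best["diagnosis"])   # -> worn bearing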
The development of software products has become a highly cooperative and distributed activity involving working groups at geographically distinct places. These groups show an increasing mobility and a very flexible organizational structure. Process methodology and technology have to take such evolutions into account. A possible direction for the emergence of new process technology and methodology is to benefit from recent advances in multiagent systems engineering: innovative methodologies for adaptable and autonomous architectures, which exhibit interesting features to support distributed software processes.
Coordinating distributed software development projects becomes more difficult, as software becomes more complex, team sizes and organisational overheads increase, and software components are sourced from disparate places. We describe the development of a range of software tools to support coordination of such projects. Techniques we use include asynchronous and semi-synchronous editing, software process modelling and enactment, developer-specified coordination agents, and component-based tool integration.
SmallSync, an internet event synchronizer, is intended to provide a monitoring and visualization methodology for permitting simultaneous analysis and control of multiple remote processes on the web. The current SmallSync includes: (1) a mechanism to multicast web window-based commands, message passing events and process execution events among processes; (2) an event synchronizer to allow concurrent execution of some functions on multiple machines; (3) a means to report when these events cause errors in the processes; and (4) ad hoc visualization of process states using existing visualizers.
Geographically distributed software development holds much promise for increasing market penetration and speeding up development cycles. However, it also comes with a set of new challenges for those developing the software, brought about by the distance among colleagues. This paper outlines a new research project underway to explore those issues and their implications for organizing geographically distributed software development efforts. We also describe the approaches we are taking towards providing solutions - in the form of processes and technology - to address the challenges of working remotely.
We present a new software architecture in which all concepts necessary to achieve fault tolerance can be added to an application automatically without any source code changes. As a case study, we consider the problem of providing a reliable service despite node failures by executing a group of replicated servers. Replica creation and management as well as failure detection and recovery are performed automatically by a separate fault tolerance layer (ft-layer) which is inserted between the server application and the operating system kernel. The layer is invisible for the application since it provides the same functional interface as the operating system kernel, thus making the fault tolerance property of the service completely transparent for the application. A major advantage of our architecture is that the layer encapsulates both fault tolerance mechanisms and policies. This allows for maximum flexibility in the choice of appropriate methods for fault tolerance without any changes in the application code.
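The following toy sketch (all names and the replication policy are invented; the real ft-layer works at the operating system interface) only illustrates the architectural idea: the layer exposes the same interface as the kernel, so the application code stays unchanged while failures are masked.

    # Toy illustration of a transparent fault-tolerance layer; not the actual system.

    class Kernel:
        """Stand-in for the operating system interface used by the server."""
        def __init__(self, node):
            self.node = node
        def send(self, msg):
            if self.node == "failed":
                raise IOError("node down")
            return "%s handled %s" % (self.node, msg)

    class FtLayer:
        """Same 'send' interface as Kernel, so the application needs no changes;
        internally it manages a group of replicated servers."""
        def __init__(self, nodes):
            self.replicas = [Kernel(n) for n in nodes]
        def send(self, msg):
            for kernel in list(self.replicas):
                try:
                    return kernel.send(msg)        # first reachable replica answers
                except IOError:
                    self.replicas.remove(kernel)   # failure detection and recovery
            raise IOError("all replicas failed")

    service = FtLayer(["failed", "node-b", "node-c"])
    print(service.send("request"))   # -> node-b handled request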
PANDA is a run-time package based on a very small operating system kernel which supports distributed applications written in C++. It provides powerful abstractions such as very efficient user-level threads, a uniform global address space, object and thread mobility, garbage collection, and persistent objects. The paper discusses the design rationales underlying the PANDA system. The fundamental features of PANDA are surveyed, and their implementation in the current prototype environment is outlined.
Requirements engineering (RE) is a necessary part of the software development process, as it helps customers and designers identify necessary system requirements. If these stakeholders are separated by distance, we argue that a distributed groupware environment supporting a cooperative requirements engineering process must be supplied that allows them to negotiate software requirements. Such a groupware environment must support aspects of joint work relevant to requirements negotiation: synchronous and asynchronous collaboration, telepresence, and teledata. It should also add explicit support for a structured RE process, which includes the team's ability to discuss multiple perspectives during requirements acquisition and traceability. We chose the TeamWave software platform as an environment that supplied the basic collaboration capabilities, and tailored it to fit the specific needs of RE.
Accelerating the maturation process within the software engineering discipline may result in boosts of development productivity. One way to enable this acceleration is to develop tools and processes to mimic evolution of traditional engineering disciplines. Principles established in traditional engineering disciplines represent high-level guidance to constructing these tools and processes. This paper discusses two principles found in the traditional engineering disciplines and how these principles can apply to mature the software engineering discipline. The discussion is concretized through description of the Collaborative Management Environment, a software system under collaborative development among several national laboratories.
Collecting Experience on the Systematic Development of CBR Applications using the INRECA Methodology
(1999)
This paper presents an overview of the INRECA methodology for building and maintaining CBR applications. This methodology supports the collection and reuse of experience on the systematic development of CBR applications. It is based on the experience factory and the software process modeling approach from software engineering. CBR development experience is documented using software process models and stored in different levels of generality in a three-layered experience base. Up to now, experience from 9 industrial projects enacted by all INRECA II partners has been collected.
Automata-Theoretic vs. Property-Oriented Approaches for the Detection of Feature Interactions in IN
(1999)
The feature interaction problem in Intelligent Networks obstructs more and more the rapid introduction of new features. Detecting such feature interactions turns out to be a big problem. The size of the systems and the sheer computational complexity prevent the system developer from manually checking any feature against any other feature. We give an overview of current (verification) approaches and categorize them into property-oriented and automata-theoretic approaches. A comparison shows that each approach complements the other in a certain sense. We propose to apply both approaches together in order to solve the feature interaction problem.
Planning means constructing a course of actions to achieve a specified set of goals when starting from an initial situation. For example, determining a sequence of actions (a plan) for transporting goods from an initial location to some destination is a typical planning problem in the transportation domain. Many planning problems are of practical interest.
Integrated project management means that design and planning are interleaved with plan execution, allowing both the design and plan to be changed as necessary. This requires that the right effects of change are propagated through the plan and design. When this is distributed among designers and planners, no one may have all of the information to perform such propagation and it is important to identify what effects should be propagated to whom when. We describe a set of dependencies among plan and design elements that allow such notification by a set of message-passing software agents. The result is to provide a novel level of computer support for complex projects.
Recent studies on planning, comparing plan re-use and plan generation, have shown that both the above tasks may have the same degree of computational complexity, even if we deal with very similar problems. The aim of this paper is to show that the same kind of results apply also for diagnosis. We propose a theoretical complexity analysis coupled with some experimental tests, intended to evaluate the adequacy of adaptation strategies which re-use the solutions of past diagnostic problems in order to build a solution to the problem to be solved. Results of such analysis show that, even if diagnosis re-use falls into the same complexity class as diagnosis generation (they are both NP-complete problems), practical advantages can be obtained by exploiting a hybrid architecture combining case-based and model-based diagnostic problem solving in a unifying framework.
Machine learning methods have now reached a level of maturity that has led to the first successful industrial applications. In process diagnosis and control, learning methods enable the classification and assessment of operating states, i.e. a coarse modelling of a process when it cannot, or can only partially, be described mathematically. In addition, learning methods allow the automatic generation of classification procedures that are processed deterministically and may therefore be more time-efficient for real-time diagnosis and control than inference mechanisms based on logic or production rules, since the latter always involve time-consuming search processes.
We present the adaptation process in a CBR application for decision support in the domain of industrial supervision. Our approach uses explanations to approximate relations between a problem description and its solution, and the adaptation process is guided by these explanations (a more detailed presentation is given in [4]).
The CBR team of the LISA is involved in several applied research projects based on the CBR paradigm. These applications use adaptation to solve the specific problems they face. Thus, we have capitalized some experience about how adaptation processes can be expressed and formalized. The bibliography on the subject is quite substantial but demonstrates a lack of formalism. At most, there exist some classifications of different types of adaptation.
Cooperative decision making involves a continuous process of assessing the validity of data, information, and knowledge acquired and inferred by the colleagues; that is, the shared knowledge space must be transparent. The ACCORD methodology provides an interpretation framework for the mapping of domain facts - constituting the world model of the expert - onto conceptual models, which can be expressed in formal representations. The ACCORD-BPM framework allows a stepwise and non-arbitrary reconstruction of the problem solving competence of BPM experts as a prerequisite for an appropriate architecture of both BPM knowledge bases and the BPM "reasoning device".
This paper describes how knowledge-based techniques can be used to overcome problems of workflow management in engineering applications. Using explicit process and product models as a basis for a workflow interpreter makes it possible to alternate planning and execution steps, resulting in an increased flexibility of project coordination and enactment. To gain the full advantages of this flexibility, change processes have to be supported by the system. These require an improved traceability of decisions and have to be based on dependency management and change notification mechanisms. Our methods and techniques are illustrated by two applications: urban land-use planning and software process modeling.
Information technology support for complex, dynamic, and distributed business processes as they occur in engineering domains requires an advanced process management system which enhances currently available workflow management services with respect to integration, flexibility, and adaptation. We present a uniform and flexible framework for advanced process management on an abstract level which uses and adapts agent technology from distributed artificial intelligence for both modelling and enacting of processes. We identify two different frameworks for applying agent technology to process management: first, as a multi-agent system with the domain of process management; second, as a key infrastructure technology for building a process management system. We will then follow the latter approach and introduce different agent types for managing activities, products, and resources which capture specific views on the process.
It is generally agreed that one of the most challenging issues facing the case-based reasoning community is that of adaptation. To date the lion's share of CBR research has concentrated on the retrieval of similar cases, and the result is a wide range of quality retrieval techniques. However, retrieval is just the first part of the CBR equation, because once a similar case has been retrieved it must be adapted. Adaptation research is still in its earliest stages, and researchers are still trying to properly understand and formulate the important issues. In this paper I describe a treatment of adaptation in the context of a case-based reasoning system for software design, called Deja Vu. Deja Vu is particularly interesting, not only because it performs automatic adaptation of retrieved cases, but also because it uses a variety of techniques to try and reduce and predict the degree of adaptation necessary.
About the approach: The approach of TOPO was originally developed in the FABEL project [1] to support architects in designing buildings with complex installations. Supplementing knowledge-based design tools, which are available only for selected subtasks, TOPO aims to cover the whole design process. To that aim, it relies almost exclusively on archived plans. Input to TOPO is a partial plan, and output is an elaborated plan. The input plan constitutes the query case and the archived plans form the case base with the source cases. A plan is a set of design objects. Each design object is defined by some semantic attributes and by its bounding box in a 3-dimensional coordinate system. TOPO supports the elaboration of plans by adding design objects.
INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis, and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for its intended kinds of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge that is used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.
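As a toy illustration of that last point (not CASUEL syntax; the rule and the feature names are invented), a background-knowledge rule can complete a case with a previously unknown feature before cases are compared:

    # Invented example: a background rule derives an additional case feature.

    def rule_overheated(case):
        # hypothetical dependency: high temperature plus a blocked fan implies overheating
        if case.get("temperature", 0) > 90 and case.get("fan") == "blocked":
            case["overheated"] = True
        return case

    def complete(case, rules):
        """Apply background-knowledge rules to infer unknown features from known ones."""
        for rule in rules:
            case = rule(case)
        return case

    print(complete({"temperature": 95, "fan": "blocked"}, [rule_overheated]))
    # -> {'temperature': 95, 'fan': 'blocked', 'overheated': True}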
Knowledge acquisition has so far often hindered the use of knowledge-based systems for process planning in industrial practice. Most applications only allow the domain-specific planning knowledge, obtained through laborious elicitation, systematisation, and formulation, to be entered and edited. Within a DFG project, the applicability of known machine learning methods to technological sequencing and assignment problems in generative process planning for parts manufacturing in mechanical engineering is to be demonstrated. To this end, a prototype is being developed with an available software tool; it is intended to enable machine learning from given examples and to communicate with an existing prototype for knowledge-based process planning. The following contribution gives an overview of the planning knowledge to be handled with learning methods and puts possible representations of this knowledge up for discussion.
Learning from examples is a field of research in machine learning where class descriptions, like decision trees or implications (production rules or Horn clauses), are produced using positive and negative examples as information. To solve this task, many different heuristic search strategies have been developed so far. Search by specialization is the most widely used search strategy, whereas other approaches use a search by generalization only. JoJo is an algorithm that combines both search directions into one search procedure. According to the estimated quality of the currently regarded rule, either a generalization or a specialization step is carried out by deleting or adding one premise to the conjunction part of the rule. However, to create an even more flexible (and faster) algorithm, it should be possible to delete or add more than just one premise at a time. Relaxing this restriction of JoJo led to the new highly flexible algorithm Frog, which additionally uses a third search direction.
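A rough sketch of the bidirectional idea (the rule representation, quality measure, and data are invented; this is not the JoJo or Frog implementation): a rule is a conjunction of premises, and depending on its estimated quality the search either specializes it by adding a premise or generalizes it by dropping one.

    # Invented sketch of bidirectional rule refinement in the spirit of JoJo.

    def covers(rule, example):
        return all(example.get(a) == v for a, v in rule)

    def quality(rule, pos, neg):
        """Fraction of covered examples that are positive (0 if none covered)."""
        p = sum(covers(rule, e) for e in pos)
        n = sum(covers(rule, e) for e in neg)
        return p / (p + n) if p + n else 0.0

    def refine(rule, pos, neg, literals, threshold=0.9):
        rule = set(rule)
        while quality(rule, pos, neg) < threshold:      # specialize: add a premise
            candidates = [rule | {l} for l in literals if l not in rule]
            if not candidates:
                break
            best = max(candidates, key=lambda r: quality(r, pos, neg))
            if quality(best, pos, neg) <= quality(rule, pos, neg):
                break
            rule = best
        for lit in list(rule):                          # generalize: drop redundant premises
            if quality(rule - {lit}, pos, neg) >= quality(rule, pos, neg):
                rule -= {lit}
        return rule

    pos = [{"shape": "round", "color": "red"}, {"shape": "round", "color": "green"}]
    neg = [{"shape": "square", "color": "red"}]
    literals = [("shape", "round"), ("color", "red"), ("color", "green")]
    print(refine(set(), pos, neg, literals))   # -> {('shape', 'round')}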
The goal of this project was to use empirical studies to compare classical statistical methods and current machine learning methods with an approach that was designed and theoretically analysed in our group. Five methods were implemented, some of them in several variants: feed-forward neural networks, decision trees, Bayes decisions based on Chow expansions, harmonic analysis, and the nearest-neighbour method. As a baseline, predictions were used that forecast the trend or the mean of the most recent observations. The data consisted of 16 time series of stock prices and exchange rates. Each series comprised 2000 data points, of which the first 1500 were used for training and the remaining 500 for comparing the methods. It turned out that the naive baseline methods were quite a good touchstone. The Bayes decisions and the decision trees proved particularly strong and almost always outperformed the baseline methods. Neural networks and the nearest-neighbour method were about as good, while harmonic analysis was worse for short-term predictions and better for long-term ones. For decision trees and neural networks it was notable that small trees and networks gave better results than large ones.
Reusing Proofs
(1999)
We develop a learning component for a theorem prover designed for verifying statements by mathematical induction. If the prover has found a proof, it is analyzed yielding a so-called catch. The catch provides the features of the proof which are relevant for reusing it in subsequent verification tasks and may also suggest useful lemmata. Proof analysis techniques for computing the catch are presented. A catch is generalized in a certain sense for increasing the reusability of proofs. We discuss problems arising when learning from proofs and illustrate our method by several examples.
Inductive logic programming (ILP) is a research field that combines techniques from machine learning and logic programming. It studies the classical problem of inductive learning from classified examples within the framework of first-order Horn logic. By now there is a large number of different approaches to this learning problem, which differ mainly in the search direction through the hypothesis space, in the generalisation and specialisation operators, and in the non-logical restrictions (bias) used. Comparing and integrating these different approaches was the main motivation for developing the MILES system. MILES is a programming environment for ILP that, besides mechanisms for representing and managing examples, background knowledge, and hypotheses, contains a toolbox with a large part of the known generalisation, specialisation, and reformulation operators. A generic control allows several of these operators to be integrated into a specific ILP algorithm. This contribution gives a short overview of the representation, the operators, and the control of MILES.
Inductive logic programming (ILP) methods [Mug93] have the task of learning, from a set of positive examples E+, a set of negative examples E-, and background knowledge B, a logic program P consisting of a set of definite clauses C: l0 <- l1, ..., ln. Since the hypothesis space for Horn logic is infinite, many methods restrict the hypothesis language to a finite one. It is also often attempted to restrict the hypothesis language so that only programs can be learned for which consistency is decidable. Another motivation for restricting the hypothesis language is to exploit the knowledge about the target program that is already available. For certain applications, for instance, function-free hypothesis clauses are sufficient, or it is known that the target program is functional.
This contribution presents connectionist learning methods for the knowledge-based diagnosis of technical systems. Two problems are investigated: the prediction of signal curves of technical state variables and the diagnostic classification of system states; the results of these investigations are presented.
Although it is acknowledged that internal iterators are easier and safer to use than conventional external iterators, it is commonly assumed that they are not applicable in languages without builtin support for closures and that they are less flexible than external iterators. We present an iteration framework that uses objects to emulate closures, separates structure exploration and data consumption, and generalizes on folding, thereby invalidating both the above statements. Our proposed "transfold" scheme allows processing one or more data structures simultaneously without exposing structure representations and without writing explicit loops. We show that the use of two functional concepts (function parameterization and lazy evaluation) within an object-oriented language allows combining the safety and economic usage of internal iteration with the flexibility and client control of external iteration. Sample code is provided using the statically typed EIFFEL language.
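A rough Python analogue of the idea (the paper's transfold scheme itself is written in EIFFEL; names here are invented): the data structure drives the traversal and the client only supplies a folding object standing in for a closure, so no structure representation is exposed and no explicit loop is written by the consumer.

    # Rough Python analogue of internal iteration via a folding object; not the EIFFEL code.

    class Tree:
        def __init__(self, value, children=()):
            self.value, self.children = value, children
        def fold(self, folder, acc):
            """The structure controls the traversal; clients never see its shape."""
            acc = folder.step(acc, self.value)
            for child in self.children:
                acc = child.fold(folder, acc)
            return acc

    class Summer:
        """Folding object emulating a closure."""
        def step(self, acc, value):
            return acc + value

    t = Tree(1, (Tree(2), Tree(3, (Tree(4),))))
    print(t.fold(Summer(), 0))   # -> 10, with no explicit loop on the client side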
As the properties of components have gradually become clearer, attention has started to turn to the architectural issues which govern their interaction and composition. In this paper we identify some of the major architectural questions affecting component-based software development and describe the predominant architectural dimensions. Of these, the most interesting is the "architecture hierarchy", which we believe is needed to address the "interface vicissitude" problem that arises whenever interaction refinement is explicitly documented within a component-based system. We present a solution to this problem based on the concept of stratified architectures and object metamorphosis. Finally, we describe how these concepts may assist in increasing the tailorability of component-based frameworks.
The value of software inspection for uncovering defects early in the development lifecycle has been well documented. Of the various types of inspection methods published to date, experiments have shown perspective-based inspection to be one of the most effective, because of its enhanced coverage of the defect space. However, inspections in general, and perspective-based inspections in particular, have so far been applied predominantly in the context of conventional structured development methods, and then almost always to textual artifacts, such as requirements documents or code modules. Object-oriented models, particularly of the graphical form, have so far not been adequately addressed by inspection methods. This paper tackles this problem by first discussing the difficulties involved in tailoring the perspective-based inspection approach to object-oriented development methods and, second, by presenting a generalization of the approach which overcomes these limitations. The new version of the approach is illustrated in the context of UML-based object-oriented development.
We present an overview of various learning techniques used in automated theorem provers. We characterize the main problems arising in this context and classify the solutions to these problems from published approaches. We analyze the suitability of several combinations of solutions for different approaches to theorem proving and place these combinations in a spectrum ranging from provers using very specialized learning approaches to optimally adapt to a small class of proof problems, to provers that learn more general kinds of knowledge, resulting in systems that are less efficient in special cases but show improved performance for a wide range of problems. Finally, we suggest combinations of solutions for various proof philosophies.
We present a cooperation concept for automated theorem provers that is based on a periodical interchange of selected results between several incarnations of a prover. These incarnations differ from each other in the search heuristic they employ for guiding the search of the prover. Depending on the strengths and weaknesses of these heuristics, different knowledge and different communication structures are used for selecting the results to interchange. Our concept is easy to implement and can easily be integrated into already existing theorem provers. Moreover, the resulting cooperation allows the distributed system to find proofs much faster than single heuristics working alone. We substantiate these claims by two case studies: experiments with the DiCoDe system that is based on the condensed detachment rule, and experiments with the SPASS system, a prover for first-order logic with equality based on the superposition calculus. Both case studies show the improvements achieved by our cooperation concept.
This paper presents a new kind of abstraction, which has been developed for the purpose of proof planning. The basic idea of this paper is to abstract a given theorem and to find an abstract proof of it. Once an abstract proof has been found, this proof has to be refined to a real proof of the original theorem. We present a goal oriented abstraction for the purpose of equality proof planning, which is parameterized by common parts of the left- and right-hand sides of the given equality. Therefore, this abstraction technique provides an abstract equality problem which is more adequate than those generated by the abstractions known so far. The presented abstraction also supports the heuristic search process based on the difference reduction paradigm. We give a formal definition of the abstract space including the objects and their manipulation. Furthermore, we prove some properties in order to allow an efficient implementation of the presented abstraction.
This report is a first attempt at formalizing the diagonalization proof technique. We give a strategy how to systematically construct diagonalization proofs: (i) finding an indexing relation, (ii) constructing a diagonal element, and (iii) making the implicit contradiction of the diagonal element explicit. We suggest a declarative representation of the strategy and describe how it can be realized in a proof planning environment.
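A familiar instance of this three-step pattern (Cantor's argument for the uncountability of infinite 0-1 sequences, added here only as an illustration, not taken from the report):
\[
\begin{array}{ll}
\text{(i) indexing relation:} & \text{assume a surjection } f : \mathbb{N} \to \{0,1\}^{\mathbb{N}},\\
\text{(ii) diagonal element:} & d(n) = 1 - f(n)(n) \ \text{for all } n \in \mathbb{N},\\
\text{(iii) explicit contradiction:} & \text{if } d = f(k), \text{ then } d(k) = 1 - f(k)(k) = 1 - d(k).
\end{array}
\]
Hence no such surjection exists.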
We examine different possibilities of coupling saturation-based theorem provers by exchanging positive/negative information. We discuss which positive or negative information is well-suited for cooperative theorem proving and show in an abstract way how this information can be used. Based on this study, we introduce a basic model for cooperative theorem proving. We present theoretical results regarding the exchange of positive/negative information as well as practical methods and heuristics that allow for a gain of efficiency in comparison with sequential provers. Finally, we report on experimental studies conducted in the areas of condensed detachment, unfailing completion, and superposition. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
Case-based knowledge acquisition, learning and problem solving for diagnostic real world tasks
(1999)
Within this paper we focus on both the solution of real, complex problems using expert system technology and the acquisition of the necessary knowledge from a case-based reasoning point of view. The development of systems which can be applied to real world problems has to meet certain requirements. E.g., all available information sources have to be identified and utilized. Normally, this involves different types of knowledge for which several knowledge representation schemes are needed, because no scheme is equally natural for all sources. Facing empirical knowledge, it is important to complement the use of manually compiled, statistic and otherwise induced knowledge by the exploitation of the intuitive understandability of case-based mechanisms. Thus, an integration of case-based and alternative knowledge acquisition and problem solving mechanisms is necessary. For this, the basis is to define the "role" which case-based inference can "play" within a knowledge acquisition workbench. We will discuss a concrete case-based architecture, which has been applied to technical diagnosis problems, and its integration into a knowledge acquisition workbench which additionally includes compiled knowledge and explicit deep models.
Proof planning is an alternative methodology to classical automated theorem proving based on exhaustive search that was first introduced by Bundy [8]. The goal of this paper is to extend the current realm of proof planning to cope with genuinely mathematical problems such as the well-known limit theorems first investigated for automated theorem proving by Bledsoe. The report presents a general methodology and contains ideas that are new for proof planning and theorem proving, most importantly ideas for search control and for the integration of domain knowledge into a general proof planning framework. We extend proof planning by employing explicit control-rules and supermethods. We combine proof planning with constraint solving. Experiments show the influence of these mechanisms on the performance of a proof planner. For instance, the proofs of LIM+ and LIM* have been automatically proof planned in the extended proof planner OMEGA. In a general proof planning framework we rationally reconstruct the proofs of limit theorems for real numbers (IR) that were first computed by the special-purpose program reported in [6]. Compared with this program, the rational reconstruction has several advantages: it relies on a general-purpose problem solver; it provides high-level, hierarchical representations of proofs that can be expanded to checkable ND-proofs; it employs declarative control knowledge that is modularly organized.
In this paper we present an extensional higher-order resolution calculus that is complete relative to Henkin model semantics. The treatment of the extensionality principles - necessary for the completeness result - by specialized (goal-directed) inference rules is of practical applicability, as an implementation of the calculus in the Leo system shows. Furthermore, we prove the long-standing conjecture that it is sufficient to restrict the order of primitive substitutions to the order of input formulae.
We present a methodology for coupling several saturation-based theorem provers (running on different computers). The methodology is well-suited for realizing cooperation between different incarnations of one basic prover. Moreover, different heterogeneous provers - that differ from each other in the calculus and in the heuristic they employ - can also be coupled. Cooperation between the different provers is achieved by periodically interchanging clauses which are selected by so-called referees. We present theoretical results regarding the completeness of the system of cooperating provers as well as concrete heuristics for designing referees. Furthermore, we report on two experimental studies performed with homogeneous and heterogeneous provers in the areas of superposition and unfailing completion. The results reveal that the occurring synergetic effects lead to a significant improvement of performance.
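A much simplified sketch of this cooperation scheme (the clause rating, the period length, and the prover interface are invented; the real system couples full saturation-based provers): each incarnation periodically hands its newly derived clauses to a referee, which forwards only the best-rated ones to the other provers.

    # Invented sketch of referee-based periodic clause exchange between provers.

    class Prover:
        def __init__(self, name):
            self.name, self.derived, self.received = name, [], []
        def run_period(self, steps):
            # stand-in for a number of inference steps of a saturation-based prover
            self.derived = ["%s-clause-%d" % (self.name, i) for i in range(steps)]

    def referee(clauses, limit):
        """Select the 'best' clauses; here the rating is simply clause length."""
        return sorted(clauses, key=len)[:limit]

    def cooperate(provers, periods=3, limit=2):
        for _ in range(periods):
            for p in provers:
                p.run_period(5)
            for p in provers:
                selected = referee(p.derived, limit)
                for q in provers:
                    if q is not p:
                        q.received.extend(selected)   # periodic interchange of results

    a, b = Prover("A"), Prover("B")
    cooperate([a, b])
    print(len(a.received), len(b.received))   # -> 6 6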
The way people live together is moving more and more towards an information and media society. Not least because of worldwide networking, it is possible to have almost any conceivable information delivered to the screen at home within minutes. Everyone thus finds themselves in a certain protective anonymity, but also in a transparency that is as intended as it is alarming. In a sense, everyone classifies the information they disclose, for instance into public, personal, and confidential messages. Precisely here, techniques and methods must be available to protect information intended only for specific recipients from unauthorised access within this anonymous transparency, and to make it accessible only to those who are authorised. Not only society in general has this desire; developments in this area are demanded and promoted in particular by governmental and military institutions. The methods of cryptology are therefore frequently used tools, but as long as there are secret messages, there will be attackers who try to gain unauthorised access to this information. Since the ever-growing power of computer systems favours the "cracking" of encryption methods, ever more secure ciphers have to be adopted. This makes cryptology a highly topical subject at the moment and, in the long run, a timeless research area of mathematics and computer science.
Version and configuration management are central instruments for intellectually mastering complex software developments. In strongly reuse-oriented software development approaches - such as the one provided by the SFB - the notion of a configuration must be extended from traditionally product-oriented artefacts to processes and other development experience. This publication presents such an extended configuration model. In addition, an extension of traditional project planning information is discussed that makes it possible to derive tailored version and configuration management mechanisms before a project starts.
The development of complex software systems is driven by many diverse and sometimes contradictory requirements such as correctness and maintainability of the resulting products, development costs, and time-to-market. To alleviate these difficulties, we propose a development method for distributed systems that integrates different basic approaches. First, it combines the use of the formal description technique SDL with software reuse concepts. This results in the definition of a use-case-driven, incremental development method with SDL patterns as the main reusable artifacts. Experience with this approach has shown that there are several other factors of influence, such as the quality of the reuse artifacts or the experience of the development team. Therefore, we further combined our SDL-pattern approach with an improvement methodology known from the area of experimental software engineering. In order to demonstrate the validity of this integrated approach, we sketch some representative outcomes of a case study.
Manipulating deformable linear objects - Vision-based recognition of contact state transitions
(1999)
A new and systematic approach to machine vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation of the object with respect to an obstacle and are derived from the object image and its features. To this end, the object is segmented from a standard video frame using a fast segmentation algorithm. Several object features are presented which allow recognition of the object's state while it is being manipulated by the robot.
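A minimal sketch of such a pipeline, assuming a grayscale frame given as a NumPy array; the threshold value and the feature set are illustrative assumptions, not the paper's algorithm:

    import numpy as np

    def segment(frame: np.ndarray, threshold: float = 0.5) -> np.ndarray:
        """Fast binary segmentation of a (assumed dark) linear object on a bright background."""
        return (frame < threshold).astype(np.uint8)

    def object_features(mask: np.ndarray) -> dict:
        """A few simple image features from which a contact state could be classified."""
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return {"area": 0}
        return {
            "area": int(mask.sum()),                            # number of object pixels
            "bbox": (int(xs.min()), int(ys.min()),
                     int(xs.max()), int(ys.max())),             # bounding box
            "centroid": (float(xs.mean()), float(ys.mean())),   # center of mass
            "lowest_point": (int(xs[ys.argmax()]), int(ys.max())),  # e.g. a hanging end (image y axis pointing down)
        }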
Using an experience factory is one possible concept for supporting and improving reuse in software development (i.e., reuse of products, processes, quality models, ...). In the context of the Sonderforschungsbereich 501: "Development of Large Systems with Generic Methods" (SFB501), the Software Engineering Laboratory (SE Lab) runs such an experience factory as part of the infrastructure services it offers. The SE Lab also provides several tools to support the planning, developing, measuring, and analyzing activities of software development processes. Among these tools, the SE Lab runs and maintains an experience base, the SFB-EB. When an experience factory is utilized, support for experience base maintenance is an important issue. Furthermore, it might be interesting to evaluate experience base usage with regard to the number of accesses to certain experience elements stored in the database. The same holds for the usage of the tools provided by the SE Lab. This report presents a set of supporting tools that were designed to aid in these tasks. These supporting tools check the experience base's consistency and gather information on the usage of SFB-EB and the tools installed in the SE Lab. The results are processed periodically and displayed as HTML result reports (consistency checking) or bar charts (usage profiles).
Structure and tools of the experiment-specific data area of the SFB501 experience database
(1999)
Software development artifacts must be captured in a goal-oriented manner during the execution of a software project so that they can be prepared for reuse. Within the Sonderforschungsbereich 501, the methodological basis for this is the concept of the experience database. In its experiment-specific data area, all software development artifacts that arise during the life cycle of a project are stored for each development project. In its overarching data area, all those artifacts from the experiment-specific data area that are candidates for reuse in subsequent projects are collected. It has become apparent that systematic access is necessary even for using the data in the experiment-specific data area of the experience database. Systematic access, however, presupposes a normalized structure. In the experiment-specific area, two types of experiments are distinguished: "controlled experiments" and "case studies". This report describes the storage and access structure for the experiment type "case studies". The structure was developed and evaluated on the basis of the experience gained in initial case studies.
Comprehensive reuse and systematic evolution of reuse artifacts as proposed by the Quality Improvement Paradigm (QIP) do not only require tool support for mere storage and retrieval. Rather, an integrated management of (potentially reusable) experience data as well as project-related data is needed. This paper presents an approach exploiting object-relational database technology to implement the QIP-driven reuse repository of the SFB 501. Requirements, concepts, and implementation aspects are discussed and illustrated through a running example, namely the reuse and continuous improvement of SDL patterns for developing distributed systems. Based on this discussion, we argue that object-relational database management systems (ORDBMS) are best suited to implement such a comprehensive reuse repository. It is demonstrated how this technology can be used to support all phases of a reuse process and the accompanying improvement cycle. Although the discussions of this paper are strongly related to the requirements of the SFB 501 experience base, the basic realization concepts, and thereby the applicability of ORDBMS, can easily be extended to similar applications, i.e., reuse repositories in general.
The task of handling non-rigid one-dimensional objects by a robot manipulation system is investigated. To distinguish between different non-rigid object behaviors, five classes of deformable objects from a robotic point of view are proposed. Additionally, an enumeration of all possible contact states of one-dimensional objects with polyhedral obstacles is provided. Finally, the qualitative motion behavior of linear objects is analyzed for stable point contacts. Experiments with different materials validate the analytical results.
We propose an approach to the problem of proof control for our new first-order inductive theorem prover QuodLibet that is characterized by a great deal of flexibility w.r.t. the forms of proof control the prover supports. The approach is based on so-called (proof) tactics, i.e. proof control routines written in a special proof control language named QML. QuodLibet provides a set of tactics (in addition to the elementary inference rules), which range from tactics for trivial simplification steps to tactics representing comprehensive inductive proof strategies. Moreover, QuodLibet allows new tactics that are written by the user in QML to be integrated into the system to dynamically extend its functionality.
On KLUEDO, the Kaiserslautern university-wide electronic document server, members of the University of Kaiserslautern can publish scientific documents in electronic form on the Internet and thus worldwide. The document server was created on the initiative of the departments and the University Library of Kaiserslautern within the GLOBAL INFO preliminary project funded by the Federal Ministry of Education and Research (BMBF). In places, the document server is based on the mathematics preprint server of the University of Kaiserslautern, which was built up during the MathNet project of the German research network (DFN) and the MathBibNet project of the German Research Foundation (DFG). This article outlines the considerations behind the development of the document server in its present form and gives a rough overview of the system architecture. Detailed information and a description of the system can be found in my thesis "Propadeutik metadatenbasierter Publikationsserver: Ontologie und konkordante Implementierung".
This paper deals with the robust manipulation of deformable linear objects such as hoses or wires. We propose manipulation based on the qualitative contact state between the deformable workpiece and a rigid environment. First, we give an enumeration of possible contact states and discuss the main characteristics of each state. Second, we investigate the transitions that are possible between the contact states and derive criteria and conditions for each of them. Finally, we apply the concept of contact states and state transitions to the description of a typical assembly task.
This paper deals with the problem of picking up deformable linear workpieces such as cables or ropes with an industrial robot. First, we give a motivation and a problem definition. Based on a brief conceptual discussion of possible approaches, we derive an algorithm for picking up hanging deformable linear objects using two light barriers as the sensor system. For this hardware, a skill-based approach is described and the parameters and major influence factors are discussed. In an experimental study, the feasibility and reliability under diverse conditions are investigated. The algorithm is found to be very reliable if certain boundary conditions are met.
In this paper, we investigate the efficient simulation of deformable linear objects. Based on the state of the art, we extend the principle of minimizing the potential energy by considering plastic deformation and describe a novel approach for treating workpiece dynamics. The major factors influencing precision and computation time are identified and investigated experimentally. Finally, we discuss the use of parallel processing to reduce the computation time.
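A minimal sketch of the basic principle (static case only): the equilibrium shape of a linear object fixed at both ends is found by minimizing gravitational plus bending energy, with a stiff penalty enforcing inextensibility. The discretization, energy terms, and all parameters are illustrative assumptions, not the paper's model:

    import numpy as np
    from scipy.optimize import minimize

    def equilibrium_shape(n=20, length=1.0, span=0.8, mass=0.1, k_bend=0.01, g=9.81):
        """Static shape of a deformable linear object (e.g. a cable) whose two ends
        are fixed a distance `span` apart, obtained by minimizing potential energy."""
        seg = length / (n - 1)                  # rest length of one segment
        x0 = np.linspace(0.0, span, n)          # initial guess: straight line
        z0 = np.zeros(n)
        p0 = np.concatenate([x0, z0])

        def total_energy(p):
            x, z = p[:n], p[n:]
            dx, dz = np.diff(x), np.diff(z)
            seg_len = np.hypot(dx, dz)
            e_grav = (mass / n) * g * z.sum()                    # gravity (z pointing up)
            e_stretch = 1e4 * np.sum((seg_len - seg) ** 2)       # inextensibility penalty
            e_bend = k_bend * np.sum(np.diff(dx / seg_len) ** 2 +
                                     np.diff(dz / seg_len) ** 2) # discrete bending term
            e_clamp = 1e6 * (x[0]**2 + z[0]**2 +                 # fix both endpoints
                             (x[-1] - span)**2 + z[-1]**2)
            return e_grav + e_stretch + e_bend + e_clamp

        res = minimize(total_energy, p0, method="L-BFGS-B")
        return res.x[:n], res.x[n:]   # x- and z-coordinates of the equilibrium shape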
A new problem for the automated off-line programming of industrial robot applications is investigated. Multi-Goal Path Planning is the task of finding a collision-free path that connects a set of goal poses while minimizing, e.g., the total path length. Our solution is based on an earlier reported path planner for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. To control the path planner, four different goal selection methods are introduced and compared. While the Random and the Nearest Pair Selection methods can be used with any path planner, the Nearest Goal and the Adaptive Pair Selection methods are favorable for our planner. With the latter two goal selection methods, the Multi-Goal Path Planning task can be accelerated significantly, because they are able to automatically solve the simplest path planning problems first. In summary, compared to Random or Nearest Pair Selection, this new Multi-Goal Path Planning approach results in a further reduction of the cost of the programming phase.
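A minimal sketch of a greedy "nearest goal first" selection as one illustrative reading of such a goal selection method; the Euclidean metric and the function names are assumptions, not the planner's internals:

    import math

    def nearest_goal_order(start, goals):
        """From the current pose, always pick the closest not-yet-connected goal next;
        returns the sequence in which a single-query path planner would be invoked."""
        order, current = [], start
        remaining = list(range(len(goals)))
        while remaining:
            nxt = min(remaining, key=lambda i: math.dist(current, goals[i]))
            order.append(nxt)
            current = goals[nxt]
            remaining.remove(nxt)
        return order

    # usage: nearest_goal_order((0.0, 0.0, 0.0), [(1, 0, 0), (0.2, 0.1, 0), (2, 2, 1)])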
Precisely in a time shaped by globalization tendencies such as today - in which both the supply of and the demand for information are growing exponentially - publication servers are gaining ever greater importance: the worldwide availability and searchability of electronically stored resources (at minimal cost) are central demands of the information society and a driving force behind international information infrastructures. When designing modern publication servers, global thinking and internationally accepted concepts should therefore be in the foreground from the outset. The goal of the present work was to support the construction of a document server for the University of Kaiserslautern that meets these demands. Within this work, supervised by Dr. Wolfgang Lenski (Department of Computer Science), I therefore prepared, in the first part, a study of current needs and solution approaches as a source of orientation and recorded the most essential findings: after formulating a catalogue of requirements for servers, resources, and retrieval, the internationally widespread concept of metadata as resource descriptions is presented. This serves to support efficient and qualified retrieval procedures, which find their application in more complex systems such as the resource discovery system Harvest developed at the University of Colorado. Building on these findings, the second part shows, by way of example, the application of the presented concepts in the concrete realization of the document server KLUEDO of the University of Kaiserslautern. I restrict myself here essentially to the part of the system architecture that I designed and to the metadata management system that was developed.
Rules are an important knowledge representation formalism in constructive problem solving. On the other hand, object orientation is an essential key technology for maintaining large knowledge bases as well as software applications. Trying to take advantage of the benefits of both paradigms, we integrated Prolog and Smalltalk to build a common base architecture for problem solving. This approach has proven to be useful in the development of two knowledge-based systems for planning and configuration design (CAPlan and Idax). Both applications use Prolog as an efficient computational source for the evaluation of knowledge represented as rules.
Problem specifications for classical planners based on a STRIPS-like representation typically consist of an initial situation and a partially defined goal state. Hierarchical planning approaches, e.g., Hierarchical Task Network (HTN) planning, have richer representations not only for actions but also for planning problems. The latter are defined by giving an initial state and an initial task network in which the goals can be ordered with respect to each other. However, studies with a specification of the process planning domain for the plan-space planner CAPlan (an extension of SNLP) have shown that even without a hierarchical domain representation, typical properties called goal orderings can be identified in this domain that allow more efficient and correct case retrieval strategies for the case-based planner CAPlan/CbC. Motivated by this, this report describes an extension of the classical problem specifications for plan-space planners like SNLP and its descendants. These extended problem specifications allow a partial order on the planning goals to be defined, which can be interpreted as an order in which the solution plan should achieve the goals. These goal orderings can be shown, both theoretically and empirically, to improve planning performance not only for case-based but also for generative planning. As a second, different use, we show how goal orderings can be employed to address the control problem of partial-order planners. These improvements can be best understood with a refinement of Barrett's and Weld's extended taxonomy of subgoal collections.
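A minimal sketch of how a partial order on goals could steer goal selection in an SNLP-style planner; the representation of goals and orderings is an illustrative assumption, not the report's formalism:

    def orderable_goals(open_goals, achieved, orderings):
        """Return the open goals whose predecessors (w.r.t. the given partial order)
        have already been achieved, i.e. the goals the planner may work on next.
        `orderings` is a set of pairs (g1, g2) meaning 'achieve g1 before g2'."""
        return [g for g in open_goals
                if all(pre in achieved for (pre, post) in orderings if post == g)]

    # usage: with orderings {("clamped", "drilled"), ("drilled", "deburred")}, only
    # "clamped" is selectable as long as nothing has been achieved yet.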
Abstraction is one of the most promising approaches to improving the performance of problem solvers. In several domains, abstraction by dropping sentences of a domain description - as used in most hierarchical planners - has proven useful. In this paper we present examples which illustrate significant drawbacks of abstraction by dropping sentences. To overcome these drawbacks, we propose a more general view of abstraction involving a change of the representation language. We have developed a new abstraction methodology and a related sound and complete learning algorithm that allows the complete change of the representation language of planning cases from concrete to abstract. However, to achieve a powerful change of the representation language, the abstract language itself as well as rules which describe admissible ways of abstracting states must be provided in the domain model. This new abstraction approach is the core of PARIS (Plan Abstraction and Refinement in an Integrated System), a system in which abstract planning cases are automatically learned from given concrete cases. An empirical study in the domain of process planning in mechanical engineering shows significant advantages of the proposed reasoning from abstract cases over classical hierarchical planning.
We present two methods that allow previous experience to be exploited in the area of automated deduction. The first method adapts (learns) the parameters of a heuristic employed for controlling the application of inference rules in order to find a known proof with as little redundant search effort as possible. Adaptation is accomplished by a genetic algorithm. A heuristic learned that way can then be profitably used to solve similar problems. The second method attempts to re-enact a known proof in a flexible manner in order to solve an unknown problem whose proof is believed to lie in (close) vicinity. The experimental results obtained with an equational theorem prover show that these methods not only allow for impressive speed-ups, but also make it possible to handle problems that were out of reach before.
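A minimal sketch of the first method under illustrative assumptions: a heuristic is represented as a weight vector, its fitness is the redundant search effort measured when replaying the known proof, and a simple genetic algorithm adapts the weights. None of the names, parameters, or operators are taken from the paper:

    import random

    def adapt_heuristic(search_effort, dim=5, pop_size=20, generations=50,
                        mutation_rate=0.2):
        """Genetic algorithm over weight vectors of a search heuristic.
        `search_effort(weights)` is assumed to replay the known proof and return
        the number of redundant inferences (smaller is better)."""
        pop = [[random.uniform(0.0, 10.0) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=search_effort)                  # best (lowest effort) first
            parents = pop[:pop_size // 2]                # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, dim)           # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mutation_rate:      # small random mutation
                    i = random.randrange(dim)
                    child[i] += random.gauss(0.0, 1.0)
                children.append(child)
            pop = parents + children
        return min(pop, key=search_effort)               # best weight vector found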
This technical report is a compilation of several papers on the task of solving diagnostic problems with the help of topology-preserving maps. It first reviews the application of Kohonen's Self-Organizing Feature Map (SOFM) to a technical diagnosis task, namely fault detection in CNC machines with the KoDiag system [RW93], [RW94]. For the problems that emerge when coding attribute values, we then introduce fuzzy coding, similarity assignment, and weight updating schemes for three crucial data types (continuous values, ordered and unordered symbols). These techniques result in a SOFM-type network based on user-defined local similarities, which is thus able to incorporate a priori knowledge about the domain [Rah95].
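For reference, the standard Kohonen weight update underlying such SOFM variants is (textbook form, not quoted from the report):

    w_j(t+1) = w_j(t) + \alpha(t) \, h_{c(x),j}(t) \, \bigl(x(t) - w_j(t)\bigr)

where c(x) is the best-matching neuron for input x(t), h is the neighborhood function, and \alpha is the learning rate. The fuzzy coding and similarity assignment schemes mentioned above can be read as replacing the implicit Euclidean comparison between x and w by user-defined local similarities.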
Real-world planning tasks like manufacturing process planning often do not allow all of the relevant knowledge to be formalized. In particular, preferences between alternatives are hard to acquire but have a high influence on the efficiency of the planning process and the quality of the solution. We describe the essential features of the CAPlan planning architecture, which supports cooperative problem solving to narrow the gap caused by absent preference and control knowledge. The architecture combines an SNLP-like base planner with mechanisms for the explicit representation and maintenance of dependencies between planning decisions. The flexible control interface of CAPlan allows a combination of autonomous and interactive planning in which a user can participate in the problem solving process. In particular, the rejection of arbitrary decisions by a user and dependency-directed backtracking mechanisms are supported by CAPlan.
In this report, we first propose a dichotomy of topology-preserving network models based on the degree to which the structure of a network is determined by the given task. We then take a closer look at one of those groups and investigate the information that is contained in the graph structure of a topology-preserving neural network. The task we have in mind is the use of the network's topology for the retrieval of nearest neighbors of a neuron or a query, as it is of importance, e.g., in medical diagnosis systems. In general considerations, we propose certain properties of the structure and formulate the respective expectable results of network interpretation. From the results we conclude that both topology preservation and neuron distribution are highly influential for the network semantics. After a short survey of hierarchical models for data analysis, we propose a new network model that fits both needs. This so-called SplitNet model dynamically constructs a hierarchically structured network that provides interpretability through neuron distribution, network topology, and the hierarchy of the network layers. We present empirical results for this new model and demonstrate its application in the medical domain of nerve lesion diagnosis. Furthermore, we explain how the interpretation of the hierarchy in models like SplitNet can be understood in the context of integrating symbolic and connectionist learning.
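A minimal sketch of retrieving nearest neighbors purely from the network's graph structure, i.e. by edge hops instead of distances in input space; the graph representation and all names are illustrative assumptions:

    from collections import deque

    def topological_neighbors(graph, start, k):
        """Breadth-first search over the topology-preserving network's graph:
        return up to k neurons closest to `start` in terms of edge hops.
        `graph` maps each neuron id to the ids of its adjacent neurons."""
        visited, order, queue = {start}, [], deque([start])
        while queue and len(order) < k:
            node = queue.popleft()
            for nb in graph[node]:
                if nb not in visited:
                    visited.add(nb)
                    order.append(nb)
                    queue.append(nb)
        return order[:k]

    # usage: topological_neighbors({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}, 0, 2)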
Within the Sonderforschungsbereich SFB314, project X9 "Learning and Analogy in Technical Expert Systems", the usability of case-based reasoning techniques in knowledge-based systems was investigated. Process planning for rotationally symmetric workpieces was chosen as the prototypical application domain. This contribution describes a model of process planning that takes into account the various planning methods which had so far been treated as independent. On the basis of a model-based knowledge acquisition from process plans available in companies, one part of process planning, namely clamping (setup) planning, is elaborated in detail. The applicability was demonstrated by a prototypical realization.
Freivalds, Karpinski and Smith [8] explored a special type of learning in the limit: identification of an unknown concept (function) by eliminating (erasing) all but one possible hypothesis (this type of learning is called co-learning). The motivation behind learning by erasing lies in the process of human and automated computer learning: often it is much easier to discard incorrect solutions than to come up with the correct one. In Gödel numberings, any learnable family can be learned by an erasing strategy. In this paper we concentrate on co-learning of minimal programs. We show that co-learning of minimal programs, as originally defined, is significantly weaker than learning minimal programs in Gödel numberings. In order to enhance the learning power