Kaiserslautern - Fachbereich Informatik
Year of publication
- 1999 (267)
- 1996 (50)
- 1994 (49)
- 1995 (48)
- 1998 (38)
- 1997 (35)
- 2016 (25)
- 2021 (25)
- 2019 (23)
- 1993 (22)
- 2015 (22)
- 2022 (22)
- 2001 (21)
- 2023 (20)
- 2007 (19)
- 2013 (18)
- 2018 (18)
- 2002 (17)
- 2003 (16)
- 2020 (16)
- 2014 (15)
- 2012 (14)
- 1992 (13)
- 2000 (13)
- 2004 (12)
- 2006 (11)
- 2009 (11)
- 2008 (9)
- 2005 (8)
- 2017 (8)
- 1991 (7)
- 2010 (7)
- 2011 (5)
- 1979 (2)
- 1980 (1)
- 1983 (1)
- 1990 (1)
- 2024 (1)
Document Type
- Preprint (346)
- Doctoral Thesis (229)
- Report (139)
- Article (122)
- Master's Thesis (45)
- Study Thesis (13)
- Conference Proceeding (8)
- Bachelor Thesis (4)
- Part of a Book (2)
- Habilitation (2)
Has Fulltext
- yes (910)
Keywords
- AG-RESY (64)
- PARO (31)
- Case-Based Reasoning (20)
- Visualisierung (20)
- SKALP (16)
- CoMo-Kit (15)
- Fallbasiertes Schliessen (12)
- RODEO (12)
- Robotik (12)
- HANDFLEX (11)
This paper outlines an implemented system called PROVERB that explains machine-found natural deduction proofs in natural language. Unlike earlier work, we pursue a reconstructive approach. Based on the observation that natural deduction proofs are at too low a level of abstraction compared with proofs found in mathematical textbooks, we first define the concept of so-called assertion level inference rules. Derivations justified by these rules can intuitively be understood as the application of a definition or a theorem. Then an algorithm is introduced that abstracts machine-found ND proofs using the assertion level inference rules. Abstracted proofs are then verbalized into natural language by a presentation module. The most significant feature of the presentation module is that it combines standard hierarchical text planning with techniques that locally organize argumentative texts based on the derivation relation under the guidance of a focus mechanism. The behavior of the system is demonstrated with the help of a concrete example throughout the paper.
We describe a technique to make application programs fault tolerant. This technique is based on the concept of checkpointing from an active program to one or more passive backup copies which serve as an abstraction of stable memory. If the primary copy fails, one of the backup copies takes over and resumes processing service requests. After each failure a new backup copy is created in order to restore the replication degree of the service. All mechanisms necessary to achieve and maintain fault tolerance can be added automatically to the code of a non-fault-tolerant server, thus making fault tolerance completely transparent for the application programmer.
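The technique described is the classic primary-backup pattern. Below is a minimal Python sketch of the idea, assuming in-process stubs stand in for remote backup processes; the names (`BackupStub`, `PrimaryServer`, `failover`) are hypothetical, not the paper's API.

```python
import pickle

class BackupStub:
    """Hypothetical stand-in for a remote backup process; in a real
    system send_checkpoint() would ship the state over the network."""
    def __init__(self):
        self.state = None

    def send_checkpoint(self, blob):
        self.state = blob  # the backup keeps only the latest checkpoint

class PrimaryServer:
    def __init__(self, backups, state=0):
        self.backups = backups
        self.state = state  # toy application state: a counter

    def handle_request(self, increment):
        self.state += increment              # serve the request
        blob = pickle.dumps(self.state)      # checkpoint current state
        for b in self.backups:               # replicate to all backups
            b.send_checkpoint(blob)
        return self.state

def failover(backups):
    """Promote the first backup, restoring its last checkpoint; a real
    system would also spawn a fresh backup to restore the replication degree."""
    restored = pickle.loads(backups[0].state)
    return PrimaryServer(backups[1:], state=restored)

backups = [BackupStub(), BackupStub()]
primary = PrimaryServer(backups)
primary.handle_request(5)
new_primary = failover(backups)   # pretend the primary crashed
print(new_primary.state)          # -> 5
```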
This article presents the operation of a tool that supports the analysis of feature interactions in intelligent (telephone) networks. The tool is based on a formal solution approach we developed, which consists of a suitable specification style, a formal criterion for detecting feature interactions, and a method for resolving the detected feature interactions. The tool performs a static analysis of Estelle specifications, detecting potential feature interactions as well as non-executable transitions. In addition, it can remove the detected non-executable transitions from the specification as an optimization. We first briefly explain the underlying approach and then describe its application to Estelle by means of the tool's operation.
Like other industries, the aircraft industry is under high pressure to meet drastically increased customer goals for market price and flexibility, while at the same time shareholders demand short-term profit guarantees. Daimler-Benz Aerospace Airbus has met this challenge using business process reengineering methods, which led to a total company restructuring from functional orientation to customer and product orientation. This paper shows how business process modelling techniques have been applied. In particular, concurrent engineering methods are used to integrate the various disciplines involved, from market analysts through design and commercial staff to industrialization staff.
In recent years the demand for business process modelling (BPM) has become apparent in many different communities, e.g. information systems engineering, requirements engineering [KiB94], software engineering, and knowledge engineering (e.g. [BrV94], [SWH+94]). This suggests aiming at a unifying view of business process modelling across all these disciplines. To achieve the business goals, problems which obstruct these goals must be solved. This can be done either by restructuring the business process, by applying standard software, or by developing individual software components such as knowledge-based systems (KBSs). To be able to model business goals and to analyse problems occurring during the business processes, these processes, including organisational structures and activities, have to be modelled. This is also true when building a KBS in an enterprise environment. Because the KBS is only a small part of the whole business organisation, it must be embedded into, or at least linked to, all relevant business processes, i.e. it should not be a stand-alone solution. For this purpose we extend the MIKE approach [AFS96] in the BMBF project WORKS (Work Oriented Design of Knowledge Systems) by offering business models for modelling relevant aspects of an enterprise. To be able to define an integrated framework with other possibilities for improving an enterprise (e.g. information systems engineering), we determine the standard views of an enterprise. Next we define the views that are necessary for developing a KBS.
In this paper, we propose the PARIS approach for improving complex problem solving by learning from previous cases. In this approach, abstract planning cases are learned from given concrete cases. For this purpose, we have developed a new abstraction methodology that allows the representation language of a planning case to be changed completely, where the concrete and abstract languages are given by the user. Furthermore, we present a learning algorithm which is correct and complete with respect to the introduced model. An empirical study in the domain of process planning in mechanical engineering shows significant improvements in planning efficiency through learning abstract cases, while an explanation-based learning method yields only a very slight improvement.
Case-based problem solving can be significantly improved by applying domain knowledge (as opposed to problem-solving knowledge), which can be acquired with reasonable effort, to derive explanations of the correctness of a case. Such explanations, constructed on several levels of abstraction, can be employed as the basis for similarity assessment as well as for adaptation by solution refinement. The general approach of explanation-based similarity can be applied to different real-world problem-solving tasks such as diagnosis and planning in technical areas. This paper presents the general idea as well as two specific, completely implemented realizations for a diagnosis and a planning task.
Complex problem solving can be substantially improved by the reuse of experience from previously solved problems. This requires that case libraries of successful problem solutions be transformed into problem-solving knowledge with high utility, i.e. knowledge which yields large savings in search time, a high application probability, and low matching costs in the respective performance component. Planning can be improved by explanation-based learning (EBL) of abstract plans from detailed, successfully solved planning problems. Abstract plans, expressed in well-established terms of the domain, serve as useful problem decompositions which can drastically reduce the planning complexity. Abstractions which are valid for a class of planning cases rather than for a single case ensure a successful application in a larger spectrum of new situations. The hierarchical organization of the learned shared abstractions keeps matching costs low. The presented S-PABS procedure is an EBL procedure in which abstraction, learning from multiple examples, and hierarchical clustering are combined to automatically construct a hierarchy of shared abstract plans by analyzing concrete planning cases. A specific planning procedure has been designed to solve new planning problems guided by the knowledge learned by S-PABS. By allowing feedback from this planning procedure to the learning component, the integrated system shows an increase in performance through past problem solving.
Although skeletal plan refinement is used in several planning systems, a procedure for the automatic acquisition of such high-level plans has not yet been developed. The proposed explanation-based knowledge acquisition procedure constructs a skeletal plan automatically from a sophisticated concrete planning case. The classification of that case into a well-described class of problems serves as an instrument for adjusting the applicability of the acquired skeletal plans to that class. The four phases of the proposed procedure are as follows: In the first phase, the execution of the source plan is simulated, and explanations for the effects of the executed operators are constructed. In the second phase, these explanations are generalized with respect to a criterion of operationality which specifies the vocabulary for defining abstract operators for the skeletal plan. The third phase, a dependency analysis of the resulting operator effects, unveils the interactions of the concrete plan which are substantial for the specified class. In the fourth phase, the concept descriptions for the abstract operators of the skeletal plan are formed by collecting and normalizing the important constraints for each operation that were indicated by the dependencies. With this procedure, sophisticated planning solutions from human experts can be generalized into skeletal plans and consequently be reused by a planning system in novel situations.
Abstraction is one of the most promising approaches to improving the performance of problem solvers. Abstraction by dropping sentences of a domain description, as used in most hierarchical planners, is known to be very representation-dependent. To overcome this drawback, we propose a more general view of abstraction involving a change of representation language. We have developed a new abstraction methodology and a related sound and complete learning algorithm that allow the representation language of planning cases to be changed completely from concrete to abstract.
Recently, the use of abstraction in case-based reasoning (CBR) has become more and more popular. The basic idea is to supply a CBR system with cases at many different levels of abstraction. When a new problem must be solved, one (or several) 'appropriate' concrete or abstract cases are retrieved from the case base, and the solution that each case contains is reused to derive a solution for the current problem, e.g. by filling in the details that a retrieved case at some higher level of abstraction does not contain. A major problem with this approach is that for a given new problem, usually several cases, e.g. from different levels of abstraction, could be reused to solve it. Choosing a wrong abstract case can slow down the problem-solving process or even prevent the problem from being solved.
Hierarchical planning can be improved by explanation-based learning (EBL) of abstract plans from detailed, successfully solved planning problems. Abstract plans, expressed in well-established terms of the domain, serve as useful problem decompositions which can drastically reduce the planning complexity. The learned plan abstractions must be valid for a class of planning cases rather than for a single case, to ensure their successful application in a larger spectrum of new situations. A hierarchical organization of the newly learned knowledge must be achieved to overcome the utility problem in EBL. This paper presents a new formal model of shared plan abstraction and the closely related explanation-based procedure S-PABS. Unlike other approaches to plan abstraction, our model allows a totally different terminology to be introduced at the abstract level. Finally, an unsupervised incremental procedure for constructing a hierarchy of shared abstract plans is proposed, as a kind of concept formation over explanations.
A growing share of all software development project work is being done by geographically distributed teams. To satisfy shorter product design cycles, expert team members for a development project may need to be recruited globally. Yet to avoid extensive travelling or replacement costs, distributed project work is preferred. Current-generation software engineering tools and associated systems, processes, and methods were for the most part developed to be used within a single enterprise. Major innovations have lately been introduced to enable groupware applications on the Internet to support global collaboration. However, their deployment for distributed software projects requires further research. In particular, groupware methods must seamlessly be integrated with project and product management systems to make them attractive for industry. In this position paper we outline the major challenges concerning distributed (virtual) software projects. Based on our experiences with software process modeling and enactment environments, we then propose approaches to solve those challenges.
This thesis presents the development of a tool that supports the analysis of feature interactions in intelligent networks. It is based on the formal description technique Estelle; using a special specification style, feature interactions can be detected through certain interdependencies between transitions of different features (among them non-determinism). The goal is the static detection and logging of these interdependencies as well as the removal of non-executable transitions for runtime optimization. To this end, the theoretical possibilities for detecting these interdependencies are examined first. Then the methods and algorithms employed are presented on the basis of the implementation of the analysis tool, and finally the use of the tool, which is based on the Estelle compiler PET, is explained.
Coordinating distributed processes, especially engineering and software design processes, has been a research topic for some time now. Several approaches have been published that aim at coordinating large projects in general, and large software development processes in particular. However, most of these approaches focus on the technical part of the design process and omit management activities like planning and scheduling the project, or monitoring it during execution. In this paper, we focus on coordinating the management activities that accompany the technical software design process. We state the requirements for a Software Engineering Environment (SEE) accommodating management, and we describe a possible architecture for such an SEE.
This paper describes the architecture and concept of operation of a Framework for Adaptive Process Modeling and Execution (FAME). The research addresses the absence of robust methods for supporting the software process management life cycle. FAME employs a novel, model-based approach in providing automated support for different activities in the software development life cycle including project definition, process design, process analysis, process enactment, process execution status monitoring, and execution status-triggered process redesign. FAME applications extend beyond the software development domain to areas such as agile manufacturing, project management, logistics planning, and business process reengineering.
CORBA Lacks Venom
(1999)
Distributed objects bring to distributed computing such desirable properties as modularisation, abstraction, and reuse, easing the burden of development and maintenance by diminishing the gap between implementation and real-world objects. Distributed objects, however, need a consistent framework in which inter-object communication may take place. The Common Object Request Broker Architecture (CORBA) is a distributed object standard. CORBA's primary protocol, the Internet Inter-ORB Protocol (IIOP), runs over TCP/IP and is limited to blocking synchronous remote procedure calls, which is inappropriate for systems requiring timing guarantees.
Multi-User Dimensions (MUDs) [3], and their Object-Oriented versions (MOOs) [6], are geographically distributed, programmable client-server systems that support the cooperation of multiple users according to the virtual environment metaphor. In this metaphor, users are allowed to concurrently navigate in a set of "virtual" rooms. Rooms are interconnected through doors and may contain objects. Users are allowed to explore the contents of rooms, create and manipulate objects, and contact other users visiting the same room.
This paper investigates the suitability of the mobile agents approach to the problem of integrating a collection of local DBMS into a single heterogeneous large-scale distributed DBMS. The paper proposes a model of distributed transactions as a set of mobile agents and presents the relevant execution semantics. In addition, the mechanisms which are needed to guarantee the ACID properties in the considered environment are discussed.
Although work processes, like software processes, include a number of process aspects such as defined phases and deadlines, they cannot be planned in detail. However, the advantages of today's process management, such as effective document routing and timeliness, can only be achieved with detailed models of work processes. This paper suggests a concept that uses detailed process models in conjunction with the possibility of defining the way a process model determines the work of individuals. Based on the WAM approach, which allows workers to choose methods for their tasks according to the situation, we describe features to carry out planned parts of a process with workers always being able to invoke exception mechanisms. These mechanisms are based on the modelling paradigm of linked abstraction workflows (LAWs), which describe workflows at different levels of abstraction and classify refinements of tasks by the way lower-level tasks can be used.
Concept mapping is a simple and intuitive visual form of knowledge representation. Concept maps can be categorized as informal or formal, where the latter is characterized by implementing a semantics model constraining their components. Software engineering is a domain that has successfully adopted formal concept maps to visualize and specify complex systems. Automated tools have been implemented to support these models although their semantic constraints are hardcoded within the systems and hidden from users. This paper presents the Constraint Graphs and jKSImapper systems. Constraint Graphs is a flexible and powerful graphical system interface for specifying concept mapping notations. In addition, jKSImapper is a multi-user concept mapping editor for the Internet and the World Wide Web. Together, these systems aim to support user-definable formal concept mapping notations and distributed collaboration on the Internet and the World Wide Web.
We present two techniques for reasoning from cases to solve classification tasks: induction and case-based reasoning. We contrast the two technologies (which are often confused) and show how they complement each other. Based on this, we describe how they are integrated in one single platform for reasoning from cases: the Inreca system.
While symbolic learning approaches encode the knowledge provided by the presentation of the cases explicitly in a symbolic representation of the concept, e.g. formulas, rules, or decision trees, case-based approaches describe learned concepts implicitly by a pair (CB, d), i.e. by a set CB of cases and a distance measure d. Given the same information, the symbolic as well as the case-based approach compute a classification when a new case is presented. This poses the question whether there are any differences concerning the learning power of the two approaches. In this work we study the relationship between the case base, the measure of distance, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the conjecture of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
This paper presents a brief overview of the INRECA-II methodology for building and maintaining CBR applications. It is based on the experience factory and the software process modeling approach from software engineering. CBR development and maintenance experience is documented using software process models and stored in a three-layered experience packet.
Although several systematic analyses of existing approaches to adaptation have been published recently, a general formal adaptation framework is still missing. This paper presents a step in the direction of developing such a formal model of transformational adaptation. The model is based on the notion of the quality of a solution to a problem, where quality is meant in a more general sense and can also denote some kind of appropriateness, utility, or degree of correctness. Adaptation knowledge is then defined in terms of functions transforming one case into a successor case. The notion of quality provides us with a semantics for adaptation knowledge and allows us to define terms like soundness, correctness, and completeness. In this view, adaptation (and even the whole CBR process) appears to be a special instance of an optimization problem.
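Reading adaptation as an optimization problem suggests a simple greedy search over transformation functions guided by a quality function. The sketch below is a toy instance of that view, assuming a user-supplied operator set and quality measure; it is not the paper's formal model.

```python
def adapt(case, transforms, quality, max_steps=100):
    """Greedy transformational adaptation: repeatedly apply the
    transformation that most improves solution quality, stopping
    at a local optimum."""
    current = case
    for _ in range(max_steps):
        candidates = [t(current) for t in transforms]
        if not candidates:
            break
        best = max(candidates, key=quality)
        if quality(best) <= quality(current):
            break  # no transformation improves quality: local optimum
        current = best
    return current

# Toy usage: 'cases' are numbers, quality peaks at 10.
transforms = [lambda x: x + 1, lambda x: x - 1]
print(adapt(3, transforms, quality=lambda x: -abs(x - 10)))  # -> 10
```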
Taxonomies occur quite often when defining attribute types to be used in the case representation. The symbolic values at any node of the taxonomy tree are used as attribute values in a case or a query. A taxonomy type represents a relationship between the symbols through their position within the taxonomy tree, which expresses knowledge about the similarity between the symbols. This paper analyzes several situations in which taxonomies are used in different ways and proposes a systematic way of specifying local similarity measures for taxonomy types. The proposed similarity measures have a clear semantics and are easy to compute at runtime.
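One common way to give taxonomy types a computable local similarity is to score two values by the depth of their deepest common ancestor. The sketch below illustrates that idea; this particular normalization is an assumption, not necessarily one of the measures the paper proposes.

```python
class TaxNode:
    """Node in a taxonomy tree of symbolic attribute values."""
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.depth = 0 if parent is None else parent.depth + 1

def ancestors(node):
    """Yield the node and all of its ancestors up to the root."""
    while node is not None:
        yield node
        node = node.parent

def tax_sim(a, b):
    """Local similarity from the depth of the deepest common ancestor,
    normalized by the deeper of the two values (1.0 for identical values)."""
    if a is b:
        return 1.0
    common = set(ancestors(a)) & set(ancestors(b))
    deepest = max(n.depth for n in common)  # the root is always common
    return deepest / max(a.depth, b.depth)

# Toy taxonomy: vehicle -> car -> {sedan, coupe}; vehicle -> truck
vehicle = TaxNode("vehicle")
car, truck = TaxNode("car", vehicle), TaxNode("truck", vehicle)
sedan, coupe = TaxNode("sedan", car), TaxNode("coupe", car)
print(tax_sim(sedan, coupe))  # 0.5: they share 'car' at depth 1 of 2
print(tax_sim(sedan, truck))  # 0.0: only the root is shared
```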
This paper motivates the necessity of support for negotiation during sales support on the Internet within Case-Based Reasoning solutions. Different negotiation approaches are discussed and a general model of the sales process is presented. Further, the traditional CBR cycle is modified in such a way that iterative retrieval during a CBR consulting session is covered by the new model. Several general characteristics of negotiation are described, and a case study is shown where preliminary approaches are used to negotiate with a customer about his demands and available products in a 'CBR-based' Electronic Commerce solution.
Fast communication between computers requires runtime-efficient implementations of protocol specifications. The conventional layering causes high costs. In this project, an alternative specification form, the method of the structured product automaton, was examined using OSI layers 5 and 6 as an example. The effort for creating and maintaining the specification and the runtime efficiency of the resulting implementation were compared with several other specification forms and evaluated. The method of the structured product automaton proved to be a suitable specification style.
Case-Based Reasoning for Decision Support and Diagnostic Problem Solving: The INRECA Approach
(1995)
INRECA offers tools and methods for developing, validating, and maintaining decision support systems. INRECA's basic technologies are inductive and case-based reasoning, namely KATE-INDUCTION (cf., e.g., Manago, 1989; Manago, 1990) and S3-CASE, a software product based on PATDEX (cf., e.g., Wess, 1991; Richter & Wess, 1991; Althoff & Wess, 1991). Induction extracts decision knowledge from case databases. It brings to light patterns among cases and helps monitoring trends over time. Case-based reasoning relates the engineer's current problem to past experiences.
Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question whether there are any differences concerning the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
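A minimal illustration of a concept held implicitly by a pair (CB, sim): a 1-nearest-neighbour classifier over a toy boolean domain. The Hamming similarity and example cases are assumptions for illustration, not the paper's construction.

```python
def classify(query, case_base, sim):
    """1-nearest-neighbour: the concept lives implicitly in the
    pair (case_base, sim) instead of an explicit formula or rule."""
    _, label = max(case_base, key=lambda cl: sim(query, cl[0]))
    return label

# Toy boolean domain: cases are attribute tuples with a +/- label.
case_base = [((1, 1, 0), "+"), ((0, 1, 1), "-")]
hamming_sim = lambda x, y: sum(a == b for a, b in zip(x, y))
print(classify((1, 0, 0), case_base, hamming_sim))  # -> '+'
```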
Dynamic Lambda Calculus
(1999)
The goal of this paper is to lay a logical foundation for discourse theories by providing an algebraic foundation of compositional formalisms for discourse semantics as an analogon to the simply typed λ-calculus. Just as that can be specialized to type theory by simply providing a special type for truth values and postulating the quantifiers and connectives as constants with fixed semantics, the proposed dynamic λ-calculus DLC can be specialized to λ-DRT by essentially the same measures, yielding a much more principled and modular treatment of λ-DRT than before; DLC is also expected to eventually provide a conceptually simple basis for studying higher-order unification for compositional discourse theories. Over the past few years, there have been a series of attempts [Zee89, GS90, EK95, Mus96, KKP96, Kus96] to combine the Montagovian type-theoretic framework [Mon74] with dynamic approaches, such as DRT [Kam81]. The motivation for these developments is to obtain a general logical framework for discourse semantics that combines compositionality and dynamic binding. Let us look at an example of compositional semantics construction in λ-DRT, which is one of the above formalisms [KKP96, Kus96]. By the use of β-reduction we arrive at a first-order DRT representation of the sentence "A_i man sleeps" (i denoting an index for anaphoric binding).
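The abstract breaks off before showing the reduction itself. As a hedged illustration in plain Montague style (not the DRT box notation the paper actually uses), β-reducing the generalized-quantifier translation of the indefinite against the noun and the verb yields:

```latex
(\lambda P.\, \lambda Q.\, \exists x.\, P(x) \wedge Q(x))\;(\mathit{man})\;(\mathit{sleep})
\;\longrightarrow_{\beta}\;
\exists x.\, \mathit{man}(x) \wedge \mathit{sleep}(x)
```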
The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing search spaces. This suggests that sort information can be employed in higher-order theorem proving with similar results. This paper develops a sorted λ-calculus suitable for automatic theorem proving applications. It extends the simply typed λ-calculus by a higher-order sort concept that includes term declarations and functional base sorts. The term declaration mechanism studied here is powerful enough to subsume subsorting as a derived notion and therefore gives a justification for the special form of subsort inference. We present a set of transformations for sorted (pre-)unification and prove the nondeterministic completeness of the algorithm induced by these transformations.
Higher-Order Tableaux
(1999)
Even though higher-order calculi for automated theorem proving are rather old, tableau calculi have not been investigated yet. This paper presents two free variable tableau calculi for higher-order logic that use higher-order unification as the key inference procedure. These calculi differ in the treatment of the substitutional properties of equivalences. The first calculus is equivalent in deductive power to the machine-oriented higher-order refutation calculi known from the literature, whereas the second is complete with respect to Henkin's general models.
Many mathematical proofs are hard to generate for humans and even harder for automated theorem provers. Classical techniques of automated theorem proving involve the application of basic rules, of built-in special procedures, or of tactics. Melis (Melis 1993) introduced a new method for analogical reasoning in automated theorem proving. In this paper we show how the derivational analogy replay method is related and extended to encompass analogy-driven proof plan construction. The method is evaluated by showing the proof plan generation of the Pumping Lemma for context-free languages derived by analogy with the proof plan of the Pumping Lemma for regular languages. This is an impressive evaluation test for the analogical reasoning method applied to automated theorem proving, as the automated proof of this Pumping Lemma is beyond the capabilities of any of the current automated theorem provers.
This paper addresses the decomposition of proofs as a means of constructing methods in plan-based automated theorem proving. It also shows how decomposition can beneficially be applied in theorem proving by analogy. Decomposition is also useful for human-style proof presentation. We propose several decomposition techniques that were found to be useful in automated theorem proving and give examples of their application.
This paper analyzes how mathematicians prove theorems. The analysis is based upon several empirical sources such as reports of mathematicians and mathematical proofs by analogy. In order to combine the strength of traditional automated theorem provers with human-like capabilities, the questions arise: Which problem solving strategies are appropriate? Which representations have to be employed? As a result of our analysis, the following reasoning strategies are recognized: proof planning with partially instantiated methods, structuring of proofs, the transfer of subproofs and of reformulated subproofs. We discuss the representation of a component of these reasoning strategies, as well as its properties. We find some mechanisms needed for theorem proving by analogy that are not provided by previous approaches to analogy. This leads us to a computational representation of new components and procedures for automated theorem proving systems.
This paper shows how a new approach to theorem proving by analogy is applicable to real maths problems. This approach works at the level of proof-plans and employs reformulation that goes beyond symbol mapping. The Heine-Borel theorem is a widely known result in mathematics. It is usually stated in R^1, and similar versions are also true in R^2, in topology, and in metric spaces. Its analogical transfer was proposed as a challenge example and could not be solved by previous approaches to theorem proving by analogy. We use a proof-plan of the Heine-Borel theorem in R^1 as a guide in automatically producing a proof-plan of the Heine-Borel theorem in R^2 by analogy-driven proof-plan construction.
This paper addresses a model of analogy-driven theorem proving that is more general and cognitively more adequate than previous approaches. The model works at the level of proof-plans. More precisely, we consider analogy as a control strategy in proof planning that employs a source proof-plan to guide the construction of a proof-plan for the target problem. Our approach includes a reformulation of the source proof-plan. This is in accordance with the well known fact that constructing an analogy in maths often amounts to first finding the appropriate representation which brings out the similarity of two problems, i.e., finding the right concepts and the right level of abstraction. Several well known theorems were processed by our analogy-driven proof-plan construction that could not be proven analogically by previous approaches.
Software Products As Objects
(1997)
This paper describes our experiences in modeling entire software products (trees of software files) as objects. Container pnodes (product nodes) have user-defined Internet-unique names, data types, and methods (operations). Pnodes can contain arbitrary collections of software files that represent programs, libraries, documents, or other software products. Pnodes can contain multiple software products, so that header files, libraries, and program products may all be stored within one pnode. Pnodes can contain views that list other pnodes in order to form large conceptual structures of pnodes. Typical pnode-object methods include: fetching and storing into version-controlled repositories; dynamic analysis of pnode contents to generate makefiles of arbitrary complexity; local automated build operations; Internet-scalable distributed repository synchronizations; Internet-scalable, multi-platform, distributed build operations; extraction and generation of online API documentation; spell checking of document pnodes; and so on. Since methods are user-defined, they can be arbitrarily complex. Modeling software products as objects provides a large amount of effort leverage, since one person can define the methods and many people can use them in extensively automated ways.
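An illustrative Python sketch of the pnode idea: a named container of files with user-defined operations attached to it. The attribute and method names here (`define`, `invoke`, `genmake`) are hypothetical, not the paper's actual interface.

```python
class Pnode:
    """Sketch of a 'product node': a named container of software
    files, views of other pnodes, and user-defined methods."""
    def __init__(self, name):
        self.name = name      # user-defined, Internet-unique name
        self.files = {}       # path -> file contents
        self.views = []       # names of contained/related pnodes
        self.methods = {}     # operation name -> callable

    def define(self, op_name, fn):
        self.methods[op_name] = fn   # attach a user-defined operation

    def invoke(self, op_name, *args):
        return self.methods[op_name](self, *args)

# Example: a user-defined method that derives a trivial makefile
# from the pnode's current contents.
lib = Pnode("example.org/products/libfoo")
lib.files["src/foo.c"] = "int foo(void) { return 42; }"
lib.define("genmake", lambda p: "\n".join(f"cc -c {f}" for f in p.files))
print(lib.invoke("genmake"))  # -> cc -c src/foo.c
```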
We have developed a middleware framework for workgroup environments that can support distributed software development and a variety of other application domains requiring document management and change management for distributed projects. The framework enables hypermedia-based integration of arbitrary legacy and new information resources available via a range of protocols, not necessarily known in advance to us as the general framework developers nor even to the environment instance designers. The repositories in which such information resides may be dispersed across the Internet and/or an organizational intranet. The framework also permits a range of client models for user and tool interaction, and applies an extensible suite of collaboration services, including but not limited to multi-participant workflow and coordination, to their information retrievals and updates. That is, the framework is interposed between clients, services and repositories - thus "middleware". We explain how our framework makes it easy to realize a comprehensive collection of workgroup and workflow features we culled from a requirements survey conducted by NASA.
Independent development of system components may cause integration problems if their interaction is faulty. This problem may be solved by enforcing required component interactions at the system level. We have developed a system that automatically integrates control-oriented components, to make them consistent with aggregate system behavior requirements. Our method is based on the automated synchronization method that modifies independently designed components to make them satisfy a set of user-defined receptive safety properties. The automated synchronization allows us to design the components as independent controllers that satisfy their individual requirements and to compose a correct executable system by combining the components and enforcing their interaction constraints. This approach gives component designers the freedom to design independently, and to produce a functional system by combining the components and specifying their interaction requirements.
The paper presents a process-oriented view of knowledge management in software development. We describe requirements on knowledge management systems from a process-oriented perspective, introduce the process modeling language MILOS and its use for knowledge management, and then explain how a process-oriented knowledge management system can be implemented using advanced but available information technologies.
Techniques for modular software design using software agents are presented. The conceptual designs are domain-independent and exploit specific domain aspects through multiagent AI. The stages of conceptualization, design, and implementation are defined by new techniques coordinated by objects. Software systems are designed via knowledge acquisition, specification, and multiagent implementations.
An agent-based approach to managing distributed, multi-platform software development projects
(1999)
This paper describes work undertaken within the context of the P3 (Project and Process Prompter) Project, which aims to develop the Prompter tool, a 'decision-support tool to assist in the planning and managing of a software development project'. Prompter will have the ability to help software project managers assimilate best practice and 'know-how' in the field of software project management, and will incorporate expert critiquing to assist with solving the complex problems associated with software project management. This paper focuses on Prompter's agent-based approach to tackling the problems of distributed, platform-independent support.
Tomorrow's ways of doing business are likely to be far more challenging and interesting than today's due to technological advances that allow people to operate or cooperate anytime, anywhere. Today's workers are becoming mobile without the need of a work home base. Organizations are evolving from the hierarchical lines of control and information flow into more dynamic and flexible structures, where "teams" and individuals are the building blocks for forming task forces and work groups to deal with short and long term project tasks, issues and opportunities. Those individuals and teams will collaborate from their mobile desktops, whether at their offices, at home, or on the road. A revised paradigm for conducting small and large-scale development and integration is emerging, sometimes called the "virtual enterprise", both in the military and industrial environments. This new paradigm supports communication, cooperation and collaboration of geographically dispersed teams. In this paper we discuss experiences with specific technologies that were investigated by TRW's Infrastructure for Collaboration among Distributed Teams (ICaDT) project, an Independent Research and Development (IR&D) effort.
The term enterprise modelling, synonymous with enterprise engineering, refers to methodologies developed for modelling activities, states, time, and cost within an enterprise architecture. They serve as a vehicle for evaluating and modelling activities, resources, etc. CIM-OSA (Computer Integrated Manufacturing Open Systems Architecture) is a methodology for modelling computer-integrated environments, and its major objective is the appropriate integration of enterprise operations by means of efficient information exchange within the enterprise. PERA is another methodology for developing models of computer-integrated manufacturing environments. The Department of Industrial Engineering in Toronto proposed the development of ontologies as a vehicle for enterprise integration. The paper reviews the work carried out by various researchers and computing departments in the area of enterprise modelling and points out other modelling problems related to enterprise integration.
The term enterprise modeling, synonymous with enterprise engineering, often refers to methodologies developed for modeling activities, states, time, and cost within an enterprise architecture. They serve as a vehicle for evaluating and modeling activities, resources, and so on. CIM-OSA (Computer Integrated Manufacturing Open Systems Architecture) is a methodology for modeling computer-integrated environments, and its major objective is the appropriate integration of enterprise operations by means of efficient information exchange within the enterprise. Although there are other methodologies in the industry that serve the same purpose, most of them concentrate on the internal aspect of an enterprise. This paper is concerned with the modeling of the links between enterprises. The aim is to examine these relationships or links in detail and suggest a method for modeling enterprise networks, drawing on the methodologies currently used in the industry and extending them with the method proposed here.
This paper addresses analogy-driven automated theorem proving that employs a source proof-plan to guide the search for a proof-plan of the target problem. The approach presented uses reformulations that go beyond symbol mappings and that incorporate frequently used re-representations and abstractions. Several realistic math examples were successfully processed by our analogy-driven proof-plan construction. One challenge example, a Heine-Borel theorem, is discussed here. For this example the reformulations are shown step by step and the modifying actions are demonstrated.
Analogy in CLAM
(1999)
CLAM is a proof planner, developed by the Dream group in Edinburgh, that mainly operates for inductive proofs. This paper addresses the question how an analogy model that I developed independently of CLAM can be applied to CLAM, and it presents analogy-driven proof plan construction as a control strategy of CLAM. This strategy is realized as a derivational analogy that includes the reformulation of proof plans. The analogical replay checks whether the reformulated justifications of the source plan methods hold in the target as a permission to transfer the method to the target plan. Since CLAM has very efficient heuristic search strategies, the main purpose of the analogy is to suggest lemmas, to replay not commonly loaded methods, to suggest induction variables and induction terms, and to override control, rather than to construct a target proof plan that CLAM itself could build more efficiently.
As global networks are being used by more and more people, they are becoming increasingly interesting for commercial applications. The recent success and change in direction of the World-Wide Web is a clear indication of this. However, this success met a largely unprepared communications infrastructure. The Internet, as an originally non-profit network, offered neither the security nor the globally available accounting infrastructure by itself. These problems were addressed in the recent past, but in a seemingly ad-hoc manner. Several different accounting schemes sensible for only certain types of commercial transactions have been developed, which either seem to neglect the problems of scalability, or trade security for efficiency. Finally, some proposals aim at achieving near-perfect security at the expense of efficiency, thus rendering those systems of no practical use. In contrast, this paper presents a suitably configurable scheme for accounting in a general, widely distributed client/server environment. When developing the protocol presented in this paper, special attention has been paid to making this approach work well in the future setting of high-bandwidth, high-latency internets. The developed protocol has been applied to a large-scale distributed application, a WWW-based software development environment.
In this paper, we compare the BERKOM globally accessible services project (GLASS) with the well-known World-Wide Web with respect to the ease of development, realization, and distribution of multimedia presentations. This comparison is based on the experiences we gained when implementing a gateway between GLASS and the World-Wide Web. Since both systems are shown to have obvious weaknesses, we are concluding this paper with a presentation of a better way to multimedia document engineering and distribution. This concept is based on a well-accepted approach to function-shipping in the Internet: the Java language, permitting for example a smooth integration of GLASS's MHEG objects and WWW HTML pages within one common environment.
Distributed systems are an alternative to shared-memory multiprocessors for the execution of parallel applications. PANDA is a runtime system which provides architectural support for efficient parallel and distributed programming. PANDA supplies means for fast user-level threads, and for a transparent and coordinated sharing of objects across a homogeneous network. The paper motivates the major architectural choices that guided our design. The problem of sharing data in a distributed environment is discussed, and the performance of appropriate mechanisms provided by the PANDA prototype implementation is assessed.
In this paper, a framework for globally distributed software development and management environments, which we call Booster, is presented. Additionally, we introduce first experiences with WebMake, an application developed to serve as an experimental platform for a software development environment based on the World Wide Web and the Booster framework. Booster encompasses the basic building blocks and mechanisms necessary to support a truly cooperative distributed software development from the very beginning to the last steps in a software life cycle. It is thus a precursor of the Global Software Highway, in which providers and users can meet for the development, management, exchange, and usage of all kinds of software.
In order to reduce the elapsed time of a computation, a popular approach is to decompose the program into a collection of largely independent subtasks which are executed in parallel. Unfortunately, it is often observed that tightly-coupled parallel programs run considerably slower than initially expected. In this paper, a framework for the analysis of parallel programs and their potential speedup is presented. Two parameters which strongly affect the scalability of parallelism are identified, namely the grain of synchronization, and the degree to which the target hardware is available. It is shown that for certain classes of applications speedup is inherently poor, even if the program runs under the idealized conditions of perfect load balance, unbounded communication bandwidth and negligible communication and parallelization overhead. Upper bounds are derived for the speedup that can be obtained in three different types of computations. An example illustrates the main findings.
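The abstract does not reproduce the derived bounds; as a reminder of the classical bound in the same spirit (an assumption here, not the paper's result), Amdahl's law limits the speedup S(p) on p processors when a fraction s of the work is inherently serial:

```latex
S(p) \;=\; \frac{1}{\,s + \frac{1-s}{p}\,} \;\le\; \frac{1}{s}
```

For example, with s = 0.1 the speedup can never exceed 10, no matter how many processors are available.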
The paper explores the role of artificial intelligence techniques in the development of an enhanced software project management tool, which takes account of the emerging requirement for support systems to address the increasing trend towards distributed multi-platform software development projects. In addressing these aims, this research devised a novel architecture and framework to serve as the basis of an intelligent assistance system for software project managers in the planning and managing of a software project. This paper also describes the construction of a prototype system to implement this architecture and the results of a series of user trials on this prototype system.
In this paper, we present an approach to support distributed planning and scheduling, as well as the subsequent (also distributed) plan execution, in one system. The system will support the distributed planners and schedulers by providing task agendas for them, stating who needs to plan which tasks, and sending change notifications and warnings, if a planning or scheduling decision needs to be updated. The plan built using these mechanisms is then enacted by a workflow engine in the same system. This approach enables us to support interleaved planning and plan enactment, allowing the user to change the plan and schedule while the project is already under way. Deviations of the actual project enactment from the plan and schedule can automatically be detected, and necessary notifications will be sent to the concerned planner(s). This again facilitates the task of keeping the plan up to date, avoiding the complete invalidation of the plan as is often the case in conventional projects soon after enactment has started.
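A toy sketch of the agenda-and-notification mechanism described above, assuming a task list with explicit dependencies; the class and field names are illustrative assumptions, not the system's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    planner: str                      # who is responsible for planning it
    depends_on: list = field(default_factory=list)
    done: bool = False

class ProjectPlan:
    """Per-planner task agendas plus change notifications for
    dependent tasks, supporting interleaved planning and enactment."""
    def __init__(self, tasks):
        self.tasks = {t.name: t for t in tasks}

    def agenda(self, planner):
        """Tasks this planner still needs to plan or enact."""
        return [t.name for t in self.tasks.values()
                if t.planner == planner and not t.done]

    def changed(self, name):
        """A decision changed: warn planners of dependent tasks."""
        return [f"notify {t.planner}: replan '{t.name}'"
                for t in self.tasks.values() if name in t.depends_on]

plan = ProjectPlan([Task("design", "alice"),
                    Task("build", "bob", depends_on=["design"])])
print(plan.agenda("bob"))      # -> ['build']
print(plan.changed("design"))  # -> ["notify bob: replan 'build'"]
```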
Emerging technologies such as the Internet, the World Wide Web, Java™ technology, and software components are changing the software business. Activities that have in the past been constrained by the need for intense information management increasingly involve cooperating organizations. Information management tools and techniques do not scale well in the face of this organizational complexity. An informal approach to information sharing, based largely on manual copying of information, cannot meet the demands of the task as size and complexity increase. Formal approaches to sharing information are based on groupware tools, but cooperating organizations do not always enjoy the trust or commonality of sophisticated infrastructure, methods, and skills that this approach requires. Bridging the gap requires a simple, loosely coupled, highly flexible strategy for information sharing. Extensive information relevant to different parts of the software life cycle should be interconnected in a simple, easily described way; such connections should permit selective information sharing by a variety of tools and in a variety of collaboration modes that vary in the amount of organizational coupling they require.
This paper outlines the microplanner of PROVERB, a system that generates multilingual text from machine-found mathematical proofs. The main representational vehicle is the text structure proposed by Meteer. Following Panaget, we also distinguish between the ideational and the textual semantic categories, and use the upper model to replace the former. Based on this, a further extension is made to support aggregation before realization decisions are made. While the framework of our macroplanner is kept language-independent, our microplanner draws on language-specific linguistic sources such as realization classes and the lexicon. Since English and German are the first two languages to be generated, and because the sublanguage of our mathematical domain is relatively limited, the upper model and the textual semantic categories are designed to cope with both languages. Since the work reported is still in progress, we also discuss open problems we are facing.
This paper describes the linguistic part of a system called PROVERB, which transforms, abstracts, and verbalizes machine-found proofs into formatted texts. Linguistically, the architecture of PROVERB follows most application-oriented systems and is a pipelined control of three components. Its macroplanner linearizes a proof and plans mediating communicative acts by employing a combination of hierarchical planning and focus-guided navigation. The microplanner then maps communicative acts and domain concepts into linguistic resources, and paraphrases and aggregates such resources to produce the final Text Structure. A Text Structure contains all necessary syntactic information and can be executed by our realizer into grammatical sentences. The system works fully automatically and performs particularly well for textbook-size examples.
This paper outlines the linguistic part of an implemented system named PROVERB [3] that transforms, abstracts, and verbalizes machine-found proofs in natural language. It aims to illustrate that state-of-the-art techniques of natural language processing are necessary to produce coherent texts that resemble those found in typical mathematical textbooks, in contrast to the belief that mathematical texts are only schematic and mechanical. The verbalization module consists of a content planner, a sentence planner, and a syntactic generator. Intuitively speaking, the content planner first decides the order in which proof steps should be conveyed. It also produces some messages to highlight global proof structures. Subsequently, the sentence planner combines and rearranges linguistic resources associated with messages produced by the content planner in order to produce connected text. The syntactic generator finally produces the surface text.
This paper describes a declarative approach for encoding the plan operators in proof planning, the so-called methods. The notion of method evolves from the much studied concept of a tactic and was first used by A. Bundy. Significant deductive power has been achieved with the planning approach towards automated deduction; however, the procedural character of the tactic part of methods hinders mechanical modification. Although the strength of a proof planning system largely depends on powerful general procedures which solve a large class of problems, mechanical or even automated modification of methods is necessary, since methods designed for a specific type of problems will never be general enough. After introducing the general framework, we exemplify the mechanical modification of methods via a particular meta-method which modifies methods by transforming connectives to quantifiers.
The proof development environment Omega-Mkrp is intended to support mathematicians in one of their principal activities, namely proving mathematical theorems. This support must be comfortable enough that proofs can be carried out formally with reasonable effort and that the correctness of the proofs produced is guaranteed by the system. Such a system will only gain real acceptance if the computer-supported search for formal proofs is less laborious and easier than without the system. To achieve this, various requirements arise for such a development environment, which we describe in detail. These concern in particular the expressiveness of the object language used, the ability to talk abstractly about proof plans, the human-oriented presentation of the proofs found, and also efficient support in filling proof gaps. The Omega-Mkrp system presented here is a synthesis of the fully automatic, interactive, and plan-based approaches to theorem proving and is a first attempt to unite the results of these three research directions in one system. This article is intended to give an overview of our work on this system.
The multiple-view modeling of a product in a design context is discussed in this paper. We study the existing approaches for multiple-view modeling of a product and give a brief analysis of them. Then we propose our approach, which incorporates the multiple-model approach into current STEP standard work based on a single model. We propose a meta-model inspired by this approach for a multiple-view design environment. Next, we validate this meta-model with a case study. Finally, we conclude and give some perspectives on this work.
Keywords: product data modeling, multiple-view modeling, product data integration, STEP, functional model.
Software development is becoming a more and more distributed process, which urgently needs supporting tools in the fields of configuration management, software process/workflow management, communication, and problem tracking. In this paper we present a new distributed software configuration management framework, COMAND. It offers high availability through replication and a mechanism to easily change and adapt the project structure to new business needs. To better understand and formally prove some properties of COMAND, we have modeled it in a formal technique based on distributed graph transformations. This formalism provides an intuitive rule-based description technique, mainly for the dynamic behavior of the system on an abstract level. We use it here to model the replication subsystem.
The concept of the Virtual Software Corporation (VSC) has recently become a practical reality as a result of advances in communication and distributed technologies. However, there are significant difficulties with the management of the software development process within a VSC. The main problem is the significantly increased communicational complexity of the process model for such developments. The more classic managerial hierarchy is generally replaced by a "flatter" network of commitments. Therefore new solution approaches are required to provide the necessary process support. The purpose of this paper is to present a solution approach which models the process based on deontic logic. The approach has been validated against a case study where it was used to model commitments and inter-human communications within the software development process of a VSC. The use of the formalism is exemplified through a prototype system using a layered multi-agent architecture.
This paper describes a system that supports software development processes in virtual software corporations. A virtual software corporation consists of a set of enterprises that cooperate in projects to fulfill customer needs. Contracts are negotiated throughout the whole lifecycle of a software development project, and these negotiations strongly influence the performance of a company. Therefore, it is useful to support negotiations and planning decisions with software agents. Our approach integrates software agent approaches for negotiation support with flexible multi-server workflow engines.
In recent years, a form of software development that was previously dismissed as too ad hoc and chaotic for serious projects has suddenly taken the front stage. With products such as Apache, Linux, Perl, and others, open-source software has emerged as a viable alternative to traditional approaches to software development. With its globally distributed developer force and extremely rapid code evolution, open source is arguably the extreme in "virtual software projects" [1], and it exemplifies many of the advantages and challenges of distributed software development.
This paper outlines an implemented system named PROVERB that transforms and abstracts machine-found proofs to natural deduction style proofs at an adequate level of abstraction and then verbalizes them in natural language. The abstracted proofs, originally employed only as an intermediate representation, also prove to be useful for proof planning and proving by analogy.
Caching has long been used to reduce average access latency, from registers and memory pages cached by hardware to the application level, such as a web browser retaining retrieved documents. We focus here on the high-level caching of potentially shared networked documents and define two terms in relation to this type of caching. Zero latency refers to the condition in which access to a document produces a cache hit on the local machine, that is, there is little or no latency due to the network (we assume that latency due to local disk and memory access is insignificant in comparison to network latency). A document with zero latency usually has been placed in the cache after a previous access, or has been pulled there through some prefetching mechanism. Negative latency refers to the automatic presentation, or push, of a document to a user based on a prediction that the user will want that document. With an ideal system, a user would be presented either with documents that she was about to request, or with documents that she would not have known to request but that would be immediately useful to her.
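As an illustration of the two notions, the following minimal Python sketch combines a zero-latency cache with a naive first-order access predictor that could drive a negative-latency push; the class, the predictor, and all names are hypothetical and not taken from the paper.

```python
class PredictiveCache:
    """Client-side document cache with zero-latency hits and a naive
    predictor for negative-latency push of the likely next document."""

    def __init__(self, fetch):
        self.fetch = fetch       # callable: url -> document (network access)
        self.store = {}          # url -> cached document
        self.successor = {}      # url -> most recently observed next url
        self.last_url = None

    def get(self, url):
        # Zero latency: serve from the local cache when possible.
        if url not in self.store:
            self.store[url] = self.fetch(url)  # cache miss: pay network latency
        self._learn(url)
        return self.store[url]

    def _learn(self, url):
        # Record a simple first-order access pattern: A is often followed by B.
        if self.last_url is not None:
            self.successor[self.last_url] = url
        self.last_url = url

    def push_candidate(self):
        # Negative latency: predict the document the user will likely want
        # next, prefetch it, and return it for proactive presentation.
        nxt = self.successor.get(self.last_url)
        if nxt is not None and nxt not in self.store:
            self.store[nxt] = self.fetch(nxt)
        return nxt
```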
This paper describes the design and implementation of a process support system (PROSYT) intended to provide guidance in performing business processes and to support cooperation among people over a local or geographically distributed architecture. In particular, it can be used as a Process-centered Software Engineering Environment (PSEE) to support distributed software development. Our main purpose is to describe how complex applications of this kind can be developed systematically, and in particular how the requirements of high flexibility, reconfigurability, scalability, and efficiency demanded by these applications can be met through appropriate design choices.
There are two general approaches to providing for isochronous streams in the current Internet. The first is resource reservation through protocols such as RSVP, or through ATM technology. This provides bandwidth guarantees, but it also requires significant upgrading of resources in the underlying network. The other common approach is adaptive rate control, where the end-system controls its rate according to feedback from the client population. This approach cannot guarantee timely delivery and raises some scaling questions; however, a properly implemented scheme does improve quality and requires no changes to the underlying IP network. Hence, there exists a dichotomy of requirements: first, to cater for reservation protocols or 'hooks' for future reservation components, and second, to provide an architecture with an application-controlled QoS scheme that scales to the size of the current Internet in a best-effort architecture.
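The adaptive-rate branch of this dichotomy can be sketched as an additive-increase/multiplicative-decrease controller driven by client feedback; the constants and the feedback format below are assumptions for illustration, not part of the described architecture.

```python
class AdaptiveRateController:
    """AIMD-style rate control driven by feedback from the client
    population -- one possible instance of the adaptive approach."""

    def __init__(self, rate_kbps=256.0, min_rate=32.0, max_rate=2048.0):
        self.rate = rate_kbps
        self.min_rate = min_rate
        self.max_rate = max_rate

    def on_feedback(self, loss_fraction):
        if loss_fraction > 0.02:
            # Congestion signalled by the clients: back off multiplicatively.
            self.rate = max(self.min_rate, self.rate * 0.5)
        else:
            # No significant loss: probe additively for spare bandwidth.
            self.rate = min(self.max_rate, self.rate + 16.0)
        return self.rate
```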
This paper focuses on the issues involved when multiple mobile agents interact in multiagent systems. The application is an intelligent agent market place, where buyer and seller agents cooperate and compete to process sales transactions for their owners. The market place manager acts as a facilitator by giving necessary information to agents and managing communication between them, and also as a mediator by proposing solutions to agents or stopping them from getting into infinite loops of bargaining back and forth. The buyer and seller agents range from using hard-coded logic to rule-based inferencing in their negotiation strategies; all agents, however, must support some communication skills using KQML or FIPA-ACL. In contrast with other approaches to multiagent negotiation, we introduce an explicit mediator (the market place manager) into the negotiation, and we propose a negotiation strategy based on dependence theory [1], implemented by our best buyers and best sellers.
For transferring existing knowledge into new projects, reuse has become an important factor in today's software industry. However, to put reuse into practice, reusable artifacts have to be stored somewhere and must be offered to (re-)users on demand. For this purpose, advanced reuse repository systems, such as instantiations of the Experience Base concept, are quite frequently used. Many people, from different projects, have to access such a repository at various phases of software development processes to retrieve or store reusable data, and each of these users has specific needs for fulfilling the given tasks. Taking this into account, a reuse repository has to offer tailored user interfaces and functions for different user groups. Furthermore, since the contents of such a repository usually represent the state of the art of an organization's (core) competencies, not everyone should be allowed to freely access each and every repository entry. This is especially true for persons who are not part of the organization. This report discusses role concepts that can be applied to reuse repository systems to overcome some of the stated access problems. Commonly used roles for software development and reuse repository management are listed. Based on these roles, a basic set of roles, as implemented in the SFB 501 Experience Base, is introduced.
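To make the role idea concrete, here is a minimal sketch of a role-based permission check for a reuse repository; the role names and the permission table are illustrative assumptions and are not the role set implemented in the SFB 501 Experience Base.

```python
from enum import Enum, auto

class Role(Enum):
    GUEST = auto()        # external person: public entries only
    REUSER = auto()       # project member retrieving artifacts
    CONTRIBUTOR = auto()  # project member also storing new artifacts
    MAINTAINER = auto()   # repository manager: full access

# Illustrative permission table: operations each role may perform.
PERMISSIONS = {
    Role.GUEST:       {"read_public"},
    Role.REUSER:      {"read_public", "read_internal"},
    Role.CONTRIBUTOR: {"read_public", "read_internal", "store"},
    Role.MAINTAINER:  {"read_public", "read_internal", "store",
                       "delete", "restructure"},
}

def authorize(role: Role, operation: str) -> bool:
    """Check whether a repository operation is allowed for the given role."""
    return operation in PERMISSIONS.get(role, set())

assert authorize(Role.REUSER, "read_internal")
assert not authorize(Role.GUEST, "store")   # outsiders cannot modify contents
```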
When gripping deformable or fragile workpieces, gripping speed and gripping force are of particular importance. This paper describes a universal controller for pneumatic grippers that allows these quantities to be set easily via two voltage-controlled proportional valves. The setup is used to analyze the influence of gripping force and gripping speed when gripping cables and cable harnesses, and it has proven to be robust and unproblematic.
This paper deals with the robust manipulation of deformable linear objects such as hoses or wires. We propose manipulation based on the qualitative contact state between the deformable workpiece and a rigid environment. First, we give an enumeration of possible contact states and discuss the main characteristics of each state. Second, we investigate the transitions that are possible between the contact states and derive criteria and conditions for each of them. Finally, we apply the concept of contact states and state transitions to the description of a typical assembly task.
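A minimal sketch of how such a qualitative state space might be represented follows; the enumeration of contact states and the transition table below are invented for illustration, and the paper's actual enumeration and criteria differ.

```python
from enum import Enum, auto

class ContactState(Enum):
    # Illustrative qualitative contact states of a deformable linear
    # object (DLO) against a rigid environment.
    FREE = auto()           # no contact
    POINT_CONTACT = auto()  # the DLO touches the environment at one point
    LINE_CONTACT = auto()   # a segment of the DLO rests on a surface
    EDGE_CONTACT = auto()   # the DLO bends over an edge

# Illustrative table of directly reachable qualitative transitions.
TRANSITIONS = {
    ContactState.FREE:          {ContactState.POINT_CONTACT},
    ContactState.POINT_CONTACT: {ContactState.FREE,
                                 ContactState.LINE_CONTACT,
                                 ContactState.EDGE_CONTACT},
    ContactState.LINE_CONTACT:  {ContactState.POINT_CONTACT},
    ContactState.EDGE_CONTACT:  {ContactState.POINT_CONTACT},
}

def transition_possible(current: ContactState, goal: ContactState) -> bool:
    """Check whether a direct transition between two contact states exists."""
    return goal in TRANSITIONS.get(current, set())
```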
This paper deals with the problem of picking up deformable linear workpieces such as cables or ropes with an industrial robot. First, we give a motivation and a problem definition. Based on a brief conceptual discussion of possible approaches, we derive an algorithm for picking up hanging deformable linear objects using two light barriers as the sensor system. For this hardware, a skill-based approach is described, and the parameters and major influence factors are discussed. In an experimental study, the feasibility and reliability under diverse conditions are investigated. The algorithm is found to be very reliable if certain boundary conditions are met.
In this paper, we investigate the efficient simulation of deformable linear objects. Based on the state of the art, we extend the principle of minimizing the potential energy by considering plastic deformation, and we describe a novel approach for treating workpiece dynamics. The major factors influencing precision and computation time are identified and investigated experimentally. Finally, we discuss the use of parallel processing to reduce the computation time.
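The underlying principle, minimizing potential energy, can be sketched for a purely elastic rod (omitting plasticity and dynamics, which are the paper's extensions); the discretization, parametrization, and parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# A discretized deformable linear object: n rigid segments of length L,
# described by absolute segment angles theta[i] in a vertical plane.
n, L = 20, 0.05          # 20 segments of 5 cm each (illustrative)
k_bend = 0.5             # bending stiffness (illustrative)
w_node = 0.01 * 9.81     # weight per node (illustrative)

def potential_energy(free_angles):
    # First segment clamped horizontally; the rest of the rod relaxes.
    theta = np.concatenate(([0.0], free_angles))
    # Bending energy penalizes curvature (angle differences between
    # neighbouring segments); gravity pulls every node downwards.
    bending = k_bend * np.sum(np.diff(theta) ** 2)
    heights = np.cumsum(L * np.sin(theta))   # node heights, base at 0
    gravity = w_node * np.sum(heights)
    return bending + gravity

# The equilibrium shape is the minimizer of the potential energy.
result = minimize(potential_energy, np.zeros(n - 1), method="BFGS")
print("equilibrium segment angles:", np.round(result.x, 3))
```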
Enhancing the quality of surgical interventions is one of the main goals of surgical robotics. We have therefore devised a surgical robotic system for maxillofacial surgery which can be used as an intelligent intraoperative surgical tool. Up to now, a surgeon preoperatively plans an intervention by studying two-dimensional X-rays, thus neglecting the third dimension. In the course of the special research programme "Computer and Sensor Aided Surgery", a planning system has been developed at our institute which allows the surgeon to plan an operation on a three-dimensional computer model of the patient. Transferring the preoperatively planned bone cuts, bore holes, cavities, and milled surfaces into the actual operation still proves to be a problem, as no adequate means are at hand: the actual performance of the surgical intervention and the surgical outcome depend solely on the experience and skill of the operating surgeon. In this paper we present our approach to a surgical robotic system for use in maxillofacial surgery. Special emphasis is placed on the modelling of the environment in the operating theatre and the motion planning of our surgical robot.
This paper presents fill algorithms for boundary-defined regions in raster graphics that require only a constant amount of working memory. The methods presented are based on so-called "seed fill" algorithms, which use the internal connectivity of the region with a given inner point. Basic methods as well as additional heuristics for speeding up the algorithms are described and verified. For different classes of regions, the time complexity of the algorithms is compared using empirical results.
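For contrast, a textbook stack-based scanline seed fill is sketched below; note that its stack grows with the size of the region, whereas the algorithms presented in the paper get by with constant working memory.

```python
def seed_fill(grid, x, y, fill, boundary):
    """Textbook scanline seed fill of a boundary-defined region, starting
    from the inner point (x, y).  Uses a stack of seed pixels -- i.e. NOT
    the constant-memory technique contributed by the paper."""
    h, w = len(grid), len(grid[0])
    if grid[y][x] in (fill, boundary):
        return
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if grid[cy][cx] in (fill, boundary):
            continue                     # already filled via another span
        # Extend the current horizontal span to the left and right.
        left = cx
        while left > 0 and grid[cy][left - 1] not in (fill, boundary):
            left -= 1
        right = cx
        while right < w - 1 and grid[cy][right + 1] not in (fill, boundary):
            right += 1
        for i in range(left, right + 1):
            grid[cy][i] = fill
        # Seed unfilled pixels in the rows above and below the span.
        for ny in (cy - 1, cy + 1):
            if 0 <= ny < h:
                for i in range(left, right + 1):
                    if grid[ny][i] not in (fill, boundary):
                        stack.append((i, ny))
```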
Four different initialization methods for parallel branch-and-bound algorithms are described and compared with respect to several criteria. A formal analysis of their idle times and efficiency follows; it indicates that the efficiency of three of the methods depends on the branching factor of the search tree, while the fourth method offers the best efficiency of the overall algorithm when a centralized OPEN set is used. Experimental results from a PRAM simulation support these statements.
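The setting can be sketched as best-first branch and bound with a centralized OPEN set; the initialization shown below (expanding until OPEN holds at least one node per worker) is merely one plausible strategy and is not necessarily among the four methods the paper compares.

```python
import heapq
import itertools

def branch_and_bound(root, branch, bound, is_solution, n_workers=4):
    """Best-first branch and bound with a centralized OPEN set, shown
    sequentially; in the parallel version, workers pull nodes from OPEN."""
    tick = itertools.count()                 # tie-breaker for the heap
    open_set = [(bound(root), next(tick), root)]
    best_value, best_node = float("inf"), None

    # Initialization phase: grow OPEN until every worker can be given work.
    while open_set and len(open_set) < n_workers:
        _, _, node = heapq.heappop(open_set)
        for child in branch(node):
            heapq.heappush(open_set, (bound(child), next(tick), child))

    # Main phase: expand best-bound nodes, pruning against the incumbent.
    while open_set:
        lb, _, node = heapq.heappop(open_set)
        if lb >= best_value:
            continue                         # cannot improve the incumbent
        if is_solution(node):
            best_value, best_node = lb, node # bound assumed exact at leaves
            continue
        for child in branch(node):
            if bound(child) < best_value:
                heapq.heappush(open_set, (bound(child), next(tick), child))
    return best_node, best_value
```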
One of the many capabilities needed to support the activities of autonomous systems is motion planning, which enables robots to move securely in their environment and to accomplish given tasks. Unfortunately, the control loop comprising sensing, planning, and acting has not yet been closed for robots in dynamic environments; one reason is the long execution time of the motion planning component. A solution to this problem is offered by highly parallel computation. An important task is therefore the parallelization of existing motion planning algorithms so that they are suitable for highly parallel processing; in several cases, completely new algorithms have to be designed to make parallelization feasible. In this survey, we review recent approaches to motion planning using parallel computation.
For online collision detection with a multi-arm robot, a fast method for computing the so-called collision vector is presented. Manipulators and obstacles are modelled as sets of convex polytopes, with known distance algorithms serving as a foundation. To speed up the collision detection, dynamic obstacles are approximated by geometric primitives and organized in hierarchies. Online, the dynamic hierarchies introduced here are adjusted to the current arm configuration. A comparison with previous methods shows a clear acceleration of the computations.
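The use of approximation hierarchies can be illustrated with a bounding-sphere hierarchy and a pruned distance query; this simplified sketch is not the paper's polytope-based collision vector computation, and all names are illustrative.

```python
import numpy as np

class SphereNode:
    """Node of a bounding-sphere hierarchy approximating an obstacle."""
    def __init__(self, center, radius, children=()):
        self.center = np.asarray(center, dtype=float)
        self.radius = float(radius)
        self.children = list(children)

def min_distance(point, node, best=float("inf")):
    """Lower-bound-pruned minimum distance from a point to the hierarchy."""
    # Conservative lower bound: distance to this node's sphere surface.
    lower = np.linalg.norm(point - node.center) - node.radius
    if lower >= best:
        return best                  # prune: cannot beat the current best
    if not node.children:
        return max(lower, 0.0)       # leaf: the sphere itself is the estimate
    # Visit the most promising children first to tighten `best` early.
    for child in sorted(node.children,
                        key=lambda c: np.linalg.norm(point - c.center) - c.radius):
        best = min(best, min_distance(point, child, best))
    return best
```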
A new problem in the automated off-line programming of industrial robot applications is investigated. Multi-goal path planning is the task of finding a collision-free path that connects a set of goal poses while minimizing, for example, the total path length. Our solution is based on an earlier reported path planner for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. To control the path planner, four different goal selection methods are introduced and compared. While the Random and the Nearest Pair Selection methods can be used with any path planner, the Nearest Goal and the Adaptive Pair Selection methods are particularly suited to our planner. With the latter two goal selection methods, the multi-goal path planning task can be accelerated significantly, because they automatically solve the simplest path planning problems first. In summary, compared to Random or Nearest Pair Selection, this new multi-goal path planning approach results in a further cost reduction of the programming phase.
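A greedy nearest-neighbour ordering in the spirit of these selection methods can be sketched as follows; the paper's actual selection rules differ in detail and operate on robot configurations rather than plain points, so the metric and names here are illustrative.

```python
import math

def nearest_neighbour_order(start, goals, dist=math.dist):
    """Order the goal poses greedily: always plan next to the closest
    remaining goal, so that easy path planning queries come first."""
    remaining = list(goals)
    order, current = [], start
    while remaining:
        nxt = min(remaining, key=lambda g: dist(current, g))
        order.append((current, nxt))   # one path planning query per pair
        current = nxt
        remaining.remove(nxt)
    return order

# Usage on 2D points as stand-ins for goal poses:
print(nearest_neighbour_order((0, 0), [(5, 5), (1, 0), (2, 2)]))
```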
One of the many capabilities needed to support the activities of autonomous systems is motion planning, which enables robots to move securely in their environment and to accomplish given tasks. Unfortunately, the control loop comprising sensing, planning, and acting has not yet been closed for robots in dynamic environments; one reason is the long execution time of the motion planning component. A solution to this problem is offered by highly parallel computation. An important task is therefore the parallelization of existing motion planning algorithms so that they are suitable for highly parallel processing; in several cases, completely new algorithms have to be designed to make parallelization feasible. In this survey, we review recent approaches to motion planning using parallel computation. As a classification scheme, we use the structure given by the different approaches to robot motion planning. For each approach, the available parallel processing methods are discussed, and each approach is assigned a unique class. Finally, a list of keywords is given for each referenced research work.
We have presented a novel approach to parallel motion planning for robot manipulators in 3D workspaces. The approach is based on a randomized parallel search algorithm and focuses on solving the path planning problem for industrial robot arms working in a reasonably cluttered workspace. The path planning system works in the discretized configuration space, which need not be represented explicitly. The parallel search is conducted by a number of rule-based sequential search processes, which work to find a path connecting the initial configuration to the goal via a number of randomly generated subgoal configurations. Since the planner performs only on-line collision tests with proper proximity information, without using pre-computed information, the approach is suitable for planning problems with multiple robots or dynamic environments. The implementation has been carried out on the parallel virtual machine (PVM) of a cluster of SUN4 workstations and SGI machines. The experimental results have shown that the approach works well for a 6-dof robot arm in a reasonably cluttered environment, and that parallel computation increases the efficiency of motion planning significantly.
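The overall search scheme might be sketched as follows, with `local_plan` standing in for one rule-based sequential search process; all function names, the goal bias, and the iteration limit are illustrative assumptions rather than the paper's algorithm.

```python
import random

def randomized_subgoal_search(start, goal, collision_free, local_plan,
                              sample, max_subgoals=200):
    """Connect start and goal through randomly generated subgoal
    configurations.  `collision_free(q)` tests a configuration on-line,
    `local_plan(a, b)` tries to connect two configurations, and
    `sample()` draws a random configuration."""
    nodes = [start]
    edges = {}                                # connectivity graph so far
    for _ in range(max_subgoals):
        q = goal if random.random() < 0.1 else sample()   # goal bias
        if not collision_free(q):
            continue
        for node in nodes:
            if local_plan(node, q):           # sequential search attempt
                edges.setdefault(node, []).append(q)
                nodes.append(q)
                if q == goal:
                    return edges              # graph now contains the goal
                break
    return None                               # no connection found
```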
On KLUEDO, the Kaiserslautern university-wide electronic document server, members of the University of Kaiserslautern can publish scientific documents in electronic form on the Internet and thus worldwide. The document server was created on the initiative of the departments and the university library of Kaiserslautern within the GLOBAL INFO preliminary project funded by the Federal Ministry of Education and Research (BMBF). In places, the document server is based on the mathematics preprint server of the University of Kaiserslautern, which was built during the MathNet project of the German research network (DFN) and the MathBibNet project of the German Research Foundation (DFG). This article outlines the considerations behind the development of the document server in its present form and gives a rough overview of the system architecture. Detailed information and a description of the system can be found in my thesis "Propadeutik metadatenbasierter Publikationsserver: Ontologie und konkordante Implementierung".
We propose an approach to the problem of proof control for our new first-order inductive theorem prover QuodLibet, characterized by a great deal of flexibility with respect to the forms of proof control the prover supports. The approach is based on so-called (proof) tactics, i.e., proof control routines written in a special proof control language named QML. QuodLibet provides a set of tactics (in addition to the elementary inference rules) that range from tactics for trivial simplification steps to tactics representing comprehensive inductive proof strategies. Moreover, QuodLibet allows new tactics written by the user in QML to be integrated into the system, dynamically extending its functionality.
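The flavour of tactics as composable proof control routines can be conveyed by a toy sketch in Python; QuodLibet's tactics are written in QML, so nothing below is QML syntax or the prover's actual API, and the toy proof state (a list of open goals) is an invention for illustration.

```python
# A tactic maps a proof state to a new proof state, or returns None on failure.

def then(t1, t2):
    """Sequential composition: apply t1, then t2 to its result."""
    def tactic(state):
        mid = t1(state)
        return t2(mid) if mid is not None else None
    return tactic

def repeat(t):
    """Apply a tactic until it fails or no longer makes progress."""
    def tactic(state):
        while True:
            nxt = t(state)
            if nxt is None or nxt == state:
                return state
            state = nxt
    return tactic

def discharge_trivial(goals):
    """Elementary toy tactic: remove goals that are trivially true."""
    new = [g for g in goals if g != "True"]
    return new if new != goals else None

simplify = repeat(discharge_trivial)
print(simplify(["True", "x = x", "True"]))   # -> ['x = x']
```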
The trend towards ever tighter coupling of systems, combined with decentralization through networking, has given computer users on-demand access to enormous amounts of data that completely defy meaningful processing by the user alone. Different representation formalisms for information, ambiguities, redundancy, and the limited availability of both information and computing power render conventional search methods inapplicable. Instead, search methods and programs are needed that adapt intelligently to different formalisms, constantly evaluate their own actions, and are able to support their users individually. Buzzwords such as knowbots, search engines, and data mining are therefore currently on everyone's lips. However, a comprehensive book presenting the ideas and concepts behind these and similar buzzwords does not yet exist. This was our motivation for addressing the topic "Intelligent Search on the Internet with Learning Systems" in a seminar. In doing so, we took up a research area that is of interest to all groups involved in the LSA and is, moreover, currently being watched attentively from many sides. We therefore decided to make the papers written by the participants of this seminar available to a wider public in the present report.
We present an overview of various learning techniques used in automated theorem provers. We characterize the main problems arising in this context and classify the solutions to these problems found in published approaches. We analyze the suitability of several combinations of solutions for different approaches to theorem proving and place these combinations on a spectrum: at one end are provers that use very specialized learning approaches to adapt optimally to a small class of proof problems; at the other are provers that learn more general kinds of knowledge, resulting in systems that are less efficient in special cases but show improved performance over a wide range of problems. Finally, we suggest combinations of solutions for various proof philosophies.