The learning of uniformly recursively enumerable language families from good examples is investigated, and the differences and similarities to the learning of recursive language families and of recursive functions are pointed out. The underlying model is that of pupils learning from a teacher. Several variants are presented, compared, and in part characterized, and examples and other typical properties are used to convey a feeling for their power. Among other things, it is shown that there are not always "universal" good examples with which a language class can be explained in all situations.
We present a similarity criterion based on feature weighting. Feature weights are recomputed dynamically according to the performance of cases during problem-solving episodes. We also present a novel algorithm to analyze and explain the performance of the retrieved cases and to determine the features whose weights need to be recomputed. Our experiments show that integrating our similarity criterion with our analysis algorithm in a feature-weighting model improves the adaptability of the retrieved cases by converging to the best feature weights over multiple problem-solving episodes.
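To make the idea concrete, the following is a minimal Python sketch of such a scheme; the function names, the simple normalized distance, and the multiplicative update rate are illustrative assumptions and not the algorithm of the paper.

    def weighted_similarity(query, case, weights):
        # Similarity in [0, 1] for feature values already normalized to [0, 1].
        total = sum(weights.values())
        score = sum(weights[f] * (1.0 - abs(query[f] - case[f])) for f in weights)
        return score / total

    def adjust_weights(weights, relevant_features, success, rate=0.1):
        # After an episode, reward (or penalize) the features that the analysis
        # step identified as responsible for the outcome.
        for f in relevant_features:
            weights[f] *= (1.0 + rate) if success else (1.0 - rate)
        return weights

    weights = {"diameter": 1.0, "material": 1.0, "length": 1.0}
    query = {"diameter": 0.4, "material": 0.0, "length": 0.7}
    case = {"diameter": 0.5, "material": 0.0, "length": 0.9}
    print(weighted_similarity(query, case, weights))   # 0.9
    weights = adjust_weights(weights, ["diameter"], success=True)

Repeating the retrieve-analyze-adjust cycle over several episodes is what lets the weights converge, as described above.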
Planning for manufacturing workpieces is a complex task that requires the interaction of a domain-specific reasoner and a generic planning mechanism. In this paper we present an architecture for organizing the case base that is based on the information provided by a generic problem solver. We then present a retrieval procedure that uses the information provided by the domain-specific reasoner to improve the accuracy of the retrieved cases. However, it is not realistic to assume that the retrieved case will fit the new problem entirely. We therefore present a replay procedure for obtaining a partial solution, which replays not only the valid decisions taken when solving the case, but also the justifications of decisions rejected during the problem-solving process. As a result, those completion alternatives of the partial solution that are already known from the case to be invalid are discarded.
Complete Eager Replay
(1996)
We present an algorithm for completely replaying previous problem-solving experiences for plan-space planners. In our approach not only the solution trace is replayed, but also the explanations of failed attempts made by the first-principles planner. In this way, the capability of refitting previous solutions to new problems is improved.
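The following Python sketch illustrates the general idea of replaying a decision trace together with the justifications of rejected alternatives; the data structures and the helper functions (`still_valid`, `alternatives_for`) are assumptions made for illustration and are not taken from the paper.

    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        goal: str                                     # the open goal that was refined
        chosen: str                                   # the refinement that was taken
        rejected: dict = field(default_factory=dict)  # alternative -> justification for rejection

    def replay(trace, still_valid, alternatives_for):
        # Replay decisions that remain valid in the new problem; for the others,
        # offer only alternatives that are not already known to be invalid.
        partial_solution, open_choices = [], []
        for d in trace:
            if still_valid(d.goal, d.chosen):
                partial_solution.append((d.goal, d.chosen))
            else:
                remaining = [a for a in alternatives_for(d.goal) if a not in d.rejected]
                open_choices.append((d.goal, remaining))
        return partial_solution, open_choices

Keeping the rejection justifications in the trace is what allows known-invalid completion alternatives to be pruned during refitting.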
Case-based reasoning has gained increasing importance in recent years for practical use in real application domains. This work first presents the general approach and the various subtasks of case-based reasoning. It then discusses the characteristic properties of an application domain and describes, using the concrete task of credit-worthiness assessment, the realization of a case-based approach in the financial world.
This paper addresses the role of abstraction in case-based reasoning. We develop a general framework for reusing cases at several levels of abstraction, which is particularly suited for describing and analyzing existing and designing new approaches of this kind. We show that in synthetic tasks (e.g. configuration, design, and planning), abstraction can be successfully used to improve the efficiency of similarity assessment, retrieval, and adaptation. Furthermore, a case-based planning system, called Paris, is described and analyzed in detail using this framework. An empirical study done with Paris demonstrates significant advantages concerning retrieval and adaptation efficiency as well as flexibility of adaptation. Finally, we show how other approaches from the literature can be classified according to the developed framework.
This paper presents a new algorithm, called KNNcost, for learning feature weights for CBR systems used for classification. Unlike previously known algorithms, KNNcost considers the profit of a correct decision and the cost of a wrong one. The need for this algorithm is motivated by two real-world applications in which the cost and profit of decisions play a major role. We introduce a representation of the accuracy, cost, and profit of decisions and define the decision cost of a classification system. To compare accuracy optimization with cost optimization, we tested KNNacc against KNNcost. The former optimizes classification accuracy with a conjugate gradient algorithm; the latter optimizes the decision cost of the CBR system, respecting the cost and profit of the classifications. We present experiments with these two algorithms in a real application to demonstrate the usefulness of our approach.
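As a hedged illustration of the two optimization criteria for a feature-weighted k-nearest-neighbour classifier, consider the sketch below; the helper names and the gain-matrix representation are assumptions, and the weight optimization itself (e.g. by conjugate gradients) is omitted.

    import numpy as np

    def loo_predictions(X, y, weights, k=3):
        # Leave-one-out predictions of a feature-weighted k-NN classifier.
        preds = []
        for i, x in enumerate(X):
            d = np.sqrt(((weights * (X - x)) ** 2).sum(axis=1))
            d[i] = np.inf                      # the query case is not its own neighbour
            nearest = np.argsort(d)[:k]
            labels, counts = np.unique(y[nearest], return_counts=True)
            preds.append(labels[np.argmax(counts)])
        return np.array(preds)

    def accuracy(preds, y):
        return np.mean(preds == y)

    def decision_cost(preds, y, gain):
        # gain[(true, predicted)]: profit of a correct decision, cost of a wrong one.
        return -sum(gain[(t, p)] for t, p in zip(y, preds))

An accuracy-driven learner would tune `weights` to maximize `accuracy`, while a cost-driven learner would minimize `decision_cost`; with asymmetric gains the two criteria can favour different weight vectors.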
When problems are solved through reasoning from cases, the primary kind of knowledge is contained in the specific cases stored in the case base. However, in many situations additional background knowledge is required to cope with the requirements of an application. We describe an approach to integrating such general knowledge into the reasoning process so that it complements the knowledge contained in the cases. This general knowledge is not itself sufficient to perform any kind of model-based problem solving, but it is required to interpret the available cases appropriately. Background knowledge is expressed by two different kinds of rules that both must be formalized by the knowledge engineer: completion rules describe how to infer additional features from known features of an old case or the current query case; adaptation rules describe how an old case can be adapted to fit the current query. This paper shows how these kinds of rules can be integrated into an object-oriented case representation.
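As a minimal sketch (with invented feature names, not those of the paper), completion and adaptation rules could attach to an object-style case representation as follows; the actual rule formalism of the system is abstracted away.

    class Case(dict):
        """A case is a plain feature dictionary in this sketch."""

    def complete_volume(case):
        # Completion rule: infer an additional feature from already known features.
        if {"length", "width", "height"} <= case.keys():
            case.setdefault("volume", case["length"] * case["width"] * case["height"])

    def adapt_price(old, query):
        # Adaptation rule: rescale the old solution to fit the current query.
        if "quantity" in old and "quantity" in query and old["quantity"]:
            old["price"] = old["price"] * query["quantity"] / old["quantity"]

    COMPLETION_RULES = [complete_volume]
    ADAPTATION_RULES = [adapt_price]

    def retrieve_and_adapt(case_base, query, similarity):
        query = Case(query)
        for rule in COMPLETION_RULES:          # complete the query (and, analogously, stored cases)
            rule(query)
        best = max(case_base, key=lambda c: similarity(query, c))
        adapted = Case(best)
        for rule in ADAPTATION_RULES:          # adapt the retrieved case to the query
            rule(adapted, query)
        return adapted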
We present an approach to systematically describing case-based reasoning systems by different kinds of criteria. One main requirement was the practical relevance of these criteria and their usability for real-life applications. We report on the results we achieved from a case study carried out in the INRECA Esprit project.
We present a novel approach to classification, based on a tight coupling of instance-based learning and a genetic algorithm. In contrast to the usual instance-based learning setting, we do not rely on (parts of) the given training set as the basis of a nearest-neighbor classifier, but instead try to employ artificially generated instances as concept prototypes. The extremely hard problem of finding an appropriate set of concept prototypes is tackled by a genetic search procedure, with the classification accuracy on the given training set as evaluation criterion for the genetic fitness measure. Experiments with artificial datasets show that, due to the ability to find concise and accurate concept descriptions that contain few but typical instances, this classification approach is remarkably robust against noise, untypical training instances, and irrelevant attributes. These favorable (theoretical) properties are corroborated on a number of hard real-world classification problems.
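A compact Python sketch of such a coupling, under assumed parameter choices: individuals are small sets of artificial prototypes with class labels, variation is by Gaussian mutation only (crossover is omitted for brevity), and the fitness of an individual is the 1-NN accuracy it achieves on the training set.

    import numpy as np

    def nn_accuracy(prototypes, labels, X, y):
        # Fitness: accuracy of the 1-NN classifier defined by the prototypes.
        d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
        return np.mean(labels[d.argmin(axis=1)] == y)

    def evolve_prototypes(X, y, n_proto=4, pop_size=30, generations=50, sigma=0.1, seed=0):
        rng = np.random.default_rng(seed)
        classes = np.unique(y)

        def random_individual():
            protos = X[rng.choice(len(X), n_proto, replace=False)]
            protos = protos + rng.normal(0, sigma, protos.shape)
            labels = rng.choice(classes, n_proto)
            return protos, labels

        population = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda ind: nn_accuracy(*ind, X, y), reverse=True)
            parents = population[: pop_size // 2]
            children = [(p + rng.normal(0, sigma, p.shape), l.copy()) for p, l in parents]
            population = parents + children
        return max(population, key=lambda ind: nn_accuracy(*ind, X, y))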
This article discusses a qualitative, topological, and robust world-modelling technique, with special regard to navigation tasks for mobile robots operating in unknown environments. As a central aspect, reliability regarding error tolerance and stability is emphasized. Benefits and problems involved in exploration as well as in navigation tasks are discussed. The proposed method places very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot
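As a generic illustration of the topological (rather than metric) flavour of such a world model, places can be kept as graph nodes and traversable connections as edges, with navigation reduced to graph search; this sketch is not the method of the article.

    from collections import deque

    def add_connection(graph, a, b):
        # Places are nodes; a traversed connection becomes an undirected edge.
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    def route(graph, start, goal):
        # Breadth-first search: a route with the fewest place-to-place transitions.
        frontier, came_from = deque([start]), {start: None}
        while frontier:
            current = frontier.popleft()
            if current == goal:
                path = []
                while current is not None:
                    path.append(current)
                    current = came_from[current]
                return path[::-1]
            for neighbor in graph.get(current, ()):
                if neighbor not in came_from:
                    came_from[neighbor] = current
                    frontier.append(neighbor)
        return None

Because only the adjacency of places is stored, such a model tolerates large metric errors of the sensors and the platform, which is the property emphasized above.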
In this paper we describe how explicit models of software or knowledge engineering processes can be used to guide and control the distributed development of complex systems. The paper focuses on techniques which automatically infer dependencies between decisions from a process model, and on methods which make it possible to interleave planning and execution steps. Managing dependencies between decisions is a basis for improving the traceability of development processes. Switching between planning and execution of subprocesses is an inherent need in the development of complex systems. The paper concludes with a description of the CoMo-Kit system, which implements the technologies mentioned above and which uses WWW technology to coordinate development processes. An on-line demonstration of the system can be found via the CoMo-Kit homepage:
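A small Python sketch of the dependency idea, with invented task and product names: each decision records the inputs it consumed, so a change to an input invalidates all transitively dependent decisions. This is only an illustration of dependency management, not the CoMo-Kit mechanism itself.

    def invalidated(decisions, changed_input):
        # decisions: {name: {"inputs": set_of_inputs, "produces": set_of_outputs}}
        dirty, frontier = set(), {changed_input}
        while frontier:
            item = frontier.pop()
            for name, d in decisions.items():
                if name not in dirty and item in d["inputs"]:
                    dirty.add(name)
                    frontier |= d["produces"]   # downstream products become suspect too
        return dirty

    decisions = {
        "choose_architecture": {"inputs": {"requirements"}, "produces": {"architecture"}},
        "design_module_A":     {"inputs": {"architecture"}, "produces": {"module_A"}},
        "write_tests_A":       {"inputs": {"module_A"},     "produces": {"tests_A"}},
    }
    print(invalidated(decisions, "requirements"))   # all three decisions become invalid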
Paris (Plan Abstraction and Refinement in an Integrated System) [4, 2] is a domain-independent case-based planning system which allows the flexible reuse of planning cases by abstraction and refinement. This approach is mainly inspired by the observation that the reuse of plans must not be restricted to a single description level. In domains with high variation in the problems, the reuse of past solutions must be achieved at various levels of abstraction.
EADOCS (Expert Assisted Design of Composite Structures) is the implementation of a multi-level approach to conceptual design. Constraint-, case-, and rule-based reasoning techniques are applied in different design phases to assemble and adapt designs at increasing levels of detail. This paper describes a strategic approach to decomposition, the formulation of target design problems, and incremental retrieval and adaptation. The design problems considered cannot be decomposed dynamically into tractable subproblems. Design cases are retrieved for requirements and preferences on both the functionality and the solution. Cases are adapted in three phases: adaptation, modification, and optimisation.
Development of a Software Monitor for Analyzing Automatically Generated Protocol Implementations
(1996)
In this thesis a case-based system is developed that manages information about existing case-based applications and tools. With this system, a developer of case-based systems can get an overview of the state of the art and, above all, obtain information about systems that are similar to the one he intends to develop.
Problem Specification for Process Planning of Rotationally Symmetric Turned Parts with AutoCAD
(1996)
To meet the demands of the industrial market, companies are forced to manufacture increasingly complex, customer-specific products as quickly as possible and in small lot sizes. The manufacture of a new product comprises a large number of work steps. To satisfy market requirements, a large share of these operations is already carried out with computer support today.
Learning is an important part of the human development process; it allows people to draw consequences from positive and negative experiences for their future behaviour, in particular for their decision making. This kind of learning, learning from experience, can only take place if these experiences can be explained. Building on this idea, methods for transferring explanation-based learning processes to computer systems have been investigated for several years.
A large area of Artificial Intelligence is concerned with planning. Essentially, there are two planning approaches: non-hierarchical and hierarchical methods. SNLP can be named as an example of a non-hierarchical approach. The following work is located in the second area, hierarchical planning: the planning assistant CAPlan, which is actually based on the non-hierarchical planning approach SNLP, is to be extended with the capabilities of hierarchical planning.
The development and maintenance of software systems is becoming ever more complex, since the software being developed is itself becoming more complex and extensive. Computer support for software development and maintenance therefore suggests itself as a way of relieving project leaders, project managers, and other project staff. It allows them to get an overview of the whole process and to optimize it. One way of providing such support is to model the software development process. In order to model a software development process, the necessary basic structures have to be identified and provided, which is the subject of this thesis.
The field of workflow management systems (WFMS, e.g. [Jab95ab]) has recently been investigated more closely in various areas of computer science. The aim of these efforts is to identify and satisfy the special demands that WFMS place on computer and program systems. In this work we investigate aspects of replanning (or remodeling) during the execution of a workflow. The work was carried out within the CoMo-Kit project, in which methods and tools are being developed that support the planning and management of complex work processes, especially in the design domain. CoMo-Kit has been under development since 1989 at the Chair for Expert Systems at the University of Kaiserslautern, headed by Prof. Dr. M. M. Richter.
Representations of activities dealing with the development or maintenance of software are called software process models. Process models allow for communication, reasoning, guidance, improvement, and automation. Two approaches for building, instantiating, and managing processes, namely CoMo-Kit and MVP-E, are combined to build a more powerful one. CoMo-Kit is based on AI/KE technology; it was developed for supporting complex design processes and is not specialized to software development processes. MVP-E is a process-sensitive software engineering environment for modeling and analyzing software development processes, and it guides software developers. Additionally, it provides services to establish and run measurement programmes in software organizations. Because both approaches were developed completely independently, major integration efforts are required to combine their respective advantages. This paper concentrates on the resulting language concepts and their operationalization, which are necessary for building automated process support.
In the past years, development and production processes in many companies have changed in a revolutionary way, leading to new demands on information and CAx technology. The R&D departments of the German automotive industry installed a working group to develop a common long-term CAD/CAM strategy. A preliminary result is the concept of an open CAx system architecture as a basis for realizing industrial requirements on CAD/CAM and for cooperation with system vendors. The project ANICA was started in cooperation with five international CAD/CAM suppliers in order to show the feasibility of this architecture. The access interfaces of different system kernels are analysed with the aim of developing a concept for a cooperating CAx system network. The concept will be put into practice with a software prototype based on CORBA and OLE. The communication elements within such an architecture have to go far beyond conventional CAD data. This will lead to an extension of "feature" concepts to include CAx functionality and dynamic information about the process chain of a product. The impact on modern concepts for user interfaces, on reverse engineering methods, and on product data models is discussed, to finally close the loop to industrial CAx application.
In this report we give an overview of the development of our new Waldmeister prover for equational theories. We elaborate a systematic stepwise design process, starting with the inference system for unfailing Knuth-Bendix completion and ending up with an implementation which avoids the main diseases today's provers suffer from: overindulgence in time and space. Our design process is based on a logical three-level system model consisting of basic operations for inference step execution, an aggregated inference machine, and an overall control strategy. Careful analysis of the inference system for unfailing completion has revealed the crucial points responsible for time and space consumption. For the low level of our model, we introduce specialized data structures and algorithms speeding up the running system and cutting it down in size - both by one order of magnitude compared with standard techniques. Flexible control of the mid-level aggregation inside the resulting prover is made possible by a corresponding set of parameters. Experimental analysis shows that this flexibility is a point of high importance. We go on with some implementation guidelines we have found valuable in the field of deduction. The resulting new prover shows that our design approach is promising. We compare our system's throughput with that of an established system and finally demonstrate how two very hard problems could be solved by Waldmeister.
A combination of a state-based formalism and a temporal logic is proposed to obtain an expressive language for various descriptions of reactive systems. This makes it possible to use both a model-oriented and a property-oriented specification style in one description. The descriptions considered here are those of the environment, the specification, and the design of a reactive system. It is possible, for example, to express the requirements of a reactive system by states and transitions between them, together with further temporal formulas restricting the behaviours of the statecharts. It is shown how this combined formalism can be used: the specification of a small example is given and a designed controller is proven correct with respect to this specification. The combination of the languages is based on giving a temporal semantics of a state-based formalism (statecharts) using a temporal logic (TLA).
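For illustration only (the state names are invented and not taken from the paper), a property-oriented requirement added to a state-based description could be a temporal formula such as
\[
\Box\bigl(\mathit{Overflow} \;\Rightarrow\; \Diamond\,\mathit{ValveClosed}\bigr),
\]
which restricts the admissible behaviours of the statechart to those in which every entry into the Overflow state is eventually followed by the ValveClosed state.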
It is well known that for the integral group ring of a polycyclic group several decision problems are decidable. In this paper a technique for solving the membership problem for right ideals, originating from Baumslag, Cannonito and Miller and studied by Sims, is outlined. We want to analyze how these decision methods are related to Gröbner bases. Therefore, we define effective reduction for group rings over Abelian groups, nilpotent groups and more general polycyclic groups. Using these reductions we present generalizations of Buchberger's Gröbner basis method by giving an appropriate definition of "Gröbner bases" in the respective setting and by characterizing them using concepts of saturation and s-polynomials.
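For orientation, the classical commutative notions that are generalized here read as follows (with \(\operatorname{lt}\) the leading term with respect to a fixed term ordering):
\[
\operatorname{spol}(f,g) \;=\; \frac{\operatorname{lcm}\bigl(\operatorname{lt}(f),\operatorname{lt}(g)\bigr)}{\operatorname{lt}(f)}\,f \;-\; \frac{\operatorname{lcm}\bigl(\operatorname{lt}(f),\operatorname{lt}(g)\bigr)}{\operatorname{lt}(g)}\,g,
\]
and a finite set \(G\) is a Gröbner basis if and only if every such s-polynomial of elements of \(G\) reduces to zero modulo \(G\). The group-ring setting requires the modified notions of reduction and saturation mentioned above.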
Structure and Construction of Instanton Bundles on P3
Significance of zero modes in path-integral quantization of solitonic theories with BRST invariance
(1996)
The significance of zero modes in the path-integral quantization of some solitonic models is investigated. In particular, a Skyrme-like theory with topological vortices in (1 + 2) dimensions is studied, and with a BRST-invariant gauge fixing a well-defined transition amplitude is obtained in the one-loop approximation. We also present an alternative method which does not necessitate invoking the time-dependence in the functional integral, but is equivalent to the original one in dealing with the quantization in the background of the static classical solution of the non-linear field equations. The considerations given here are particularly useful in - but also limited to - the one-loop approximation.
The constraint structure of the induced 2D-gravity with the Weyl and area-preserving diffeomorphism invariances is analysed in the ADM formulation. It is found that when the area-preserving diffeomorphism constraints are kept, the usual conformal gauge does not exist, whereas there is the possibility to choose the so-called "quasi-light-cone" gauge, in which besides the area-preserving diffeomorphism invariance, the reduced Lagrangian also possesses the SL(2,R) residual symmetry. This observation indicates that the claimed correspondence between the SL(2,R) residual symmetry and the area-preserving diffeomorphism invariance in both regularisation approaches does not hold. The string-like approach is then applied to quantise this model, but a fictitious non-zero central charge in the Virasoro algebra appears. When a set of gauge-independent SL(2,R) current-like fields is introduced instead of the string-like variables, a consistent quantum theory is obtained, which means that the area-preserving diffeomorphism invariance can be maintained at the quantum level.
The Lagrangian field-antifield formalism of Batalin and Vilkovisky (BV) is used to investigate the application of the collective coordinate method to soliton quantisation. In field theories with soliton solutions, the Gaussian fluctuation operator has zero modes due to the breakdown of global symmetries of the Lagrangian in the soliton solutions. It is shown how Noether identities and local symmetries of the Lagrangian arise when collective coordinates are introduced in order to avoid divergences related to these zero modes. This transformation to collective and fluctuation degrees of freedom is interpreted as a canonical transformation in the symplectic field-antifield space which induces a time-local gauge symmetry. Separating the corresponding Lagrangian path integral of the BV scheme in lowest order into harmonic quantum fluctuations and a free motion of the collective coordinate with the classical mass of the soliton, we show how the BV approach clarifies the relation between zero modes, collective coordinates, gauge invariance and the center-of-mass motion of classical solutions in quantum fields. Finally, we apply the procedure to the reduced nonlinear O(3) σ-model.
A new look at the RST model
(1996)
The RST model is augmented by the addition of a scalar field and a boundary term so that it is well-posed and local. Expressing the RST action in terms of the ADM formulation, the constraint structure can be analysed completely. It is shown that from the viewpoint of local field theories, there exists a hidden dynamical field 1 in the RST model. Thanks to the presence of this hidden dynamical field, we can reconstruct the closed algebra of the constraints which guarantee the general invariance of the RST action. The resulting stress tensors \(T_{\pm\pm}\) are recovered to be true tensor quantities. In particular, the part of the stress tensors for the hidden dynamical field 1 gives the precise expression for \(t_\pm\). At the quantum level, the cancellation condition for the total central charge is reexamined. Finally, with the help of the hidden dynamical field 1, the fact that the semi-classical static solution of the RST model has two independent parameters (P, M), whereas for the classical CGHS model there is only one, can be explained.
Quantum tunneling between degenerate ground states through the central barrier of a potential is extended to excited states with the instanton method. This extension is achieved with the help of an LSZ reduction technique as in field theory and may be of importance in the study of macroscopic quantum phenomena in magnetic systems.
Starting from the coherent state representation of the evolution operator with the help of the path integral, we derive a formula for the low-lying levels \(E = \epsilon_0 - 2\Delta\epsilon\,\cos(s+\xi)\pi\) of a quantum spin system. The quenching of macroscopic quantum coherence is understood as the vanishing of \(\cos(s+\xi)\pi\), in disagreement with the suppression of tunneling (i.e. \(\Delta\epsilon = 0\)) as claimed in the literature. A new configuration called the macroscopic Fermi particle is suggested by the character of its wave function. The tunneling rate (\(2\Delta\epsilon\pi\)) does not vanish, neither for integer spin s nor for a half-integer value of s, and is calculated explicitly (for the position-dependent mass) up to the one-loop approximation.
The static deformation of the surface of the earth caused by surface pressure, such as the water load of an ocean or an artificial lake, is discussed. First, a brief mention is made of the solution of the Boussinesq problem for an infinite halfspace, with the elastic medium assumed to be homogeneous and isotropic. Then the elastic response for realistic earth models is determined by spline interpolation using Navier splines. Major emphasis is on the determination of the elastic field caused by water loads from surface tractions on the (real) earth's surface. Finally, the elastic deflection caused by an artificial lake, assuming a homogeneous isotropic crust, is compared for both evaluation methods.
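For reference, the classical Boussinesq solution gives the vertical surface displacement of a homogeneous isotropic halfspace under a normal point load \(P\) as
\[
u_z(r) \;=\; \frac{P\,(1-\nu)}{2\pi\,\mu\,r},
\]
with \(\mu\) the shear modulus, \(\nu\) Poisson's ratio and \(r\) the horizontal distance from the load; a distributed water load is then handled by superposition of such point-load responses.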
The purpose of this paper is to present the state of the art in singular optimal control. If the Hamiltonian in an interval \([t_1,t_2]\) is independent of the control, we call the control in this interval singular. Singular optimal controls appear in many applications, which has motivated research since the 1950s. Often optimal controls consist of nonsingular and singular parts, where the junctions between these parts are usually very difficult to find. One section of this work presents the current knowledge about the location of the junctions and the behaviour of the control at the junctions. The definition and the properties of the orders (problem order and arc order), which are important in this context, are given as well. Another chapter considers multidimensional controls and how they can be treated. An alternative definition of the orders in the multidimensional case is proposed, and a counterexample is given which confirms a remark made in the 1960s. A comprehensive list of optimality conditions, which can be found in several publications, is added. A strategy for solving optimal control problems numerically is given, and the existing algorithms are compared with each other. Finally, conclusions and an outlook on future research are given.
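To recall the standard definitions alluded to above: for a Hamiltonian that is affine in a scalar control, \(H = H_0 + H_1 u\), an arc on which the switching function \(\partial H/\partial u = H_1\) vanishes identically is singular; the problem order \(q\) is the smallest integer such that \(u\) appears explicitly in \(\tfrac{d^{2q}}{dt^{2q}}\,\tfrac{\partial H}{\partial u}\), and the generalized Legendre-Clebsch condition
\[
(-1)^q\,\frac{\partial}{\partial u}\left[\frac{d^{2q}}{dt^{2q}}\,\frac{\partial H}{\partial u}\right] \;\ge\; 0
\]
is a well-known necessary condition along a singular arc of a minimization problem.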
A continuous version of spherical multiresolution is described, starting from the continuous wavelet transform on the sphere. Scale discretization enables us to construct spherical counterparts to Daubechies wavelets and wavelet packets (known from Euclidean theory). An essential tool is the theory of singular integrals on the sphere. It is shown that singular integral operators forming a semigroup of contraction operators of class \((C_0)\) (like the Abel-Poisson or Gauß-Weierstraß operators) lead in a canonical way to (pyramidal) algorithms.
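As an example of the singular integrals involved, the Abel-Poisson kernel on the unit sphere,
\[
Q_h(\xi\cdot\eta) \;=\; \frac{1}{4\pi}\,\frac{1-h^2}{\bigl(1+h^2-2h\,\xi\cdot\eta\bigr)^{3/2}}, \qquad 0<h<1,
\]
satisfies \(Q_{h_1} \ast Q_{h_2} = Q_{h_1 h_2}\), so that with the scale \(t=-\ln h\) the associated convolution operators form a contraction semigroup of class \((C_0)\).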
The paper discusses the approximation of scattered data on the sphere which is one of the major tasks in geomathematics. Starting from the discretization of singular integrals on the sphere the authors devise a simple approximation method that employs locally supported spherical polynomials and does not require equidistributed grids. It is the basis for a hierarchical approximation algorithm using differently scaled basis functions, adaptivity and error control. The method is applied to two examples one of which is a digital terrain model of Australia.
This paper considers a transmission boundary-value problem for the time-harmonic Maxwell equations neglecting displacement currents, which is frequently used for the numerical computation of eddy currents. Across material boundaries the tangential components of the magnetic field H and the normal component of the magnetization \(\mu H\) are assumed to be continuous. This problem admits a hyperplane of solutions if the domains under consideration are multiply connected. Using integral equation methods and singular perturbation theory, it is shown that this hyperplane contains a unique point which is the limit of the classical electromagnetic transmission boundary-value problem for vanishing displacement currents. From the convergence proof, a simple constructive criterion for selecting this solution is immediately derived.
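Written out, the transmission conditions across an interface with unit normal \(n\) separating media 1 and 2 are
\[
n \times (H_1 - H_2) \;=\; 0, \qquad n \cdot (\mu_1 H_1 - \mu_2 H_2) \;=\; 0,
\]
i.e. the tangential components of \(H\) and the normal component of \(\mu H\) are continuous.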