### Refine

#### Year of publication

- 1999 (397)

#### Document Type

- Preprint (397)

#### Keywords

- Case-Based Reasoning (10)
- Fallbasiertes Schliessen (5)
- Location Theory (5)
- case-based problem solving (5)
- Abstraction (4)
- Fallbasiertes Schließen (4)
- Knowledge Acquisition (4)
- Internet (3)
- Knowledge acquisition (3)
- Maschinelles Lernen (3)


Abstract: Random matrix theory (RMT) is a powerful statistical tool to model spectral fluctuations. In addition, RMT provides efficient means to separate different scales in spectra. Recently RMT has found application in quantum chromodynamics (QCD). In mesoscopic physics, the Thouless energy sets the universal scale for which RMT applies. We try to identify the equivalent of a Thouless energy in complete spectra of the QCD Dirac operator with staggered fermions and SU(2) lattice gauge fields. Comparing lattice data with RMT predictions we find deviations which allow us to give an estimate for this scale.

Beyond the Thouless energy
(1999)

Abstract: The distribution and the correlations of the small eigenvalues of the Dirac operator are described by random matrix theory (RMT) up to the Thouless energy E_c ∝ 1/sqrt(V), where V is the physical volume. For somewhat larger energies, the same quantities can be described by chiral perturbation theory (chPT). For most quantities there is an intermediate energy regime, roughly 1/V < E < 1/sqrt(V), where the results of RMT and chPT agree with each other. We test these predictions by constructing the connected and disconnected scalar susceptibilities from Dirac spectra obtained in quenched SU(2) and SU(3) simulations with staggered fermions for a variety of lattice sizes and coupling constants. In deriving the predictions of chPT, it is important to take into account only those symmetries which are exactly realized on the lattice.

Abstract: Recently, the chiral logarithms predicted by quenched chiral perturbation theory have been extracted from lattice calculations of hadron masses. We argue that the deviations of lattice results from random matrix theory starting around the so-called Thouless energy can be understood in terms of chiral perturbation theory as well. Comparison of lattice data with chiral perturbation theory formulae allows us to compute the pion decay constant. We present results from a calculation for quenched SU(2) with Kogut-Susskind fermions at β = 2.0 and 2.2.

Abstract: Recently, the contributions of chiral logarithms predicted by quenched chiral perturbation theory have been extracted from lattice calculations of hadron masses. We argue that a detailed comparison of random matrix theory and lattice calculations allows for a precise determination of such corrections. We estimate the relative size of the m log(m), m, and m^2 corrections to the chiral condensate for quenched SU(2).

Abstract: We describe a general technique that allows for an ideal transfer of quantum correlations between light fields and metastable states of matter. The technique is based on trapping quantum states of photons in coherently driven atomic media, in which the group velocity is adiabatically reduced to zero. We discuss possible applications such as quantum state memories, generation of squeezed atomic states, preparation of entangled atomic ensembles and quantum information processing.

Abstract: We show that it is possible to "store" quantum states of single-photon fields by mapping them onto collective meta-stable states of an optically dense, coherently driven medium inside an optical resonator. An adiabatic technique is suggested which allows one to transfer non-classical correlations from traveling-wave single-photon wave-packets into atomic states and vice versa with nearly 100% efficiency. In contrast to previous approaches involving single atoms, the present technique does not require the strong coupling regime corresponding to high-Q micro-cavities. Instead, intracavity Electromagnetically Induced Transparency is used to achieve a strong coupling between the cavity mode and the atoms.

Mirrorless oscillation based on resonantly enhanced 4-wave mixing: All-order analytic solutions
(1999)

Abstract: The phase transition to mirrorless oscillation in resonantly enhanced four-wave mixing in double-Λ systems is studied analytically for the ideal case of infinite lifetimes of ground-state coherences. The stationary susceptibilities are obtained in all orders of the generated fields, and analytic solutions of the coupled nonlinear differential equations for the field amplitudes are derived and discussed.

Abstract: We utilize the generation of large atomic coherence to enhance the resonant nonlinear magneto-optic effect by several orders of magnitude, thereby eliminating power broadening and improving the fundamental signal-to-noise ratio. A proof-of-principle experiment is carried out in a dense vapor of Rb atoms. Detailed numerical calculations are in good agreement with the experimental results. Applications such as optical magnetometry or the search for violations of parity and time reversal symmetry are feasible.

Abstract: Spontaneous emission and Lamb shift of atoms in absorbing dielectrics are discussed. A Green's function approach is used based on the multipolar interaction Hamiltonian of a collection of atomic dipoles with the quantised radiation field. The rate of decay and level shifts are determined by the retarded Green's function of the interacting electric displacement field, which is calculated from a Dyson equation describing multiple scattering. The positions of the atomic dipoles forming the dielectrics are assumed to be uncorrelated and a continuum approximation is used. The associated unphysical interactions between different atoms at the same location are eliminated by removing the point-interaction term from the free-space Green's function (local field correction). For the case of an atom in a purely dispersive medium the spontaneous emission rate is altered by the well-known Lorentz local-field factor. In the presence of absorption a result different from previously suggested expressions is found and nearest-neighbour interactions are shown to be important.

Abstract: We aim to establish a link between path-integral formulations of quantum and classical field theories via diagram expansions. This link should result in an independent constructive characterisation of the measure in Feynman path integrals in terms of a stochastic differential equation (SDE) and also in the possibility of applying methods of quantum field theory to classical stochastic problems. As a first step we derive in the present paper a formal solution to an arbitrary c-number SDE in a form which coincides with that of Wick's theorem for interacting bosonic quantum fields. We show that the choice of stochastic calculus in the SDE may be regarded as a result of regularisation, which in turn removes ultraviolet divergences from the corresponding diagram series.

We show that the solution to an arbitrary c-number stochastic differential equation (SDE) can be represented as a diagram series. Both the diagram rules and the properties of the graphical elements reflect causality properties of the SDE and this series is therefore called a causal diagram series. We also discuss the converse problem, i.e. how to construct an SDE of which a formal solution is a given causal diagram series. This then allows for a nonperturbative summation of the diagram series by solving this SDE, numerically or analytically.
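The diagram-series machinery above is beyond a short sketch, but the object it expands, a c-number SDE, is easy to exhibit numerically. A minimal illustration assuming nothing from the paper beyond a generic Itô SDE: Euler-Maruyama integration of geometric Brownian motion, for which the exact mean E[X_T] = X_0 e^{μT} is known:

```python
import numpy as np

def euler_maruyama_gbm(x0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Simulate dX = mu*X dt + sigma*X dW in the Ito interpretation."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x += mu * x * dt + sigma * x * dW   # explicit Euler-Maruyama step
    return x

paths = euler_maruyama_gbm(1.0, 0.05, 0.2, 1.0, 1000, 4000)
# Sample mean should approximate E[X_1] = exp(0.05) for the Ito convention;
# a Stratonovich reading of the same equation would shift the drift by sigma^2/2.
```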

Abstract: We propose a simple method for measuring the populations and the relative phase in a coherent superposition of two atomic states. The method is based on coupling the two states to a third common (excited) state by means of two laser pulses, and measuring the total fluorescence from the third state for several choices of the excitation pulses.

Abstract: We present experimental and theoretical results of a detailed study of laser-induced continuum structures (LICS) in the photoionization continuum of helium out of the metastable state 2s^1 S_0. The continuum dressing with a 1064 nm laser couples the same region of the continuum to the 4s^1 S_0 state. The experimental data, presented for a range of intensities, show pronounced ionization suppression (by as much as 70% with respect to the far-from-resonance value) as well as enhancement, in a Beutler-Fano resonance profile. This ionization suppression is a clear indication of population trapping mediated by coupling to a continuum. We present experimental results demonstrating the effect of pulse delay upon the LICS, and for the behavior of LICS for both weak and strong probe pulses. Simulations based upon numerical solution of the Schrödinger equation model the experimental results. The atomic parameters (Rabi frequencies and Stark shifts) are calculated using a simple model-potential method for the computation of the needed wavefunctions. The simulations of the LICS profiles are in excellent agreement with experiment. We also present an analytic formulation of pulsed LICS. We show that in the case of a probe pulse shorter than the dressing one the LICS profile is the convolution of the power spectrum of the probe pulse with the usual Fano profile of stationary LICS. We discuss some consequences of deviation from steady-state theory.

We present results from a study of the coherence properties of a system involving three discrete states coupled to each other by two-photon processes via a common continuum. This tripod linkage is an extension of the standard laser-induced continuum structure (LICS) which involves two discrete states and two lasers. We show that in the tripod scheme, there exist two population trapping conditions; in some cases these conditions are easier to satisfy than the single trapping condition in two-state LICS. Depending on the pulse timing, various effects can be observed. We derive some basic properties of the tripod scheme, such as the solution for coincident pulses, the behaviour of the system in the adiabatic limit for delayed pulses, the conditions for no ionization and for maximal ionization, and the optimal conditions for population transfer between the discrete states via the continuum. In the case when one of the discrete states is strongly coupled to the continuum, the population dynamics reduces to a standard two-state LICS problem (involving the other two states) with modified parameters; this provides the opportunity to customize the parameters of a given two-state LICS system.

Abstract: In this paper we present a renormalizability proof for spontaneously broken SU(2) gauge theory. It is based on Flow Equations, i.e. on the Wilson renormalization group adapted to perturbation theory. The power counting part of the proof, which is conceptually and technically simple, follows the same lines as that for any other renormalizable theory. The main difficulty stems from the fact that the regularization violates gauge invariance. We prove that there exists a class of renormalization conditions such that the renormalized Green functions satisfy the Slavnov-Taylor identities of SU(2) Yang-Mills theory on which the gauge invariance of the renormalized theory is based.

Spektralsequenzen
(1999)

Magnetic anisotropies of MBE-grown fcc Co(110) films on Cu(110) single-crystal substrates have been determined using Brillouin light scattering (BLS) and have been correlated with the structural properties determined by low-energy electron diffraction (LEED) and scanning tunneling microscopy (STM). Three regimes of film growth and associated anisotropy behavior are identified: coherent growth in the Co film thickness regime of up to 13 Å, in-plane anisotropic strain relaxation between 13 Å and about 50 Å, and in-plane isotropic strain relaxation above 50 Å. The structural origin of the transition between anisotropic and isotropic strain relaxation was studied using STM. In the regime of anisotropic strain relaxation, long Co stripes with a preferential [110] orientation are observed, which in the isotropic strain relaxation regime are interrupted in the perpendicular in-plane direction to form isotropic islands. In the Co film thickness regime below 50 Å an unexpected suppression of the magnetocrystalline anisotropy contribution is observed. A model calculation based on a crystal-field formalism and discussed within the context of band theory, which explicitly takes tetragonal misfit strains into account, reproduces the experimentally observed anomalies despite the fact that the thick Co films are quite rough.

Abstract: We report on measurements of the two-dimensional intensity distribution of linear and non-linear spin wave excitations in a LuBiFeO film. The spin wave intensity was detected with a high-resolution Brillouin light scattering spectroscopy setup. The observed snake-like structure of the spin wave intensity distribution is understood as a mode beating between modes with different lateral spin wave intensity distributions. The theoretical treatment of the linear regime is performed analytically, whereas the propagation of non-linear spin waves is simulated by a numerical solution of a non-linear Schrödinger equation with suitable boundary conditions.

The script conveys the basic knowledge required for subject indexing according to the Rules for the Subject Catalogue (RSWK). It further describes how the subject heading data are structured in the union database of the Südwestdeutscher Bibliotheksverbund (SWB), which principles must be observed in cooperative subject indexing within the SWB, and how the data have to be entered. In addition, the search options implemented in the SWB database are presented, together with the corresponding search commands. Regarding the organisation of the workflow in the participating libraries, the procedure at the UB Kaiserslautern is described as an example.

Abstract: The periodic bounce configurations responsible for quantum tunneling are obtained explicitly and are extended to the finite energy case for minisuperspace models of the Universe. As a common feature of the tunneling models at finite energy considered here, we observe that the period of the bounce increases monotonically with energy. The periodic bounces do not have bifurcations and make no contribution to the nucleation rate except the one with zero energy. The sharp first-order phase transition from quantum tunneling to thermal activation is verified with the general criteria.

We consider a (2 + 1)-dimensional mechanical system with the Lagrangian linear in the torsion of a light-like curve. We give Hamiltonian formulation of this system and show that its mass and spin spectra are defined by one-dimensional nonrelativistic mechanics with a cubic potential. Consequently, this system possesses the properties typical of resonance-like particles.

Starting from the Hamiltonian operator of the noncompensated two-sublattice model of a small antiferromagnetic particle, we derive the effective Lagrangian of a biaxial antiferromagnetic particle in an external magnetic field with the help of spin-coherent-state path integrals. Two unequal level shifts induced by tunneling through two types of barriers are obtained using the instanton method. The energy spectrum is found from Bloch theory, regarding the periodic potential as a superlattice. The external magnetic field indeed removes Kramers' degeneracy; however, a new quenching of the energy splitting, depending on the applied magnetic field, is observed for both integer and half-integer spins due to the quantum interference between transitions through the two types of barriers.

The way people live together is developing ever further towards an information and media society. Not least because of worldwide networking, almost any conceivable information can be delivered to our screens at home within minutes. Everyone thus finds themselves in a certain protective anonymity, but also in a transparency that is as desired as it is frightening. Everyone classifies, in a certain sense, the information they disclose, for instance into public, personal, and confidential messages. Precisely here, techniques and methods must be available to protect information intended only for specific recipients from unauthorized access within this anonymous transparency, and to make it accessible only to those who are entitled to it. This desire is held not only by society in general; development in this field is demanded and promoted in particular by governmental and military institutions. Frequently used tools are the methods of cryptology, but as long as there are secret messages, there will be attackers trying to gain unauthorized access to this information. Since the constantly growing power of computing systems favours the "cracking" of encryption methods, ever more secure cipher procedures have to be adopted. This circumstance makes the topic of cryptology highly topical for the moment and, in the long run, a timeless research area of mathematics and computer science.

We consider three applications of impulse control in financial mathematics, a cash management problem, optimal control of an exchange rate, and portfolio optimisation under transaction costs. We sketch the different ways of solving these problems with the help of quasi-variational inequalities. Further, some viscosity solution results are presented.

Continuous and discrete superselection rules induced by the interaction with the environment are investigated for a class of exactly soluble Hamiltonian models. The environment is given by a boson field. Stable superselection sectors can only emerge if the low frequencies dominate and the ground state of the boson field disappears due to infrared divergence. The models allow uniform estimates of all transition matrix elements between different superselection sectors.

In this paper we deal with the determination of the whole set of Pareto-solutions of location problems with respect to Q general criteria. These criteria include median, center, or cent-dian objective functions as particular instances. The paper characterizes the set of Pareto-solutions of these multicriteria problems. An efficient algorithm for the planar case is developed and its complexity is established. Extensions to higher dimensions as well as to the non-convex case are also considered. The proposed approach is more general than the previously published approaches to multicriteria location problems and includes almost all of them as particular instances.
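The paper's planar algorithm is more refined than brute force, but the notion of a Pareto-solution it characterizes can be illustrated with a naive non-dominated filter over finitely many candidate locations. A minimal sketch; the candidate scores below are invented, not taken from the paper:

```python
def pareto_filter(points):
    """Return the non-dominated points (all criteria to be minimized).

    A point p dominates q if p <= q in every criterion and p < q in at least one.
    """
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    return [p for p in points if not any(dominates(q, p) for q in points)]

# Candidate locations scored under Q = 2 criteria (e.g. a median and a center objective):
candidates = [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
pareto = pareto_filter(candidates)   # (3,3) and (5,5) are dominated by (2,2)
```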

In a discrete-time financial market setting, the paper relates various concepts introduced for dynamic portfolios (both in discrete and in continuous time). These concepts are: value preserving portfolios, numeraire portfolios, interest oriented portfolios, and growth optimal portfolios. It will turn out that these concepts are all associated with a unique martingale measure which agrees with the minimal martingale measure only for complete markets.
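The growth-optimal (log-optimal) portfolio mentioned above has a simple closed form in the textbook binary-outcome case: the Kelly fraction. The toy sketch below is not the paper's general discrete-time market setting, just an illustration of what "growth optimal" means for one risky bet:

```python
import numpy as np

def log_growth(f, p, b):
    """Expected log growth when fraction f of wealth wins b*f with prob p, is lost otherwise."""
    return p * np.log(1 + b * f) + (1 - p) * np.log(1 - f)

def growth_optimal_fraction(p, b):
    """Closed-form Kelly fraction f* = p - (1 - p)/b for a binary bet with full loss."""
    return p - (1 - p) / b

p, b = 0.6, 1.0
f_star = growth_optimal_fraction(p, b)            # 0.2
grid = np.linspace(0.0, 0.99, 9901)               # numerical check of the closed form
f_grid = grid[np.argmax(log_growth(grid, p, b))]
```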

Facility Location Problems are concerned with the optimal location of one or several new facilities, with respect to a set of existing ones. The objectives involve the distance between new and existing facilities, usually a weighted sum or weighted maximum. Since the various stakeholders (decision makers) will have different opinions of the importance of the existing facilities, a multicriteria problem with several sets of weights, and thus several objectives, arises. In our approach, we assume the decision makers to make only fuzzy comparisons of the different existing facilities. A geometric mean method is used to obtain the fuzzy weights for each facility and each decision maker. The resulting multicriteria facility location problem is solved using fuzzy techniques again. We prove that the final compromise solution is weakly Pareto optimal and Pareto optimal, if it is unique, or under certain assumptions on the estimates of the Nadir point. A numerical example is considered to illustrate the methodology.
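The geometric mean method used above to derive weights from pairwise comparisons can be sketched in its crisp (non-fuzzy) form; the paper works with fuzzy comparisons, which this minimal version omits, and the comparison values are invented:

```python
import math

def geometric_mean_weights(comparison):
    """Derive normalized weights from a reciprocal pairwise-comparison matrix.

    comparison[i][j] estimates how much more important facility i is than facility j.
    Each row's geometric mean is computed, then the means are normalized to sum to 1.
    """
    n = len(comparison)
    gm = [math.prod(row) ** (1.0 / n) for row in comparison]
    total = sum(gm)
    return [g / total for g in gm]

# One decision maker's comparisons of three existing facilities (consistent matrix):
C = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
weights = geometric_mean_weights(C)   # ratios 4 : 2 : 1
```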

Value Preserving Strategies and a General Framework for Local Approaches to Optimal Portfolios
(1999)

We present some new general results on the existence and form of value preserving portfolio strategies in a general semimartingale setting. The concept of value preservation will be derived via a mean-variance argument. It will also be embedded into a framework for local approaches to the problem of portfolio optimisation.

Discretizations for the Incompressible Navier-Stokes Equations based on the Lattice Boltzmann Method
(1999)

A discrete velocity model with spatial and velocity discretization based on a lattice Boltzmann method is considered in the low Mach number limit. A uniform numerical scheme for this model is investigated. In the limit, the scheme reduces to a finite difference scheme for the incompressible Navier-Stokes equation which is a projection method with a second order spatial discretization on a regular grid. The discretization is analyzed and the method is compared to Chorin's original spatial discretization. Numerical results supporting the analytical statements are presented.
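The collide-and-stream structure of a lattice Boltzmann scheme can be shown in a setting much simpler than the paper's incompressible Navier-Stokes limit: a one-dimensional D1Q3 BGK model for pure diffusion. The lattice weights and the diffusivity formula are standard textbook choices, not taken from the paper:

```python
import numpy as np

# D1Q3 lattice: velocities {0, +1, -1}, weights {2/3, 1/6, 1/6}
W = np.array([2 / 3, 1 / 6, 1 / 6])
N, tau, steps = 100, 0.8, 200        # tau > 1/2; diffusivity D = (tau - 1/2)/3

rho = np.zeros(N)
rho[N // 2] = 1.0                    # initial point mass
f = W[:, None] * rho[None, :]        # start at local equilibrium

for _ in range(steps):
    rho = f.sum(axis=0)
    feq = W[:, None] * rho[None, :]
    f += (feq - f) / tau             # BGK collision (conserves rho)
    f[1] = np.roll(f[1], 1)          # stream right-movers
    f[2] = np.roll(f[2], -1)         # stream left-movers (periodic boundary)

rho = f.sum(axis=0)                  # diffused, symmetric density profile
```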

In this paper we derive fluid dynamic equations by performing an asymptotic analysis of the generalized Boltzmann equation for polyatomic gases. In particular, we consider the steady-state, one-dimensional Boltzmann equation with one additional internal energy and different relaxation times. Moreover, we present a new approach to defining coupling procedures for the Boltzmann equation and the Navier-Stokes equations based on the 14-moment expansion of Levermore. These coupled models are validated by numerical simulations.

We consider a scale discrete wavelet approach on the sphere based on spherical radial basis functions. If the generators of the wavelets have a compact support, the scale and detail spaces are finite-dimensional, so that the detail information of a function is determined by only finitely many wavelet coefficients for each scale. We describe a pyramid scheme for the recursive determination of the wavelet coefficients from level to level, starting from an initial approximation of a given function. Basic tools are integration formulas which are exact for functions up to a given polynomial degree and spherical convolutions.
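The level-to-level recursion of a pyramid scheme has a plain one-dimensional analogue in the Haar decomposition; the sketch below only illustrates the recursive approximation/detail split and perfect reconstruction, not the paper's spherical radial basis functions or exact integration formulas:

```python
def haar_pyramid(signal):
    """Split a length-2^k signal into a coarsest mean plus details, level by level."""
    details = []
    approx = list(signal)
    while len(approx) > 1:
        nxt, det = [], []
        for a, b in zip(approx[0::2], approx[1::2]):
            nxt.append((a + b) / 2)  # next coarser approximation coefficient
            det.append((a - b) / 2)  # detail (wavelet) coefficient at this level
        details.append(det)          # finest level first
        approx = nxt
    return approx[0], details

def haar_reconstruct(coarse, details):
    """Invert haar_pyramid: rebuild the signal from coarse to fine."""
    approx = [coarse]
    for det in reversed(details):
        approx = [v for a, d in zip(approx, det) for v in (a + d, a - d)]
    return approx

c, d = haar_pyramid([4.0, 2.0, 5.0, 7.0])   # c == 4.5
```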

Moment inequalities for the Boltzmann equation and applications to spatially homogeneous problems
(1999)

Some inequalities for the Boltzmann collision integral are proved. These inequalities can be considered as a generalization of the well-known Povzner inequality. The inequalities are used to obtain estimates of the moments of the solution to the spatially homogeneous Boltzmann equation for a wide class of intermolecular forces. We obtain simple necessary and sufficient conditions (on the potential) for the uniform boundedness of all moments. For potentials with compact support, the following statement is proved. […]

The paper shows that characterizing the causal relationship between significant events is an important but non-trivial aspect of understanding the behavior of distributed programs. An introduction to the notion of causality and its relation to logical time is given; some fundamental results concerning the characterization of causality are presented. Recent work on the detection of causal relationships in distributed computations is surveyed. The relative merits and limitations of the different approaches are discussed, and their general feasibility is analyzed.
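One standard way to characterize the causal (happened-before) relation discussed above is vector time: each process keeps a vector timestamp, and causal precedence becomes a component-wise comparison. A minimal sketch with three hypothetical processes (illustrative only, not a construction from the paper):

```python
def vc_join(a, b):
    """Component-wise maximum: merge the knowledge of two vector timestamps."""
    return [max(x, y) for x, y in zip(a, b)]

def happened_before(a, b):
    """Event a causally precedes event b iff a <= b component-wise and a != b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# Three processes P0, P1, P2. Each event increments its own component;
# a receive event first joins in the sender's timestamp.
e1 = [1, 0, 0]              # first event on P0
e2 = [2, 0, 0]              # P0 sends a message
r = vc_join([0, 0, 0], e2)
r[1] += 1                   # P1 receives: r == [2, 1, 0]
e3 = [0, 0, 1]              # unrelated event on P2, concurrent with r
```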

The Hamiltonian of the \(N\)-particle Calogero model can be expressed in terms of generators of a Lie algebra for a definite class of representations. Maintaining this Lie algebra, its representations, and the flatness of the Riemannian metric belonging to the second order differential operator, the set of all possible quadratic Lie algebra forms is investigated. For \(N = 3\) and \(N = 4\) such forms are constructed explicitly and shown to correspond to exactly solvable Sutherland models. The results can be carried over easily to all \(N\).

Trigonometric invariants are defined for each Weyl group orbit on the root lattice. They are real and periodic on the coroot lattice. Their polynomial algebra is spanned by a basis which is calculated by means of an algorithm. The invariants of the basis can be used as coordinates in any cell of the coroot space and lead to an exactly solvable model of Sutherland type. We apply this construction to the \(F_4\) case.

We present an approach to learning cooperative behavior of agents. Our approach is based on classifying situations with the help of the nearest-neighbor rule. In this context, learning amounts to evolving a set of good prototypical situations. With each prototypical situation an action is associated that should be executed in that situation. A set of prototypical situation/action pairs together with the nearest-neighbor rule represents the behavior of an agent. We demonstrate the utility of our approach in the light of variants of the well-known pursuit game. To this end, we present a classification of variants of the pursuit game, and we report on the results of our approach obtained for variants regarding several aspects of the classification. A first implementation of our approach that utilizes a genetic algorithm to conduct the search for a set of suitable prototypical situation/action pairs was able to handle many different variants.
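The nearest-neighbor rule at the core of this approach fits in a few lines: an agent acts by copying the action of its most similar stored prototype. The feature encoding and the action names below are invented for illustration, not taken from the paper:

```python
def nearest_action(situation, prototypes):
    """Pick the action of the most similar prototypical situation (1-NN rule).

    Situations are feature vectors; similarity here is negative squared distance.
    """
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best_situation, best_action = min(prototypes, key=lambda pa: sqdist(situation, pa[0]))
    return best_action

# Toy pursuit-game flavour: a situation is the (dx, dy) offset of the prey.
prototypes = [((3, 0), "east"), ((-2, 0), "west"), ((0, 4), "north")]
action = nearest_action((2, 1), prototypes)   # closest prototype is (3, 0)
```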

The common wisdom that goal orderings can be used to improve planning performance is nearly as old as planning itself. During the last decades of research several approaches emerged that computed goal orderings for different planning paradigms, mostly in the area of state-space planning. For partial-order, plan-space planners, goal orderings have not been investigated in much detail. Mechanisms developed for state-space planning are not directly applicable because partial-order planners do not have a current (world) state. Further, it is not completely clear how plan-space planners should make use of goal orderings. This paper describes an approach to extracting goal orderings to be used by the plan-space planner CAPlan. The extraction of goal orderings is based on the analysis of an extended version of operator graphs, which previously have been found useful for the analysis of interactions and recursion of plan-space planners.
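Once ordering constraints between goals have been extracted, using them amounts to sequencing goals consistently with those constraints, which is a topological sort. A minimal sketch, with hypothetical goal names (the extraction from operator graphs itself is the paper's contribution and is not modelled here):

```python
from collections import defaultdict, deque

def order_goals(goals, before):
    """Topologically sort goals given constraints (a, b): achieve a before b."""
    succ, indeg = defaultdict(list), {g: 0 for g in goals}
    for a, b in before:
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(g for g in goals if indeg[g] == 0)
    out = []
    while queue:
        g = queue.popleft()
        out.append(g)
        for h in succ[g]:
            indeg[h] -= 1
            if indeg[h] == 0:
                queue.append(h)
    if len(out) != len(goals):
        raise ValueError("cyclic goal-ordering constraints")
    return out

ordered = order_goals(["have_key", "door_open", "inside"],
                      [("have_key", "door_open"), ("door_open", "inside")])
```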

Using existing planning approaches to solve real-world application problems usually leads quickly to the insight that a given problem is solvable in principle, but that the exponentially growing search space only permits the treatment of relatively small tasks. Observing human planning experts, however, one finds that for complex problems they are able to reduce the search space decisively through abstraction and the use of known precedents as heuristics, and thus to arrive at an acceptable solution even for difficult tasks. In this paper, using process planning as an example, we present a system that employs abstraction and case-based techniques to control the inference process of a nonlinear, hierarchical planning system, thereby reducing the complexity of the overall task to be solved.

We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions to be made by special-purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.

We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain-dependent reasoners, a feature-based CAD system, and domain-independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem-solving knowledge both for optimizing plan execution costs and for minimizing search, the case-based control component CAPlan/CbC has been realized, which allows incremental acquisition and reuse of strategic problem-solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.

In recent years, methods of case-based reasoning have frequently been applied in areas where traditionally symbolic methods are used, for example in classification. This inevitably raises the question of the differences between, and the relative power of, these learning methods. Jantke [Jantke, 1992] has already studied commonalities between inductive inference and case-based classification. In this paper we want to clarify some relationships between the case base, the similarity measure, and the concept to be learned. For this purpose, a simple symbolic learning algorithm (the version space of [Mitchell, 1982]) is transformed into an equivalent case-based variant. The results presented confirm the equivalence of symbolic and case-based approaches and show the strong interdependence between the measure used in the system and the concept to be learned.

Most CBR systems in diagnostics use a numerical similarity measure for case retrieval. In this paper an approach is presented in which introducing a notion of similarity oriented towards the components of the technical system to be diagnosed not only substantially improves retrieval, but also opens the possibility of a genuine case and solution transformation. This in turn leads to a considerable reduction in the size of the case base. Using this notion of similarity requires the integration of additional knowledge, which is obtained from a qualitative model of the domain (in the sense of model-based diagnosis).

Patdex is an expert system which carries out case-based reasoning for the fault diagnosis of complex machines. It is integrated in the Moltke workbench for technical diagnosis, which was developed at the University of Kaiserslautern over the past years. Moltke also contains other parts, in particular a model-based approach; Patdex is where essentially the heuristic features are located. The use of cases also plays an important role for knowledge acquisition. In this paper we describe Patdex from a principal point of view and embed its main concepts into a theoretical framework.

In concurrent systems, the concept of atomicity of operations makes it easier to partition concurrent accesses into larger, more manageable sections. When we consider specifications in the formal description technique Estelle, however, it turns out that under certain circumstances it is difficult for implementations to adhere exactly to the atomicity of the so-called transitions, although this atomicity is a conceptual foundation of the semantics of Estelle. We show how correct as well as efficient concurrent implementations can nevertheless be achieved. Finally, we point out that the actions causing the problem can often easily be avoided by the specifier from the outset; and this holds beyond the context of Estelle as well.

Bestimmung der Ähnlichkeit in der fallbasierten Diagnose mit simulationsfähigen Maschinenmodellen
(1999)

A case base of already solved diagnosis problems; knowledge about the structure of the machine; knowledge about the function of the individual components (concrete and abstract). The component presented here builds on the systems Patdex [Wes91] (case-based diagnosis) and iMake [Sch92] and Make [Reh91] (model-based generation of Moltke knowledge bases), which were developed within the Moltke project.

The feature interaction problem in telecommunications systems increasingly obstructs the evolution of such systems. We develop formal detection criteria which render a necessary (but less than sufficient) condition for feature interactions. It can be checked mechanically and points out all potentially critical spots. These have to be analyzed manually. The resulting resolution decisions are incorporated formally. Some prototype tool support is already available. A prerequisite for formal criteria is a formal definition of the problem. Since the notions of feature and feature interaction are often used in a rather fuzzy way, we attempt a formal definition first and discuss which aspects can be included in a formalization (and therefore in a detection method). This paper describes on-going work.

Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question of whether there are any differences concerning the learning power of the two approaches. In this article we will study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
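The implicit (CB, sim) representation described above can be illustrated by a minimal sketch (not the paper's algorithm): a concept is given only by stored cases and a similarity measure, and a query is classified by its most similar case. The feature encoding and measure are assumptions for illustration.

```python
# Toy (CB, sim) concept representation: classification of a query returns
# the class of the most similar stored case (nearest-neighbour retrieval).

def sim(x, y):
    """Similarity of two boolean feature vectors: fraction of matching attributes."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

def classify(case_base, query):
    """Classify the query by the class label of the most similar case in CB."""
    best_case, best_class = max(case_base, key=lambda c: sim(c[0], query))
    return best_class

# Case base: (feature vector, class label) pairs.
CB = [((1, 1, 0), "pos"), ((1, 0, 0), "pos"), ((0, 0, 1), "neg")]
print(classify(CB, (1, 1, 1)))  # (1,1,0) is the most similar case -> "pos"
```

Changing sim here changes the concept that (CB, sim) represents, which is exactly the interdependency between measure and target concept that the article studies.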

Collecting Experience on the Systematic Development of CBR Applications using the INRECA Methodology
(1999)

This paper presents an overview of the INRECA methodology for building and maintaining CBR applications. This methodology supports the collection and reuse of experience on the systematic development of CBR applications. It is based on the experience factory and the software process modeling approach from software engineering. CBR development experience is documented using software process models and stored at different levels of generality in a three-layered experience base. Up to now, experience from 9 industrial projects carried out by all INRECA II partners has been collected.

Automata-Theoretic vs. Property-Oriented Approaches for the Detection of Feature Interactions in IN
(1999)

The feature interaction problem in Intelligent Networks obstructs more and more the rapid introduction of new features. Detecting such feature interactions turns out to be a big problem. The size of the systems and the sheer computational complexity prevent the system developer from manually checking any feature against any other feature. We give an overview of current (verification) approaches and categorize them into property-oriented and automata-theoretic approaches. A comparison shows that each approach complements the other in a certain sense. We propose to apply both approaches together in order to solve the feature interaction problem.

Planning means constructing a course of actions to achieve a specified set of goals when starting from an initial situation. For example, determining a sequence of actions (a plan) for transporting goods from an initial location to some destination is a typical planning problem in the transportation domain. Many planning problems are of practical interest.

MOLTKE is a research project dealing with a complex technical application. After describing the domain of CNC machining centers and the applied KA methods, we summarize the concrete KA problems which we have to handle. Then we describe a KA mechanism which supports an engineer in developing a diagnosis system. In chapter 6 we introduce learning techniques operating on diagnostic cases and domain knowledge for improving the diagnostic procedure of MOLTKE. In the last section of this chapter we outline some essential aspects of organizational knowledge which is heavily applied by engineers for analysing such technical systems (Qualitative Engineering). Finally, we give a short overview of the actual state of realization and our future plans.

Most automated theorem provers suffer from the problem that they can produce proofs only in formalisms difficult to understand even for experienced mathematicians. Efforts have been made to transform such machine-generated proofs into natural deduction (ND) proofs. Although the single steps are now easy to understand, the entire proof is usually at a low level of abstraction, containing too many tedious steps. Therefore, it is not adequate as input to natural language generation systems. To overcome these problems, we propose a new intermediate representation, called ND style proofs at the assertion level. After illustrating the notion intuitively, we show that the assertion level steps can be justified by domain-specific inference rules, and that these rules can be represented compactly in a tree structure. Finally, we describe a procedure which substantially shortens ND proofs by abstracting them to the assertion level, and report our experience with further transformation into natural language.

In this paper we show that distributing the theorem proving task to several experts is a promising idea. We describe the team work method, which allows the experts to compete for a while and then to cooperate. In the cooperation phase the best results derived in the competition phase are collected and the less important results are forgotten. We describe some useful experts and explain in detail how they work together. We establish fairness criteria and so prove the distributed system to be both complete and correct. We have implemented our system and show by non-trivial examples that drastic speed-ups are possible for a cooperating team of experts compared to the time needed by the best expert in the team.
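The compete-then-cooperate cycle described above can be sketched as a toy simulation. Everything concrete here (the "problem" of driving a value toward zero, the two expert step rules, the referee's scoring) is an illustrative assumption, not the paper's actual prover setup.

```python
import random

# Hypothetical team-work cycle: experts work independently on shared starting
# points (competition), a referee keeps only the best results, and those become
# the shared basis for the next round (cooperation); the rest are forgotten.

def run_team(experts, start, rounds=5, keep=3, seed=0):
    rng = random.Random(seed)          # seeded for reproducibility
    shared = [start]
    for _ in range(rounds):
        results = []
        for expert in experts:         # competition: each expert works alone
            for s in shared:
                results.append(expert(s, rng))
        results.sort(key=abs)          # referee: closer to 0 is a better result
        shared = results[:keep]        # cooperation: keep best, forget the rest
    return shared[0]

# Two "experts": a cautious deterministic step and a bold randomised step.
experts = [lambda s, r: 0.7 * s,
           lambda s, r: s * (1.0 - 0.9 * r.random())]
print(run_team(experts, start=100.0))
```

Because each round the cautious expert alone already shrinks the best shared value by 0.7, the team's result after 5 rounds is guaranteed below 100 * 0.7^5 ≈ 16.8, and the bold expert's lucky draws usually push it much lower.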

Constructing an analogy between a known and already proven theorem (the base case) and another yet to be proven theorem (the target case) often amounts to finding the appropriate representation at which the base and the target are similar. This is a well-known fact in mathematics, and it was corroborated by our empirical study of a mathematical textbook, which showed that a reformulation of the representation of a theorem and its proof is indeed more often than not a necessary prerequisite for an analogical inference. Thus machine-supported reformulation becomes an important component of automated analogy-driven theorem proving too. The reformulation component proposed in this paper is embedded into a proof plan methodology based on methods and meta-methods, where the latter are used to change and appropriately adapt the methods. A theorem and its proof are both represented as a method and then reformulated by the set of meta-methods presented in this paper. Our approach supports analogy-driven theorem proving at various levels of abstraction and in principle makes it independent of the given and often accidental representation of the given theorems. Different methods can represent fully instantiated proofs, subproofs, or general proof methods, and hence our approach also supports these three kinds of analogy, respectively. By attaching appropriate justifications to meta-methods, the analogical inference can often be justified in the sense of Russell. This paper presents a model of analogy-driven proof plan construction and focuses on empirically extracted meta-methods. It classifies and formally describes these meta-methods and shows how to use them for an appropriate reformulation in automated analogy-driven theorem proving.

Following Buchberger's approach to computing a Gröbner basis of a polynomial ideal in polynomial rings, a completion procedure for finitely generated right ideals in Z[H] is given, where H is an ordered monoid presented by a finite, convergent semi-Thue system (Σ, T). Taking a finite set F ⊆ Z[H], we get a (possibly infinite) basis of the right ideal generated by F, such that using this basis we have unique normal forms for all p ∈ Z[H] (in particular the normal form is 0 in case p is an element of the right ideal generated by F). As the ordering and multiplication on H need not be compatible, reduction has to be defined carefully in order to make it Noetherian. Further, we no longer have p · x → p' for p ∈ Z[H], x ∈ H. Similar to Buchberger's s-polynomials, confluence criteria are developed and a completion procedure is given. In case T = ∅, or (Σ, T) is a convergent, 2-monadic presentation of a group providing inverses of length 1 for the generators, or (Σ, T) is a convergent presentation of a commutative monoid, termination can be shown. So in these cases finitely generated right ideals admit finite Gröbner bases. The connection to the subgroup problem is discussed.

This case study examines in detail the theorems and proofs that are shown by analogy in a mathematical textbook on semigroups and automata that is widely used as an undergraduate textbook in theoretical computer science at German universities (P. Deussen, Halbgruppen und Automaten, Springer 1971). The study shows the important role of restructuring a proof for finding analogous subproofs, and of reformulating a proof for the analogical transformation. It also emphasizes the importance of the relevant assumptions of a known proof, i.e., of those assumptions actually used in the proof. In this document we show the theorems, the proof structure, the subproblems, and the proofs of subproblems and their analogues with the purpose of providing an empirical test set of cases for automated analogy-driven theorem proving. In the studied textbook, theorems and their proofs are given in natural language augmented by the usual set of mathematical symbols. As a first step we encode the theorems in logic and show the actual restructuring. Secondly, we code the proofs in a natural deduction calculus such that a formal analysis becomes possible, and mention reformulations that are necessary in order to reveal the analogy.

We provide an overview of UNICOM, an inductive theorem prover for equational logic which is based on refined rewriting and completion techniques. The architecture of the system as well as its functionality are described. Moreover, an insight into the most important aspects of the internal proof process is provided. This knowledge about how the central inductive proof component of the system essentially works is crucial for human users who want to solve non-trivial proof tasks with UNICOM and thoroughly analyse potential failures. The presentation is focussed on practical aspects of understanding and using UNICOM. A brief but complete description of the command interface, an installation guide, an example session, a detailed extended example illustrating various special features, and a collection of successfully handled examples are also included.

While most approaches to similarity assessment are oblivious of knowledge and goals, there is ample evidence that these elements of problem solving play an important role in similarity judgements. This paper is concerned with an approach for integrating assessment of similarity into a framework of problem solving that embodies central notions of problem solving like goals, knowledge and learning.

To prove difficult theorems in a mathematical field requires substantial knowledge of that field. In this thesis a frame-based knowledge representation formalism including higher-order sorted logic is presented, which supports a conceptual representation and to a large extent guarantees the consistency of the built-up knowledge bases. In order to operationalize this knowledge, for instance in an automated theorem proving system, a class of sound morphisms from higher-order into first-order logic is given; in addition, a sound and complete translation is presented. The translations are bijective and hence compatible with a later proof presentation. In order to prove certain theorems the comprehension axioms are necessary (but difficult to handle in an automated system); such theorems are called truly higher-order. Many apparently higher-order theorems (i.e. theorems that are stated in higher-order syntax), however, are essentially first-order in the sense that they can be proved without the comprehension axioms: for proving these theorems the translation technique as presented in this thesis is well-suited.

We transform a user-friendly formulation of a problem to a machine-friendly one, exploiting the variability of first-order logic to express facts. The usefulness of tactics to improve the presentation is shown with several examples. In particular it is shown how tactical and resolution theorem proving can be combined.

There are well-known examples of monoids in the literature which do not admit a finite and canonical presentation by a semi-Thue system over a fixed alphabet, not even over an arbitrary alphabet. We introduce conditional Thue and semi-Thue systems similar to conditional term rewriting systems as defined by Kaplan. Using these conditional semi-Thue systems we give finite and canonical presentations of the examples mentioned above. Furthermore, we show that each finitely generated monoid with decidable word problem is embeddable in a monoid which has a finite canonical conditional presentation.

Typical examples, that is, examples that are representative for a particular situation or concept, play an important role in human knowledge representation and reasoning. In real-life situations, more often than not a typical example is used to describe a situation instead of a lengthy abstract characterization. This well-known observation has been the motivation for various investigations in experimental psychology, which also motivate our formal characterization of typical examples, based on a partial order for their typicality. Reasoning by typical examples is then developed as a special case of analogical reasoning using the semantic information contained in the corresponding concept structures. We derive new inference rules by replacing the explicit information about connections and similarity, which is normally used to formalize analogical inference rules, by information about the relationship to typical examples. Using these inference rules, analogical reasoning proceeds by checking a related typical example; this is a form of reasoning based on semantic information from cases.

This paper concerns a knowledge structure called method, within a computational model for human-oriented deduction. With human-oriented theorem proving cast as an interleaving process of planning and verification, the body of all methods reflects the reasoning repertoire of a reasoning system. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This is achieved by postulating some standard types of knowledge we have identified, such as inference rules, assertions, and proof schemata, together with corresponding knowledge interpreters. Our approach in effect changes the way deductive knowledge is encoded: a new compound declarative knowledge structure, the proof schema, takes the place of complicated procedures for modeling specific proof strategies. This change of paradigm not only leads to representations easier to understand, it also enables us to model the even more important activity of formulating meta-methods, that is, operators that adapt existing methods to suit novel situations. In this paper, we first briefly introduce the general framework for describing methods. Then we turn to several types of knowledge with their interpreters. Finally, we briefly illustrate some meta-methods.

We present a framework for the integration of the Knuth-Bendix completion algorithm with narrowing methods, compiled rewrite rules, and a heuristic difference reduction mechanism for paramodulation. The possibility of embedding theory unification algorithms into this framework is outlined. Results are presented and discussed for several examples of equality reasoning problems in the context of an actual implementation of an automated theorem proving system (the Mkrp-system) and a fast C implementation of the completion procedure. The Mkrp-system is based on the clause graph resolution procedure. The thesis shows the indispensability of the constraining effects of completion and rewriting for equality reasoning in general and quantifies the amount of speed-up caused by various enhancements of the basic method. The simplicity of the superposition inference rule allows the construction of an abstract machine for completion, which is presented together with computation times for a concrete implementation.

This report presents the main ideas underlying the Omega-Mkrp system, an environment for the development of mathematical proofs. The motivation for the development of this system comes from our extensive experience with traditional first-order theorem provers and aims to overcome some of their shortcomings. After comparing the benefits and drawbacks of existing systems, we propose a system architecture that combines the positive features of different types of theorem-proving systems, most notably the advantages of human-oriented systems based on methods (our version of tactics) and the deductive strength of traditional automated theorem provers. In Omega-Mkrp a user first states a problem to be solved in a typed and sorted higher-order language (called POST) and then applies natural deduction inference rules in order to prove it. He can also insert a mathematical fact from an integrated database into the current partial proof, he can apply a domain-specific problem-solving method, or he can call an integrated automated theorem prover to solve a subproblem. The user can also pass the control to a planning component that supports and partially automates his long-range planning of a proof. Toward the important goal of user-friendliness, machine-generated proofs are transformed in several steps into much shorter, better-structured proofs that are finally translated into natural language. This work was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (D2, D3).

An important property and also a crucial point of a term rewriting system is its termination. Transformation orderings, developed by Bellegarde & Lescanne and strongly based on a work of Bachmair & Dershowitz, represent a general technique for extending orderings. The main characteristics of this method are two rewriting relations, one for transforming terms and the other for ensuring the well-foundedness of the ordering. The central problem of this approach concerns the choice of the two relations such that the termination of a given term rewriting system can be proved. In this communication, we present a heuristic-based algorithm that partially solves this problem. Furthermore, we show how to simulate well-known orderings on strings by transformation orderings.

This report presents a methodology to guide equational reasoning in a goal-directed way. Suggested by rippling methods developed in the field of inductive theorem proving, we use attributes of terms and heuristics to determine bridge lemmas, i.e. lemmas which have to be used during the proof of the theorem. Once we have found such a bridge lemma, we use the techniques of difference unification and rippling to enable its use.

This paper develops a sound and complete transformation-based algorithm for unification in an extensional order-sorted combinatory logic supporting constant overloading and a higher-order sort concept. Appropriate notions of order-sorted weak equality and extensionality, reflecting order-sorted βη-equality in the corresponding lambda calculus given by Johann and Kohlhase, are defined, and the typed combinator-based higher-order unification techniques of Dougherty are modified to accommodate unification with respect to the theory they generate. The algorithm presented here can thus be viewed as a combinatory logic counterpart to that of Johann and Kohlhase, as well as a refinement of that of Dougherty, and provides evidence that combinatory logic is well-suited to serve as a framework for incorporating order-sorted higher-order reasoning into deduction systems aiming to capitalize on both the expressiveness of extensional higher-order logic and the efficiency of order-sorted calculi.

We consider the problem of verifying confluence and termination of conditional term rewriting systems (TRSs). For unconditional TRSs the critical pair lemma holds, which enables a finite test for confluence of (finite) terminating systems. And for ensuring termination of unconditional TRSs a couple of methods for constructing appropriate well-founded term orderings are known. If, however, termination is not guaranteed, then proving confluence is much more difficult. Recently we have obtained some interesting results for unconditional TRSs which provide sufficient criteria for termination plus confluence in terms of restricted termination and confluence properties. In particular, we have shown that any innermost terminating and locally confluent overlay system is complete, i.e. terminating and confluent. Here we generalize our approach to the conditional case and show how to solve the additional complications due to the presence of conditions in the rules. Our main result can be stated as follows: any conditional TRS which is an innermost terminating semantical overlay system such that all (conditional) critical pairs are joinable is complete.

We will answer a question posed in [DJK91] and show that Huet's completion algorithm [Hu81] becomes incomplete, i.e. it may generate a term rewriting system that is not confluent, if it is modified in such a way that the reduction ordering used for completion can be changed during completion, provided that the new ordering is compatible with the actual rules. In particular, we will show that this problem may not only arise if the modified completion algorithm does not terminate: even if the algorithm terminates without failure, the generated finite noetherian term rewriting system may be non-confluent. Most existing implementations of the Knuth-Bendix algorithm provide the user with help in choosing a reduction ordering: if an unorientable equation is encountered, then the user has many options, in particular the option to orient the equation manually. The integration of this feature is based on the widespread assumption that, if equations are oriented by hand during completion and the completion process terminates with success, then the generated finite system is a possibly non-terminating but locally confluent system (see e.g. [KZ89]). Our examples will show that this assumption is not true.

Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene already gave a semantic account of partial functions using three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a resolution calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.

The team work method is a concept for distributing automated theorem provers and thus for activating several experts to work on a given problem. We have implemented this for pure equational logic using the unfailing Knuth-Bendix completion procedure as the basic prover. In this paper we present three classes of experts working in a goal-oriented fashion. In general, goal-oriented experts perform their job "unfairly" and so are often unable to solve a given problem alone. However, as team members in the team work method they perform highly efficiently, even in comparison with such respected provers as Otter 3.0 or REVEAL, as we demonstrate by examples, some of which can only be proved using team work. The reason for these achievements is that the team work method forces the experts to compete for a while and then to cooperate by exchanging their best results. This allows one to collect "good" intermediate results and to forget "useless" ones. Completion-based proof methods are frequently regarded as having the disadvantage of not being goal-oriented. We believe that our approach overcomes this disadvantage to a large extent.

In 1978, Klop demonstrated that a rewrite system constructed by adding the untyped lambda calculus, which has the Church-Rosser property, to a Church-Rosser first-order algebraic rewrite system may not be Church-Rosser. In contrast, Breazu-Tannen recently showed that augmenting any Church-Rosser first-order algebraic rewrite system with the simply-typed lambda calculus results in a Church-Rosser rewrite system. In addition, Breazu-Tannen and Gallier have shown that the second-order polymorphic lambda calculus can be added to such rewrite systems without compromising the Church-Rosser property (for terms which can be provably typed). There are other systems for which a Church-Rosser result would be desirable, among them λ^t+SP+FIX, the simply-typed lambda calculus extended with surjective pairing and fixed points. This paper will show that Klop's untyped counterexample can be lifted to a typed system to demonstrate that λ^t+SP+FIX is not Church-Rosser.

Over the past thirty years there have been significant achievements in the field of automated theorem proving with respect to the reasoning power of the inference engines. Although some effort has also been spent to facilitate more user-friendliness of the deduction systems, most of them failed to benefit from more recent developments in the related fields of artificial intelligence (AI), such as natural language generation and user modeling. In particular, no model is available which accounts both for human deductive activities and for human proof presentation. In this thesis, a reconstructive architecture is suggested which substantially abstracts, reorganizes, and finally translates machine-found proofs into natural language. Both the procedures and the intermediate representations of our architecture find their basis in computational models for informal mathematical reasoning and for proof presentation. User modeling is not incorporated into the current theory, although we plan to do so later.

This paper presents a new way to use planning in automated theorem proving by means of distribution. To overcome the problem that subtasks for a proof problem often cannot be detected a priori (which prevents the use of the known planning and distribution techniques), we use a team of experts that work independently with different heuristics on the problem. After a certain amount of time, referees judge their results using the impact of the results on the behaviour of the expert, and a supervisor combines the selected results into a new starting point. This supervisor also selects the experts that can work on the problem in the next round. This selection is a reactive planning task. We outline which information the supervisor can use to fulfill this task and how this information is processed to result in a plan or to revise a plan. We also show that the use of planning for the assignment of experts to the team allows the system to solve many different examples in an acceptable time with the same start configuration and without any consultation of the user.

"Plans are always subject to change." (Shin'a'in proverb)

The background of this paper is the area of case-based reasoning. This is a reasoning technique where one tries to use the solution of some problem which has been solved earlier in order to obtain a solution of a given problem. An example of the types of problems where this kind of reasoning occurs very often is the diagnosis of diseases or faults in technical systems. In abstract terms this reduces to a classification task. A difficulty arises when one has not just one solved problem but very many. These are called "cases" and they are stored in the case base. Then one has to select an appropriate case, which means finding one which is "similar" to the actual problem. The notion of similarity has raised much interest in this context. We will first introduce a mathematical framework and define some basic concepts. Then we will study some abstract phenomena in this area and finally present some methods developed and realized in a system at the University of Kaiserslautern.
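The retrieval step sketched above (select the stored case most "similar" to the current problem) can be illustrated with a small weighted similarity measure. The attribute names, weights, and the measure itself are purely illustrative assumptions, not the paper's formal framework.

```python
# Toy case retrieval for diagnosis: cases are attribute-value dictionaries,
# and similarity is a weighted fraction of matching attribute values.

def weighted_sim(weights, case, problem):
    """Weighted matching of attribute values, normalised to [0, 1]."""
    total = sum(weights.values())
    score = sum(w for attr, w in weights.items()
                if case.get(attr) == problem.get(attr))
    return score / total

def retrieve(case_base, problem, weights):
    """Return the cases ordered by decreasing similarity to the problem."""
    return sorted(case_base,
                  key=lambda c: weighted_sim(weights, c, problem),
                  reverse=True)

# Illustrative fault-diagnosis cases: the symptom matters most.
weights = {"symptom": 3, "component": 2, "noise": 1}
cases = [{"symptom": "overheat", "component": "pump", "noise": "low"},
         {"symptom": "overheat", "component": "valve", "noise": "high"}]
problem = {"symptom": "overheat", "component": "pump", "noise": "high"}
print(retrieve(cases, problem, weights)[0]["component"])  # prints "pump"
```

The ordering, not just the best case, is returned because in diagnosis one often wants several candidate cases ranked by similarity.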

The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing the search space. It is therefore promising to treat sorts in higher-order theorem proving as well. In this paper we present a generalization of Huet's Constrained Resolution to an order-sorted type theory ΣT with term declarations. This system builds certain taxonomic axioms into the unification and conducts reasoning with them in a controlled way. We make this notion precise by giving a relativization operator that totally and faithfully encodes ΣT into simple type theory.

In this report we present a case study of employing goal-oriented heuristics when proving equational theorems with the (unfailing) Knuth-Bendix completion procedure. The theorems are taken from the domain of lattice-ordered groups. It will be demonstrated that goal-oriented (heuristic) criteria for selecting the next critical pair can in many cases significantly reduce the search effort and hence increase the performance of the proving system considerably. The heuristic, goal-oriented criteria are on the one hand based on so-called "measures" measuring occurrences and nesting of function symbols, and on the other hand based on matching subterms. We also deal with the property of goal-oriented heuristics to be particularly helpful in certain stages of a proof. This fact can be addressed by using them in a framework for distributed (equational) theorem proving, namely the "teamwork method".
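A measure-based selection criterion of the general kind described can be sketched as follows. The concrete measure (distance between symbol-occurrence profiles of a candidate critical pair and the goal equation) is an illustrative assumption, not the report's exact definition.

```python
from collections import Counter

# Toy goal-oriented critical-pair selection: prefer the candidate equation
# whose function-symbol occurrences are closest to those of the goal.
# Terms are nested tuples: f(x, e) is ("f", ("x",), ("e",)).

def occurrences(term):
    """Count symbol occurrences in a term."""
    head, *args = term
    counts = Counter([head])
    for arg in args:
        counts += occurrences(arg)
    return counts

def measure(pair, goal):
    """Distance between the symbol profiles of a critical pair and the goal."""
    cp = occurrences(pair[0]) + occurrences(pair[1])
    g = occurrences(goal[0]) + occurrences(goal[1])
    return sum(abs(cp[s] - g[s]) for s in set(cp) | set(g))

def select(pairs, goal):
    """Pick the next critical pair to process: smallest measure first."""
    return min(pairs, key=lambda p: measure(p, goal))

goal = (("f", ("x",), ("e",)), ("x",))           # goal: f(x, e) = x
pairs = [(("f", ("e",), ("x",)), ("x",)),         # symbols match the goal's
         (("f", ("f", ("x",), ("y",)), ("z",)),   # deeply nested, far from goal
          ("f", ("x",), ("f", ("y",), ("z",))))]
print(select(pairs, goal) == pairs[0])  # prints True
```

In a real completion loop this selection would replace the usual size-based ordering of the critical-pair queue; the report's point is that such goal-sensitive choices can shrink the search.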

Planverfahren
(1999)

We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity; the orientation must therefore be calculated by integrating the angular velocity over time. The device's output is a voltage proportional to the angular velocity and relative to a reference. The tests were done to find out under which conditions it is possible to use this device for the estimation of orientation.
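The integration step described above can be sketched numerically. The reference voltage and scale factor below are made-up example values for illustration, not datasheet figures for the ENV-05S.

```python
# Minimal sketch: the sensor outputs a voltage proportional to angular
# velocity relative to a reference, so heading is accumulated over time.

def integrate_orientation(samples, dt, v_ref, scale_deg_per_s_per_volt):
    """Accumulate heading (degrees) from voltage samples taken every dt seconds."""
    heading = 0.0
    for v in samples:
        omega = (v - v_ref) * scale_deg_per_s_per_volt  # angular velocity, deg/s
        heading += omega * dt                           # rectangle-rule integration
    return heading

# 1 s of samples at 100 Hz, a constant 0.1 V above an assumed 2.5 V reference,
# with an assumed scale of 90 deg/s per volt -> 9 deg/s for 1 s, i.e. ~9 degrees.
samples = [2.6] * 100
print(integrate_orientation(samples, dt=0.01, v_ref=2.5,
                            scale_deg_per_s_per_volt=90.0))
```

This simple rectangle-rule accumulation also makes the paper's concern visible: any constant offset error in the reference voltage integrates into a steadily growing heading drift.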

A map for an autonomous mobile robot (AMR) in an indoor environment for the purpose of continuous position and orientation estimation is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown where the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans to statistical representations of the environment, so that a cross-correlation of an actual converted scan and such a representation yields the actual position and orientation in a global coordinate system. The map itself is built of representative scans for the positions where the AMR has been, so that the robot is able to find its position and orientation by comparing the actual scan with a scan stored in the map.

One of the problems of autonomous mobile systems is the continuous tracking of position and orientation. In most cases, this problem is solved by dead reckoning, based on measurement of wheel rotations or step counts and step width. Unfortunately, dead reckoning leads to the accumulation of drift errors and is very sensitive to slippage. In this paper an algorithm for tracking position and orientation is presented that is nearly independent of odometry and its slippage problems. To achieve this, a rotating range finder is used, delivering scans of the environmental structure. The properties of this structure are used to match the scans from different locations in order to find their translational and rotational displacement. For this purpose, derivatives of range-finder scans are calculated, which can be used to find position and orientation by cross-correlation.
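The rotational part of this matching can be sketched as follows; the data layout (one range reading per degree) and the scoring are assumptions for illustration, not the paper's exact algorithm:

```python
# Minimal sketch: the angular derivative of a 360-degree range scan is
# circularly cross-correlated with that of a second scan; the shift with
# the highest correlation estimates the rotational displacement.

import numpy as np

def rotation_estimate(scan_a, scan_b):
    """scan_a, scan_b: range readings at 1-degree steps (length 360)."""
    da = np.diff(scan_a, append=scan_a[0])   # circular derivative of scan A
    db = np.diff(scan_b, append=scan_b[0])
    # correlate da against all circular shifts of db
    scores = [np.dot(da, np.roll(db, -k)) for k in range(len(da))]
    return int(np.argmax(scores))            # shift in degrees

angles = np.arange(360)
scan = 5.0 + np.sin(np.radians(angles)) + 0.5 * np.sin(2 * np.radians(angles) + 1)
rotated = np.roll(scan, 40)                  # same scene, robot turned by 40 deg
print(rotation_estimate(scan, rotated))      # prints 40
```

Using derivatives rather than raw ranges makes the correlation insensitive to a constant offset between the two scans; the translational displacement would be recovered in a separate step.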

Dynamic Lambda Calculus
(1999)

The goal of this paper is to lay a logical foundation for discourse theories by providing an algebraic foundation of compositional formalisms for discourse semantics as an analogon to the simply typed (lambda)-calculus. Just as that can be specialized to type theory by simply providing a special type for truth values and postulating the quantifiers and connectives as constants with fixed semantics, the proposed dynamic (lambda)-calculus DLC can be specialized to (lambda)-DRT by essentially the same measures, yielding a much more principled and modular treatment of (lambda)-DRT than before; DLC is also expected to eventually provide a conceptually simple basis for studying higher-order unification for compositional discourse theories. Over the past few years, there have been a series of attempts [Zee89, GS90, EK95, Mus96, KKP96, Kus96] to combine the Montagovian type-theoretic framework [Mon74] with dynamic approaches, such as DRT [Kam81]. The motivation for these developments is to obtain a general logical framework for discourse semantics that combines compositionality and dynamic binding. Let us look at an example of compositional semantics construction in (lambda)-DRT, which is one of the above formalisms [KKP96, Kus96]. By the use of beta-reduction we arrive at a first-order DRT representation of the sentence A_i man sleeps (i denoting an index for anaphoric binding).

This paper shows how a new approach to theorem proving by analogy is applicable to real maths problems. This approach works at the level of proof-plans and employs reformulation that goes beyond symbol mapping. The Heine-Borel theorem is a widely known result in mathematics. It is usually stated in R^1, and similar versions are also true in R^2, in topology, and in metric spaces. Its analogical transfer was proposed as a challenge example and could not be solved by previous approaches to theorem proving by analogy. We use a proof-plan of the Heine-Borel theorem in R^1 as a guide in automatically producing a proof-plan of the Heine-Borel theorem in R^2 by analogy-driven proof-plan construction.

This paper addresses a model of analogy-driven theorem proving that is more general and cognitively more adequate than previous approaches. The model works at the level of proof-plans. More precisely, we consider analogy as a control strategy in proof planning that employs a source proof-plan to guide the construction of a proof-plan for the target problem. Our approach includes a reformulation of the source proof-plan. This is in accordance with the well-known fact that constructing an analogy in maths often amounts to first finding the appropriate representation which brings out the similarity of two problems, i.e., finding the right concepts and the right level of abstraction. Several well-known theorems were processed by our analogy-driven proof-plan construction that could not be proven analogically by previous approaches.

This paper addresses analogy-driven automated theorem proving that employs a source proof-plan to guide the search for a proof-plan of the target problem. The approach presented uses reformulations that go beyond symbol mappings and that incorporate frequently used re-representations and abstractions. Several realistic math examples were successfully processed by our analogy-driven proof-plan construction. One challenge example, a Heine-Borel theorem, is discussed here. For this example the reformulations are shown step by step and the modifying actions are demonstrated.

Analogy in CLAM
(1999)

CLAM is a proof planner, developed by the Dream group in Edinburgh, that mainly operates for inductive proofs. This paper addresses the question how an analogy model that I developed independently of CLAM can be applied to CLAM, and it presents analogy-driven proof plan construction as a control strategy of CLAM. This strategy is realized as a derivational analogy that includes the reformulation of proof plans. The analogical replay checks whether the reformulated justifications of the source plan methods hold in the target as a permission to transfer the method to the target plan. Since CLAM has very efficient heuristic search strategies, the main purpose of the analogy is to suggest lemmas, to replay not commonly loaded methods, to suggest induction variables and induction terms, and to override control rather than to construct a target proof plan that can be built by CLAM itself more efficiently.

Distributed systems are an alternative to shared-memory multiprocessors for the execution of parallel applications. PANDA is a runtime system which provides architectural support for efficient parallel and distributed programming. PANDA supplies means for fast user-level threads and for a transparent and coordinated sharing of objects across a homogeneous network. The paper motivates the major architectural choices that guided our design. The problem of sharing data in a distributed environment is discussed, and the performance of appropriate mechanisms provided by the PANDA prototype implementation is assessed.

One main purpose for the use of formal description techniques (FDTs) is formal reasoning and verification. This requires a formal calculus and a suitable formal semantics of the FDT. In this paper, we discuss the basic verification requirements for Estelle, and how they can be supported by existing calculi. This leads us to the redefinition of the standard Estelle semantics using Lamport's temporal logic of actions and Dijkstra's predicate transformers.

The increasing use of distributed computer systems leads to an increasing need for distributed applications. Their development in various domains like office automation or computer-integrated manufacturing is not sufficiently supported by current techniques. New software engineering concepts are needed in the three areas 'languages', 'tools', and 'environments'. We believe that object-oriented techniques and graphics support are key approaches to major achievements in all three areas. As a consequence, we developed a universal object-oriented graphical editor ODE as one of our basic tools (tool building tool). ODE is based on the object-oriented paradigm, with some important extensions like built-in object relations. It has an extensible functional language which allows for customization of the editor. ODE was developed as part of DOCASE, a software production environment for distributed applications. The basic ideas of DOCASE will be presented and the requirements for ODE will be pointed out. Then ODE will be described in detail, followed by a sample customization of ODE: the one for the DOCASE design language.

The steadily increasing use of distributed computing systems leads to a sharply growing demand for distributed applications. Their development in the most diverse application fields, such as factory and office automation, has so far been barely manageable for users. New software engineering concepts are therefore necessary, in the three areas 'languages', 'tools', and 'environments'. In our work, object-oriented methods and graphical support have proven particularly suitable for achieving substantial progress in all three areas. Accordingly, a universal object-oriented graphical editor, ODE, was developed as one of our central base tools ('tool building tool'). ODE is based on the object-oriented paradigm and an easy-to-use functional language for extensions; in addition, ODE allows simple integration with other tools and imperatively programmed functions. ODE was created as part of DOCASE, a software production environment for distributed applications. The basic ideas of DOCASE are presented and the requirements for ODE are derived. ODE is then described in more detail. This is followed by an exemplary description of an extension of ODE, namely the one for the DOCASE design language.