
In this report we give an overview of the development of our new Waldmeister prover for equational theories. We elaborate a systematic stepwise design process, starting with the inference system for unfailing Knuth-Bendix completion and ending up with an implementation which avoids the main diseases today's provers suffer from: overindulgence in time and space. Our design process is based on a logical three-level system model consisting of basic operations for inference step execution, aggregated inference machine, and overall control strategy. Careful analysis of the inference system for unfailing completion has revealed the crucial points responsible for time and space consumption. For the low level of our model, we introduce specialized data structures and algorithms speeding up the running system and cutting it down in size - both by one order of magnitude compared with standard techniques. Flexible control of the mid-level aggregation inside the resulting prover is made possible by a corresponding set of parameters. Experimental analysis shows that this flexibility is a point of high importance. We go on with some implementation guidelines we have found valuable in the field of deduction. The resulting new prover shows that our design approach is promising. We compare our system's throughput with that of an established system and finally demonstrate how two very hard problems could be solved by Waldmeister.
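The basic operation of such a completion prover - applying an oriented equation as a rewrite rule - can be illustrated by a toy sketch. This is our own minimal illustration; the term representation (nested tuples, `?`-prefixed variables) and function names are ours, and the specialized indexing structures the abstract refers to are far more elaborate.

```python
# Toy illustration of a single rewrite step, the basic inference operation
# in Knuth-Bendix style completion. Representation is illustrative only.

def match(pattern, term, subst=None):
    """Syntactic matching; returns a substitution dict or None."""
    if subst is None:
        subst = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if (isinstance(pattern, tuple) and isinstance(term, tuple)
            and pattern[0] == term[0] and len(pattern) == len(term)):
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def apply_subst(term, subst):
    """Instantiate variables in a term."""
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0],) + tuple(apply_subst(a, subst) for a in term[1:])

def rewrite_root(term, rules):
    """Try each oriented rule l -> r at the root; None if irreducible there."""
    for lhs, rhs in rules:
        s = match(lhs, term)
        if s is not None:
            return apply_subst(rhs, s)
    return None

# Group-like right-unit rule: f(x, e) -> x
rules = [(('f', '?x', 'e'), '?x')]
print(rewrite_root(('f', ('f', 'a', 'e'), 'e'), rules))  # ('f', 'a', 'e')
```

In a real prover this step is executed millions of times, which is why the abstract's point about specialized low-level data structures (e.g. for fast matching) matters so much.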

With the rapid spread of CAx techniques in the German automotive industry, the need is growing for better integration of CAx systems into the process chains and for control of the product information flows. Against this background, a shift in CAx system architectures from closed, monolithic systems to openly integrated ones has become apparent in recent years. In the following, this process and its implications for users and for system vendors are analyzed. On the initiative of the German automotive industry, the project ANICA (Analysis of Interfaces of various CAD/CAM-Systems) was started. In this project, the interfaces to the system kernels of several CAx vendors are examined, and a concept for cooperating CAx systems in the automotive industry is developed.

Toying with Jordan matrices
(1996)

We present first steps towards fully automated deduction that merely requires the user to submit proof problems and pick up results. Essentially, this necessitates the automation of the crucial step in the use of a deduction system, namely choosing and configuring an appropriate search-guiding heuristic. Furthermore, we motivate why learning capabilities are pivotal for satisfactory performance. The infrastructure for automating both the selection of a heuristic and the integration of learning is provided in the form of an environment embedding the "core" deduction system. We have conducted a case study in connection with a deduction system based on condensed detachment. Our experiments with a fully automated deduction system 'AutoCoDe' have produced remarkable results. We substantiate AutoCoDe's encouraging achievements with a comparison with the renowned theorem prover Otter. AutoCoDe outperforms Otter even when assuming very favorable conditions for Otter.

Abstract: We study the roughening transition of an interface in an Ising system on a 3D simple cubic lattice using a finite size scaling method. The particular method has recently been proposed and successfully tested for various solid-on-solid models. The basic idea is the matching of the renormalization-group flow of the interface with that of the exactly solvable body centered cubic solid-on-solid model. We unambiguously confirm the Kosterlitz-Thouless nature of the roughening transition of the Ising interface. Our result for the inverse transition temperature K_R = 0.40754(5) is almost two orders of magnitude more accurate than the estimate of Mon, Landau and Stauffer [9].

Tangent measure distributions were introduced by Bandt and Graf as a means to describe the local geometry of self-similar sets generated by iteration of contractive similitudes. In this paper we study the tangent measure distributions of hyperbolic Cantor sets generated by contractive mappings, which are not similitudes. We show that the tangent measure distributions of these sets equipped with either Hausdorff or Gibbs measure are unique almost everywhere and give an explicit formula describing them as probability distributions on the set of limit models of Bedford and Fisher.

Suppression of the magnetocrystalline bulk anisotropy in thin epitaxial Co(110) films on Cu(110)
(1996)

We report on an unexpected suppression of the magnetocrystalline anisotropy contribution in epitaxial fcc Co(110) films on Cu(110) below a thickness of dc=(50 +/- 10) Å. For film thicknesses larger than dc the measured anisotropy value agrees with published data. Measurements on films with reduced strain indicate a large strain dependence of dc. A model calculation based on a crystal-field formalism and discussed within the context of band theory, which explicitly takes tetragonal misfit strains into account, reproduces the experimentally observed anomalies. Our results indicate that the usually applied phenomenological description of anisotropies, assuming additive free energy terms for each anisotropy contribution, fails in this case.

The magnetic anisotropy of Co/Cu(001) films has been investigated by the magneto-optical Kerr effect, both in the pseudomorphic growth regime and above the critical thickness where strain relaxation sets in. A clear correlation between the onset of strain relaxation as measured by means of reflection high-energy electron diffraction and changes of the magnetic anisotropy has been found.

A continuous version of spherical multiresolution is described, starting from the continuous wavelet transform on the sphere. Scale discretization enables us to construct spherical counterparts to Daubechies wavelets and wavelet packets (known from Euclidean theory). An essential tool is the theory of singular integrals on the sphere. It is shown that singular integral operators forming a semigroup of contraction operators of class (C_0) (like Abel-Poisson or Gauß-Weierstraß operators) lead in a canonical way to (pyramidal) algorithms.

Some formulae containing logarithmic derivatives of (smooth) measures on infinite-dimensional spaces arise in quite different situations. In particular, logarithmic derivatives of a measure are inserted into the Schrödinger equation in the space consisting of functions that are square integrable with respect to this measure, which allows us to describe very simply a procedure of (canonical) quantization of infinite-dimensional Hamiltonian systems with a linear phase space. Further, the problem of reconstructing a measure from its logarithmic derivative (which was posed in [1] independently of any applications) can be equivalent either to the problem of finding the "ground state" (considered as some measure) for an infinite-dimensional Schrödinger equation, or to the problem of finding an invariant measure for a stochastic differential equation (which is a central question of so-called stochastic quantization), or to the problem of reconstructing a Gibbsian measure from its specification (i.e. from a collection of finite-dimensional conditional distributions). Logarithmic derivatives of some measure appear in the Cameron-Martin-Girsanov-Maruyama formulae and in their generalizations related to arbitrary smooth measures; they also allow us to connect these formulae with the Feynman-Kac formulae. This note discusses all these topics. Of course, due to its shortness, the presentation is in the main formal, and precise analytical assumptions are usually absent. Actually, only a list of formulae with brief comments is given. Let us mention also that we do not consider at all the so-called Dirichlet forms, to which a great deal of literature is devoted (cf. [3] and the references therein to the works of S. Albeverio and others).
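As a one-dimensional orientation (our own illustrative example, not taken from the note itself): for the standard Gaussian measure the logarithmic derivative is explicit, and the reconstruction problem mentioned above becomes an elementary ODE.

```latex
% Logarithmic derivative of the standard Gaussian measure (1D example).
\[
  d\gamma(x) = \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}\,dx ,
  \qquad
  \beta_\gamma(x)
  = \frac{\frac{d}{dx}\,e^{-x^2/2}}{e^{-x^2/2}}
  = -x ,
\]
so that the integration-by-parts identity
\[
  \int_{\mathbb{R}} f'(x)\,d\gamma(x)
  = -\int_{\mathbb{R}} f(x)\,\beta_\gamma(x)\,d\gamma(x)
\]
holds for smooth compactly supported $f$. Reconstructing $\gamma$ from
$\beta_\gamma$ (here: solving $\rho'/\rho = -x$ for the density $\rho$)
is the simplest instance of the reconstruction problem discussed above.
```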

The paper presents some new estimates on the gain term of the Boltzmann collision operator. For Maxwellian molecules, it is shown that the L∞-norm of the gain term can be bounded in terms of the L1- and L∞-norms of the density function f. In the case of more general collision kernels, like the hard-sphere interaction potential, the gain term is estimated pointwise by the L∞-norm of the density function and the loss term of the Boltzmann collision operator.
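For orientation, the gain/loss splitting referred to above can be written in the standard schematic form (our own recap in conventional notation, with v', v_*' the post-collisional velocities and B the collision kernel; this is not a formula quoted from the paper):

```latex
\[
  Q(f,f)(v)
  = \underbrace{\int_{\mathbb{R}^3}\!\int_{S^2} B(v-v_*,\omega)\,
      f(v')\,f(v_*')\,d\omega\,dv_*}_{Q^+(f,f)\ \text{(gain term)}}
  \;-\;
  \underbrace{f(v)\int_{\mathbb{R}^3}\!\int_{S^2} B(v-v_*,\omega)\,
      f(v_*)\,d\omega\,dv_*}_{Q^-(f,f)\ \text{(loss term)}} ,
\]
and an estimate of the type stated above has the schematic shape
\[
  \|Q^+(f,f)\|_{L^\infty} \;\le\; C\,\|f\|_{L^1}\,\|f\|_{L^\infty}
\]
for some constant $C$ depending on the kernel.
```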

The purpose of this paper is to present the state of the art in singular optimal control. If the Hamiltonian is independent of the control in an interval \([t_1,t_2]\), we call the control in this interval singular. Singular optimal controls appear in many applications, which has motivated research since the 1950s. Often optimal controls consist of nonsingular and singular parts, where the junctions between these parts are mostly very difficult to find. One section of this work presents the current state of knowledge about the location of the junctions and the behaviour of the control at the junctions. The definitions and properties of the orders (problem order and arc order), which are important in this context, are given, too. Another chapter considers multidimensional controls and how they can be treated. An alternative definition of the orders in the multidimensional case is proposed, and a counterexample is given which confirms a remark made in the 1960s. A comprehensive list of optimality conditions, which can be found in several publications, is added. A strategy for solving optimal control problems numerically is given, and the existing algorithms are compared with each other. Finally, conclusions and an outlook on future research are given.
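To fix the notions used above, the singular case and the problem order can be sketched for a control-affine Hamiltonian in the standard textbook form (our own recap, not quoted from the paper):

```latex
\[
  H(x,u,\lambda,t) = H_0(x,\lambda,t) + u\,H_1(x,\lambda,t),
  \qquad
  \varphi(t) := \frac{\partial H}{\partial u} = H_1 .
\]
The control is \emph{singular} on $[t_1,t_2]$ if the switching function
vanishes identically there, $\varphi(t) \equiv 0$ on $[t_1,t_2]$, so the
minimum principle no longer determines $u$ directly. The \emph{problem
order} $q$ is then the smallest integer such that $u$ appears explicitly
in the $2q$-th time derivative $\varphi^{(2q)}(t)$ along the singular arc.
```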

Significance of zero modes in path-integral quantization of solitonic theories with BRST invariance
(1996)

The significance of zero modes in the path-integral quantization of some solitonic models is investigated. In particular a Skyrme-like theory with topological vortices in (1 + 2) dimensions is studied, and with a BRST invariant gauge fixing a well defined transition amplitude is obtained in the one-loop approximation. We also present an alternative method which does not necessitate invoking the time-dependence in the functional integral, but is equivalent to the original one in dealing with the quantization in the background of the static classical solution of the non-linear field equations. The considerations given here are particularly useful in - but also limited to - the one-loop approximation.

For the case of the single-O(N)-vector linear sigma models the critical behaviour following from any A_k singularity in the action is worked out in the double scaling limit N->infinity, f_r -> f_r^c, 2 <= r <= k. After an exact elimination of Gaussian degrees of freedom, the critical objects such as coupling constants, indices and susceptibility matrix are derived for all A_k and spacetime dimensions 0 <= D <= 4. There appear exceptional spacetime dimensions where the degree k of the singularity A_k is more strongly constrained than by the renormalizability requirement.

This article will discuss a qualitative, topological and robust world-modelling technique with special regard to navigation tasks for mobile robots operating in unknown environments. As a central aspect, the reliability regarding error-tolerance and stability will be emphasized. Benefits and problems involved in exploration, as well as in navigation tasks, are discussed. The proposed method demands very low constraints on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot

The learning of uniformly recursively enumerable families of languages from good examples is investigated, and differences and similarities to the learning of recursive language families and recursive functions are pointed out. The underlying model is that of pupils learning from a teacher. Various variants are presented, compared and in part characterized, and examples and other typical properties are used to convey a feeling for their power. Among other things, it is shown that "universal" good examples, with which a language class can be explained in all situations, do not always exist.