We present first steps towards fully automated deduction that merely requires the user to submit proof problems and pick up results. Essentially, this necessitates the automation of the crucial step in the use of a deduction system, namely choosing and configuring an appropriate search-guiding heuristic. Furthermore, we motivate why learning capabilities are pivotal for satisfactory performance. The infrastructure for automating both the selection of a heuristic and the integration of learning is provided in the form of an environment embedding the "core" deduction system. We have conducted a case study in connection with a deduction system based on condensed detachment. Our experiments with a fully automated deduction system, 'AutoCoDe', have produced remarkable results. We substantiate AutoCoDe's encouraging achievements with a comparison with the renowned theorem prover Otter. AutoCoDe outperforms Otter even when assuming very favorable conditions for Otter.
Evolving Combinators
(1996)
One of the many abilities that distinguish a mathematician from an automated deduction system is the ability to offer, based on intuition and experience, appropriate expressions that are substituted for existentially quantified variables so as to substantially simplify the problem at hand. We propose to simulate this ability with a technique called genetic programming for use in automated deduction. We apply this approach to problems of combinatory logic. Our experimental results show that the approach is viable and actually produces very promising results. A comparison with the renowned theorem prover Otter underlines the achievements. This work was supported by the Deutsche Forschungsgemeinschaft (DFG).
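As a toy illustration of the search setup (our own sketch, not the paper's system: random generation plus fitness selection stands in for the full genetic operators of crossover and mutation), the following Python snippet searches over random combinator terms built from S and K for one that behaves like the identity, scoring candidates by their reduction behaviour:

```python
import random

# Combinator terms: the strings 'S' and 'K' (or atoms such as 'a'),
# or applications ('app', f, x).  All names here are illustrative.

def size(t):
    return 1 if isinstance(t, str) else 1 + size(t[1]) + size(t[2])

def rebuild(head, args):
    for a in args:
        head = ('app', head, a)
    return head

def reduce_term(t, fuel=100):
    """Leftmost head reduction; returns the head-normal form, or None
    if the term explodes or runs out of fuel."""
    for _ in range(fuel):
        if size(t) > 200:                     # bail out on exploding terms
            return None
        spine, head = [], t
        while isinstance(head, tuple):        # unwind the application spine
            spine.append(head[2])
            head = head[1]
        args = spine[::-1]
        if head == 'K' and len(args) >= 2:    # K x y -> x
            t = rebuild(args[0], args[2:])
        elif head == 'S' and len(args) >= 3:  # S x y z -> x z (y z)
            t = rebuild(('app', ('app', args[0], args[2]),
                         ('app', args[1], args[2])), args[3:])
        else:
            return t
    return None

def random_term(depth=3):
    if depth == 0 or random.random() < 0.4:
        return random.choice(['S', 'K'])
    return ('app', random_term(depth - 1), random_term(depth - 1))

def fitness(t):
    """How often (t x) reduces to x, i.e. how closely t acts as identity."""
    return sum(reduce_term(('app', t, atom)) == atom for atom in 'abc')

random.seed(0)
population = [random_term() for _ in range(200)]
best = max(population, key=fitness)
# S K K is one identity combinator; the search may find it or an equivalent.
```

A real genetic-programming run would additionally recombine and mutate the fittest terms over many generations; the fitness function above is the part that ties the search to the deduction task.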
We present a concept for an automated theorem prover that employs a search control based on ideas from several areas of artificial intelligence (AI). The combination of case-based reasoning, several similarity concepts, a cooperation concept of distributed AI, and reactive planning enables a system using our concept to learn from previous successful proof attempts. In a kind of bootstrapping process, easy problems are used to solve more and more complicated ones. We provide case studies from two domains of interest in pure equational theorem proving taken from the TPTP library. These case studies show that an instantiation of our architecture achieves a high degree of automation and outperforms state-of-the-art conventional theorem provers.
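The bootstrapping idea rests on retrieving, for a new problem, the most similar previously solved one. A minimal Python sketch of such case retrieval, assuming hypothetical problem features and heuristic names (none of which come from the paper):

```python
import math

# Hypothetical case base: feature vectors of solved proof problems
# (e.g. number of axioms, maximal term depth, proof length) together
# with the search-guiding configuration that solved them.
case_base = {
    "GRP001": ((3, 5, 12), "weight_heuristic"),
    "GRP014": ((3, 9, 40), "goal_directed"),
    "BOO007": ((8, 4, 15), "occnest"),
}

def distance(u, v):
    """Euclidean distance as a stand-in for the paper's similarity concepts."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def retrieve(features):
    """Return the most similar solved case and its heuristic configuration."""
    name = min(case_base, key=lambda k: distance(case_base[k][0], features))
    return name, case_base[name][1]

# A new problem whose features resemble GRP001 reuses its configuration:
case, heuristic = retrieve((3, 5, 13))
```

Each newly solved problem would be added back into the case base, which is what lets easy problems bootstrap harder ones.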
We investigate the usage of so-called inference rights. We point out the problems arising from the inflexibility of existing approaches to heuristically controlling the search of automated deduction systems, and we propose the application of inference rights, which are well suited for controlling the search more flexibly. Moreover, inference rights allow for a mechanism of "partial forgetting" of facts that is not realizable in most existing control approaches. We study theoretical foundations of inference rights as well as the integration of inference rights into already existing inference systems. Furthermore, we present possibilities to control such modified inference systems in order to gain efficiency. Finally, we report on experimental results obtained in the area of condensed detachment. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
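A loose illustration of the "partial forgetting" mechanism (the class and function names are hypothetical, not the paper's): each fact carries a budget of inference rights, an applied inference consumes one right, and a fact whose rights are exhausted is skipped by the search but not deleted:

```python
# Each fact carries a budget of inference rights.  Applying an inference
# to a fact consumes one right; a fact with no rights left is "partially
# forgotten": the search skips it, but the fact itself is retained and
# its rights could later be replenished.

class Fact:
    def __init__(self, term, rights):
        self.term = term
        self.rights = rights

def active(facts):
    """Facts that still hold inference rights and may trigger inferences."""
    return [f for f in facts if f.rights > 0]

def consume(fact):
    fact.rights -= 1

facts = [Fact("p(a)", rights=2), Fact("q(b)", rights=1)]
consume(facts[1])                        # q(b) spends its only right
remaining = [f.term for f in active(facts)]   # only p(a) stays active
```

Because a forgotten fact still exists, the control heuristic can later grant it new rights, which is the flexibility that outright deletion would forfeit.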
Planning for realistic problems in a static and deterministic environment with complete information faces exponential search spaces and, more often than not, should produce plans that are comprehensible to the user. This article introduces new planning strategies, inspired by proof planning examples, in order to tackle the search-space problem and the structured-plan problem. Island planning and refinement as well as subproblem refinement are integrated into a general planning framework, and some exemplary control knowledge suitable for proof planning is given.
Toying with Jordan matrices
(1996)
Some formulae containing logarithmic derivatives of (smooth) measures on infinite-dimensional spaces arise in quite different situations. In particular, logarithmic derivatives of a measure are inserted in the Schrödinger equation in the space consisting of functions that are square integrable with respect to this measure, which allows us to describe very simply a procedure of (canonical) quantization of infinite-dimensional Hamiltonian systems with a linear phase space. Further, the problem of reconstructing a measure from its logarithmic derivative (posed in [1] independently of any applications) can be equivalent either to the problem of finding the "ground state" (considered as some measure) for the infinite-dimensional Schrödinger equation, or to the problem of finding an invariant measure for a stochastic differential equation (a central question of so-called stochastic quantization), or to the problem of reconstructing a Gibbsian measure from its specification (i.e. from a collection of finite-dimensional conditional distributions). Logarithmic derivatives of a measure appear in the Cameron-Martin-Girsanov-Maruyama formulae and in their generalizations related to arbitrary smooth measures; they also allow us to connect these formulae with the Feynman-Kac formulae. This note discusses all these topics. Of course, due to its shortness, the presentation is in the main formal, and precise analytical assumptions are usually absent. Actually, only a list of formulae with brief comments is given. Let us also mention that we do not consider at all the so-called Dirichlet forms, to which a great deal of literature is devoted (cf. [3] and the references therein to the works of S. Albeverio and others).
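As a one-dimensional illustration (our own, under the usual smoothness and positivity assumptions): for a measure \(\mu\) on \(\mathbb{R}\) with smooth positive density \(p\), the logarithmic derivative \(\beta_\mu\) is characterized by the integration-by-parts formula

\[
\int_{\mathbb{R}} f'(x)\,\mu(\mathrm{d}x) \;=\; -\int_{\mathbb{R}} f(x)\,\beta_\mu(x)\,\mu(\mathrm{d}x),
\qquad \beta_\mu = \frac{p'}{p},
\]

so for a centered Gaussian measure with density proportional to \(e^{-x^2/2\sigma^2}\) one gets \(\beta_\mu(x) = -x/\sigma^2\). Reconstructing \(\mu\) from \(\beta_\mu\) then amounts to solving \(p'/p = \beta_\mu\) up to normalization, which is the finite-dimensional shadow of the reconstruction problem discussed above.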
With the rapid spread of CAx technologies in the German automotive industry, the need for better integration of CAx systems into the process chains and for mastering the product information flows is growing. As a result, a shift in CAx system architectures from closed, monolithic systems to open, integrated systems has become apparent in recent years. In the following, this process and its implications for users and for system vendors are analyzed. Based on an initiative of the German automotive industry, the project ANICA (Analysis of Interfaces of various CAD/CAM-Systems) was started. In this project, the interfaces to the system kernels of several CAx vendors are examined, and a concept for cooperating CAx systems in the automotive industry is developed.
Representations of activities dealing with the development or maintenance of software are called software process models. Process models allow for communication, reasoning, guidance, improvement, and automation. Two approaches for building, instantiating, and managing processes, namely CoMo-Kit and MVP-E, are combined to build a more powerful one. CoMo-Kit is based on AI/KE technology; it was developed for supporting complex design processes and is not specialized to software development processes. MVP-E is a process-sensitive software engineering environment for modeling and analyzing software development processes, and it guides software developers. Additionally, it provides services to establish and run measurement programmes in software organizations. Because both approaches were developed completely independently, major integration efforts have to be made to combine their respective advantages. This paper concentrates on the resulting language concepts and on their operationalization, which is necessary for building automated process support.
In this paper we consider the problem of finding, in a given graph, a minimum-weight subtree or connected subgraph that has a given number of edges. These NP-hard combinatorial optimization problems have various applications in the oil industry, in facility layout, and in graph partitioning. We present different heuristic approaches based on spanning-tree and shortest-path methods, as well as an exact algorithm that solves the problem in polynomial time if the underlying graph is a tree. Both the edge-weighted and the node-weighted cases are investigated, and extensive numerical results on the behaviour of the heuristics compared to optimal solutions are presented. The best heuristic yielded results within an error margin of less than one percent from optimality in most cases. In a large percentage of tests, even optimal solutions were found.
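A minimal sketch of one heuristic in the spanning-tree family the abstract mentions (our own simplification, not the paper's exact algorithm): grow a tree greedily from every possible start node, always adding the cheapest edge incident to the current tree, until it contains the required number of edges, and keep the best result.

```python
import heapq

def k_tree_greedy(edges, k):
    """Grow a tree from each start node by repeatedly adding the cheapest
    edge leaving the current tree until it has k edges.
    Returns (total_weight, tree_edges) for the best start, or None."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((w, u, v))
        adj.setdefault(v, []).append((w, v, u))
    best = None
    for start in adj:
        in_tree, tree, cost = {start}, [], 0
        heap = list(adj[start])
        heapq.heapify(heap)
        while heap and len(tree) < k:
            w, u, v = heapq.heappop(heap)
            if v in in_tree:          # edge would close a cycle; skip it
                continue
            in_tree.add(v)
            tree.append((u, v, w))
            cost += w
            for e in adj[v]:          # new frontier edges become candidates
                heapq.heappush(heap, e)
        if len(tree) == k and (best is None or cost < best[0]):
            best = (cost, tree)
    return best

example = [("a", "b", 1), ("b", "c", 1), ("a", "c", 5), ("c", "d", 1)]
# the best two-edge subtree has weight 2 (for instance a-b plus b-c)
```

Like all greedy growth heuristics, this one can be trapped by a cheap first edge that leads into an expensive region, which is why the paper compares several heuristics against optimal solutions.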
The purpose of this paper is to present the state of the art in singular optimal control. If the Hamiltonian in an interval \([t_1,t_2]\) is independent of the control, we call the control in this interval singular. Singular optimal controls appear in many applications, which has motivated research since the 1950s. Often optimal controls consist of nonsingular and singular parts, where the junctions between these parts are mostly very difficult to find. One section of this work presents the current knowledge about the location of the junctions and the behaviour of the control at the junctions. The definition and the properties of the orders (problem order and arc order), which are important in this context, are given, too. Another chapter considers multidimensional controls and how they can be treated. An alternate definition of the orders in the multidimensional case is proposed, and a counterexample is given which confirms a remark made in the 1960s. A voluminous list of optimality conditions, which can be found in several publications, is added. A strategy for solving optimal control problems numerically is given, and the existing algorithms are compared with each other. Finally, conclusions and an outlook on future research are given.
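In common notation (ours, not quoted from this paper): for a Hamiltonian \(H = H_0(x,\lambda,t) + \sigma(x,\lambda,t)\,u\) that is linear in the control, the control is singular on \([t_1,t_2]\) precisely when the switching function vanishes identically there,

\[
\sigma(t) \;=\; \frac{\partial H}{\partial u} \;\equiv\; 0 \quad \text{on } [t_1,t_2].
\]

The problem order \(q\) is then the smallest integer such that \(u\) appears explicitly in \(\frac{d^{2q}}{dt^{2q}}\frac{\partial H}{\partial u}\), and, in one common sign convention (minimization), the generalized Legendre-Clebsch condition requires

\[
(-1)^q \,\frac{\partial}{\partial u}\!\left[\frac{d^{2q}}{dt^{2q}}\frac{\partial H}{\partial u}\right] \;\ge\; 0
\]

along the singular arc.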
This paper considers a transmission boundary-value problem for the time-harmonic Maxwell equations neglecting displacement currents, which is frequently used for the numerical computation of eddy currents. Across material boundaries, the tangential components of the magnetic field H and the normal component of the magnetization μH are assumed to be continuous. This problem admits a hyperplane of solutions if the domains under consideration are multiply connected. Using integral equation methods and singular perturbation theory, it is shown that this hyperplane contains a unique point which is the limit of the classical electromagnetic transmission boundary-value problem for vanishing displacement currents. From the convergence proof, a simple constructive criterion for selecting this solution is immediately derived.
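In standard notation (ours, not quoted from the paper), the time-harmonic eddy-current model drops the displacement current \(i\omega\varepsilon E\) from Ampère's law,

\[
\nabla \times E = -\,i\omega\mu H, \qquad \nabla \times H = \sigma E,
\]

and the stated interface conditions read

\[
n \times (H_1 - H_2) = 0, \qquad n \cdot (\mu_1 H_1 - \mu_2 H_2) = 0
\]

on the material boundary, where \(n\) is the unit normal and the subscripts denote the limits from the two sides.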
Satellite gradiometry and its instrumentation is an ultra-sensitive technique for detecting the space gravitational gradient (i.e. the Hesse tensor of the gravitational potential). Gradiometry will be of great significance in inertial navigation, gravity surveying, geodynamics, and earthquake prediction research. In this paper, satellite gradiometry, formulated as an inverse problem of satellite geodesy, is discussed from two mathematical aspects. Firstly, satellite gradiometry is considered as a continuous problem of harmonic downward continuation. The space-borne gravity gradients are assumed to be known continuously over the satellite (orbit) surface. Our purpose is to specify sufficient conditions under which uniqueness and existence can be guaranteed. It is shown that, in a spherical context, uniqueness results are obtainable by decomposition of the Hesse matrix in terms of tensor spherical harmonics. In particular, the gravitational potential is proved to be uniquely determined if second-order radial derivatives are prescribed at satellite height. This information leads us to a reformulation of satellite gradiometry as a (Fredholm) pseudodifferential equation of the first kind. Secondly, for a numerical realization, we assume the gravitational gradients to be known for a finite number of discrete points. The discrete problem is dealt with by classical regularization methods based on filtering techniques by means of spherical wavelets. A spherical singular-integral-like approach to regularization methods is established, and regularization wavelets are developed which allow regularization in the form of a multiresolution analysis. Moreover, a combined spherical harmonic and spherical regularization wavelet solution is derived as an appropriate tool for future (global and local) high-precision resolution of the earth's gravitational potential.
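The uniqueness statement for second-order radial derivatives can be illustrated in standard outer-harmonic notation (our notation, not the paper's). Writing the potential outside a sphere of radius \(R\) as

\[
V(r,\xi) \;=\; \sum_{n=0}^{\infty}\sum_{m} v_{nm}\left(\frac{R}{r}\right)^{n+1} Y_{nm}(\xi), \qquad r \ge R,
\]

differentiation gives

\[
\frac{\partial^2 V}{\partial r^2}(r,\xi) \;=\; \sum_{n=0}^{\infty}\sum_{m} \frac{(n+1)(n+2)}{r^2}\left(\frac{R}{r}\right)^{n+1} v_{nm}\, Y_{nm}(\xi).
\]

Since \((n+1)(n+2) \neq 0\) for every \(n\), the coefficients \(v_{nm}\), and hence \(V\), are uniquely determined by the radial-radial gradient prescribed at satellite height \(r = R + h\); at the same time, the strong decay of the factors \((R/r)^{n+1}\) indicates why the downward continuation is ill-posed and calls for the regularization techniques developed in the paper.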