In this report we give an overview of the development of our new Waldmeister prover for equational theories. We elaborate a systematic stepwise design process, starting with the inference system for unfailing Knuth-Bendix completion and ending up with an implementation which avoids the main diseases today's provers suffer from: overindulgence in time and space.

Our design process is based on a logical three-level system model consisting of basic operations for inference step execution, aggregated inference machine, and overall control strategy. Careful analysis of the inference system for unfailing completion has revealed the crucial points responsible for time and space consumption. For the low level of our model, we introduce specialized data structures and algorithms speeding up the running system and cutting it down in size, both by one order of magnitude compared with standard techniques. Flexible control of the mid-level aggregation inside the resulting prover is made possible by a corresponding set of parameters. Experimental analysis shows that this flexibility is a point of high importance. We go on with some implementation guidelines we have found valuable in the field of deduction.

The resulting new prover shows that our design approach is promising. We compare our system's throughput with that of an established system and finally demonstrate how two very hard problems could be solved by Waldmeister.
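To make the loop structure behind such a prover concrete, here is a minimal sketch of a completion loop, transplanted to string rewriting so it fits in a few lines: pick an equation, normalize both sides, orient it into a rule, and enqueue its critical pairs. All names are our own illustration; Waldmeister itself works on first-order terms, where orientation can fail and the unfailing variant of the calculus is needed.

```python
def shortlex_greater(a: str, b: str) -> bool:
    """Length first, then lexicographically: a total, well-founded order."""
    return (len(a), a) > (len(b), b)

def normalize(s: str, rules) -> str:
    """Compute a normal form of s under the current rule set."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            i = s.find(lhs)
            if i >= 0:
                s = s[:i] + rhs + s[i + len(lhs):]
                changed = True
    return s

def critical_pairs(r1, r2):
    """Overlaps of the left-hand sides give the critical pairs."""
    (l1, a), (l2, b) = r1, r2
    for k in range(1, min(len(l1), len(l2))):      # suffix/prefix overlap
        if l1[-k:] == l2[:k]:
            yield a + l2[k:], l1[:-k] + b
    i = l1.find(l2)                                # containment overlap
    if i >= 0:
        yield a, l1[:i] + b + l1[i + len(l2):]

def complete(equations, limit=10_000):
    rules, todo = [], list(equations)
    for _ in range(limit):
        if not todo:
            return rules                           # saturated: convergent
        s, t = todo.pop()                          # real provers pick the
        s, t = normalize(s, rules), normalize(t, rules)  # "lightest" equation
        if s == t:
            continue                               # joinable, discard
        rule = (s, t) if shortlex_greater(s, t) else (t, s)
        for other in rules + [rule]:
            todo.extend(critical_pairs(rule, other))
            todo.extend(critical_pairs(other, rule))
        rules.append(rule)
    raise RuntimeError("step limit reached before saturation")

rules = complete([("aA", ""), ("Aa", "")])         # free-group-style toy input
print(normalize("aAaAa", rules))                   # prints "a"
```

The choice of which passive equation to pick and how aggressively to interreduce corresponds to the mid-level control parameters discussed above.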
With the rapid spread of CAx techniques in the German automotive industry, the need for better integration of CAx systems into the process chains and for mastering the product information flows is growing. Against this background, a shift in CAx system architectures from closed, monolithic to openly integrated systems has become apparent in recent years. In the following, this process and its implications for users and for system vendors are analysed. Based on an initiative of the German automotive industry, the project ANICA (Analysis of Interfaces of various CAD/CAM-Systems) was launched. In this project, the interfaces to the system kernels of several CAx vendors are examined, and a concept for cooperating CAx systems in the automotive industry is developed.
Toying with Jordan matrices
(1996)
We present first steps towards fully automated deduction that merely requires the user to submit proof problems and pick up results. Essentially, this necessitates the automation of the crucial step in the use of a deduction system, namely choosing and configuring an appropriate search-guiding heuristic. Furthermore, we motivate why learning capabilities are pivotal for satisfactory performance. The infrastructure for automating both the selection of a heuristic and the integration of learning is provided in the form of an environment embedding the "core" deduction system.

We have conducted a case study in connection with a deduction system based on condensed detachment. Our experiments with the fully automated deduction system AutoCoDe have produced remarkable results. We substantiate AutoCoDe's encouraging achievements with a comparison with the renowned theorem prover Otter. AutoCoDe outperforms Otter even when assuming very favorable conditions for Otter.
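For readers unfamiliar with the underlying inference rule, here is a toy rendering of condensed detachment. Terms are nested tuples in Polish notation, e.g. ('C', x, y) for the implication "x implies y", with strings as variables; this encoding and the helper names are ours, not AutoCoDe's.

```python
def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if isinstance(t, str):
        return t == v
    return any(occurs(v, arg, s) for arg in t[1:])

def unify(a, b, s):
    """Robinson unification; returns an extended substitution or None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str):
        return None if occurs(a, b, s) else {**s, a: b}
    if isinstance(b, str):
        return unify(b, a, s)
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

def substitute(t, s):
    t = walk(t, s)
    if isinstance(t, str):
        return t
    return (t[0],) + tuple(substitute(arg, s) for arg in t[1:])

def rename(t, suffix):
    """Rename variables apart before unifying the premises."""
    if isinstance(t, str):
        return t + suffix
    return (t[0],) + tuple(rename(arg, suffix) for arg in t[1:])

def condensed_detachment(major, minor):
    """From |- C a b and |- a', with a and a' unifiable by mgu s: |- s(b)."""
    major, minor = rename(major, "1"), rename(minor, "2")
    if not (isinstance(major, tuple) and major[0] == 'C'):
        return None
    s = unify(major[1], minor, {})
    return None if s is None else substitute(major[2], s)

# Detaching the axiom CpCqp against itself:
axiom = ('C', 'p', ('C', 'q', 'p'))
print(condensed_detachment(axiom, axiom))
# -> ('C', 'q1', ('C', 'p2', ('C', 'q2', 'p2')))
```

Search-guiding heuristics of the kind the abstract discusses decide which of the many derivable conclusions of this rule to pursue next.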
We study the roughening transition of an interface in an Ising system on a 3D simple cubic lattice using a finite-size scaling method. The particular method has recently been proposed and successfully tested for various solid-on-solid models. The basic idea is the matching of the renormalization-group flow of the interface with that of the exactly solvable body-centered cubic solid-on-solid model. We unambiguously confirm the Kosterlitz-Thouless nature of the roughening transition of the Ising interface. Our result for the inverse transition temperature, K_R = 0.40754(5), is almost two orders of magnitude more accurate than the estimate of Mon, Landau and Stauffer [9].
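The following is not the authors' matching method, only a minimal sketch of the kind of system studied: a 3D Ising model in which an antiperiodic coupling across one pair of faces forces an interface into the box. The Metropolis update and all parameter values are generic illustrations.

```python
import math
import random

L, K = 8, 0.40754            # linear lattice size; coupling near the quoted K_R

def flip_cost(spins, x, y, z):
    """Energy change (in units of temperature) of flipping spin (x, y, z);
    bonds wrapping around in z carry a minus sign (antiperiodic)."""
    total = 0.0
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        sign = -1.0 if z + dz in (-1, L) else 1.0
        total += sign * spins[(x+dx) % L][(y+dy) % L][(z+dz) % L]
    return 2.0 * K * spins[x][y][z] * total

def sweep(spins):
    """One Metropolis sweep over the whole lattice."""
    for x in range(L):
        for y in range(L):
            for z in range(L):
                dE = flip_cost(spins, x, y, z)
                if dE <= 0.0 or random.random() < math.exp(-dE):
                    spins[x][y][z] *= -1

spins = [[[1] * L for _ in range(L)] for _ in range(L)]
for _ in range(100):         # equilibration sweeps (length chosen arbitrarily)
    sweep(spins)
# Interface observables measured on such configurations feed the
# finite-size-scaling analysis described in the abstract.
```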
Load balancing is one of the central problems that have to be solved in parallel computation. Here, the problem of distributed, dynamic load balancing for massive parallelism is addressed. A new local method, which realizes a physical analogy to equilibrating liquids in multi-dimensional tori or hypercubes, is presented. It is especially suited for communication mechanisms with a low set-up to transfer ratio, as they occur in tightly-coupled or SIMD systems. By successively shifting single load elements to the direct neighbors, the load is automatically transferred to lightly loaded processors. Compared to former methods, the proposed Liquid model has two main advantages. First, the task of load sharing is combined with the task of load balancing, where the former has priority. This property is valuable in many applications and important for highly dynamic load distribution. Second, the Liquid model has high efficiency. Asymptotically, it needs O(D · K · L_diff) load transfers to reach the balanced state in a D-dimensional torus with K processors per dimension and a maximum initial load difference of L_diff. The Liquid model clearly outperforms an earlier load balancing approach, nearest-neighbor averaging. Besides a survey of related research, analytical results within a formal framework are derived. These results are validated by worst-case simulations in one- and two-dimensional tori with up to two thousand processors.
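As a toy illustration of the single-element-shift idea, here is a synchronous simulation on a one-dimensional torus (D = 1). The real model's transfer rule and its load-sharing priority are more elaborate; the tie-breaking and stopping criterion below are our own simplifications.

```python
def balance_step(load):
    """Synchronous step: every processor holding at least two elements more
    than its lightest direct neighbor ships a single element to it."""
    k = len(load)
    delta = [0] * k
    for i in range(k):
        left, right = (i - 1) % k, (i + 1) % k
        j = left if load[left] < load[right] else right
        if load[i] - load[j] >= 2:
            delta[i] -= 1
            delta[j] += 1
    return [l + d for l, d in zip(load, delta)]

load = [16, 0, 0, 0, 4, 0, 0, 0]          # maximum initial difference: 16
for step in range(1000):                  # cap, in case of non-termination
    if max(load) - min(load) <= 1:
        break                             # balanced up to one element
    load = balance_step(load)
print(load, step)
```

Each step moves load only between direct neighbors, which is what keeps the per-transfer communication cost low on tightly-coupled or SIMD hardware.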
The calculation of form factors is an important problem in computing the global illumination in the radiosity setting. Closed-form solutions are often only available for objects without obstruction, and they are very hard to calculate. Using Monte Carlo integration and ray tracing provides a fast and elegant tool for estimating the form factors. In this paper we show that using deterministic low-discrepancy sample points is superior to random sampling, resulting in an acceleration of more than half an order of magnitude.
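A small sketch of the comparison drawn here: estimating a patch-to-patch form factor with random points versus deterministic low-discrepancy (Halton) points. The scene, two parallel unit squares one unit apart with no occluder, is our own illustration, so no visibility rays are needed.

```python
import math
import random

def radical_inverse(i, base):
    """Van der Corput radical inverse of i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def halton(i):
    """The i-th point of the 4D Halton sequence (first four primes as bases)."""
    return tuple(radical_inverse(i, b) for b in (2, 3, 5, 7))

def integrand(u1, v1, u2, v2):
    """cos(t1) cos(t2) / (pi r^2) for points on the two squares; both cosines
    equal 1/r in this geometry, so the kernel reduces to 1 / (pi r^4)."""
    r2 = (u2 - u1) ** 2 + (v2 - v1) ** 2 + 1.0
    return 1.0 / (math.pi * r2 * r2)

def estimate(samples):
    """Area-sampling estimator of the form factor (both patch areas are 1)."""
    return sum(integrand(*s) for s in samples) / len(samples)

n = 4096
random.seed(1)
mc = estimate([tuple(random.random() for _ in range(4)) for _ in range(n)])
qmc = estimate([halton(i) for i in range(1, n + 1)])
print(mc, qmc)   # the low-discrepancy estimate converges noticeably faster
```

Running both estimators at increasing n and comparing against a reference value is the kind of experiment that exhibits the reported speed-up.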