This report contains a collection of abstracts for talks given at the "Deduktionstreffen" held in Kaiserslautern, October 6 to 8, 1993. The topics of the talks range from theoretical aspects of term rewriting systems and higher-order resolution to descriptions of practical proof systems in various applications. They are grouped according to the following classification: Distribution and Combination of Theorem Provers, Termination, Completion, Functional Programs, Inductive Theorem Proving, Automatic Theorem Proving, Proof Presentation. The Deduktionstreffen is the annual meeting of the Fachgruppe Deduktionssysteme in the Gesellschaft für Informatik (GI), the German association for computer science.

We study deterministic conditional rewrite systems, i.e. conditional rewrite systems where the extra variables are not totally free but 'input bounded'. If such a system R is quasi-reductive, then the rewrite relation →R is decidable and terminating. We develop a critical pair criterion to prove confluence if R is quasi-reductive and strongly deterministic. In this case we prove that R is logical, i.e. that the conversion relation ↔*R coincides with the equality =R. We apply our results to prove Horn clause programs to be uniquely terminating. This research was supported by the Deutsche Forschungsgemeinschaft, SFB 314, Project D4.

Case-based problem solving can be significantly improved by applying domain knowledge (as opposed to problem-solving knowledge), which can be acquired with reasonable effort, to derive explanations of the correctness of a case. Such explanations, constructed on several levels of abstraction, can be employed as the basis for similarity assessment as well as for adaptation by solution refinement. The general approach of explanation-based similarity can be applied to different real-world problem-solving tasks such as diagnosis and planning in technical areas. This paper presents the general idea as well as two specific, completely implemented realizations for a diagnosis and a planning task.

Exact Solutions of Discrete Kinetic Models and Stationary Problems for the Plane Broadwell Model
(1993)

Spline functions that interpolate data given on the sphere are developed in a weighted Sobolev space setting. The flexibility of the weights makes possible the choice of the approximating function in a way which emphasizes attributes desirable for the particular application area. Examples show that certain choices of the weight sequences yield known methods. A pointwise convergence theorem containing explicit constants yields a usable error bound.
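The interpolation procedure summarized above can be sketched in miniature. For one particular (purely illustrative) weight choice the reproducing kernel takes the closed Abel-Poisson form, and the spline interpolant is the kernel combination whose coefficients solve the Gram system at the data nodes. The function names, the kernel, and the parameter `h` are our own assumptions, not the paper's notation:

```python
import math

def ap_kernel(t, h=0.7):
    """Closed-form Abel-Poisson-type kernel K(xi . eta); t is the inner
    product of two unit vectors. Illustrative weight choice only."""
    return (1 - h * h) / (4 * math.pi * (1 + h * h - 2 * h * t) ** 1.5)

def solve(M, y):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(y)
    A = [row[:] + [yi] for row, yi in zip(M, y)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[piv] = A[piv], A[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def sphere_spline(nodes, values, h=0.7):
    """Interpolant sum_i a_i K(x . p_i) with coefficients from the Gram system."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    G = [[ap_kernel(dot(p, q), h) for q in nodes] for p in nodes]
    a = solve(G, values)
    return lambda x: sum(ai * ap_kernel(dot(x, p), h) for ai, p in zip(a, nodes))

nodes = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
interp = sphere_spline(nodes, [1.0, 2.0, 3.0])
```

The Gram matrix is symmetric positive definite for this kernel, so the interpolation problem is uniquely solvable and the interpolant reproduces the data at the nodes.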

In this paper, we deal with the problem of spherical interpolation of discretely given data of tensorial type. To this end, spherical tensor fields are investigated and a decomposition formula is described. Tensor spherical harmonics are introduced as eigenfunctions of a tensorial analogue of the Beltrami operator and discussed in detail. Based on these preliminaries, a spline interpolation process is described and error estimates are presented. Furthermore, some relations between the spline basis functions and the theory of radial basis functions are developed.

Discrete families of functions with the property that every function in a certain space can be represented by its formal Fourier series expansion are developed on the sphere. A Fourier-series-type expansion obviously holds if the family is an orthonormal basis of a Hilbert space, but it can also hold in situations where the family is not orthogonal and is overcomplete. Furthermore, all functions in our approach are axisymmetric (depending only on the spherical distance), so that they can be used adequately in (rotation-)invariant pseudodifferential equations on the sphere. Three types of frames are considered: (i) Abel-Poisson frames, (ii) Gauss-Weierstrass frames, and (iii) frames consisting of locally supported kernel functions. Abel-Poisson frames form families of harmonic functions and provide us with powerful approximation tools in potential theory. Gauss-Weierstrass frames are intimately related to the diffusion equation on the sphere and play an important role in multiscale descriptions of image processing on the sphere. The third class enables us to discuss spherical Fourier expansions by means of axisymmetric finite elements.
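As a small numerical illustration of the Abel-Poisson case: the kernel admits both a Legendre series and a closed form, so the formal expansion can be checked directly. This is a hedged sketch; the truncation degree and the parameter values are arbitrary choices, not taken from the paper:

```python
import math

def abel_poisson_series(h, t, nmax=200):
    """Truncated Legendre series sum_{n=0}^{nmax} (2n+1)/(4 pi) h^n P_n(t),
    with P_n generated by the three-term recurrence."""
    p_prev, p_cur = 1.0, t                      # P_0(t), P_1(t)
    s = (1.0 + 3.0 * h * t) / (4 * math.pi)     # n = 0 and n = 1 terms
    hn = h
    for n in range(1, nmax):
        p_next = ((2 * n + 1) * t * p_cur - n * p_prev) / (n + 1)   # P_{n+1}
        hn *= h                                 # h^{n+1}
        s += (2 * (n + 1) + 1) * hn * p_next / (4 * math.pi)
        p_prev, p_cur = p_cur, p_next
    return s

def abel_poisson_closed(h, t):
    """Closed form (1 - h^2) / (4 pi (1 + h^2 - 2 h t)^{3/2})."""
    return (1 - h * h) / (4 * math.pi * (1 + h * h - 2 * h * t) ** 1.5)
```

For 0 < h < 1 the series converges geometrically, so the truncated sum agrees with the closed form to machine precision.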

This paper is concerned with the development of a self-adaptive spatial discretization for PDEs using a wavelet basis. A Petrov-Galerkin method [LPT91] is used to reduce the determination of the unknown at the new time step to the computation of scalar products. These have to be discretized in an appropriate way. We investigate this point in detail and devise an algorithm that has linear operation count with respect to the number of unknowns. It is tested with spline wavelets and Meyer wavelets, retaining the latter for their better localisation at finite precision. The algorithm is then applied to the one-dimensional thermodiffusive equations. We show that the adaptation strategy has to be modified in order to take into account the particular and very strong nonlinearity of this problem. Finally, a supplementary Fourier discretization permits the computation of two-dimensional flame fronts.

We investigate restricted termination and confluence properties of term rewriting systems, in particular weak termination and innermost termination, and their interrelation. New criteria are provided which are sufficient for the equivalence of innermost / weak termination and uniform termination of term rewriting systems. These criteria provide interesting possibilities to infer completeness, i.e. termination plus confluence, from restricted termination and confluence properties. Using these basic results we are also able to prove some new results about modular termination of rewriting. In particular, we show that termination is modular for some classes of innermost terminating and locally confluent term rewriting systems, namely for non-overlapping and even for overlay systems. As an easy consequence this latter result also entails a simplified proof of the fact that completeness is a decomposable property of so-called constructor systems. Furthermore we show how to obtain similar results for even more general cases of (non-disjoint) combined systems with shared constructors and of certain hierarchical combinations of systems with constructors. Interestingly, these modularity results are obtained by means of a proof technique which itself constitutes a modular approach.
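To make "innermost" concrete: in innermost rewriting, a redex is contracted only after all of its arguments are in normal form. A toy illustration over Peano naturals follows; the term encoding and the rule set are our own illustrative choices, not taken from the paper:

```python
# Terms over Peano naturals: 'z', ('s', t), ('add', t1, t2).
# Rules: add(z, y) -> y   and   add(s(x), y) -> s(add(x, y)).

def normalize(t):
    """Innermost normalization: arguments are brought to normal form
    before a rule is applied at the root."""
    if isinstance(t, tuple):
        head, *args = t
        args = [normalize(a) for a in args]      # innermost: children first
        if head == 'add':
            x, y = args
            if x == 'z':
                return y                         # add(z, y) -> y
            if isinstance(x, tuple) and x[0] == 's':
                # add(s(x), y) -> s(add(x, y))
                return ('s', normalize(('add', x[1], y)))
        return (head, *args)
    return t

two, one = ('s', ('s', 'z')), ('s', 'z')
result = normalize(('add', two, one))   # compute 2 + 1
```

Because this rule set is terminating and confluent, the innermost strategy reaches the same unique normal form as unrestricted rewriting; the results summarized above identify classes of systems where such equivalences can be inferred.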

We discuss how kinetic and aerodynamic descriptions of a gas can be matched at some prescribed boundary. The boundary (matching) conditions arise from the requirement that the relevant moments (p, u, ...) of the particle density function be continuous at the boundary, and from the requirement that the closure relation, by which the aerodynamic equations (holding on one side of the boundary) arise from the kinetic equation (holding on the other side), be satisfied at the boundary. We present a case study involving the Knudsen gas equation on one side and a system involving the Burgers equation on the other side in Section 2, and a discussion of the coupling of the full Boltzmann equation with the compressible Navier-Stokes equations in Section 3.

Questions arising from statistical decision theory, Bayes methods and other probability-theoretic fields lead to concepts of orthogonality of a family of probability measures. In this paper we therefore give a sketch of a generalized information theory which is very helpful in considering and answering those questions. In this adapted information theory, Shannon's classical transition channels modelled by finite stochastic matrices are replaced by compact families of probability measures that are uniformly integrable. These channels are characterized by concepts such as information rate and capacity and by optimal priors and the optimal mixture distribution. For practical studies we introduce an algorithm to calculate the capacity of the whole probability family which is applicable even for general output spaces. We then explain how the algorithm works and compare its numerical costs with those of the classical Arimoto-Blahut algorithm.
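For the finite-alphabet setting that the classical algorithm covers, the Arimoto-Blahut iteration can be sketched as follows. This is a minimal illustration of the classical algorithm the paper compares against; the function name and iteration count are our own choices:

```python
import math

def blahut_arimoto(P, iters=500):
    """Capacity (in bits) of a discrete memoryless channel.
    P[x][y] is the probability of output y given input x."""
    nx, ny = len(P), len(P[0])
    p = [1.0 / nx] * nx                       # start from the uniform prior
    z = 1.0
    for _ in range(iters):
        # output distribution induced by the current prior
        q = [sum(p[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        # c[x] = exp( D( P(.|x) || q ) ), the per-input divergence weight
        c = [math.exp(sum(P[x][y] * math.log(P[x][y] / q[y])
                          for y in range(ny) if P[x][y] > 0))
             for x in range(nx)]
        z = sum(p[x] * c[x] for x in range(nx))
        p = [p[x] * c[x] / z for x in range(nx)]  # reweight the prior
    return math.log2(z)

# binary symmetric channel with crossover probability 0.1
cap = blahut_arimoto([[0.9, 0.1], [0.1, 0.9]])
```

At the fixed point, z converges to exp(C), so log2(z) is the capacity in bits; for the binary symmetric channel this reproduces 1 - H2(0.1).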

Abstract: The classification of quasi-primary fields is outlined. It is proved that the only conserved quasi-primary currents are the energy-momentum tensor and the O(N)-Noether currents. Derivation of all quasi-primary fields and the resolution of degeneracy is sketched. Finally, the limits d = 2 and d = 4 of the space dimension are discussed. Whereas the latter is trivial, the former is only almost so. (To appear in the Proceedings of the XXII Conference on Differential Geometry Methods in Theoretical Physics, Ixtapa, Mexico, September 20-24, 1993)

In these lectures we will mainly treat a billiard game. Our particles will be hard spheres. Not always: we will also touch on cases where particles have internal energies due to rotation or vibration, which they exchange in a collision, and we will talk about chemical reactions happening during a collision. But many essential aspects occur already in the billiard case, which will therefore be paradigmatic. I do not know enough about semiconductors to handle collisions there - the Boltzmann case is certainly different but may give some idea even for the other cases.

This paper considers the numerical solution of a transmission boundary-value problem for the time-harmonic Maxwell equations with the help of a special finite volume discretization. Applying this technique to several three-dimensional test problems, we obtain large, sparse, complex linear systems, which are solved using BiCG, CGS, BiCGSTAB, and GMRES. We combine these methods with suitably chosen preconditioning matrices and compare the speed of convergence.
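To show the shape of one of these Krylov solvers, here is a bare-bones, unpreconditioned BiCGSTAB applied to a small dense complex system. This is purely to illustrate the iteration: the paper's test problems are large and sparse, and the matrix below is an arbitrary diagonally dominant stand-in of our own choosing:

```python
def bicgstab(A, b, tol=1e-12, maxiter=200):
    """Unpreconditioned BiCGSTAB for a complex system A x = b
    (plain-Python, dense stand-in for the paper's sparse systems)."""
    n = len(b)
    matvec = lambda u: [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui.conjugate() * vi for ui, vi in zip(u, v))
    x = [0j] * n
    r = list(b)                       # residual for the zero initial guess
    rhat = list(r)                    # fixed shadow residual
    rho = alpha = omega = 1 + 0j
    v = [0j] * n
    p = [0j] * n
    for _ in range(maxiter):
        rho1 = dot(rhat, r)
        beta = (rho1 / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(p)
        alpha = rho1 / dot(rhat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        if max(abs(si) for si in s) < tol:          # converged at the half step
            return [xi + alpha * pi for xi, pi in zip(x, p)]
        t = matvec(s)
        omega = dot(t, s) / dot(t, t)               # stabilization parameter
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if max(abs(ri) for ri in r) < tol:
            return x
        rho = rho1
    return x

A = [[4 + 1j, 1 + 0j, 0j],
     [1 + 0j, 5 - 2j, 1 + 0j],
     [0j, 1 + 0j, 6 + 1j]]
b = [1 + 0j, 2 + 0j, 3 + 1j]
x = bicgstab(A, b)
residual = max(abs(b[i] - sum(A[i][j] * x[j] for j in range(3))) for i in range(3))
```

A preconditioned variant, as used in the paper, would apply the same iteration to the transformed system; the preconditioner choice is exactly what the paper's convergence comparison studies.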

The paper presents the shuffle algorithm proposed by Baganoff, which can be implemented in simulation methods for the Boltzmann equation to simplify the binary collision process. It is shown that the shuffle algorithm is a discrete approximation of an isotropic collision law. The transition probability as well as the scattering cross section of the shuffle algorithm are compared with the corresponding quantities of a hard-sphere model. The discrepancy between measures on a sphere is introduced in order to quantify the approximation error incurred by using the shuffle algorithm.
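A minimal sketch of the underlying idea, assuming the simplest discrete variant in which the post-collision relative-velocity direction is drawn uniformly from the eight cube-diagonal directions (±1, ±1, ±1)/√3 (the full shuffle algorithm and its cross-section analysis are in the paper; momentum and kinetic energy are conserved here by construction):

```python
import math
import random

def shuffle_collision(v1, v2, rng=random):
    """One binary collision: keep the center-of-mass velocity and the modulus
    of the relative velocity, redraw the relative-velocity direction from the
    eight cube-diagonal directions (+-1, +-1, +-1)/sqrt(3)."""
    cm = [(a + b) / 2 for a, b in zip(v1, v2)]                 # center of mass
    g = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))   # |v1 - v2|
    d = [rng.choice((-1.0, 1.0)) / math.sqrt(3.0) for _ in range(3)]
    w1 = [c + g * di / 2 for c, di in zip(cm, d)]
    w2 = [c - g * di / 2 for c, di in zip(cm, d)]
    return w1, w2

v1, v2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
w1, w2 = shuffle_collision(v1, v2, random.Random(0))
```

Since the eight directions form only a coarse sample of the unit sphere, such a scheme approximates, rather than reproduces, an isotropic collision law; the discrepancy measure introduced in the paper quantifies exactly this gap.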