Preprints, 1993 (26 documents)
The classification of quasi-primary fields is outlined. It is proved that the only conserved quasi-primary currents are the energy-momentum tensor and the O(N)-Noether currents. The derivation of all quasi-primary fields and the resolution of degeneracy are sketched. Finally, the limits d = 2 and d = 4 of the space dimension are discussed. Whereas the latter is trivial, the former is only almost so. (To appear in the Proceedings of the XXII Conference on Differential Geometry Methods in Theoretical Physics, Ixtapa, Mexico, September 20-24, 1993)

Simulation methods like DSMC are an efficient tool to compute rarefied gas flows. Using supercomputers, it is possible to include various real gas effects like vibrational energies or chemical reactions in a gas mixture. Nevertheless, it is still necessary to improve the accuracy of the current simulation methods in order to reduce the computational effort. To support this task, the paper presents a comparison of the classical DSMC method with the so-called Finite Pointset Method. This new approach was developed over several years in the framework of the European space project HERMES. The comparison given in the paper is based on two different test cases: a spatially homogeneous relaxation problem and a two-dimensional axisymmetric flow problem at high Mach numbers.

This paper is concerned with the development of a self-adaptive spatial discretization for PDEs using a wavelet basis. A Petrov-Galerkin method [LPT91] is used to reduce the determination of the unknown at the new time step to the computation of scalar products. These have to be discretized in an appropriate way. We investigate this point in detail and devise an algorithm that has linear operation count with respect to the number of unknowns. It is tested with spline wavelets and Meyer wavelets, retaining the latter for their better localisation at finite precision. The algorithm is then applied to the one-dimensional thermodiffusive equations. We show that the adaptation strategy needs to be modified in order to take into account the particular and very strong nonlinearity of this problem. Finally, a supplementary Fourier discretization permits the computation of two-dimensional flame fronts.

Exact Solutions of Discrete Kinetic Models and Stationary Problems for the Plane Broadwell Model
(1993)

Based on experiences from an autonomous mobile robot project called MOBOT-III, we identified hard real-time constraints for the operating system design. ALBATROSS is "A flexible multi-tasking and realtime network-operating-system kernel"; it is not limited to mobile robot projects, but may be useful wherever a high reliability of a real-time system has to be guaranteed. The focus of this article is on a communication scheme that fulfils the demanded (hard real-time) assurances without imposing time delays or jitter on the critical information channels. The central chapters discuss a locking-free shared-buffer management that works without the need for interrupts, and a way to arrange the communication architecture so as to produce minimal protocol overhead and short cycle times. Most of the remaining communication capacity (if there is any) is used for redundant transfers, increasing the reliability of the whole system. ALBATROSS is currently implemented on a multi-processor VMEbus system.

Case-based problem solving can be significantly improved by applying domain knowledge (as opposed to problem-solving knowledge), which can be acquired with reasonable effort, to derive explanations of the correctness of a case. Such explanations, constructed on several levels of abstraction, can be employed as the basis for similarity assessment as well as for adaptation by solution refinement. The general approach of explanation-based similarity can be applied to different real-world problem-solving tasks such as diagnosis and planning in technical areas. This paper presents the general idea as well as two specific, completely implemented realizations for a diagnosis and a planning task.

The system of shallow water waves is one of the classical examples of nonlinear, two-dimensional conservation laws. The paper investigates a simple kinetic equation depending on a parameter ε which, in the limit ε → 0, leads to the system of shallow water waves. The corresponding equilibrium distribution function has a compact support which depends on the eigenvalues of the hyperbolic system. It is shown that this kind of kinetic approach is restricted to a special class of nonlinear conservation laws. The kinetic model is used to develop a simple particle method for the numerical solution of shallow water waves. The particle method can be implemented in a straightforward way and produces sufficiently accurate results in test examples.

SPIN-NFDS Learning and Preset Knowledge for Surface Fusion - A Neural Fuzzy Decision System -
(1993)

The problem to be discussed in this paper may be characterized in short by the question: "Do these two surface fragments belong together (i.e. belong to the same surface)?" The presented techniques try to benefit from some predefined knowledge as well as from the possibility to refine and adapt this knowledge according to a (changing) real environment, resulting in a combination of fuzzy decision systems and neural networks. The results are encouraging (fast convergence, high accuracy), and the model might be used for a wide range of applications. The general frame surrounding the work in this paper is the SPIN project, which emphasizes sub-symbolic abstractions based on a 3-d scanned environment.

This paper addresses the problem of adaptability over an infinite period of time in dynamic networks. A never-ending flow of examples has to be clustered, based on a distance measure. The developed model is based on the self-organizing feature maps of Kohonen [6], [7] and some adaptations by Fritzke [3]. The problem of dynamic surface classification is embedded in the SPIN project, where sub-symbolic abstractions based on a 3-d scanned environment are performed.
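The Kohonen self-organizing feature maps that this abstract builds on can be illustrated with a minimal one-dimensional SOM sketch. All parameters here (unit count, learning-rate and neighborhood schedules) are hypothetical choices for illustration; the dynamic, Fritzke-style variant referred to in the abstract additionally grows and removes units, which this toy version does not:

```python
import math
import random

def train_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=2.0):
    """Minimal 1-D Kohonen SOM: units arranged on a line, each holding a
    weight vector that is pulled toward incoming examples."""
    dim = len(data[0])
    rng = random.Random(0)
    w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    t, t_max = 0, epochs * len(data)
    for _ in range(epochs):
        for x in data:
            frac = t / t_max
            lr = lr0 * (1.0 - frac)              # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 1e-3  # shrinking neighborhood
            # best-matching unit (squared Euclidean distance)
            bmu = min(range(n_units),
                      key=lambda i: sum((w[i][k] - x[k]) ** 2 for k in range(dim)))
            # neighborhood-weighted update of all units
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                for k in range(dim):
                    w[i][k] += lr * h * (x[k] - w[i][k])
            t += 1
    return w
```

Presenting a stream of examples from two clusters drives some units toward each cluster center, which is the clustering behavior the abstract relies on.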

This report contains a collection of abstracts for talks given at the "Deduktionstreffen" held at Kaiserslautern, October 6 to 8, 1993. The topics of the talks range from theoretical aspects of term rewriting systems and higher-order resolution to descriptions of practical proof systems in various applications. They are grouped together according to the following classification: Distribution and Combination of Theorem Provers, Termination, Completion, Functional Programs, Inductive Theorem Proving, Automatic Theorem Proving, Proof Presentation. The Deduktionstreffen is the annual meeting of the Fachgruppe Deduktionssysteme in the Gesellschaft für Informatik (GI), the German association for computer science.

The paper presents the shuffle algorithm proposed by Baganoff, which can be implemented in simulation methods for the Boltzmann equation to simplify the binary collision process. It is shown that the shuffle algorithm is a discrete approximation of an isotropic collision law. The transition probability as well as the scattering cross section of the shuffle algorithm are contrasted with the corresponding quantities of a hard-sphere model. The discrepancy between measures on a sphere is introduced in order to quantify the approximation error incurred by using the shuffle algorithm.
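For reference, the exactly isotropic collision law that the shuffle algorithm approximates can be sampled directly. This illustrative sketch (not Baganoff's algorithm itself) draws a post-collision direction uniformly from the unit sphere by the standard inverse-transform recipe:

```python
import math
import random

def isotropic_direction(rng=random):
    """Sample a unit vector uniformly on the sphere: cos(theta) uniform
    in [-1, 1], azimuth uniform in [0, 2*pi). This is the continuous
    isotropic scattering law that discrete shuffle-type schemes approximate."""
    cos_t = rng.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
```

Comparing empirical averages of such samples with those of a discrete scheme is one way to make the approximation error on the sphere visible.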

In this paper, we deal with the problem of spherical interpolation of discretely given data of tensorial type. To this end, spherical tensor fields are investigated and a decomposition formula is described. Tensor spherical harmonics are introduced as eigenfunctions of a tensorial analogue of the Beltrami operator and discussed in detail. Based on these preliminaries, a spline interpolation process is described and error estimates are presented. Furthermore, some relations between the spline basis functions and the theory of radial basis functions are developed.

We discuss how kinetic and aerodynamic descriptions of a gas can be matched at some prescribed boundary. The boundary (matching) conditions arise from the requirement that the relevant moments (p, u, ...) of the particle density function be continuous at the boundary, and from the requirement that the closure relation, by which the aerodynamic equations (holding on one side of the boundary) arise from the kinetic equation (holding on the other side), be satisfied at the boundary. We present a case study involving the Knudsen gas equation on one side and a system involving the Burgers equation on the other side in Section 2, and a discussion of the coupling of the full Boltzmann equation with the compressible Navier-Stokes equations in Section 3.

The paper presents a fast implementation of a constructive method to generate a special class of low-discrepancy sequences which are based on von Neumann-Kakutani transformations. Such sequences can be used in various simulation codes where it is necessary to generate a certain number of uniformly distributed random numbers on the unit interval. From a theoretical point of view, the uniformity of a sequence is measured in terms of the discrepancy, which is a special distance between a finite set of points and the uniform distribution on the unit interval. Numerical results are given on the cost efficiency of different generators on different hardware architectures, as well as on the corresponding uniformity of the sequences. As an example for the efficient use of low-discrepancy sequences in a complex simulation code, results are presented for the simulation of a hypersonic rarefied gas flow.
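A classical instance of such a construction is the base-b van der Corput sequence, obtained by digit reversal (equivalently, by iterating the von Neumann-Kakutani adding machine on the unit interval). The following sketch is a generic textbook illustration, not the paper's optimized implementation:

```python
def van_der_corput(n, base=2):
    """Return the n-th element of the base-b van der Corput sequence,
    the classical example of a low-discrepancy sequence on [0, 1):
    write n in base b and reflect its digits about the radix point."""
    x, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

# First points in base 2: 0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, ...
points = [van_der_corput(i) for i in range(8)]
```

Successive points fill the unit interval far more evenly than pseudo-random draws, which is exactly the discrepancy advantage exploited in the simulation codes mentioned above.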

This paper considers the numerical solution of a transmission boundary-value problem for the time-harmonic Maxwell equations with the help of a special finite volume discretization. Applying this technique to several three-dimensional test problems, we obtain large, sparse, complex linear systems, which are solved using BiCG, CGS, BiCGSTAB, and GMRES. We combine these methods with suitably chosen preconditioning matrices and compare the speed of convergence.

Discrete families of functions with the property that every function in a certain space can be represented by its formal Fourier series expansion are developed on the sphere. A Fourier series type expansion is obviously valid if the family is an orthonormal basis of a Hilbert space, but it can also hold in situations where the family is not orthogonal and is overcomplete. Furthermore, all functions in our approach are axisymmetric (depending only on the spherical distance), so that they can be used adequately in (rotation) invariant pseudodifferential equations on the sphere. Three classes of frames are considered: (i) Abel-Poisson frames, (ii) Gauss-Weierstrass frames, and (iii) frames consisting of locally supported kernel functions. Abel-Poisson frames form families of harmonic functions and provide us with powerful approximation tools in potential theory. Gauss-Weierstrass frames are intimately related to the diffusion equation on the sphere and play an important role in multiscale descriptions of image processing on the sphere. The third class enables us to discuss spherical Fourier expansions by means of axisymmetric finite elements.

Spline functions that interpolate data given on the sphere are developed in a weighted Sobolev space setting. The flexibility of the weights makes possible the choice of the approximating function in a way which emphasizes attributes desirable for the particular application area. Examples show that certain choices of the weight sequences yield known methods. A pointwise convergence theorem containing explicit constants yields a usable error bound.

We study deterministic conditional rewrite systems, i.e. conditional rewrite systems where the extra variables are not totally free but 'input bounded'. If such a system R is quasi-reductive, then the rewrite relation →R is decidable and terminating. We develop a critical pair criterion to prove confluence if R is quasi-reductive and strongly deterministic. In this case we prove that R is logical, i.e. the convertibility relation ↔*R coincides with the equality =R induced by R. We apply our results to prove Horn clause programs to be uniquely terminating. (This research was supported by the Deutsche Forschungsgemeinschaft, SFB 314, Project D4.)

We investigate restricted termination and confluence properties of term rewriting systems, in particular weak termination and innermost termination, and their interrelation. New criteria are provided which are sufficient for the equivalence of innermost / weak termination and uniform termination of term rewriting systems. These criteria provide interesting possibilities to infer completeness, i.e. termination plus confluence, from restricted termination and confluence properties. Using these basic results we are also able to prove some new results about modular termination of rewriting. In particular, we show that termination is modular for some classes of innermost terminating and locally confluent term rewriting systems, namely for non-overlapping and even for overlay systems. As an easy consequence this latter result also entails a simplified proof of the fact that completeness is a decomposable property of so-called constructor systems. Furthermore we show how to obtain similar results for even more general cases of (non-disjoint) combined systems with shared constructors and of certain hierarchical combinations of systems with constructors. Interestingly, these modularity results are obtained by means of a proof technique which itself constitutes a modular approach.
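The innermost strategy discussed in these abstracts can be made concrete on a toy term rewriting system. This sketch (Peano addition, a standard textbook example rather than one from the paper) normalizes a term by always contracting an innermost redex first:

```python
# Terms as nested tuples: ('0',), ('s', t), ('add', t1, t2).
# Toy constructor TRS:  add(0, y) -> y ;  add(s(x), y) -> s(add(x, y))

def step(t):
    """Perform one innermost rewrite step, or return None if t is a
    normal form: arguments are fully reduced before a root rule fires."""
    if len(t) > 1:
        for i, arg in enumerate(t[1:], start=1):
            s = step(arg)
            if s is not None:
                return t[:i] + (s,) + t[i + 1:]
    if t[0] == 'add':
        x, y = t[1], t[2]
        if x == ('0',):
            return y
        if x[0] == 's':
            return ('s', ('add', x[1], y))
    return None

def normalize(t):
    """Innermost normal form (this TRS is innermost terminating)."""
    while True:
        s = step(t)
        if s is None:
            return t
        t = s

def num(n):
    """Build the Peano numeral s^n(0)."""
    t = ('0',)
    for _ in range(n):
        t = ('s', t)
    return t
```

For this non-overlapping constructor system, innermost termination already yields uniform termination, which is the flavor of result the abstract establishes in general.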

Questions arising from statistical decision theory, Bayes methods and other probability-theoretic fields lead to concepts of orthogonality of a family of probability measures. In this paper we therefore give a sketch of a generalized information theory which is very helpful in considering and answering those questions. In this adapted information theory, Shannon's classical transition channels, modelled by finite stochastic matrices, are replaced by compact families of probability measures that are uniformly integrable. These channels are characterized by concepts such as information rate and capacity and by optimal priors and the optimal mixture distribution. For practical studies we introduce an algorithm to calculate the capacity of the whole probability family which is applicable even for a general output space. We then explain how the algorithm works and compare its numerical costs with those of the classical Arimoto-Blahut algorithm.
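The classical Arimoto-Blahut algorithm used as the baseline for the cost comparison can be sketched for a finite channel matrix; the paper's generalization to general output spaces is not reproduced here. This is a minimal textbook version with a hypothetical fixed iteration count:

```python
import math

def arimoto_blahut(W, iters=200):
    """Classical Arimoto-Blahut iteration for the capacity (in bits) of a
    discrete memoryless channel given by W[x][y] = P(y | x)."""
    m, n = len(W), len(W[0])
    p = [1.0 / m] * m                                     # uniform input prior
    for _ in range(iters):
        # output distribution induced by the current prior
        q = [sum(p[x] * W[x][y] for x in range(m)) for y in range(n)]
        # multiplicative update: p[x] proportional to p[x] * exp(D(W(.|x) || q))
        r = []
        for x in range(m):
            d = sum(W[x][y] * math.log(W[x][y] / q[y])
                    for y in range(n) if W[x][y] > 0)
            r.append(p[x] * math.exp(d))
        s = sum(r)
        p = [v / s for v in r]
    # capacity estimate: mutual information I(p; W) at the final prior
    q = [sum(p[x] * W[x][y] for x in range(m)) for y in range(n)]
    return sum(p[x] * W[x][y] * math.log2(W[x][y] / q[y])
               for x in range(m) for y in range(n) if W[x][y] > 0)

# Binary symmetric channel, crossover 0.1: capacity = 1 - H(0.1) ≈ 0.531 bits
bsc = [[0.9, 0.1], [0.1, 0.9]]
```

Each iteration touches every entry of the channel matrix, which is the cost that a generalized algorithm for non-matrix channels has to be measured against.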

In these lectures we will mainly treat a billiard game. Our particles will be hard spheres. Not always: we will also touch on cases where particles have interior energies due to rotation or vibration, which they exchange in a collision, and we will talk about chemical reactions happening during a collision. But many essential aspects occur already in the billiard case, which will therefore be paradigmatic. I do not know enough about semiconductors to handle collisions there - the Boltzmann case is certainly different, but may give some idea even for the other cases.