Evaluation is an important issue for every scientific field and a necessity for an emerging software technology like case-based reasoning. This paper is a supplement to the review of industrial case-based reasoning tools by K.-D. Althoff, E. Auriol, R. Barletta and M. Manago, which describes the most detailed evaluation of commercial case-based reasoning tools currently available. The author focuses on some important aspects that correspond to the evaluation of case-based reasoning systems and gives links to ongoing research.
Case-Based Reasoning for Decision Support and Diagnostic Problem Solving: The INRECA Approach
(1995)
INRECA offers tools and methods for developing, validating, and maintaining decision support systems. INRECA's basic technologies are inductive and case-based reasoning, namely KATE-INDUCTION (cf., e.g., Manago, 1989; Manago, 1990) and S3-CASE, a software product based on PATDEX (cf., e.g., Wess, 1991; Richter & Wess, 1991; Althoff & Wess, 1991). Induction extracts decision knowledge from case databases. It brings to light patterns among cases and helps monitor trends over time. Case-based reasoning relates the engineer's current problem to past experiences.
The feature interaction problem in telecommunications systems increasingly obstructs the evolution of such systems. We develop formal detection criteria which render a necessary (but less than sufficient) condition for feature interactions. It can be checked mechanically and points out all potentially critical spots. These have to be analysed manually. The resulting resolution decisions are incorporated formally. Some prototype tool support is already available. A prerequisite for formal criteria is a formal definition of the problem. Since the notions of feature and feature interaction are often used in a rather fuzzy way, we attempt a formal definition first and discuss which aspects can be included in a formalization (and therefore in a detection method). This paper describes ongoing work.
In this paper the autonomous mobile vehicle MOBOT-IV is presented, which is capable of exploring an indoor environment while building up an internal representation of its world. This internal model is used for the navigation of the vehicle during and after the exploration phase. In contrast to methods that use a grid-based or line-based environment representation, in the approach presented in this paper local sector maps are the basic data structure of the world model. This paper describes the method of view-point planning for map building, the use of this map for navigation, and the method of external position estimation including the handling of a position error in a moving real-time system.
Oscillatory surface in-plane lattice spacing during growth of Co and Cu on a Cu(001) single crystal
(1995)
A concept of generalized discrepancy, which involves pseudodifferential operators to give a criterion of equidistributed point sets, is developed on the sphere. A simply structured formula in terms of elementary functions is established for the computation of the generalized discrepancy. With the help of this formula, five kinds of point systems on the sphere, namely lattices in polar coordinates, transformed 2-dimensional sequences, rotations on the sphere, triangulation, and the sum-of-three-squares sequence, are investigated. Quantitative tests are done, and the results are compared with each other. Our calculations exhibit different orders of convergence of the generalized discrepancy for different types of point systems.
Some new approximation methods are described for harmonic functions corresponding to boundary values on the (unit) sphere. Starting from the usual Fourier (orthogonal) series approach, we propose here nonorthogonal expansions, i.e. series expansions in terms of overcomplete systems consisting of localizing functions. In detail, we are concerned with the so-called Gabor, Toeplitz, and wavelet expansions. Essential tools are modulations, rotations, and dilations of a mother wavelet. The Abel-Poisson kernel turns out to be the appropriate mother wavelet in approximation of harmonic functions from potential values on a spherical boundary.
Spline functions that approximate data given on the sphere are developed in a weighted Sobolev space setting. The flexibility of the weights makes possible the choice of the approximating function in a way which emphasizes attributes desirable for the particular application area. Examples show that certain choices of the weight sequences yield known methods. A convergence theorem containing explicit constants yields a usable error bound. Our survey ends with the discussion of spherical splines in geodetically relevant pseudodifferential equations.
The basic theory of spherical singular integrals is recapitulated. Criteria are given for measuring the space-frequency localization of functions on the sphere. The trade-off between space localization on the sphere and frequency localization in terms of spherical harmonics is described in the form of an uncertainty principle. A continuous version of spherical multiresolution is introduced, starting from the continuous wavelet transform corresponding to spherical wavelets with vanishing moments up to a certain order. The wavelet transform is characterized by least-squares properties. Scale discretization enables us to construct spherical counterparts of wavelet packets and scale discrete Daubechies' wavelets. It is shown that singular integral operators forming a semigroup of contraction operators of class (C0) (like Abel-Poisson or Gauß-Weierstraß operators) lead in a canonical way to pyramid algorithms. Fully discretized wavelet transforms are obtained via approximate integration rules on the sphere. Finally, applications to (geo-)physical reality are discussed in more detail. A combined method is proposed for approximating the low-frequency parts of a physical quantity by spherical harmonics and the high-frequency parts by spherical wavelets. The particular significance of this combined concept is motivated for the situation of today's physical geodesy, viz. the determination of the high-frequency parts of the earth's gravitational potential under explicit knowledge of the lower-order part in terms of a spherical harmonic expansion.
We present a method for learning heuristics employed by an automated prover to control its inference machine. The hub of the method is the adaptation of the parameters of a heuristic. Adaptation is accomplished by a genetic algorithm. The necessary guidance during the learning process is provided by a proof problem and a proof of it found in the past. The objective of learning consists in finding a parameter configuration that avoids redundant effort w.r.t. this problem and the particular proof of it. A heuristic learned (adapted) this way can then be applied profitably when searching for a proof of a similar problem. So, our method can be used to train a proof heuristic for a class of similar problems. A number of experiments (with an automated prover for purely equational logic) show that adapted heuristics are not only able to speed up enormously the search for the proof learned during adaptation. They also reduce redundancies in the search for proofs of similar theorems. This not only results in finding proofs faster, but also enables the prover to prove theorems it could not handle before.
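The adaptation loop described above can be pictured with a generic genetic algorithm. The sketch below is purely illustrative: the surrogate `search_cost` function stands in for the prover's actual search effort on a training problem, and the two weights, population size, and mutation scale are invented for the example.

```python
import random

# Toy stand-in for "search effort of the prover on a known proof problem";
# its minimum lies at the (hypothetical) weight configuration (3.0, -1.0).
def search_cost(weights):
    w1, w2 = weights
    return (w1 - 3.0) ** 2 + (w2 + 1.0) ** 2

def adapt(pop_size=30, generations=60, seed=0):
    """Adapt heuristic parameters by a simple elitist genetic algorithm."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=search_cost)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # averaging crossover plus Gaussian mutation
            child = tuple((x + y) / 2 + rng.gauss(0, 0.1)
                          for x, y in zip(a, b))
            children.append(child)
        pop = parents + children
    return min(pop, key=search_cost)
```

Because the best individuals always survive, the cheapest parameter configuration found so far is never lost, mirroring the idea of retaining the heuristic that best re-finds the known proof.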
Problems stemming from the study of logic calculi in connection with an inference rule called "condensed detachment" are widely acknowledged as prominent test sets for automated deduction systems and their search guiding heuristics. It is in the light of these problems that we demonstrate the power of heuristics that make use of past proof experience with numerous experiments. We present two such heuristics. The first heuristic attempts to re-enact a proof of a proof problem found in the past in a flexible way in order to find a proof of a similar problem. The second heuristic employs "features" in connection with past proof experience to prune the search space. Both these heuristics not only allow for substantial speed-ups, but also make it possible to prove problems that were out of reach when using so-called basic heuristics. Moreover, a combination of these two heuristics can further increase performance. We compare our results with the results the creators of Otter obtained with this renowned theorem prover and this way substantiate our achievements.
Correctness and runtime efficiency are essential properties of software in general and of high-speed protocols in particular. Establishing correctness requires the use of FDTs during protocol design and a proof that the protocol code is correct with respect to its formal specification. Another approach to boost confidence in the correctness of the implementation is to generate protocol code automatically from the specification. However, the runtime efficiency of this code is often insufficient. This has turned out to be a major obstacle to the use of FDTs in practice. One of the FDTs currently applied to communication protocols is Estelle. We show how runtime efficiency can be significantly improved by several measures carried out during the design, implementation and runtime of a protocol. Recent results of improvements in the efficiency of Estelle-based protocol implementations are extended and interpreted.
The well-known and powerful proof principle by well-founded induction says that for verifying \(\forall x : P (x)\) for some property \(P\) it suffices to show \(\forall x : [[\forall y < x : P (y)] \Rightarrow P (x)]\), provided \(<\) is a well-founded partial ordering on the domain of interest. Here we investigate a more general formulation of this proof principle which allows for a kind of parameterized partial orderings \(<_x\) which naturally arises in some cases. More precisely, we develop conditions under which the parameterized proof principle \(\forall x : [[\forall y <_x x : P (y)] \Rightarrow P (x)]\) is sound in the sense that \(\forall x : [[\forall y <_x x : P (y)] \Rightarrow P (x)] \Rightarrow \forall x : P (x)\) holds, and give counterexamples demonstrating that these conditions are indeed essential.
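As a sanity check on the parameterized formulation (our illustration, not an example from the paper): if the family is constant, i.e. \(<_x \; = \; <\) for every \(x\) with \(<\) well-founded, the parameterized principle specializes exactly to classical well-founded induction,

\[
\forall x : \big[[\forall y < x : P(y)] \Rightarrow P(x)\big] \;\Rightarrow\; \forall x : P(x),
\]

so any soundness conditions for the parameterized principle must in particular be satisfied by constant families.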
Normalized Coprime Factorizations in Continuous and Discrete Time - A Joint State-Space Approach
(1995)
Based on state-space formulas for coprime factorizations over ... and an algebraic characterization of J-inner functions, normalized doubly-coprime factorizations for different classes of continuous- and discrete-time transfer functions are derived using a single general construction method. The parametrization of the factors is in terms of the stabilizing solutions of general degenerate continuous- and discrete-time Riccati equations, respectively, which are obtained by examining state-space representations of J-normalized factor matrices.
This paper deals with domain decomposition methods for kinetic and drift diffusion semiconductor equations. In particular, accurate coupling conditions at the interface between the kinetic and drift diffusion domains are given. The cases of slight and strong nonequilibrium situations at the interface are considered, and some numerical examples are shown.
This survey contains a description of different types of mathematical models used for the simulation of vehicular traffic. It includes models based on ordinary differential equations, fluid dynamic equations and on equations of kinetic type. Connections between the different types of models are mentioned. Particular emphasis is put on kinetic models and on simulation methods for these models.
It is shown that nonvacuum pseudoparticles can account for quantum tunneling and metastability. In particular the saddle-point nature of the pseudoparticles is demonstrated, and the evaluation of path-integrals in their neighbourhood. Finally the relation between instantons and bounces is used to derive a result conjectured by Bogomolny and Fateyev.
A new approach with BRST invariance is suggested to cure the degeneracy problem of ill-defined path integrals in the path-integral calculation of quantum mechanical tunneling effects, in which the problem arises due to the occurrence of zero modes. The Faddeev-Popov procedure is avoided and the integral over the zero mode is transformed in a systematic way into a well-defined integral over instanton positions. No special procedure has to be adopted, as in the Faddeev-Popov method, in calculating the Jacobian of the transformation. The quantum mechanical tunneling for the Sine-Gordon potential is used as a test of the method, and the width of the lowest energy band is obtained in exact agreement with that of WKB calculations.
Double Scaling Limits, Airy Functions and Multicritical Behaviour in O(N) Vector Sigma Models
(1995)
O(N) vector sigma models possessing catastrophes in their action are studied. Coupling the limit N -> infinity with an appropriate scaling behaviour of the coupling constants, the partition function develops a singular factor. This is a generalized Airy function in the case of spacetime dimension zero and the partition function of a scalar field theory for positive spacetime dimension.
This report is intended to provide an introduction to the method of Smoothed Particle Hydrodynamics, or SPH. SPH is a very versatile, fully Lagrangian, particle-based code for solving fluid dynamical problems. Many technical aspects of the method are explained, which can then be employed to extend the application of SPH to new problems.
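The core operation of SPH is a kernel-weighted summation over particles. As an illustration (not taken from the report), the following sketch computes a 1D summation density estimate with the standard cubic spline kernel; the uniform particle layout and the smoothing length h = 1.2*dx are arbitrary choices for the example.

```python
def w_cubic(r, h):
    """1D cubic spline smoothing kernel with support radius 2h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, m, h):
    """Summation density: rho_i = sum_j m_j * W(x_i - x_j, h)."""
    return [sum(mj * w_cubic(xi - xj, h) for xj, mj in zip(x, m))
            for xi in x]
```

For an interior particle of a uniform unit-density configuration this estimate reproduces the density to within about one percent; near the domain edges the truncated kernel support causes the familiar SPH boundary deficiency.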
Symmetry properties of average densities and tangent measure distributions of measures on the line
(1995)
Answering a question by Bedford and Fisher we show that for every Radon measure on the line with positive and finite lower and upper densities the one-sided average densities always agree with one half of the circular average densities at almost every point. We infer this result from a more general formula, which involves the notion of a tangent measure distribution introduced by Bandt and Graf. This formula shows that the tangent measure distributions are Palm distributions and define self-similar random measures in the sense of U. Zähle.
2D quantum dilaton gravitational Hamiltonian, boundary terms and new definition for total energy
(1995)
The ADM and Bondi mass for the RST model are first discussed following Hawking and Horowitz's argument. Since there is a nonlocal term in the RST model, the RST Lagrangian has to be localized so that Hawking and Horowitz's proposal can be carried out. Expressing the localized RST action in terms of the ADM formulation, the RST Hamiltonian can be derived while keeping track of all boundary terms. The total boundary terms can then be taken as the total energy for the RST model. Our result shows that the previous expression for the ADM and Bondi mass actually needs to be modified at the quantum level, but at the classical level our mass formula reduces to that given by Bilal and Kogan [5] and de Alwis [6]. It has been found that there is a new contribution to the ADM and Bondi mass from the RST boundary due to the existence of the hidden dynamical field. The ADM and Bondi mass with and without the RST boundary for the static and dynamical solutions have been discussed respectively in detail, and some new properties have been found. The thunderpop of the RST model has also been encountered in our new Bondi mass formula.
We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions made by special-purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.
The paper describes the concepts and background theory for the analysis of a neural-like network for learning and replication of periodic signals containing a finite number of distinct frequency components. The approach is based on the combination of ideas from dynamic neural networks and systems and control theory, where concepts of dynamics, adaptive control and tracking of specified time signals are fundamental. The proposed procedure is a two-stage process consisting of a learning phase, when the network is driven by the required signal, followed by a replication phase, where the network operates in an autonomous feedback mode whilst continuing to generate the required signal to a desired accuracy for a specified time. The analysis draws on currently available control theory and, in particular, on concepts from model reference adaptive control.
The paper describes the concepts and background theory of the analysis of a neural-like network for the learning and replication of periodic signals containing a finite number of distinct frequency components. The approach is based on a two-stage process consisting of a learning phase, when the network is driven by the required signal, followed by a replication phase, where the network operates in an autonomous feedback mode whilst continuing to generate the required signal to a desired accuracy for a specified time. The analysis focuses on stability properties of a model reference adaptive control based learning scheme via the averaging method. The averaging analysis provides fast adaptive algorithms with proven convergence properties.
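Model reference adaptive control of the kind this analysis builds on can be illustrated on the simplest textbook case (our sketch, not the paper's network): a scalar plant with unknown gain kp is driven so that its output tracks a reference model with gain km, while the controller gain theta is adapted by the gradient ("MIT rule") update. All numerical values are invented for the illustration.

```python
import math

def mit_rule_adaptation(kp=2.0, km=1.0, gamma=0.5, dt=0.01, t_end=50.0):
    """Learn a feedforward gain theta so that the plant output y = kp*theta*uc
    tracks the reference model ym = km*uc (gradient / MIT-rule adaptation)."""
    theta, t = 0.0, 0.0
    while t < t_end:
        uc = math.sin(t)              # persistently exciting command signal
        ym = km * uc                  # reference model output
        y = kp * theta * uc           # plant output under control u = theta*uc
        e = y - ym                    # tracking error
        theta -= gamma * e * ym * dt  # MIT rule: d(theta)/dt = -gamma*e*ym
        t += dt
    return theta
```

With a sinusoidal command the error equation contracts on average, so theta converges to km/kp (here 0.5); averaging arguments of the kind the abstract mentions are exactly what justifies replacing the time-varying sin^2 factor by its mean when proving such convergence.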
In this paper we consider a certain class of geodetic linear inverse problems Lambda F = G in a reproducing kernel Hilbert space setting to obtain a bounded generalized inverse operator Lambda. For a numerical realization we assume G to be given at a finite number of discrete points, to which we apply a spherical spline interpolation method adapted to the Hilbert spaces. By applying Lambda to the obtained spline interpolant we get an approximation of the solution F. Finally, our main task is to show some properties of the approximated solution and to prove convergence results as the data set increases.
By the use of locally supported basis functions for spherical spline interpolation, the applicability of this approximation method is extended, since the resulting interpolation matrix is sparse and thus efficient solvers can be used. In this paper we study locally supported kernels in detail. Investigations of the Legendre coefficients allow a characterization of the underlying Hilbert space structure. We show how spherical spline interpolation with polynomial precision can be managed with locally supported kernels, thus giving the possibility to combine approximation techniques based on spherical harmonic expansions with those based on locally supported kernels.
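The computational payoff of locally supported kernels is that the interpolation matrix has few nonzero entries. A minimal sketch of this effect, using a 1D analogue with a hypothetical truncated-linear kernel rather than the spherical kernels studied in the paper:

```python
def local_kernel(r, delta):
    """Compactly supported (truncated linear) kernel: zero for r >= delta."""
    return max(0.0, 1.0 - r / delta)

def sparse_interp_matrix(points, delta):
    """Assemble only the nonzero entries A[(i, j)] = K(|x_i - x_j|, delta)."""
    entries = {}
    for i, xi in enumerate(points):
        for j, xj in enumerate(points):
            v = local_kernel(abs(xi - xj), delta)
            if v != 0.0:
                entries[(i, j)] = v
    return entries

n = 200
pts = [i / n for i in range(n)]          # equally spaced sample points
A = sparse_interp_matrix(pts, delta=0.02)
fill = len(A) / n**2                     # fraction of nonzero entries
```

Because each basis function overlaps only a handful of neighbours, the fill fraction stays far below one, which is what makes sparse solvers applicable; a globally supported kernel would give a dense matrix with `fill == 1.0`.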
Recently, Xu and Cheney (1992) have proved that if all the Legendre coefficients of a zonal function defined on a sphere are positive, then the function is strictly positive definite. It will be shown in this paper that even if finitely many of the Legendre coefficients are zero, strict positive definiteness can still be assured. The results are based on approximation properties of singular integrals, and also provide a completely different proof of the results of Xu and Cheney.
The paper presents numerical results on the simulation of boundary value problems for the Boltzmann equation in one and two dimensions. In the one-dimensional case, we use prescribed fluxes at the left and diffusive conditions on the right end of a slab to study the resulting steady state solution. Moreover, we compute the numerical density function in velocity space and compare the result with the Chapman-Enskog distribution obtained in the limit for continuous media. The aim of the two-dimensional simulations is to investigate the possibility of a symmetry break in the numerical solution.
Numerical Simulation of the Stationary One-Dimensional Boltzmann Equation by Particle Methods
(1995)
The paper presents a numerical simulation technique - based on the well-known particle methods - for the stationary, one-dimensional Boltzmann equation for Maxwellian molecules. In contrast to the standard splitting methods, where one works with the instationary equation, the current approach simulates the direct solution of the stationary problem. The model problem investigated is the heat transfer between two parallel plates in the rarefied gas regime. An iteration process is introduced which leads to the stationary solution of the exact - space discretized - Boltzmann equation, in the sense of weak convergence.
Second Order Scheme for the Spatially Homogeneous Boltzmann Equation with Maxwellian Molecules
(1995)
In the standard approach, particle methods for the Boltzmann equation are obtained using an explicit time discretization of the spatially homogeneous Boltzmann equation. This kind of discretization leads to restrictions on the discretization parameter as well as on the differential cross section in the case of the general Boltzmann equation. Recently, it was shown how to construct an implicit particle scheme for the Boltzmann equation with Maxwellian molecules. The present paper combines both approaches using a linear combination of explicit and implicit discretizations. It is shown that the new method leads to a second-order particle method when an equiweighting of the explicit and implicit discretizations is used.
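The equiweighted blend of explicit and implicit discretizations is, in spirit, the trapezoidal (Crank-Nicolson) idea. A sketch of why equiweighting yields second order, shown here on a scalar relaxation equation y' = -y rather than the Boltzmann equation itself: the theta-blended step is first order for theta = 0 (explicit) or theta = 1 (implicit), but second order for theta = 1/2, which is visible in the error ratio when the step size is halved.

```python
import math

def theta_step(y, dt, theta):
    """One step for y' = -y with a theta-blend of explicit (theta=0) and
    implicit (theta=1) Euler: (y_new - y)/dt = -((1-theta)*y + theta*y_new)."""
    return y * (1.0 - (1.0 - theta) * dt) / (1.0 + theta * dt)

def solve(dt, theta, t_end=1.0):
    """Integrate y' = -y, y(0) = 1, up to t_end with fixed step dt."""
    y = 1.0
    for _ in range(round(t_end / dt)):
        y = theta_step(y, dt, theta)
    return y

exact = math.exp(-1.0)

def err(dt):
    return abs(solve(dt, theta=0.5) - exact)

ratio = err(0.1) / err(0.05)  # close to 4: halving dt quarters the error
```

An error ratio near 4 under step halving is the signature of a second-order scheme; repeating the experiment with theta = 0 or theta = 1 gives a ratio near 2 instead.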
A way to derive kinetic models for vehicular traffic consistently from microscopic follow-the-leader models is presented. The obtained class of kinetic equations is investigated. Explicit examples of kinetic models are developed, with a particular emphasis on obtaining models that give realistic results. For space-homogeneous traffic flow situations, numerical examples are given, including stationary distributions and fundamental diagrams.
We study the combination of the following already known ideas for showing confluence of unconditional or conditional term rewriting systems into practically more useful confluence criteria for conditional systems: our syntactic separation into constructor and non-constructor symbols, Huet's introduction and Toyama's generalization of parallel closedness for non-noetherian unconditional systems, the use of shallow confluence for proving confluence of noetherian and non-noetherian conditional systems, the idea that certain kinds of limited confluence can be assumed for checking the fulfilledness or infeasibility of the conditions of conditional critical pairs, and the idea that (when termination is given) only prime superpositions have to be considered and certain normalization restrictions can be applied for the substitutions fulfilling the conditions of conditional critical pairs. Besides combining and improving already known methods, we present the following new ideas and results: we strengthen the criterion for overlay joinable noetherian systems, and, by using the expressiveness of our syntactic separation into constructor and non-constructor symbols, we are able to present criteria for level confluence that are not actually criteria for shallow confluence, and also to weaken the severe requirement of normality (stiffened with left-linearity) in the criteria for shallow confluence of noetherian and non-noetherian conditional systems to the easily satisfied requirement of quasi-normality. Finally, the whole paper also gives a practically useful overview of the syntactic means for showing confluence of conditional term rewriting systems.
Self-localization in unknown environments, or the correlation of current and former impressions of the world, is an essential ability for most mobile robots. The method proposed in this article is the construction of a qualitative, topological world model as a basis for self-localization. As a central aspect, the reliability regarding error tolerance and stability is emphasized. The proposed techniques place very low constraints on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
World models for mobile robots, as introduced in many projects, are mostly redundant regarding similar situations detected in different places. The present paper proposes a method for the dynamic generation of a minimal world model based on these redundancies. The technique is an extension of the qualitative topological world modelling methods. As a central aspect, the reliability regarding error tolerance and stability is emphasized. The proposed technique places very low constraints on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.