Order-semi-primal lattices
(1994)
A nonequilibrium situation governed by kinetic equations with strongly contrasted Knudsen numbers in different subdomains is discussed. We consider a domain decomposition problem for the Boltzmann and Euler equations, establish the correct coupling conditions and prove the validity of the obtained coupled solution. Moreover, numerical examples comparing different types of coupling conditions are presented.
Let \((a_i)_{i\in\mathbb{N}}\) be a sequence of independent and identically distributed random vectors drawn from the \(d\)-dimensional unit ball \(B^d\), and let \(X_n := \mathrm{convhull}(a_1,\dots,a_n)\) be the random polytope generated by \((a_1,\dots,a_n)\). Furthermore, let \(\Delta(X_n) := \mathrm{Vol}(B^d \setminus X_n)\) be the deviation of the polytope's volume from the volume of the ball. For uniformly distributed \(a_i\) and \(d\ge2\), we prove that the limiting distribution of \(\frac{\Delta(X_n)}{E(\Delta(X_n))}\) for \(n\to\infty\) satisfies a 0-1-law. In particular, we provide precise information about the asymptotic behaviour of the variance of \(\Delta(X_n)\). We deliver analogous results for spherically symmetric distributions in \(B^d\) with regularly varying tail.
Free Form Volumes
(1994)
This report presents a generalization of tensor-product B-spline surfaces. The new scheme permits knot lines whose endpoints lie in the interior of the domain rectangle of a surface. This allows local refinement of the knot structure for approximation purposes, as well as modeling surfaces with local tangent or curvature discontinuities. The surfaces are represented in terms of B-spline basis functions, ensuring affine invariance, local control, the convex hull property, and evaluation by de Boor's algorithm. A dimension formula for a class of generalized tensor-product spline spaces is developed.
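Evaluation by de Boor's algorithm, as cited in the abstract, works by repeated affine interpolation of control points. Below is a minimal sketch for a univariate B-spline curve (the tensor-product case applies it in each parameter direction); the knot vector and control points are illustrative, not taken from the report.

```python
import numpy as np

def de_boor(k: int, x: float, t: np.ndarray, c: np.ndarray, p: int) -> np.ndarray:
    """Evaluate a degree-p B-spline curve at parameter x via de Boor's algorithm.

    k: knot span index with t[k] <= x < t[k+1]
    t: knot vector, c: control points (n x dim), p: degree
    """
    # Copy the p+1 control points that influence this knot span.
    d = [c[j + k - p].astype(float) for j in range(p + 1)]
    # Triangular scheme of repeated affine interpolations.
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

t = np.array([0., 0., 0., 1., 2., 3., 3., 3.])                # illustrative knots
c = np.array([[0., 0.], [1., 2.], [2., -1.], [3., 2.], [4., 0.]])
print(de_boor(k=3, x=1.5, t=t, c=c, p=2))                      # point on the curve
```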
The rapid development of any field of knowledge brings with it unavoidable fragmentation and proliferation of new disciplines. The development of computer science is no exception. Software engineering (SE) and human-computer interaction (HCI) are both relatively new disciplines of computer science. Furthermore, as both names suggest, they each have strong connections with other subjects. SE is concerned with methods and tools for general software development based on engineering principles. This discipline has its roots not only in computer science but also in a number of traditional engineering disciplines. HCI is concerned with methods and tools for the development of human-computer interfaces, assessing the usability of computer systems, and with broader issues about how people interact with computers. It is based on theories about how humans process information and interact with computers, other objects and other people in the organizational and social contexts in which computers are used. HCI draws on knowledge and skills from psychology, anthropology and sociology in addition to computer science. Both disciplines need ways of measuring how well their products and development processes fulfil their intended requirements. Traditionally SE has been concerned with 'how software is constructed' and HCI with 'how people use software'. Given the different histories of the disciplines and their different objectives, it is not surprising that they take different approaches to measurement. Thus, each has its own distinct 'measurement culture'. In this paper we analyse the differences and the commonalities of the two cultures by examining the measurement approaches used by each. We then argue the need for a common measurement taxonomy and framework, which is derived from our analyses of the two disciplines. Next we demonstrate the usefulness of the taxonomy and framework via specific example studies drawn from our own work and that of others, and show that, in fact, the two disciplines have many important similarities as well as differences and that there is some evidence to suggest that they are growing closer. Finally, we discuss the role of the taxonomy as a framework to support reuse, planning future studies, guiding practice and facilitating communication between the two disciplines.
Optimization of Projection Methods for Solving Ill-posed Problems
In this paper we propose a modification of the projection scheme for solving ill-posed problems. We show that this modification allows one to obtain the best possible order of accuracy of Tikhonov regularization using an amount of information far less than required by the standard projection technique.
Visualization of large data sets, especially on small machines, requires advanced techniques in image processing and image generation. Our hybrid raytracer is capable of rendering volumetric and geometric data simultaneously, without loss of accuracy due to data conversion. Compound data sets, consisting of several types of data, are called "hybrid data sets". There is only one rendering pipeline to obtain loss-less and efficient visualization of hybrid data. Algorithms apply to both types of data: optical material properties are stored in the same database for both volumetric and geometric objects, and anti-aliasing methods apply to both data types. Stereoscopic display routines have been added to obtain true three-dimensional visualization on various media, and animation features allow the generation of recordable 3-D sequences.
The main problem in computer graphics is to solve the global illumination problem, which is given by a Fredholm integral equation of the second kind, called the radiance equation (REQ). In order to achieve realistic images, a very complex kernel of the integral equation, modelling all physical effects of light, must be considered. Due to this complexity, Monte Carlo methods seem to be an appropriate approach to solve the REQ approximately. We show that replacing Monte Carlo by quasi-Monte Carlo in some steps of the algorithm results in faster convergence.
The radiance equation, which describes the global illumination problem in computer graphics, is a high dimensional integral equation. Estimates of the solution are usually computed on the basis of Monte Carlo methods. In this paper we propose and investigate quasi-Monte Carlo methods, which means that we replace (pseudo-) random samples by low discrepancy sequences, yielding deterministic algorithms. We carry out a comparative numerical study between Monte Carlo and quasi-Monte Carlo methods. Our results show that quasi-Monte Carlo converges considerably faster.
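To illustrate the core idea of replacing pseudo-random samples by a low-discrepancy sequence, here is a hedged sketch comparing plain Monte Carlo with a Halton-sequence estimate on a smooth test integrand; the radiance-equation setting of the papers is of course far more involved, and the integrand below is purely illustrative.

```python
import numpy as np

def halton(n: int, dim: int) -> np.ndarray:
    """First n points of the Halton low-discrepancy sequence in [0,1)^dim."""
    primes = [2, 3, 5, 7, 11, 13][:dim]
    pts = np.empty((n, dim))
    for d, base in enumerate(primes):
        for i in range(n):
            f, r, idx = 1.0, 0.0, i + 1
            while idx > 0:           # radical-inverse digit expansion
                f /= base
                r += f * (idx % base)
                idx //= base
            pts[i, d] = r
    return pts

f = lambda x: np.prod(np.sin(np.pi * x), axis=1)   # smooth test integrand
exact = (2.0 / np.pi) ** 3                          # its integral over [0,1]^3
n = 4096
mc = f(np.random.rand(n, 3)).mean()                 # Monte Carlo estimate
qmc = f(halton(n, 3)).mean()                        # quasi-Monte Carlo estimate
print(abs(mc - exact), abs(qmc - exact))            # QMC error is typically smaller
```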
Monte Carlo integration is often used for antialiasing in rendering processes. Due to low sampling rates, only expected error estimates can be stated, and the variance can be high. In this article quasi-Monte Carlo methods are presented, achieving a guaranteed upper error bound and a convergence rate essentially as fast as that of usual Monte Carlo.
Hardware / Software Codesign
(1994)
The Basic Reference Model of ODP introduces a number of basic concepts in order to provide a common basis for the development of a coherent set of standards. To achieve this objective, a clear understanding of the basic concepts is one prerequisite. This paper makes an effort at clarifying some of the basic concepts independently of standardized or non-standardized formal description techniques. Among the basic concepts considered here are: agent, action, interaction, interaction point, architecture, behaviour, system, composition, refinement, and abstraction. In a case study, it is then shown how these basic concepts can be represented in a formal specification written in temporal logic.
In this paper the complexity of the local solution of Fredholm integral equations is studied. For certain Sobolev classes of multivariate periodic functions with dominating mixed derivative we prove matching lower and upper bounds. The lower bound is shown using relations to s-numbers. The upper bound is proved in a constructive way, providing an implementable algorithm of optimal order based on Fourier coefficients and a hyperbolic cross approximation.
We study the complexity of local solution of Fredholm integral equations. This means that we want to compute not the full solution, but rather a functional (weighted mean, value in a point) of it. For certain Sobolev classes of multivariate periodic functions we prove matching upper and lower bounds and construct an algorithm of the optimal order, based on Fourier coefficients and a hyperbolic cross approximation.
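The hyperbolic cross used in these two abstracts is a sparse set of Fourier frequencies tailored to dominating mixed smoothness. The sketch below generates one common variant of the index set; the threshold convention (product of max(1,|k_j|) bounded by n) is an assumption, not necessarily the exact convention of the papers.

```python
from itertools import product

def hyperbolic_cross(d: int, n: int):
    """Frequencies k in Z^d with prod_j max(1, |k_j|) <= n.

    This index set underlies hyperbolic cross approximation: far fewer
    coefficients than the full grid {-n,..,n}^d, yet (for dominating mixed
    smoothness) essentially the same approximation order.
    """
    def weight(k):
        w = 1
        for kj in k:
            w *= max(1, abs(kj))
        return w
    return [k for k in product(range(-n, n + 1), repeat=d) if weight(k) <= n]

full = (2 * 8 + 1) ** 2                 # size of the full grid for d=2, n=8
cross = len(hyperbolic_cross(2, 8))     # size of the hyperbolic cross
print(cross, full)                      # the cross is much sparser
```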
Best-Fit Pattern Matching
(1994)
This report shows that dispatching of methods in object-oriented languages is in principle the same as best-fit pattern matching. A general conceptual description of best-fit pattern matching is presented. Many object-oriented features are modelled by means of the general concept. This shows that simple methods, multi-methods, overloading of functions, pattern matching, dynamic and union types, and extendable records can be combined in a single comprehensive concept.
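To make the claimed correspondence concrete, here is a toy sketch of multi-method dispatch that selects the best-fitting (most specific) registered signature. The specificity measure via method-resolution-order distance is an illustrative choice, not the report's formal concept.

```python
class MultiMethod:
    """Dispatch on argument types, picking the best-fitting signature.

    'Best fit' here: among the applicable signatures, the one whose
    parameter classes are most specific (smallest total MRO distance).
    """
    def __init__(self):
        self.impls = []  # list of (type tuple, function)

    def register(self, *types):
        def deco(fn):
            self.impls.append((types, fn))
            return fn
        return deco

    def __call__(self, *args):
        def cost(types):
            # Sum of MRO distances; None if some argument doesn't match.
            total = 0
            for a, t in zip(args, types):
                mro = type(a).__mro__
                if t not in mro:
                    return None
                total += mro.index(t)
            return total
        fits = [(c, fn) for types, fn in self.impls
                if len(types) == len(args) and (c := cost(types)) is not None]
        if not fits:
            raise TypeError("no applicable method")
        return min(fits, key=lambda p: p[0])[1](*args)

collide = MultiMethod()

class Shape: pass
class Circle(Shape): pass

@collide.register(Shape, Shape)
def _(a, b): return "generic"

@collide.register(Circle, Shape)
def _(a, b): return "specialised"

print(collide(Circle(), Circle()))  # best fit: the (Circle, Shape) method
```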
The problem of interpolating Hermite-type data (i.e. two points with attached tangent vectors) with elastic curves of prescribed tension is known to have multiple solutions. A method is presented that finds all solutions whose length does not exceed one period of the curvature function. The algorithm is based on algebraic relations between discrete curvature information which allow the problem to be transformed into a univariate one. The method operates with curves that by construction partially interpolate the given data, whereby the objective function of the problem is drastically simplified. A bound on the maximum curvature value is established that provides an interval containing all solutions.
We introduce the concept of streamballs for fluid flow visualization. Streamballs are based upon implicit surface generation techniques adopted from the well-known metaballs. Their ability to split or merge automatically in areas of significant divergence or convergence makes them an ideal tool for the visualization of arbitrarily complex flow fields. Using convolution surfaces generated by continuous skeletons for streamball construction offers the possibility of visualizing even tensor fields.
The particle flux produced by an obliquely incident Nd Q-switched pulse (20 ns) on a Ta target has been analysed with regard to its angular distribution, resolved for both its neutral and ion components. The laser intensity has been varied in the range between about 10^10 and 10^11 W cm^-2, which is appropriate for many low-irradiance applications. It is observed that, at all emission angles and for the whole range of laser intensities, the number of neutral species clearly dominates the composition of the particles. At 1.3 x 10^10 W cm^-2 the total number of emitted particles is 4 x 10^14, scaling as E_L^(3/4) with the laser energy. While for relatively low laser energies the angular distribution shows the usual smooth cos-behaviour, an additional strong directive emission cone, superimposed upon the cos-distribution, develops if the laser energy is enhanced. Both the strength and the width strongly depend on the laser intensity. While at lower intensities a fit by a cos^n function with n ~ 10 seems appropriate, n increases to 26 at an intensity of 10^11 W cm^-2. It can be assumed that secondary energy transfer processes that are not yet fully understood are responsible for this anomalous emission.
We calculate exact analytical expressions for O(alpha_s) 3-jet and O(alpha_s^2) 4-jet cross sections in polarized deep inelastic lepton-nucleon scattering. Introducing an invariant jet definition scheme, we present differential distributions of 3- and 4-jet cross sections in the basic kinematical variables x and W^2 as well as total jet cross sections, and show their dependence on the chosen spin-dependent (polarized) parton distributions. Noticeable differences in the predictions are found for the two extreme choices, i.e. a large negative sea-quark density or a large positive gluon density. Therefore, it may be possible to discriminate between different parametrizations of polarized parton densities, and hence between the different physical pictures of the proton spin underlying these parametrizations.
Recently renewed interest in solitons has arisen in connection with exceptional statistics occurring in low-dimensional quantum field theory. The nonperturbative approach to quantum solitons [1, 2, 3, 4, 5], based on the notion of a disorder variable [6, 7], does not make use of the well-known semiclassical quantisation procedure around classical soliton solutions [8]. In a recent article [9] the author introduced multicomponent scalar field models, treated nonperturbatively on a Euclidean space-time lattice. The exponentially decaying disorder correlation functions are connected with soliton fields showing non-Abelian braid group statistics. It is the aim of this note to present the corresponding classical soliton solutions, which do not seem to have appeared in the literature.
The Boltzmann equation solutions are considered for small Knudsen numbers. The main attention is devoted to certain deviations from the classical Navier-Stokes description. The equations for quasistationary slow flows are derived. These equations do not contain the Knudsen number and provide in this sense a limiting description of hydrodynamical variables. Two well-known special cases are also indicated: in the isothermal case the equations are equivalent to the incompressible Navier-Stokes equations; in the stationary case they coincide with the equations of slow non-isothermal flows. It is shown that the derived equations possess all principal properties of the Boltzmann equation, in contrast to the Burnett equations. In one dimension the equations reduce to nonlinear diffusion equations, which are exactly solvable for Maxwell molecules. Multidimensional stationary heat-transfer problems are also discussed. It is shown that one can expect an essential difference between the Boltzmann equation solution in the continuum limit and the corresponding solution of the Navier-Stokes equations.
In this article a diffusion equation is obtained as a limit of a reversible kinetic equation with an ad hoc scaling. The diffusion is produced by the collisions of the particles with the boundary. These particles are assumed to be reflected according to a reversible law having convenient mixing properties. Optimal convergence results are obtained in a very simple manner. This is made possible because the model, based on Arnold's cat map, can be handled with Fourier series instead of the symbolic dynamics associated with a Markov partition.
In this paper we deal with the problem of computing the stresses in stationarily loaded bearings. A method to obtain the pressure in the lubrication fluid, which is given as a solution of Reynolds' differential equation, is presented. Furthermore, using the theory of plane stress, the stresses in the bearing shell are described by derivatives of biharmonic functions. A spline interpolation method for computing these functions is developed and an estimate for the error on the boundaries is presented. Finally, the described methods are tested theoretically as well as on real examples.
The distribution of quasiprimary fields of fixed classes, characterized by their O(N) representations Y and the number p of vector fields from which they are composed, at N=infinity in dependence on their normal dimension delta, is shown to obey a Hardy-Ramanujan law at leading order in a 1/N expansion. We develop a method of collective fusion of the fundamental fields which yields arbitrary quasiprimary fields and resolves any degeneracy.
We present a convenient notation for positive/negative-conditional equations. The idea is to merge rules specifying the same function by using case-, if-, match-, and let-expressions. Based on the presented macro-rule-construct, positive/negative-conditional equational specifications can be written on a higher level. A rewrite system translates the macro-rule-constructs into positive/negative-conditional equations.
Based on normalized coprime factorizations with respect to indefinite metrics and the construction of suitable characteristic functions, the Ober balanced canonical forms for the classes of bounded real and positive real functions are derived. This uses a matrix representation of the shift realization with respect to a basis related to sets of orthogonal polynomials.
Whenever new parts of a car have been developed, the manufacturer needs an estimate of the lifetime of the new part. On the one hand, the construction must not be too weak, so that the part lasts long enough to satisfy the customer; on the other hand, if the construction is excessive, the part gets too heavy. One is therefore interested in methods that need only few measured data from the specimen itself but use data about the material, because constructing and testing specimens is expensive.
Monte Carlo methods are widely used numerical tools in various fields of application, like rarefied gas dynamics, vacuum technology, stellar dynamics or nuclear physics. A central part in all applications is the generation of random variates according to a given probability law. Fundamental techniques to generate non-uniform random variates are the inversion principle or the acceptance-rejection method. Both procedures can be quite time-consuming if the given probability law has a complicated structure. In this paper we consider probability laws depending on a small parameter and investigate the use of asymptotic expansions to generate random variates. The results given in the paper are restricted to first order expansions. We show error estimates for the discrepancy as well as for the bounded Lipschitz distance of the asymptotic expansion. Furthermore, the integration error for some special classes of functions is given. The efficiency of the method is demonstrated by a numerical example from rarefied gas flows.
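The inversion principle named in the abstract maps uniform variates through the inverse distribution function. A minimal sketch for a law with a closed-form inverse CDF follows; the paper's asymptotic-expansion technique itself is not reproduced here.

```python
import numpy as np

def sample_exponential(rate: float, n: int, rng=np.random.default_rng()):
    """Inversion method: X = F^{-1}(U) for U uniform on (0,1).

    For the exponential law F(x) = 1 - exp(-rate*x),
    the inverse CDF is F^{-1}(u) = -log(1-u)/rate.
    """
    u = rng.random(n)
    return -np.log1p(-u) / rate   # log1p(-u) = log(1-u), numerically stable

samples = sample_exponential(2.0, 100_000)
print(samples.mean())             # should be close to 1/rate = 0.5
```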
In spite of its lack of theoretical justification, nonlinear diffusion filtering has become a powerful image enhancement tool in recent years. The goal of the present paper is to provide a mathematical foundation for nonlinear diffusion filtering as a scale-space transformation which is flexible enough to simplify images without losing the capability of enhancing edges. By studying Lyapunov functionals, it is shown that nonlinear diffusion reduces Lp norms and central moments and increases the entropy of images. The proposed anisotropic class utilizes a diffusion tensor which may be adapted to the image structure. It permits existence, uniqueness and regularity results; the solution depends continuously on the initial image, and it fulfills an extremum principle. All considerations include linear and certain nonlinear isotropic models and apply to m-dimensional vector-valued images. The results are juxtaposed with linear and morphological scale-spaces.
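For intuition, here is a sketch of the simplest scalar (Perona-Malik type) nonlinear diffusion in one dimension; the anisotropic, tensor-valued class of the paper is considerably more general, and the diffusivity and parameters below are illustrative.

```python
import numpy as np

def nonlinear_diffusion(u: np.ndarray, steps: int = 20, tau: float = 0.2,
                        lam: float = 10.0) -> np.ndarray:
    """Explicit scalar nonlinear diffusion of a 1-D signal.

    Diffusivity g(s) = 1/(1 + (s/lam)^2) is small at large gradients,
    so edges are preserved while flat regions are smoothed.
    """
    u = u.astype(float).copy()
    for _ in range(steps):
        grad = np.diff(u)                       # forward differences
        g = 1.0 / (1.0 + (grad / lam) ** 2)     # edge-stopping diffusivity
        flux = g * grad
        u[1:-1] += tau * np.diff(flux)          # divergence of the flux
    return u

signal = np.concatenate([np.zeros(50), 100 * np.ones(50)]) + np.random.randn(100)
print(nonlinear_diffusion(signal)[45:55])       # the edge at index 50 stays sharp
```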
Particle methods to simulate rarefied gas flows have found increasing interest in Computational Fluid Dynamics during the last decade; see for example [1], [2], [3] and [4]. The general goal is to develop numerical schemes which are reliable enough to replace real wind tunnel experiments, needed for example in space research, with computer experiments. In order to achieve this goal one needs numerical methods solving the Boltzmann equation including all important physical effects. In general this means 3D computations for a chemically reacting rarefied gas. With codes of this kind at hand, Boltzmann simulation becomes a powerful tool in studying rarefied gas phenomena.
Linear half-space problems can be used to solve domain decomposition problems between Boltzmann and aerodynamic equations. A new fast numerical method computing the asymptotic states and outgoing distributions for a linearized BGK half-space problem is presented. Relations with the so-called variational methods are discussed. In particular, we stress the connection between these methods and Chapman-Enskog type expansions.
We consider the numerical computation of nonlinear functionals of distribution functions approximated by point measures. Two methods are described, and estimates for the speed of convergence as the number of points tends to infinity are given. Moreover, numerical results for the entropy functional are presented.
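One simple way to evaluate such a functional from a point-measure approximation is to smooth the points into a histogram density first. The sketch below does this for the entropy functional; it is a hedged illustration of the task, not a reproduction of the paper's two methods or their convergence estimates.

```python
import numpy as np

def entropy_from_points(x: np.ndarray, bins: int = 50) -> float:
    """Estimate the entropy functional H(f) = - integral of f log f dx.

    The point measure (1/n) sum of delta_{x_i} is smoothed into a histogram
    density; the integral becomes a sum over cells.
    """
    hist, edges = np.histogram(x, bins=bins, density=True)
    dx = np.diff(edges)
    mask = hist > 0                 # 0 * log 0 = 0 by convention
    return -np.sum(hist[mask] * np.log(hist[mask]) * dx[mask])

samples = np.random.standard_normal(100_000)
print(entropy_from_points(samples))  # ~0.5*log(2*pi*e) = 1.4189 for N(0,1)
```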
The paper presents some approximation methods for the Boltzmann equation. In the first part, fully implicit discretization techniques for the spatially homogeneous Boltzmann equation are investigated. The implicit equation is solved using an iteration process. It is shown that the iteration converges to the correct solution for the moments of the distribution function as long as mass conservation is strictly fulfilled. For a simple model Boltzmann equation, some unexpected features of the implicit scheme and the corresponding iteration process are clarified. In the second part, a new iteration algorithm is proposed which should be used for the stationary Boltzmann equation. The realization of the method is very similar to standard splitting algorithms, except for some new stochastic elements.
Based on the idea of using topologic feature-maps instead of geometric environment maps in practical mobile robot tasks, we show an applicable way to navigate on such topologic maps. The main features regarding this kind of navigation are: handling of very inaccurate position (and orientation) information as well as implicit modelling of complex kinematics during an adaptation phase. Due to the lack of proper a-priori knowledge, a reinforcement-based model is used for the translation of navigator commands to motor actions. Instead of employing a backpropagation network for the central associative memory module (attaching action probabilities to sensor situations resp. navigator commands), a much faster dynamic cell structure system based on dynamic feature maps is shown. Standard graph-search heuristics like A* are applied in the planning phase.
The problem to be discussed here is the usage of neural network clustering techniques on a mobile robot in order to build qualitative topologic environment maps. This has to be done in real time, i.e. the internal world model has to be adapted by the flow of sensor samples without the possibility of stopping this data flow. Our experiments are done in a simulation environment as well as on a robot, called ALICE.
Visual search has been investigated by many researchers, inspired by the biological fact that the sensory elements on the mammalian retina are not equally distributed. Therefore the focus of attention (the area of the retina with the highest density of sensory elements) has to be directed in a way to efficiently gather data according to certain criteria. The work discussed in this article concentrates on applying a laser range finder instead of a silicon retina. The laser range finder is maximally focused at any time, but a low-resolution total-scene image, available with camera-like devices from scratch, cannot be used here. By adapting a couple of algorithms, the edge-scanning module steering the laser range finder is able to trace a detected edge. Based on the data scanned so far, two questions have to be answered. First: "Should the actual (edge-) scanning be interrupted in order to give another area of interest a chance of being investigated?" And second: "Where should a new edge-scanning start after being interrupted?" These two decision problems might be solved by a range of decision systems. The correctness of the decisions depends widely on the actual environment, and the underlying rules may not be well initialized with a-priori knowledge. So we will present a version of a reinforcement decision system together with an overall scheme for efficiently controlling highly focused devices.
In this paper we describe a framework for defining and operationalizing conceptual models of distributed knowledge-based systems which extends published approaches by the notion of "agents" and multiple task decompositions. The main part deals with techniques underlying our distributed interpreter. We show how a client-server architecture can be implemented which allows prototyping distributed knowledge-based systems. Further, we describe our mechanism which manages task interactions and supports dependency-directed backtracking efficiently.
Termination of Rewriting
(1994)
More and more, term rewriting systems are applied in computer science as well as in mathematics. They are based on directed equations which may be used as non-deterministic functional programs. Termination is a key property for computing with term rewriting systems. In this thesis, we deal with different classes of so-called simplification orderings which are able to prove the termination of term rewriting systems. Above all, we focus on the problem of applying these termination methods to examples occurring in practice. We introduce a formalism that allows clear representations of orderings. The power of classical simplification orderings - namely recursive path orderings, path and decomposition orderings, Knuth-Bendix orderings and polynomial orderings - is improved. Further, we restrict these orderings such that they are compatible with underlying AC-theories, by extending well-known methods as well as by developing new techniques. For automatically generating all these orderings, heuristic-based algorithms are given. A comparison of these orderings with respect to their power and their time complexity concludes the theoretical part of this thesis. Finally, not only a detailed statistical evaluation of examples but also a brief introduction to the design of a software tool representing the integration of the specified approaches is given.
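As a taste of the simplification orderings discussed in the thesis, here is a compact sketch of one classical member of the family, the lexicographic path ordering; the term encoding and precedence below are illustrative choices, not the thesis's formalism.

```python
def lpo_gt(s, t, prec):
    """Lexicographic path ordering: does s > t?

    Terms: variables are strings, applications are tuples (f, arg1, ...).
    prec maps function symbols to integers (higher = larger precedence).
    """
    def is_var(u):
        return isinstance(u, str)

    def occurs(x, u):
        return u == x if is_var(u) else any(occurs(x, v) for v in u[1:])

    def gt(s, t):
        if s == t or is_var(s):
            return False
        if is_var(t):
            return occurs(t, s)             # s > x iff x occurs properly in s
        f, ss = s[0], s[1:]
        g, tt = t[0], t[1:]
        if any(si == t or gt(si, t) for si in ss):     # subterm case
            return True
        if prec[f] > prec[g]:                          # precedence case
            return all(gt(s, tj) for tj in tt)
        if f == g:                                     # lexicographic case
            for si, ti in zip(ss, tt):
                if si == ti:
                    continue
                return gt(si, ti) and all(gt(s, tj) for tj in tt)
        return False

    return gt(s, t)

# Orient distributivity x*(y+z) -> x*y + x*z with precedence * > +.
prec = {"*": 2, "+": 1}
lhs = ("*", "x", ("+", "y", "z"))
rhs = ("+", ("*", "x", "y"), ("*", "x", "z"))
print(lpo_gt(lhs, rhs, prec))  # True: the rule terminates under this LPO
```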
A Case Study on Specification, Detection and Resolution of IN Feature Interactions with Estelle
(1994)
We present an approach for the treatment of feature interactions in Intelligent Networks. The approach is based on the formal description technique Estelle and consists of three steps. For the first step, a specification style supporting the integration of additional features into a basic service is introduced. As a result, feature integration is achieved by adding specification text, i.e. on a purely syntactical level. The second step is the detection of feature interactions resulting from the integration of additional features. A formal criterion is given that can be used for the automatic detection of a particular class of feature interactions. In the third step, previously detected feature interactions are resolved. An algorithm has been devised that allows the automatic incorporation of high-level design decisions into the formal specification. The presented approach is applied to the Basic Call Service and several supplementary interacting features.
Within the present paper we investigate case-based representability as well as case-based learnability of indexed families of uniformly recursive languages. Since we are mainly interested in case-based learning with respect to an arbitrary fixed similarity measure, case-based learnability of an indexed family requires its representability first. We show that every indexed family is case-based representable by positive and negative cases. If only positive cases are allowed, the class of representable families is comparatively small. Furthermore, we present results that provide some bounds concerning the necessary size of case bases. We study, in detail, how the choice of a case selection strategy influences the learning capabilities of a case-based learner. We define different case selection strategies and compare their learning power to one another. Furthermore, we elaborate the relations to Gold-style language learning from positive and from both positive and negative examples.
Automatic proof systems are becoming more and more powerful. However, the proofs generated by these systems are not met with wide acceptance, because they are presented in a way inappropriate for human understanding. In this paper we pursue two different, but related, aims. First we describe methods to structure and transform equational proofs in a way that they conform to human reading conventions. We develop algorithms to impose a hierarchical structure on proof protocols from completion-based proof systems and to generate equational chains from them. Our second aim is to demonstrate the difficulties of obtaining such protocols from distributed proof systems and to present our solution to these problems for provers using the TEAMWORK method. We also show that proof systems using this method can give considerable help in structuring the proof listing in a way analogous to human behaviour. In addition to theoretical results we also include descriptions of algorithms, implementation notes, examples and data on a variety of examples.
ALICE
(1994)
This paper presents the systematic synthesis of a fairly complex digital circuit and its CPLD implementation as an assemblage of communicating asynchronous sequential circuits. The example, a VMEbus controller, was chosen because it has to control concurrent processes and to arbitrate conflicting requests.
While symbolic learning approaches encode the knowledge provided by the presentation of the cases explicitly into a symbolic representation of the concept, e.g. formulas, rules, or decision trees, case-based approaches describe learned concepts implicitly by a pair (CB, d), i.e. by a set CB of cases and a distance measure d. Given the same information, the symbolic as well as the case-based approach compute a classification when a new case is presented. This poses the question of whether there are any differences concerning the learning power of the two approaches. In this work we study the relationship between the case base, the measure of distance, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the conjecture of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing search spaces. This suggests that sort information can be employed in higher-order theorem proving with similar results. This paper develops a sorted lambda-calculus suitable for automatic theorem proving applications. It extends the simply typed lambda-calculus by a higher-order sort concept that includes term declarations and functional base sorts. The term declaration mechanism studied here is powerful enough to subsume subsorting as a derived notion and therefore gives a justification for the special form of subsort inference. We present a set of transformations for sorted (pre-)unification and prove the nondeterministic completeness of the algorithm induced by these transformations.
This paper presents fill algorithms for boundary-defined regions in raster graphics. The algorithms require only a constant size working memory. The methods presented are based on the so-called "seed fill" algorithms using the internal connectivity of the region with a given inner point. Basic methods as well as additional heuristics for speeding up the algorithm are described and verified. For different classes of regions, the time complexity of the algorithms is compared using empirical results.
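For contrast with the constant-memory methods of the report, the textbook stack-based seed fill looks as follows; its worklist grows with the region, which is exactly what the report's algorithms avoid. The grid and seed below are illustrative.

```python
def seed_fill(image, seed, fill):
    """Standard stack-based seed fill for a 4-connected boundary-defined region.

    Unlike the constant-memory methods of the report, this textbook variant
    keeps a worklist whose size grows with the region.
    """
    rows, cols = len(image), len(image[0])
    x0, y0 = seed
    target = image[y0][x0]
    if target == fill:
        return image
    stack = [(x0, y0)]
    while stack:
        x, y = stack.pop()
        if 0 <= x < cols and 0 <= y < rows and image[y][x] == target:
            image[y][x] = fill
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return image

grid = [[0, 0, 1, 0],
        [0, 0, 1, 0],
        [1, 1, 1, 0]]
seed_fill(grid, (3, 0), 2)   # fill the region containing the top-right cell
print(grid)                  # the 0-region right of the 1-boundary is now 2
```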
A method for efficiently handling associativity and commutativity (AC) in implementations of (equational) theorem provers, without incorporating AC as an underlying theory, will be presented. The key to substantial efficiency gains lies in a more suitable representation of permutation-equations (such as f(x,f(y,z))=f(y,f(z,x)), for instance). By representing these permutation-equations through permutations in the mathematical sense (i.e. bijective functions from {1,..,n} to {1,..,n}), and by applying adapted and specialized inference rules, we can cope more appropriately with the fact that permutation-equations play a particular role. Moreover, a number of restrictions concerning the application and generation of permutation-equations can be found that would not be possible to this extent when treating permutation-equations just like any other equation. Thus, further improvements in efficiency can be achieved.
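A toy sketch of the underlying idea: a permutation-equation is stored as the permutation of argument positions, and consequences follow by composing permutations. The encoding is illustrative, not the provers' actual data structure.

```python
def compose(p, q):
    """Compose two permutations given as tuples: (p . q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

# The permutation-equation f(x1, f(x2, x3)) = f(x2, f(x3, x1)) moves the
# variable at position i to position cycle[i]: a cyclic shift (0 1 2).
cycle = (1, 2, 0)

def closure(gens):
    """Close a set of permutations under composition: every permutation-
    equation derivable from the represented ones."""
    seen, frontier = set(gens), list(gens)
    while frontier:
        p = frontier.pop()
        for q in list(seen):
            for r in (compose(p, q), compose(q, p)):
                if r not in seen:
                    seen.add(r)
                    frontier.append(r)
    return seen

print(sorted(closure({cycle})))
# [(0,1,2), (1,2,0), (2,0,1)]: the three cyclic shifts, i.e. exactly the
# consequences of the single represented permutation-equation.
```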
A Nonlinear Ray Theory
(1994)
A proof of the famous Huygens' method of wavefront construction is reviewed and it is shown that the method is embedded in the geometrical optics theory for the calculation of the intensity of the wave based on a high frequency approximation. It is then shown that Huygens' method can be extended in a natural way to the construction of a weakly nonlinear wavefront. This is an elegant nonlinear ray theory based on an approximation published by the author in 1975, which was inspired by the work of Gubkin. In this theory, the wave amplitude correction is incorporated in the eikonal equation itself, and this leads to a system of ray equations coupled to the transport equation. The theory shows that the nonlinear rays stretch due to the wave amplitude, as in the work of Choquet-Bruhat (1969), followed by Hunter, Majda, Keller and Rosales, but in addition the wavefront rotates due to a non-uniform distribution of the amplitude on the wavefront. Thus the amplitude of the wave modifies the rays and the wavefront geometry, which in turn affects the growth and decay of the amplitude. Our theory also shows that a compression nonlinear wavefront may develop a kink, but an expansion one always remains smooth. In the end, an exact solution showing the resolution of a linear caustic due to nonlinearity is presented. The theory incorporates all features of Whitham's geometrical shock dynamics.
The edge enhancement property of a nonlinear diffusion equation with a suitable expression for the diffusivity is an important feature for image processing. We present an algorithm to solve this equation in a wavelet basis and discuss its one-dimensional version in some detail. Sample calculations demonstrate the principal effects and treat in particular the case of highly noise-perturbed signals. The results are discussed with respect to performance, efficiency and the choice of parameters, and are illustrated by a large number of figures. Finally, a comparison with a Fourier method and a finite volume method is performed.
Particle Methods
(1994)