In this article a diffusion equation is obtained as a limit of a reversible kinetic equation with an ad hoc scaling. The diffusion is produced by the collisions of the particles with the boundary. These particles are assumed to be reflected according to a reversible law having convenient mixing properties. Optimal convergence results are obtained in a very simple manner. This is made possible because the model, based on Arnold's cat map, can be handled with Fourier series instead of the symbolic dynamics associated to a Markov partition.
Learning Abstraction Hierarchies for Optimizing the Selection of Machine-Abstracted Plans
(1994)
With the help of multistrategy approaches that integrate explanation-based and inductive learning, the performance of planning systems can be improved significantly. Solved planning problems can first be abstracted and generalized with a knowledge-intensive method. The inductive, incremental learning algorithm at the centre of this contribution then makes it possible to arrange the entirety of the deductively generated knowledge in an abstraction hierarchy. In doing so, the generally undecidable "more-specific-than" relation between generalized plans is learned inductively from the given planning cases. This abstraction hierarchy then serves to classify new problems and thereby to determine a most specific applicable abstract solution.
Solutions of the Boltzmann equation are considered for small Knudsen numbers. The main attention is devoted to certain deviations from the classical Navier-Stokes description. The equations for quasistationary slow flows are derived. These equations do not contain the Knudsen number and provide in this sense a limiting description of the hydrodynamical variables. Two well-known special cases are also indicated: in the isothermal case the equations are equivalent to the incompressible Navier-Stokes equations, and in the stationary case they coincide with the equations of slow non-isothermal flows. It is shown that the derived equations possess all principal properties of the Boltzmann equation, in contrast to the Burnett equations. In one dimension the equations reduce to nonlinear diffusion equations, which are exactly solvable for Maxwell molecules. Multidimensional stationary heat-transfer problems are also discussed. It is shown that one can expect an essential difference between the Boltzmann equation solution in the continuum limit and the corresponding solution of the Navier-Stokes equations.
The paper presents some approximation methods for the Boltzmann equation. In the first part fully implicit discretization techniques for the spatially homogeneous Boltzmann equation are investigated. The implicit equation is solved using an iteration process. It is shown that the iteration converges to the correct solution for the moments of the distribution function as long as mass conservation is strictly fulfilled. For a simple model Boltzmann equation some unexpected features of the implicit scheme and the corresponding iteration process are clarified. In the second part a new iteration algorithm is proposed for the stationary Boltzmann equation. The realization of the method is very similar to the standard splitting algorithms except for some new stochastic elements.
A Case Study on Specification, Detection and Resolution of IN Feature Interactions with Estelle
(1994)
We present an approach for the treatment of Feature Interactions in Intelligent Networks. The approach is based on the formal description technique Estelle and consists of three steps. For the first step, a specification style supporting the integration of additional features into a basic service is introduced. As a result, feature integration is achieved by adding specification text, i.e. on a purely syntactical level. The second step is the detection of feature interactions resulting from the integration of additional features. A formal criterion is given that can be used for the automatic detection of a particular class of feature interactions. In the third step, previously detected feature interactions are resolved. An algorithm has been devised that allows the automatic incorporation of high-level design decisions into the formal specification. The presented approach is applied to the Basic Call Service and several supplementary interacting features.
We introduce the concept of streamballs for fluid flow visualization. Streamballs are based upon implicit surface generation techniques adopted from the well-known metaballs. Their ability to split or merge automatically in areas of significant divergence or convergence makes them an ideal tool for the visualization of arbitrarily complex flow fields. Using convolution surfaces generated by continuous skeletons for streamball construction offers the possibility to visualize even tensor fields.
The problem of interpolating Hermite-type data (i.e. two points with attached tangent vectors) with elastic curves of prescribed tension is known to have multiple solutions. A method is presented that finds all solutions whose length does not exceed one period of their curvature function. The algorithm is based on algebraic relations between discrete curvature information which allow the problem to be transformed into a univariate one. The method operates with curves that by construction partially interpolate the given data, whereby the objective function of the problem is drastically simplified. A bound on the maximum curvature value is established that provides an interval containing all solutions.
Automatic proof systems are becoming more and more powerful. However, the proofs generated by these systems are not met with wide acceptance, because they are presented in a way inappropriate for human understanding. In this paper we pursue two different, but related, aims. First we describe methods to structure and transform equational proofs in a way that they conform to human reading conventions. We develop algorithms to impose a hierarchical structure on proof protocols from completion-based proof systems and to generate equational chains from them. Our second aim is to demonstrate the difficulties of obtaining such protocols from distributed proof systems and to present our solution to these problems for provers using the TEAMWORK method. We also show that proof systems using this method can give considerable help in structuring the proof listing in a way analogous to human behaviour. In addition to theoretical results we also include descriptions of algorithms, implementation notes, and data on a variety of examples.
In this work, a way of combining case-based and inductive reasoning, based on k-d trees and decision trees, is developed. The aim is to integrate the advantages of the inductive mechanism, such as its very efficient classification and its automatic generation, into the case-based mechanism. The task splits into two subtasks, which are summarized in the following.
This paper presents the systematic synthesis of a fairly complex digital circuit and its CPLD implementation as an assemblage of communicating asynchronous sequential circuits. The example, a VMEbus controller, was chosen because it has to control concurrent processes and to arbitrate conflicting requests.
Best-Fit Pattern Matching
(1994)
This report shows that dispatching of methods in object-oriented languages is in principle the same as best-fit pattern matching. A general conceptual description of best-fit pattern matching is presented. Many object-oriented features are modelled by means of the general concept. This shows that simple methods, multi-methods, overloading of functions, pattern matching, dynamic and union types, and extendable records can be combined in a single comprehensive concept.
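The idea of dispatch as best-fit matching can be illustrated with a small sketch. The classes, the inheritance-distance scoring, and the function names below are our own hypothetical choices for illustration, not taken from the report:

```python
# Hypothetical sketch of best-fit dispatch: among the applicable method
# signatures, pick the most specific one (smallest inheritance distance).
class A: pass
class B(A): pass          # B is more specific than A

def fit(sig, args):
    """Return a specificity score for sig on args, or None if inapplicable.
    Lower scores mean a better (more specific) fit."""
    score = 0
    for cls, arg in zip(sig, args):
        if not isinstance(arg, cls):
            return None                            # signature does not match
        score += type(arg).__mro__.index(cls)      # distance in the hierarchy
    return score

def best_fit(methods, args):
    """Dispatch to the method whose signature fits args best."""
    scored = [(s, fn) for sig, fn in methods
              if (s := fit(sig, args)) is not None]
    return min(scored, key=lambda t: t[0])[1] if scored else None

methods = [((A,), lambda x: "method for A"),
           ((B,), lambda x: "method for B")]
print(best_fit(methods, (B(),))(B()))   # the B-signature fits best
print(best_fit(methods, (A(),))(A()))   # only the A-signature applies
```

With one-element signatures this reduces to ordinary single dispatch; with longer tuples the same scoring covers multi-methods, which is exactly the unification the report argues for.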
In this paper the complexity of the local solution of Fredholm integral equations
is studied. For certain Sobolev classes of multivariate periodic functions with dominating mixed derivative we prove matching lower and upper bounds. The lower bound is shown using relations to s-numbers. The upper bound is proved in a constructive way providing an implementable algorithm of optimal order based on Fourier coefficients and a hyperbolic cross approximation.
We study the complexity of local solution of Fredholm integral equations. This means that we want to compute not the full solution, but rather a functional (weighted mean, value in a point) of it. For certain Sobolev classes of multivariate periodic functions we prove matching upper and lower bounds and construct an algorithm of the optimal order, based on Fourier coefficients and a hyperbolic cross approximation.
A method for efficiently handling associativity and commutativity (AC) in implementations of (equational) theorem provers, without incorporating AC as an underlying theory, is presented. The key to substantial efficiency gains resides in a more suitable representation of permutation-equations (such as f(x,f(y,z)) = f(y,f(z,x)), for instance). By representing these permutation-equations through permutations in the mathematical sense (i.e. bijective functions \(\pi: \{1,\dots,n\} \to \{1,\dots,n\}\)), and by applying adapted and specialized inference rules, we can cope more appropriately with the fact that permutation-equations play a particular role. Moreover, a number of restrictions concerning the application and generation of permutation-equations can be found that would not be possible to this extent when treating permutation-equations just like any other equation. Thus, further improvements in efficiency can be achieved.
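The payoff of the permutation representation can be sketched concretely: encoding each permutation-equation as a tuple, chaining equations becomes function composition, and the set of derivable permutation-equations is the generated subgroup. The encoding below is our own minimal illustration, not the paper's data structure:

```python
# Minimal sketch: encode a permutation-equation such as
# f(x1,f(x2,x3)) = f(x2,f(x3,x1)) as the permutation (1, 2, 0),
# meaning position i on the right-hand side holds variable p[i] of the left.
def compose(p, q):
    """Chain two permutation-equations: (p after q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def closure(generators):
    """All permutation-equations derivable from the given ones
    (the subgroup they generate, built by exhaustive composition)."""
    seen = set(generators)
    frontier = list(generators)
    while frontier:
        p = frontier.pop()
        for q in generators:
            r = compose(p, q)
            if r not in seen:
                seen.add(r)
                frontier.append(r)
    return seen

rotate = (1, 2, 0)                    # the example equation above
swap = (1, 0, 2)                      # a commutativity-style equation
print(len(closure({rotate})))         # rotations alone yield 3 equations
print(len(closure({rotate, swap})))   # together they generate all 6 of S3
```

Checking whether a candidate permutation-equation is already derivable is then a set-membership test on such a closure, rather than a search through arbitrary equational inferences.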
Based on normalized coprime factorizations with respect to indefinite metrics and the construction of suitable characteristic functions, the Ober balanced canonical forms for the classes of bounded real and positive real functions are derived. This uses a matrix representation of the shift realization with respect to a basis related to sets of orthogonal polynomials.
In the present paper we investigate case-based representability as well as case-based learnability of indexed families of uniformly recursive languages. Since we are mainly interested in case-based learning with respect to an arbitrary fixed similarity measure, case-based learnability of an indexed family requires its representability first. We show that every indexed family is case-based representable by positive and negative cases. If only positive cases are allowed, the class of representable families is comparatively small. Furthermore, we present results that provide some bounds concerning the necessary size of case bases. We study in detail how the choice of a case selection strategy influences the learning capabilities of a case-based learner. We define different case selection strategies and compare their learning power to one another. Furthermore, we elaborate the relations to Gold-style language learning from positive and from both positive and negative examples.
While symbolic learning approaches encode the knowledge provided by the presentation of the cases explicitly into a symbolic representation of the concept, e.g. formulas, rules, or decision trees, case-based approaches describe learned concepts implicitly by a pair (CB; d), i.e. by a set CB of cases and a distance measure d. Given the same information, both the symbolic and the case-based approach compute a classification when a new case is presented. This poses the question whether there are any differences concerning the learning power of the two approaches. In this work we study the relationship between the case base, the measure of distance, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the conjecture of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
Linear half-space problems can be used to solve domain decomposition problems between Boltzmann and aerodynamic equations. A new fast numerical method computing the asymptotic states and outgoing distributions for a linearized BGK half-space problem is presented. Relations with the so-called variational methods are discussed. In particular, we stress the connection between these methods and Chapman-Enskog type expansions.
The Basic Reference Model of ODP introduces a number of basic concepts in order to provide a common basis for the development of a coherent set of standards. To achieve this objective, a clear understanding of the basic concepts is one prerequisite. This paper makes an effort at clarifying some of the basic concepts independently of standardized or non-standardized formal description techniques. Among the basic concepts considered here are: agent, action, interaction, interaction point, architecture, behaviour, system, composition, refinement, and abstraction. In a case study, it is then shown how these basic concepts can be represented in a formal specification written in temporal logic.
In this paper we deal with the problem of computing the stresses in stationary loaded bearings. A method to obtain the pressure in the lubrication fluid, which is given as a solution of Reynolds' differential equation, is presented. Furthermore, using the theory of plane stress, the stresses in the bearing shell are described by derivatives of biharmonic functions. A spline interpolation method for computing these functions is developed and an estimate for the error on the boundaries is presented. Finally the described methods are tested theoretically as well as on real examples.
Whenever a new part of a car has been developed, the manufacturer needs an estimate of the lifetime of this new part. On the one hand the construction must not be too weak, so that the part holds long enough to satisfy the customer; on the other hand, if the construction is too excessive, the part gets too heavy. One is therefore interested in methods that need only a few measured data points from the specimen itself but use data about the material, because constructing and testing specimens is expensive.
Hardware / Software Codesign
(1994)
Monte Carlo integration is often used for antialiasing in rendering processes.
Due to low sampling rates only expected error estimates can be stated, and the variance can be high. In this article quasi-Monte Carlo methods are presented, achieving a guaranteed upper error bound and a convergence rate essentially as fast as that of usual Monte Carlo.
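The contrast between the two approaches can be sketched on a one-dimensional toy integral. The base-2 van der Corput sequence below stands in for low-discrepancy sequences in general; the integrand and sample size are our own choices for illustration, not taken from the article:

```python
import random

def van_der_corput(i, base=2):
    """i-th point of the van der Corput low-discrepancy sequence."""
    x, f = 0.0, 1.0 / base
    while i > 0:
        x += (i % base) * f   # reverse the digits of i around the radix point
        i //= base
        f /= base
    return x

def integrate(f, points):
    """Estimate the integral of f over [0, 1] as the average over the points."""
    return sum(f(x) for x in points) / len(points)

def f(x):
    return x * x              # exact integral over [0, 1] is 1/3

n = 1024
random.seed(0)
mc  = integrate(f, [random.random()   for i in range(n)])   # Monte Carlo
qmc = integrate(f, [van_der_corput(i) for i in range(n)])   # quasi-Monte Carlo
print(abs(mc - 1/3), abs(qmc - 1/3))  # the QMC error is typically much smaller
```

The deterministic error bound mentioned above is of the Koksma-Hlawka type: it bounds the quadrature error by the discrepancy of the point set times the variation of the integrand, with no variance involved.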
The radiance equation, which describes the global illumination problem in computer graphics, is a high dimensional integral equation. Estimates of the solution are usually computed on the basis of Monte Carlo methods. In this paper we propose and investigate quasi-Monte Carlo methods, which means that we replace (pseudo-) random samples by low discrepancy sequences, yielding deterministic algorithms. We carry out a comparative numerical study between Monte Carlo and quasi-Monte Carlo methods. Our results show that quasi-Monte Carlo converges considerably faster.
This paper presents fill algorithms for boundary-defined regions in raster graphics. The algorithms require only a constant size working memory. The methods presented are based on the so-called "seed fill" algorithms using the internal connectivity of the region with a given inner point. Basic methods as well as additional heuristics for speeding up the algorithm are described and verified. For different classes of regions, the time complexity of the algorithms is compared using empirical results.
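A standard stack-based seed fill, the baseline such constant-memory algorithms improve on, can be sketched as follows; the grid encoding is our own illustration, not the paper's:

```python
def seed_fill(grid, seed, fill):
    """Standard 4-connected seed fill with an explicit stack.
    Note: this stack can grow with the region size; the report's point is
    to achieve the same result with only constant-size working memory."""
    height, width = len(grid), len(grid[0])
    y0, x0 = seed
    target = grid[y0][x0]
    if target == fill:
        return grid
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if 0 <= y < height and 0 <= x < width and grid[y][x] == target:
            grid[y][x] = fill
            # push the four neighbours; bounds are checked on pop
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return grid

# '#' marks the region boundary, '.' the interior connected to the seed
grid = [list(row) for row in ["#####",
                              "#...#",
                              "#.#.#",
                              "#####"]]
seed_fill(grid, (1, 1), '*')
print("".join(grid[1]))   # -> #***#  (boundary stays, interior is filled)
```

The constant-memory variants replace the stack by systematically retracing the region's internal connectivity from the current pixel, trading extra revisits for bounded storage.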
This diploma thesis has two focal points. On the one hand, the fields of case-based planning and process planning are examined, existing methods and systems are presented, and their capabilities and weaknesses are worked out; this part makes up about half of the thesis. On the other hand, based on the results of these investigations, a concept for case-based process planning is presented, set apart from alternative approaches, and shown to be realizable by means of a prototypical implementation. While the first part is mainly based on a survey of the literature, the second part consists exclusively of original contributions.
In the industrial manufacturing of rotationally symmetric turned parts, the efficiency of the process plans used matters a great deal. Through an unfortunate choice of the order of the manufacturing steps, these plans can contain additional tool or clamping changes, which considerably prolong the machining time of a workpiece and, through the resulting lower utilization of machine capacity, cause substantial costs. In this context, the present work implements a component for case selection whose control is based on both surface-level and structural similarity between two workpieces. The aim of this two-stage design is to reduce the number of cases to be considered by indexing the case base with (fast) algorithms based on syntactic matches of certain attribute values; these (few) cases are then searched with deeper analysis steps for maximal agreement with the task at hand. Furthermore, a component is provided that helps to avoid earlier planning mistakes through the choice of suitable cases.
The main problem in computer graphics is to solve the global illumination problem,
which is given by a Fredholm integral equation of the second kind, called the radiance equation (REQ). In order to achieve realistic images, a very complex kernel
of the integral equation, modelling all physical effects of light, must be considered. Due to this complexity, Monte Carlo methods seem to be an appropriate approach to solve the REQ approximately. We show that replacing Monte Carlo by quasi-Monte Carlo in some steps of the algorithm results in faster convergence.
A nonequilibrium situation governed by kinetic equations with strongly contrasted Knudsen numbers in different subdomains is discussed. We consider a domain decomposition problem for Boltzmann- and Euler equations, establish the correct coupling conditions and prove the validity of the obtained coupled solution. Moreover numerical examples comparing different types of coupling conditions are presented.
We consider the numerical computation of nonlinear functionals of distribution functions approximated by point measures. Two methods are described and estimates for the speed of convergence as the number of points tends to infinity are given. Moreover numerical results for the entropy functional are presented.
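As a toy example of such a functional, the entropy \(-\int f \log f\) can be estimated from a point-measure approximation via a histogram density. This simple estimator is our own stand-in for illustration, not one of the two methods described in the paper:

```python
import math
import random

def entropy_estimate(samples, bins=20):
    """Histogram estimate of -\u222b f log f on [0, 1] from point samples."""
    n = len(samples)
    counts = [0] * bins
    for x in samples:
        counts[min(int(x * bins), bins - 1)] += 1
    width = 1.0 / bins
    h = 0.0
    for c in counts:
        if c:
            p = c / (n * width)          # density estimate on this bin
            h -= p * math.log(p) * width
    return h

random.seed(1)
# the uniform density on [0, 1] has entropy 0; the estimate approaches it
# as the number of points in the point-measure approximation grows
print(entropy_estimate([random.random() for _ in range(10_000)]))
```

The convergence analysis in the paper addresses exactly this regime: how fast such nonlinear functionals of the density converge as the number of points tends to infinity.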
The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing search spaces. This suggests that sort information can be employed in higher-order theorem proving with similar results. This paper develops a sorted \(\lambda\)-calculus suitable for automatic theorem proving applications. It extends the simply typed \(\lambda\)-calculus by a higher-order sort concept that includes term declarations and functional base sorts. The term declaration mechanism studied here is powerful enough to subsume subsorting as a derived notion and therefore gives a justification for the special form of subsort inference. We present a set of transformations for sorted (pre-)unification and prove the nondeterministic completeness of the algorithm induced by these transformations.
Let \((a_i)_{i\in \bf{N}}\) be a sequence of independent, identically distributed random vectors drawn from the \(d\)-dimensional unit ball \(B^d\), and let \(X_n := \mathrm{convhull}(a_1,\dots,a_n)\) be the random polytope generated by \(a_1,\dots,a_n\). Furthermore, let \(\Delta(X_n) := \mathrm{Vol}(B^d \setminus X_n)\) be the deviation of the polytope's volume from the volume of the ball. For uniformly distributed \(a_i\) and \(d\ge 2\), we prove that the limiting distribution of \(\Delta(X_n)/E(\Delta(X_n))\) for \(n\to\infty\) satisfies a 0-1-law. In particular, we provide precise information about the asymptotic behaviour of the variance of \(\Delta(X_n)\). We deliver analogous results for spherically symmetric distributions in \(B^d\) with regularly varying tails.
The distribution of quasiprimary fields of fixed classes, characterized by their O(N) representations Y and the number p of vector fields from which they are composed, at \(N=\infty\) in dependence on their normal dimension \(\delta\) is shown to obey a Hardy-Ramanujan law at leading order in a 1/N expansion. We develop a method of collective fusion of the fundamental fields which yields arbitrary quasiprimary fields and resolves any degeneracy.
Free Form Volumes
(1994)
The three-dimensional display of hybrid data sets has established itself in recent years as an important subfield of scientific visualization. Hybrid data sets contain both discrete volume data and objects defined by geometric primitives. In the visual processing of a given scene, shadow information plays an important role by making the relationships between objects comprehensible. We describe a simple method for computing shadow information that has been integrated into an existing system for the visualization of hybrid data sets. The results are illustrated with an example from a clinical application.