A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L} = (L; \land, \lor)\) is generated by the identity function \(id(x)=x\) and the constant functions \(c_a(x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function, in one or in several variables, is a monotone function of \(\mathcal{L}\).
If every monotone function of \(\mathcal{L}\) is a polynomial function, then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices
and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length, then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of
lattices and present several applications in geometry.
On derived varieties
(1996)
Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. Especially the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x' \wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart for solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation.
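As a standard side illustration (not taken from the paper): in a Boolean algebra the single groupoid operation \(Nor(x,y) = x' \wedge y'\) already yields complementation and meet as derived operations,

```latex
\[
Nor(x,x) = x' \wedge x' = x', \qquad
Nor\bigl(Nor(x,x),\,Nor(y,y)\bigr) = x'' \wedge y'' = x \wedge y ,
\]
```

so every term built from complementation and meet is derivable from \(Nor\) alone.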
It is shown that Tikhonov regularization for the ill-posed operator equation
\(Kx = y\) using a possibly unbounded regularizing operator \(L\) yields an order-optimal algorithm with respect to a certain stability set when the regularization parameter is chosen according to Morozov's discrepancy principle. A more realistic error estimate is derived when the operators \(K\) and \(L\) are related to a Hilbert scale in a suitable manner. The result includes known error estimates for ordinary Tikhonov regularization and also the estimates available under the Hilbert scale approach.
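As a hedged numerical illustration of this setting (specialized to \(L = I\), i.e. ordinary Tikhonov regularization on a discretized toy problem; the kernel, constants, and function names below are illustrative assumptions, not from the paper), Morozov's discrepancy principle picks the largest regularization parameter for which the residual falls below \(\tau\delta\):

```python
import numpy as np

def tikhonov(K, y, a):
    """Minimize ||K x - y||^2 + a ||x||^2, i.e. solve (K^T K + a I) x = K^T y."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + a * np.eye(n), K.T @ y)

def morozov(K, y, delta, tau=1.1, a0=1.0, q=0.5, max_iter=60):
    """Decrease a geometrically until the residual first drops below tau*delta."""
    a = a0
    for _ in range(max_iter):
        x = tikhonov(K, y, a)
        if np.linalg.norm(K @ x - y) <= tau * delta:
            return x, a
        a *= q
    return x, a

# Toy ill-posed problem: a smoothing (Gaussian) kernel with noisy data.
n = 40
t = np.linspace(0, 1, n)
K = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2) / n   # ill-conditioned
x_true = np.sin(np.pi * t)
rng = np.random.default_rng(0)
delta = 1e-3
e = rng.standard_normal(n)
y = K @ x_true + delta * e / np.linalg.norm(e)            # noise of norm delta

x_rec, a_star = morozov(K, y, delta)
```

The returned residual satisfies the discrepancy bound by construction; the interesting question, which the abstract addresses theoretically, is how close the resulting \(x_{rec}\) is to \(x_{true}\).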
Let \(a_1,\dots,a_m\) be independent and identically distributed random points in \(\mathbb{R}^n\) with a spherically symmetric distribution. Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional
subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well the shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).
Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\) and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) :=\lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.
Software development organizations measure their real-world processes, products, and resources to achieve the goal of improving their practices. Accurate and useful measurement relies on explicit models of the real-world processes, products, and resources. These explicit models assist with planning measurement, interpreting data, and assisting developers with their work. However, little work has been done on the joint use of measurement and process technologies. We hypothesize that it is possible to integrate measurement and process technologies in a way that supports automation of measurement-based feedback. Automated support for measurement-based feedback means that software developers and maintainers are provided with on-line, detailed information about their work. This type of automated support is expected to help software professionals gain intellectual control over their software projects. The dissertation offers three major contributions. First, an integrated measurement and
process modeling framework was constructed. This framework establishes the necessary foundation for integrating measurement and process technologies in a way that will permit automation. Second, a process-centered software engineering environment was developed to support measurement-based feedback. This system provides personnel with information about the tasks expected of them based on an integrated set of measurement and process views. Third, a set of assumptions and requirements about that system were examined in a controlled experiment. The experiment compared the use of different levels of automation to evaluate the acceptance and effectiveness of measurement-based feedback.
The calculation of form factors is an important problem in computing the global illumination in the radiosity setting. Closed form solutions often are only available for objects without obstruction and are very hard to calculate. Using Monte Carlo integration and ray tracing provides a fast and elegant tool for the estimation of the form factors. In this paper we show that using deterministic low discrepancy sample points is superior to random sampling, resulting in an acceleration of more than half an order of magnitude.
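The random-versus-deterministic comparison can be sketched on a much simpler integrand than a form factor (the integrand, sample size, and Halton bases below are illustrative assumptions, not the paper's radiosity setting):

```python
import numpy as np

def halton(n, base):
    """First n points of the van der Corput sequence in the given base."""
    pts = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        pts[i] = x
    return pts

def f(u, v):
    # toy smooth integrand with known integral: int int u*v du dv = 1/4
    return u * v

n = 4096

# deterministic low-discrepancy (Halton) points, bases 2 and 3
uq, vq = halton(n, 2), halton(n, 3)
est_qmc = f(uq, vq).mean()

# plain pseudo-random Monte Carlo points
rng = np.random.default_rng(1)
ur, vr = rng.random(n), rng.random(n)
est_mc = f(ur, vr).mean()

err_qmc, err_mc = abs(est_qmc - 0.25), abs(est_mc - 0.25)
```

For smooth integrands like this one, the low-discrepancy error typically decays close to \(O((\log n)^2/n)\) versus \(O(n^{-1/2})\) for random sampling, which is the effect the abstract exploits for form factors.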
Quasi-Monte Carlo Radiosity
(1996)
The problem of global illumination in computer graphics is described by a second kind Fredholm integral equation. Due to the complexity of this equation, Monte Carlo methods provide an interesting tool for approximating
solutions to this transport equation. For the case of the radiosity equation, we present the deterministic method of quasi-random walks. This method very efficiently uses low discrepancy sequences for integrating the Neumann series and consistently outperforms stochastic techniques. The method of quasi-random walks is also applicable to transport problems in settings other
than computer graphics.
A notion of discrepancy is introduced, which represents the integration error on spaces of \(r\)-smooth periodic functions. It generalizes the diaphony and constitutes a periodic counterpart to the classical \(L_2\)-discrepancy as well as \(r\)-smooth versions of it introduced recently by Paskov [Pas93]. Based on previous work [FH96], we develop an efficient algorithm for computing periodic discrepancies for quadrature formulas possessing certain tensor product structures, in particular, for Smolyak quadrature rules (also called sparse grid methods). Furthermore, fast algorithms of computing periodic discrepancies for lattice rules can easily be derived from well-known properties of lattices. On this basis we carry out numerical comparisons of discrepancies between Smolyak and lattice rules.
In recent years, Smolyak quadrature rules (also called hyperbolic cross points or sparse grids) have gained interest as a possible competitor to number theoretic quadratures for high dimensional problems. A standard way of comparing the quality of multivariate quadrature formulas
consists in computing their \(L_2\)-discrepancy. Especially for larger dimensions, such computations are a highly complex task. In this paper we develop a fast recursive algorithm for computing the \(L_2\)-discrepancy (and related quality measures) of general Smolyak quadratures. We carry out numerical comparisons between the discrepancies of certain Smolyak rules, Hammersley and Monte Carlo sequences.
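For orientation, the quantity in question can be evaluated directly, if slowly, with Warnock's classical \(O(d\,n^2)\) formula for the \(L_2\)-star discrepancy; this is a baseline sketch, not the fast recursive algorithm the paper develops:

```python
import numpy as np

def l2_star_discrepancy(x):
    """Warnock's formula for a point set x of shape (n, d) in [0,1]^d."""
    n, d = x.shape
    term1 = 3.0 ** (-d)
    # sum over points of prod_k (1 - x_ik^2), scaled by 2^(1-d)/n
    term2 = (2.0 ** (1 - d) / n) * np.prod(1.0 - x ** 2, axis=1).sum()
    # pairwise product over dimensions of (1 - max(x_ik, x_jk))
    m = np.maximum(x[:, None, :], x[None, :, :])
    term3 = np.prod(1.0 - m, axis=2).sum() / n ** 2
    return np.sqrt(term1 - term2 + term3)

# one-point sanity check in 1D: for x = {1/2},
# D^2 = 1/3 - (1 - 1/4) + 1/2 = 1/12
d_one = l2_star_discrepancy(np.array([[0.5]]))
```

The quadratic cost in \(n\) (and the exponential factor hidden in high dimensions) is exactly why dedicated recursive algorithms for structured rules such as Smolyak quadratures are worthwhile.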
This document offers a concise introduction to the Goal Question Metric Paradigm (GQM Paradigm), and surveys research on applying and extending the GQM Paradigm. We describe the GQM Paradigm in terms of its basic principles, techniques for structuring GQM-related documents, and methods for performing tasks of planning and implementing a measurement program based on GQM. We also survey prototype software tools that support applying the GQM Paradigm in various ways. An annotated bibliography lists sources that document experience gained while using the GQM Paradigm and offer in-depth information about the GQM Paradigm.
This paper investigates the convergence of the Lanczos method for computing the smallest eigenpair of a selfadjoint elliptic differential operator via inverse iteration (without shifts).
Superlinear convergence rates are established, and their sharpness is investigated for a simple model problem. These results are illustrated numerically for a more difficult problem.
This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration.
These assumptions are fulfilled, e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.
A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number.
The first part of this paper studies a Levenberg-Marquardt scheme for nonlinear inverse problems where the corresponding Lagrange (or regularization) parameter is chosen from an inexact Newton strategy. While the convergence analysis of standard implementations based on trust region strategies always requires the invertibility of the Fréchet derivative of the nonlinear operator at the exact solution, the new Levenberg-Marquardt scheme is suitable for ill-posed problems as long as the Taylor remainder is of second order in the interpolating metric between the range and domain
topologies. Estimates of this type are established in the second part of the paper for ill-posed parameter identification problems arising in inverse groundwater hydrology. Both transient and steady state data are investigated. Finally, the numerical performance of the new Levenberg-Marquardt scheme is
studied and compared to a usual implementation on a realistic but synthetic 2D model problem from the engineering literature.
The paper deals with parallel-machine and open-shop scheduling problems with preemptions and arbitrary nondecreasing objective function. An approach to describe
the solution region for these problems and to reduce them to minimization problems on polytopes is proposed. Properties of the solution regions for certain problems are investigated. It is proved that open-shop problems with unit processing times are equivalent to certain parallel-machine problems, where preemption is allowed at arbitrary times. A polynomial algorithm is presented transforming a schedule of one type into a schedule of the other type.
We study the roughening transition of an interface in an Ising system on a 3D simple cubic lattice using a finite size scaling method. The particular method has recently been proposed and successfully tested for various solid-on-solid models. The basic idea is the matching of the renormalization-group flow of the interface with that of the exactly solvable body-centered cubic solid-on-solid model. We unambiguously confirm the Kosterlitz-Thouless nature of the roughening transition of the Ising interface. Our result for the inverse transition temperature \(K_R = 0.40754(5)\) is almost two orders of magnitude more accurate than the estimate of Mon, Landau and Stauffer [9].
The purpose of this paper is to present the state of the art in singular optimal control. If the Hamiltonian in an interval \([t_1,t_2]\) is independent of the control we call the control in this interval singular. Singular optimal controls appear in many applications, so that research has been motivated since the 1950s. Often optimal controls consist of nonsingular and singular parts, where the junctions between these parts are mostly very difficult to find. One section of this work shows the current knowledge about the location of the junctions and the behaviour of the control at the junctions. The definition and the properties of the orders (problem order and arc order), which are important in this context, are given, too. Another chapter considers multidimensional controls and how they can be treated. An alternate definition of the orders in the multidimensional case is proposed and a counterexample, which confirms a remark given in the 1960s, is given. A voluminous list of optimality conditions, which can be found in several publications, is added. A strategy for solving optimal control problems numerically is given, and the existing algorithms are compared with each other. Finally, conclusions and an outlook on future research are given.
Here the almost sure convergence of the one-dimensional Kohonen algorithm in its general form, namely the 2k-point neighbour setting with a non-uniform stimuli distribution, is proved. We show that the asymptotic behaviour of the algorithm is governed by a cooperative system of differential equations which in general is irreducible. The system of differential equations has an asymptotically stable fixed point such that a compact subset of its domain of attraction is visited by the state variable \(X_n\) infinitely often.
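The algorithm under study can be sketched in a few lines (the 2k-neighbour setting with a non-uniform stimulus distribution; the gain schedule, unit count, and Beta stimulus law below are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, k = 20, 2
w = np.sort(rng.random(n_units))        # ordered initial codebook on [0, 1]

for t in range(1, 20001):
    xi = rng.beta(2.0, 5.0)             # non-uniform stimulus distribution
    win = np.argmin(np.abs(w - xi))     # winning unit
    lo, hi = max(0, win - k), min(n_units, win + k + 1)
    eps = 0.5 / t ** 0.6                # decreasing gain
    w[lo:hi] += eps * (xi - w[lo:hi])   # move winner and 2k neighbours toward xi

# the weights stay ordered and drift toward the stimulus mass
```

The convergence result of the abstract concerns exactly the long-run behaviour of the state \(w\) (there called \(X_n\)) of such an update rule.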
Load balancing is one of the central problems that have to be solved in parallel computation. Here, the problem of distributed, dynamic load balancing for massive parallelism is addressed. A new local method, which realizes a physical analogy to equilibrating liquids in multi-dimensional tori or hypercubes, is presented. It is especially suited for communication mechanisms with low set-up to transfer ratio occurring in tightly-coupled or SIMD systems. By successively shifting single load elements to the direct neighbors, the load is automatically transferred to lightly loaded processors. Compared to former methods, the proposed Liquid model has two main advantages. First, the task of load sharing is combined with the task of load balancing, where the former has priority. This property is valuable in many applications and important for highly dynamic load distribution. Second, the Liquid model has high efficiency. Asymptotically, it needs \(O(D \cdot K \cdot L_{diff})\) load transfers to reach the balanced state in a \(D\)-dimensional torus with \(K\) processors per dimension and a maximum initial load difference of \(L_{diff}\). The Liquid model clearly outperforms an earlier load balancing approach, nearest-neighbor averaging. Besides a survey of related research, analytical results within a formal framework are derived. These results are validated by worst-case simulations in one- and two-dimensional tori with up to two thousand processors.
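The core mechanism, shifting single load elements to direct neighbors until the load is level, can be sketched on a one-dimensional torus (a toy serial simulation of the local rule, not the paper's parallel Liquid model):

```python
import numpy as np

def balance_step(load):
    """One sweep: each processor may shift one load unit to a lighter
    direct neighbour. Returns True if any element moved."""
    n = len(load)
    moved = False
    for i in np.argsort(-load):          # heaviest processors first
        for j in ((i - 1) % n, (i + 1) % n):
            if load[i] - load[j] >= 2:   # shifting one unit still helps
                load[i] -= 1
                load[j] += 1
                moved = True
                break
    return moved

load = np.zeros(16, dtype=int)
load[0] = 64                             # worst case: all load on one processor
steps = 0
while balance_step(load):
    steps += 1

# in the final state neighbouring loads differ by at most one unit
```

Each move strictly decreases the potential \(\sum_i load_i^2\), so the sweep loop terminates; in the balanced state no neighbour pair differs by two or more.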
For the case of the single-\(O(N)\)-vector linear sigma models the critical behaviour following from any \(A_k\) singularity in the action is worked out in the double scaling limit \(N \to \infty\), \(f_r \to f_r^c\), \(2 \le r \le k\). After an exact elimination of Gaussian degrees of freedom, the critical objects such as coupling constants, indices and susceptibility matrix are derived for all \(A_k\) and spacetime dimensions \(0 \le D \le 4\). There appear exceptional spacetime dimensions where the degree \(k\) of the singularity \(A_k\) is more strongly constrained than by the renormalizability requirement.
The paper presents a novel approach to parallel motion planning for robot manipulators in 3D workspaces. The approach is based on a randomized parallel search algorithm and focuses on solving the path planning problem for industrial robot arms working in a reasonably cluttered workspace. The path planning system works in the discretized configuration space, which need not be represented explicitly. The parallel search is conducted by a number of rule-based sequential search processes, which work to find a path connecting the initial configuration to the goal via a number of randomly generated subgoal configurations. Since the planning performs only on-line collision tests with proper proximity information without using pre-computed information, the approach is suitable for planning problems with multi-robot or dynamic environments. The implementation has been carried out on the parallel virtual machine (PVM) of a cluster of SUN4 workstations and SGI machines. The experimental results have shown that the approach works well for a 6-dof robot arm in a reasonably cluttered environment, and that parallel computation increases the efficiency of motion planning significantly.
We present first steps towards fully automated deduction that merely requires the user to submit proof problems and pick up results. Essentially, this necessitates the automation of the crucial step in the use of a deduction system, namely choosing and configuring an appropriate search-guiding heuristic. Furthermore, we motivate why learning capabilities are pivotal for satisfactory performance. The infrastructure for automating both the selection of a heuristic and the integration of learning is provided in the form of an environment embedding the "core" deduction system. We have conducted a case study in connection with a deduction system based on condensed detachment. Our experiments with a fully automated deduction system 'AutoCoDe' have produced remarkable results. We substantiate AutoCoDe's encouraging achievements with a comparison with the renowned theorem prover Otter. AutoCoDe outperforms Otter even when assuming very favorable conditions for Otter.
Evolving Combinators
(1996)
One of the many abilities that distinguish a mathematician from an automated deduction system is to be able to offer appropriate expressions, based on intuition and experience, that are substituted for existentially quantified variables so as to simplify the problem at hand substantially. We propose to simulate this ability with a technique called genetic programming for use in automated deduction. We apply this approach to problems of combinatory logic. Our experimental results show that the approach is viable and actually produces very promising results. A comparison with the renowned theorem prover Otter underlines the achievements. This work was supported by the Deutsche Forschungsgemeinschaft (DFG).
We present a concept for an automated theorem prover that employs a search control based on ideas from several areas of artificial intelligence (AI). The combination of case-based reasoning, several similarity concepts, a cooperation concept of distributed AI, and reactive planning enables a system using our concept to learn from previous successful proof attempts. In a kind of bootstrapping process, easy problems are used to solve more and more complicated ones. We provide case studies from two domains of interest in pure equational theorem proving taken from the TPTP library. These case studies show that an instantiation of our architecture achieves a high grade of automation and outperforms state-of-the-art conventional theorem provers.
We investigate the usage of so-called inference rights. We point out the problems arising from the inflexibility of existing approaches to heuristically control the search of automated deduction systems, and we propose the application of inference rights that are well-suited for controlling the search more flexibly. Moreover, inference rights allow for a mechanism of "partial forgetting" of facts that is not realizable in most existing control approaches. We study theoretical foundations of inference rights as well as the integration of inference rights into already existing inference systems. Furthermore, we present possibilities to control such modified inference systems in order to gain efficiency. Finally, we report on experimental results obtained in the area of condensed detachment. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
Planning for realistic problems in a static and deterministic environment with complete information faces exponential search spaces and, more often than not, should produce plans comprehensible for the user. This article introduces new planning strategies inspired by proof planning examples in order to tackle the search-space-problem and the structured-plan-problem. Island planning and refinement as well as subproblem refinement are integrated into a general planning framework and some exemplary control knowledge suitable for proof planning is given.
Toying with Jordan matrices
(1996)
Some formulae, containing logarithmic derivatives of (smooth) measures on infinite-dimensional spaces, arise in quite different situations. In particular, logarithmic derivatives of a measure are inserted in the Schrödinger equation in the space consisting of functions that are square integrable with respect to this measure, which allows us to describe very simply a procedure of (canonical) quantization of infinite-dimensional Hamiltonian systems with a linear phase space. Further, the problem of reconstructing a measure from its logarithmic derivative (that was posed in [1] independently of any applications) can be equivalent either to the problem of finding the "ground state" (considered as some measure) for an infinite-dimensional Schrödinger equation, or to the problem of finding an invariant measure for a stochastic differential equation (that is a central question of so-called stochastic quantization), or to the problem of reconstructing a Gibbsian measure from its specification (i.e. from a collection of finite-dimensional conditional distributions). Logarithmic derivatives of some measure appear in the Cameron-Martin-Girsanov-Maruyama formulae and in their generalizations related to arbitrary smooth measures; they also allow us to connect these formulae with the Feynman-Kac formulae. This note discusses all these topics. Of course, due to its shortness, the presentation is in the main formal, and precise analytical assumptions are usually absent. Actually, only a list of formulae with brief comments is given. Let us mention also that we do not consider at all the so-called Dirichlet forms to which a great deal of literature is devoted (cf. [3] and references therein to the works of S. Albeverio and others).
Representations of activities dealing with the development or maintenance of software are called software process models. Process models allow for communication, reasoning, guidance, improvement, and automation. Two approaches for building, instantiating, and managing processes, namely CoMo-Kit and MVP-E, are combined to build a more powerful one. CoMo-Kit is based on AI/KE technology; it was developed for supporting complex design processes and is not specialized to software development processes. MVP-E is a process-sensitive software engineering environment for modeling and analyzing software development processes, and guides software developers. Additionally, it provides services to establish and run measurement programmes in software organizations. Because both approaches were developed completely independently, major integration efforts are needed to combine both their advantages. This paper concentrates on the resulting language concepts and their operationalization necessary for building automated process support.
In this paper we consider the problem of finding in a given graph a minimal weight subtree, i.e. a connected subgraph, which has a given number of edges. These NP-hard combinatorial optimization problems have various applications in the oil industry, in facility layout and graph partitioning. We will present different heuristic approaches based on spanning tree and shortest path methods and on an exact algorithm solving the problem in polynomial time if the underlying graph is a tree. Both the edge- and node-weighted case are investigated and extensive numerical results on the behaviour of the heuristics compared to optimal solutions are presented. The best heuristic yielded results within an error margin of less than one percent from optimality for most cases. In a large percentage of tests even optimal solutions have been found.
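A heuristic in the spirit of the spanning-tree-based approaches mentioned above can be sketched as a Prim-like greedy growth from every start vertex (an illustrative heuristic, not one of the paper's algorithms; graph and weights below are made up):

```python
import heapq

def k_cardinality_tree(n, edges, k):
    """Greedy heuristic for a minimum-weight k-edge subtree.
    edges: list of (u, v, w); returns (best_weight, best_edge_list)."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    best = (float("inf"), [])
    for s in range(n):                   # try every start vertex
        in_tree = {s}
        heap = list(adj[s])
        heapq.heapify(heap)
        chosen, weight = [], 0
        while heap and len(chosen) < k:  # always add the cheapest incident edge
            w, u, v = heapq.heappop(heap)
            if v in in_tree:
                continue
            in_tree.add(v)
            chosen.append((u, v, w))
            weight += w
            for e in adj[v]:
                heapq.heappush(heap, e)
        if len(chosen) == k and weight < best[0]:
            best = (weight, chosen)
    return best

# 4-cycle with one heavy chord; the best 2-edge subtree has weight 3
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 1), (3, 0, 5), (0, 2, 9)]
w_best, tree = k_cardinality_tree(4, edges, 2)
```

Restarting from every vertex keeps the greedy rule from being trapped by a single bad neighbourhood; on trees the problem is solvable exactly in polynomial time, as the abstract notes.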
Satellite gradiometry and its instrumentation is an ultra-sensitive detection technique of the space gravitational gradient (i.e. the Hesse tensor of the gravitational potential). Gradiometry will be of great significance in inertial navigation, gravity survey, geodynamics and earthquake prediction research. In this paper, satellite gradiometry formulated as an inverse problem of satellite geodesy is discussed from two mathematical aspects: Firstly, satellite gradiometry is considered as a continuous problem of harmonic downward continuation. The space-borne gravity gradients are assumed to be known continuously over the satellite (orbit) surface. Our purpose is to specify sufficient conditions under which uniqueness and existence can be guaranteed. It is shown that, in a spherical context, uniqueness results are obtainable by decomposition of the Hesse matrix in terms of tensor spherical harmonics. In particular, the gravitational potential is proved to be uniquely determined if second order radial derivatives are prescribed at satellite height. This information leads us to a reformulation of satellite gradiometry as a (Fredholm) pseudodifferential equation of first kind. Secondly, for a numerical realization, we assume the gravitational gradients to be known for a finite number of discrete points. The discrete problem is dealt with by classical regularization methods, based on filtering techniques by means of spherical wavelets. A spherical singular integral-like approach to regularization methods is established, and regularization wavelets are developed which allow the regularization in the form of a multiresolution analysis. Moreover, a combined spherical harmonic and spherical regularization wavelet solution is derived as an appropriate tool in future (global and local) high-precision resolution of the earth's gravitational potential.
In the present paper a general criticism of kinetic equations for vehicular traffic is given. The necessity of introducing an Enskog-type correction into these equations is shown. An Enskog-like kinetic traffic flow equation is presented and fluid dynamic equations are derived. This derivation yields new coefficients for the standard fluid dynamic equations of vehicular traffic. Numerical simulations for inhomogeneous traffic flow situations are shown together with a comparison between kinetic and fluid dynamic models.
Based on a new definition of dilation, a scale discrete version of spherical multiresolution is described, starting from a scale discrete wavelet transform on the sphere. Depending on the type of application, different families of wavelets are chosen. In particular, spherical Shannon wavelets are constructed that form an orthogonal multiresolution analysis. Finally, fully discrete wavelet approximation is discussed in the case of band-limited wavelets.
The paper presents some new estimates on the gain term of the Boltzmann collision operator. For Maxwellian molecules, it is shown that the \(L^\infty\)-norm of the gain term can be bounded in terms of the \(L^1\)- and \(L^\infty\)-norms of the density function \(f\). In the case of more general collision kernels, like the hard-sphere interaction potential, the gain term is estimated pointwise by the \(L^\infty\)-norm of the density function and the loss term of the Boltzmann collision operator.
The paper presents some adaptive load balance techniques for the simulation of rarefied gas flows on parallel computers. It is shown that a static load balance is insufficient to obtain a scalable parallel efficiency. Hence, two adaptive techniques are investigated which are based on simple algorithms. Numerical results show that using heuristic techniques one can achieve a sufficiently high efficiency over a wide range of different hardware platforms.
A combination of a state-based formalism and a temporal logic is proposed to get an expressive language for various descriptions of reactive systems. Thereby it is possible to use a model-oriented as well as a property-oriented specification style in one description. The descriptions considered here are those of the environment, the specification, and the design of a reactive system. It is possible to express, e.g., the requirements of a reactive system by states and transitions between them together with further temporal formulas restricting the behaviors of the statecharts. It is shown how this combined formalism can be used: The specification of a small example is given and a designed controller is proven correct with respect to this specification. The combination of the languages is based on giving a temporal semantics of a state-based formalism (statecharts) using a temporal logic (TLA).
The static deformation of the surface of the earth caused by surface pressure, such as the water load of an ocean or an artificial lake, is discussed. First, the solution of the Boussinesq problem for an infinite halfspace with a homogeneous and isotropic elastic medium is briefly reviewed. Then the elastic response for realistic earth models is determined by spline interpolation using Navier splines. Major emphasis is on the determination of the elastic field caused by water loads from surface tractions on the (real) earth's surface. Finally, the elastic deflection of an artificial lake, assuming a homogeneous isotropic crust, is compared for both evaluation methods.
In the past years, development and production processes in many companies have changed in a revolutionary way, leading to new demands on information and CAx technology. The R&D departments of the German automotive industry installed a working group to develop a common long-term CAD/CAM strategy. A preliminary result is the concept of an open CAx system architecture as a basis for realizing industrial requirements on CAD/CAM and for the cooperation with system vendors. The project ANICA was started in cooperation with five international CAD/CAM suppliers in order to show the feasibility of this architecture. The access interfaces of different system kernels are analysed with the aim of developing a concept for a cooperating CAx system network. The concept will be put into practice with a software prototype based on CORBA and OLE. The communication elements within such an architecture have to go far beyond conventional CAD data. This will lead to an extension of "feature" concepts to include CAx functionality and dynamic information about the process chain of a product. The impact on modern concepts for user interfaces, on reverse-engineering methods, and on product data models will be discussed to finally close the loop to industrial CAx application.
A continuous version of spherical multiresolution is described, starting from the continuous wavelet transform on the sphere. Scale discretization enables us to construct spherical counterparts to the Daubechies wavelets and wavelet packets known from Euclidean theory. An essential tool is the theory of singular integrals on the sphere. It is shown that singular integral operators forming a semigroup of contraction operators of class (C0) (like the Abel-Poisson or Gauß-Weierstraß operators) lead in a canonical way to (pyramidal) algorithms.
The paper discusses the approximation of scattered data on the sphere, which is one of the major tasks in geomathematics. Starting from the discretization of singular integrals on the sphere, the authors devise a simple approximation method that employs locally supported spherical polynomials and does not require equidistributed grids. It is the basis for a hierarchical approximation algorithm featuring differently scaled basis functions, adaptivity, and error control. The method is applied to two examples, one of which is a digital terrain model of Australia.
This article discusses a qualitative, topological, and robust world-modelling technique with special regard to navigation tasks for mobile robots operating in unknown environments. As a central aspect, reliability regarding error tolerance and stability is emphasized. Benefits and problems involved in exploration as well as in navigation tasks are discussed. The proposed method places very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
A new look at the RST model
(1996)
The RST model is augmented by the addition of a scalar field and a boundary term so that it is well-posed and local. Expressing the RST action in terms of the ADM formulation, the constraint structure can be analysed completely. It is shown that, from the viewpoint of local field theories, there exists a hidden dynamical field in the RST model. Thanks to the presence of this hidden dynamical field, we can reconstruct the closed algebra of the constraints which guarantees the general invariance of the RST action. The resulting stress tensors T_{ΣΣ} are recovered to be true tensor quantities. In particular, the part of the stress tensors associated with the hidden dynamical field gives the precise expression for t_Σ. At the quantum level, the cancellation condition for the total central charge is reexamined. Finally, with the help of the hidden dynamical field, the fact that the semi-classical static solution of the RST model has two independent parameters (P, M), whereas the classical CGHS model has only one, can be explained.
Significance of zero modes in path-integral quantization of solitonic theories with BRST invariance
(1996)
The significance of zero modes in the path-integral quantization of some solitonic models is investigated. In particular, a Skyrme-like theory with topological vortices in (1 + 2) dimensions is studied, and with a BRST-invariant gauge fixing a well-defined transition amplitude is obtained in the one-loop approximation. We also present an alternative method which does not necessitate invoking the time dependence in the functional integral, but is equivalent to the original one in dealing with the quantization in the background of the static classical solution of the non-linear field equations. The considerations given here are particularly useful in - but also limited to - the one-loop approximation.
The constraint structure of the induced 2D-gravity with the Weyl and area-preserving diffeomorphism invariances is analysed in the ADM formulation. It is found that when the area-preserving diffeomorphism constraints are kept, the usual conformal gauge does not exist, whereas there is the possibility to choose the so-called "quasi-light-cone" gauge, in which besides the area-preserving diffeomorphism invariance, the reduced Lagrangian also possesses the SL(2,R) residual symmetry. This observation indicates that the claimed correspondence between the SL(2,R) residual symmetry and the area-preserving diffeomorphism invariance in both regularisation approaches does not hold. The string-like approach is then applied to quantise this model, but a fictitious non-zero central charge in the Virasoro algebra appears. When a set of gauge-independent SL(2,R) current-like fields is introduced instead of the string-like variables, a consistent quantum theory is obtained, which means that the area-preserving diffeomorphism invariance can be maintained at the quantum level.
The Lagrangian field-antifield formalism of Batalin and Vilkovisky (BV) is used to investigate the application of the collective coordinate method to soliton quantisation. In field theories with soliton solutions, the Gaussian fluctuation operator has zero modes due to the breakdown of global symmetries of the Lagrangian in the soliton solutions. It is shown how Noether identities and local symmetries of the Lagrangian arise when collective coordinates are introduced in order to avoid divergences related to these zero modes. This transformation to collective and fluctuation degrees of freedom is interpreted as a canonical transformation in the symplectic field-antifield space which induces a time-local gauge symmetry. Separating the corresponding Lagrangian path integral of the BV scheme in lowest order into harmonic quantum fluctuations and a free motion of the collective coordinate with the classical mass of the soliton, we show how the BV approach clarifies the relation between zero modes, collective coordinates, gauge invariance and the center-of-mass motion of classical solutions in quantum fields. Finally, we apply the procedure to the reduced nonlinear O(3) σ-model.
Quantum tunneling between degenerate ground states through the central barrier of a potential is extended to excited states with the instanton method. This extension is achieved with the help of an LSZ reduction technique as in field theory and may be of importance in the study of macroscopic quantum phenomena in magnetic systems.
Starting from the coherent-state representation of the evolution operator with the help of the path integral, we derive a formula for the low-lying levels E = ε₀ − 2Δε cos((s + ξ)π) of a quantum spin system. The quenching of macroscopic quantum coherence is understood as the vanishing of cos((s + ξ)π), in disagreement with the suppression of tunneling (i.e. Δε = 0) as claimed in the literature. A new configuration, called the macroscopic Fermi particle, is suggested by the character of its wave function. The tunneling rate (2Δεπ) does not vanish, neither for integer spin s nor for half-integer values of s, and is calculated explicitly (for the position-dependent mass) up to the one-loop approximation.
We extend the methods of geometric invariant theory to actions of non-reductive groups in the case of homomorphisms between decomposable sheaves whose automorphism groups are non-reductive. Given a linearization of the natural action of the group Aut(E) x Aut(F) on Hom(E,F), a homomorphism is called stable if its orbit with respect to the unipotent radical is contained in the stable locus with respect to the natural reductive subgroup of the automorphism group. We derive effective numerical conditions for a linearization such that the corresponding open set of semi-stable homomorphisms admits a good and projective quotient in the sense of geometric invariant theory, and such that this quotient is in addition a geometric quotient on the set of stable homomorphisms.
Tangent measure distributions were introduced by Bandt and Graf as a means to describe the local geometry of self-similar sets generated by iteration of contractive similitudes. In this paper we study the tangent measure distributions of hyperbolic Cantor sets generated by contractive mappings, which are not similitudes. We show that the tangent measure distributions of these sets equipped with either Hausdorff or Gibbs measure are unique almost everywhere and give an explicit formula describing them as probability distributions on the set of limit models of Bedford and Fisher.
It is well-known that several decision problems are decidable for the integral group ring of a polycyclic group. In this paper a technique to solve the membership problem for right ideals, originating from Baumslag, Cannonito and Miller and studied by Sims, is outlined. We analyze how these decision methods are related to Gröbner bases. Therefore, we define effective reduction for group rings over Abelian groups, nilpotent groups, and more general polycyclic groups. Using these reductions, we present generalizations of Buchberger's Gröbner basis method by giving an appropriate definition of "Gröbner bases" in the respective setting and by characterizing them using the concepts of saturation and s-polynomials.
Structure and Construction of Instanton Bundles on P3
This paper addresses the role of abstraction in case-based reasoning. We develop a general framework for reusing cases at several levels of abstraction, which is particularly suited for describing and analyzing existing approaches of this kind and for designing new ones. We show that in synthetic tasks (e.g. configuration, design, and planning), abstraction can be successfully used to improve the efficiency of similarity assessment, retrieval, and adaptation. Furthermore, a case-based planning system, called Paris, is described and analyzed in detail using this framework. An empirical study done with Paris demonstrates significant advantages concerning retrieval and adaptation efficiency as well as flexibility of adaptation. Finally, we show how other approaches from the literature can be classified according to the developed framework.
This paper presents a new algorithm, called KNNcost, for learning feature weights for CBR systems used for classification. Unlike the algorithms known so far, KNNcost considers the profits of a correct and the cost of a wrong decision. The need for this algorithm is motivated by two real-world applications in which the cost and profits of decisions play a major role. We introduce a representation of accuracy, cost, and profits of decisions and define the decision cost of a classification system. To compare accuracy optimization with cost optimization, we tested KNNacc against KNNcost. The first optimizes classification accuracy with a conjugate-gradient algorithm. The second optimizes the decision cost of the CBR system, respecting the cost and profits of the classifications. We present experiments with these two algorithms in a real application to demonstrate the usefulness of our approach.
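The KNNcost algorithm itself is not given in the abstract. The following sketch only illustrates the general idea of a cost-sensitive nearest-neighbor decision rule: instead of majority voting, the predicted class minimizes the expected decision cost over the retrieved neighbors. The function name, the cost-matrix encoding, and the plain Euclidean distance are illustrative assumptions, not the paper's definitions:

```python
import math

def classify_cost_sensitive(query, cases, cost, k=3):
    """Predict the class with minimal expected decision cost.
    cases: list of (feature_vector, label) pairs;
    cost[(pred, true)]: cost (or negative profit) of predicting `pred`
    when the true class is `true`."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbors = sorted(cases, key=lambda c: dist(query, c[0]))[:k]
    labels = {lab for _, lab in cases}
    def expected_cost(pred):
        # treat the k neighbors' labels as a sample of the true class
        return sum(cost[(pred, true)] for _, true in neighbors) / k
    return min(labels, key=expected_cost)
```

With a symmetric 0/1 cost matrix this reduces to ordinary majority voting; an asymmetric matrix shifts decisions toward the class whose misclassification is expensive.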
When problems are solved through reasoning from cases, the primary kind of knowledge is contained in the specific cases stored in the case base. However, in many situations additional background knowledge is required to cope with the requirements of an application. We describe an approach to integrating such general knowledge into the reasoning process in such a way that it complements the knowledge contained in the cases. This general knowledge by itself is not sufficient to perform any kind of model-based problem solving, but it is required to interpret the available cases appropriately. Background knowledge is expressed by two different kinds of rules, both of which must be formalized by the knowledge engineer: completion rules describe how to infer additional features from the known features of an old case or the current query case; adaptation rules describe how an old case can be adapted to fit the current query. This paper shows how these kinds of rules can be integrated into an object-oriented case representation.
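As a minimal illustration of the completion-rule idea described above (the rule format is a hypothetical sketch, not the paper's object-oriented representation), a completion rule can be modeled as a condition plus a feature it derives, applied to fixpoint:

```python
def complete_case(case, completion_rules):
    """Fire completion rules until no rule can add a new feature.
    A rule is a triple (condition, feature, value_fn): it adds `feature`
    (computed by value_fn) when the condition holds and the feature is
    not yet present in the case."""
    changed = True
    while changed:
        changed = False
        for condition, feature, value_fn in completion_rules:
            if feature not in case and condition(case):
                case[feature] = value_fn(case)
                changed = True  # a new feature may enable further rules
    return case
```

Note the chaining: a feature inferred by one rule may satisfy the condition of another, so rules are re-examined until a fixpoint is reached.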
We present an approach to systematically describing case-based reasoning systems by different kinds of criteria. One main requirement was the practical relevance of these criteria and their usability for real-life applications. We report on the results achieved from a case study carried out in the INRECA Esprit project.
In this paper we describe how explicit models of software or knowledge engineering processes can be used to guide and control the distributed development of complex systems. The paper focuses on techniques which automatically infer dependencies between decisions from a process model, and on methods which make it possible to integrate planning and execution steps. Managing dependencies between decisions is a basis for improving the traceability of development processes. Switching between planning and execution of subprocesses is an inherent need in the development of complex systems. The paper concludes with a description of the CoMo-Kit system, which implements the technologies mentioned above and uses WWW technology to coordinate development processes. An on-line demonstration of the system can be found via the CoMo-Kit homepage.
Paris (Plan Abstraction and Refinement in an Integrated System) [4, 2] is a domain independent case-based planning system which allows the flexible reuse of planning cases by abstraction and refinement. This approach is mainly inspired by the observation that reuse of plans must not be restricted to a single description level. In domains with a high variation in the problems, the reuse of past solutions must be achieved at various levels of abstraction.
EADOCS (Expert Assisted Design of Composite Structures) is the implementation of a multi-level approach to conceptual design. Constraint-based, case-based, and rule-based reasoning techniques are applied in different design phases to assemble and adapt designs at increasing levels of detail. This paper describes a strategic approach to decomposition, the formulation of target design problems, and incremental retrieval and adaptation. The design problems considered cannot be decomposed dynamically into tractable subproblems. Design cases are retrieved based on requirements and preferences on both functionality and the solution. Cases are adapted in three phases: adaptation, modification, and optimisation.
We present a novel approach to classification, based on a tight coupling of instance-based learning and a genetic algorithm. In contrast to the usual instance-based learning setting, we do not rely on (parts of) the given training set as the basis of a nearest-neighbor classifier, but instead employ artificially generated instances as concept prototypes. The extremely hard problem of finding an appropriate set of concept prototypes is tackled by a genetic search procedure, with the classification accuracy on the given training set serving as the genetic fitness measure. Experiments with artificial datasets show that - due to the ability to find concise and accurate concept descriptions that contain few, but typical, instances - this classification approach is considerably robust against noise, untypical training instances, and irrelevant attributes. These favorable (theoretical) properties are corroborated using a number of hard real-world classification problems.
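The coupling described above can be sketched in miniature. The following toy (a one-dimensional sketch under simplifying assumptions, with all names and operator choices invented here, not the paper's method) evolves prototype positions for a 1-NN classifier, using training accuracy as the fitness measure:

```python
import random

def nn_accuracy(protos, train):
    """Fraction of training points whose nearest prototype has the right label."""
    correct = 0
    for x, lab in train:
        nearest = min(protos, key=lambda p: abs(p[0] - x))
        correct += (nearest[1] == lab)
    return correct / len(train)

def evolve_prototypes(train, n_protos=2, pop_size=20, gens=40, seed=1):
    """Genetic search for prototype positions: elitist selection,
    blend crossover, Gaussian mutation; labels are fixed per gene index."""
    rng = random.Random(seed)
    lo = min(x for x, _ in train)
    hi = max(x for x, _ in train)
    labels = sorted({lab for _, lab in train})
    def random_ind():
        return [(rng.uniform(lo, hi), labels[i % len(labels)])
                for i in range(n_protos)]
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: -nn_accuracy(ind, train))
        elite = pop[:pop_size // 2]          # keep the better half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            # blend crossover plus Gaussian mutation on each prototype
            child = [((pa + pb) / 2 + rng.gauss(0, 0.1 * (hi - lo)), la)
                     for (pa, la), (pb, _) in zip(a, b)]
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda ind: nn_accuracy(ind, train))
```

Real instances would be feature vectors rather than scalars, but the structure — fitness from nearest-neighbor accuracy, genetic operators over prototype sets — is the same.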
We present a similarity criterion based on feature weighting. Feature weights are recomputed dynamically according to the performance of cases during problem-solving episodes. We also present a novel algorithm to analyze and explain the performance of the retrieved cases and to determine the features whose weights need to be recomputed. Experiments show that integrating our similarity criterion and our analysis algorithm in a feature-weighting model improves the adaptability of the retrieved cases by converging to the best feature weights over multiple problem-solving episodes.
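The abstract does not spell out the update rule. A generic sketch of performance-driven weight recomputation (the rule and rate below are illustrative assumptions, not the authors' algorithm) rewards features on which query and retrieved case agreed when the episode succeeded, and punishes them when it failed:

```python
def update_weights(weights, query, retrieved_case, success, rate=0.1):
    """After a problem-solving episode, adjust feature weights:
    if the retrieved case performed well, features on which query and
    case agreed are rewarded; if it performed badly, they are punished.
    Weights are clipped at zero and renormalized to sum to 1."""
    new = {}
    for f, w in weights.items():
        agree = query[f] == retrieved_case[f]
        if success:
            w += rate if agree else -rate
        else:
            w += -rate if agree else rate
        new[f] = max(w, 0.0)
    total = sum(new.values()) or 1.0
    return {f: w / total for f, w in new.items()}
```

Repeated over many episodes, such updates converge toward weights that emphasize the features actually predictive of case performance.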
Planning for manufacturing workpieces is a complex task that requires the interaction of a domain-specific reasoner and a generic planning mechanism. In this paper we present an architecture for organizing the case base that is based on the information provided by a generic problem solver. A retrieval procedure is then presented that uses the information provided by the domain-specific reasoner in order to improve the accuracy of the cases retrieved. However, it is not realistic to suppose that the retrieved case will entirely fit the new problem. We present a replay procedure to obtain a partial solution that replays not only the valid decisions taken when solving the case, but also the justifications of decisions rejected during the problem-solving process. As a result, completion alternatives of the partial solution that are already known from the case to be invalid are discarded.
Complete Eager Replay
(1996)
We present an algorithm for completely replaying previous problem solving experiences for plan-space planners. In our approach not only the solution trace is replayed, but also the explanations of failed attempts made by the first-principle planner. In this way, the capability of refitting previous solutions into new problems is improved.
In this report we give an overview of the development of our new Waldmeister prover for equational theories. We elaborate a systematic stepwise design process, starting with the inference system for unfailing Knuth-Bendix completion and ending up with an implementation which avoids the main diseases today's provers suffer from: overindulgence in time and space. Our design process is based on a logical three-level system model consisting of basic operations for inference step execution, an aggregated inference machine, and an overall control strategy. Careful analysis of the inference system for unfailing completion has revealed the crucial points responsible for time and space consumption. For the low level of our model, we introduce specialized data structures and algorithms speeding up the running system and cutting it down in size - both by one order of magnitude compared with standard techniques. Flexible control of the mid-level aggregation inside the resulting prover is made possible by a corresponding set of parameters. Experimental analysis shows that this flexibility is a point of high importance. We go on with some implementation guidelines we have found valuable in the field of deduction. The resulting new prover shows that our design approach is promising. We compare our system's throughput with that of an established system and finally demonstrate how two very hard problems could be solved by Waldmeister.
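At the "basic operations" level mentioned above, the core act of a completion-based prover is rewriting terms to normal form with the current rule set. The following toy uses string rewriting instead of first-order term rewriting (a deliberate simplification; Waldmeister's actual data structures and indexing are far more sophisticated):

```python
def normalize(s, rules, max_steps=1000):
    """Toy string-rewriting normalizer: repeatedly replace the leftmost
    occurrence of any rule's left-hand side until no rule applies.
    rules is an ordered list of (lhs, rhs) pairs."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            i = s.find(lhs)
            if i >= 0:
                s = s[:i] + rhs + s[i + len(lhs):]
                break  # restart the rule scan on the rewritten string
        else:
            return s  # no rule applied: s is in normal form
    raise RuntimeError("no normal form reached within step bound")
```

For example, with the cancellation rules aa → ε and bb → ε, the string "abba" normalizes to the empty string. Making exactly this inner loop fast — finding the next applicable rule and applying it — is where the specialized data structures of the report pay off.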
One of the many capabilities needed to support the activities of autonomous systems is motion planning. It enables robots to move securely in their environment and to accomplish given tasks. Unfortunately, the control loop comprising sensing, planning, and acting has not yet been closed for robots in dynamic environments. One reason is the long execution time of the motion-planning component. A solution to this problem is offered by the use of massively parallel computation. Thus, an important task is the parallelization of existing motion-planning algorithms for robots so that they are suitable for such hardware. In several cases, completely new algorithms have to be designed to make parallelization feasible. In this survey, we review recent approaches to motion planning using parallel computation.
We present a novel approach to parallel motion planning for robot manipulators in 3D workspaces. The approach is based on a randomized parallel search algorithm and focuses on solving the path-planning problem for industrial robot arms working in a reasonably cluttered workspace. The path-planning system works in the discretized configuration space, which need not be represented explicitly. The parallel search is conducted by a number of rule-based sequential search processes, which work to find a path connecting the initial configuration to the goal via a number of randomly generated subgoal configurations. Since the planner performs only on-line collision tests with proper proximity information, without using pre-computed information, the approach is suitable for planning problems with multiple robots or dynamic environments. The implementation has been carried out on the parallel virtual machine (PVM) of a cluster of SUN4 workstations and SGI machines. The experimental results show that the approach works well for a 6-dof robot arm in a reasonably cluttered environment, and that parallel computation increases the efficiency of motion planning significantly.
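The search scheme described above — several independent randomized searchers, each steering greedily toward randomly drawn subgoals — can be caricatured on a 2D grid. Everything here (the 10x10 grid, the greedy step rule, the searcher count) is an invented miniature, not the paper's 6-dof planner; real planning happens in a high-dimensional configuration space with on-line collision tests:

```python
import random

def plan(start, goal, collides, rng, max_steps=500):
    """One randomized sequential search process on a 10x10 grid:
    take greedy unit steps toward the current subgoal; when blocked
    (or the subgoal is reached but is not the goal), draw a new
    random subgoal and continue."""
    pos, path, subgoal = start, [start], goal
    for _ in range(max_steps):
        if pos == goal:
            return path
        step = lambda a, b: a + (b > a) - (b < a)   # move one cell toward b
        nxt = (step(pos[0], subgoal[0]), step(pos[1], subgoal[1]))
        if nxt != pos and not collides(nxt):
            pos = nxt
            path.append(pos)
            if pos == subgoal:
                subgoal = goal          # subgoal reached: head for the goal
        else:
            subgoal = (rng.randrange(10), rng.randrange(10))  # random subgoal
    return None

def parallel_plan(start, goal, collides, n_searchers=8):
    """Emulate n independent randomized searchers (sequentially here);
    in the paper's setting these would run as parallel processes.
    Return the first collision-free path found."""
    for seed in range(n_searchers):
        path = plan(start, goal, collides, random.Random(seed))
        if path:
            return path
    return None
```

In free space the first searcher walks straight to the goal; with obstacles, the random subgoals let searchers escape local traps, and running many searchers in parallel raises the chance that one of them finds a path quickly.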
Suppression of the magnetocrystalline bulk anisotropy in thin epitaxial Co(110) films on Cu(110)
(1996)
We report on an unexpected suppression of the magnetocrystalline anisotropy contribution in epitaxial fcc Co(110) films on Cu(110) below a thickness of d_c = (50 ± 10) Å. For film thicknesses larger than d_c, the measured anisotropy value agrees with published data. Measurements on films with reduced strain indicate a large strain dependence of d_c. A model calculation based on a crystal-field formalism and discussed within the context of band theory, which explicitly takes tetragonal misfit strains into account, reproduces the experimentally observed anomalies. Our results indicate that the usually applied phenomenological description of anisotropies, assuming additive free-energy terms for each anisotropy contribution, fails in this case.
The magnetic anisotropy of Co/Cu(001) films has been investigated by the magneto-optical Kerr effect, both in the pseudomorphic growth regime and above the critical thickness where strain relaxation sets in. A clear correlation between the onset of strain relaxation, as measured by means of reflection high-energy electron diffraction, and changes of the magnetic anisotropy has been found.
We present results of anisotropy and exchange-coupling studies of asymmetric Co/Cr/Fe trilayers and superlattices grown by molecular beam epitaxy on Cr(001)/Mg(001) buffers and substrates. The magnetic properties have been investigated using both the longitudinal magneto-optical Kerr effect and ferromagnetic resonance. The hysteresis data obtained from the trilayer system were fitted to a theoretical model which contains both bilinear and biquadratic coupling. The effective in-plane anisotropy was found to be of fourfold symmetry with the same easy-axis orientation for both the Fe and Co layers. An analysis of the easy-axis hysteresis loops indicates long-period oscillatory coupling and also suggests a short-period coupling. We show that weakly antiferromagnetically coupled asymmetric films might serve as potential candidates for improved spin-valve systems.
This paper considers a transmission boundary-value problem for the time-harmonic Maxwell equations with displacement currents neglected, which is frequently used for the numerical computation of eddy currents. Across material boundaries, the tangential components of the magnetic field H and the normal component of the magnetization μH are assumed to be continuous. This problem admits a hyperplane of solutions if the domains under consideration are multiply connected. Using integral equation methods and singular perturbation theory, it is shown that this hyperplane contains a unique point which is the limit of the classical electromagnetic transmission boundary-value problem for vanishing displacement currents. From the convergence proof, a simple constructive criterion for selecting this solution is immediately derived.