### Refine

#### Year of publication

- 1999 (425)

#### Document Type

- Preprint (351)
- Article (66)
- Report (4)
- Diploma Thesis (1)
- Lecture (1)
- Periodical Part (1)
- Study Thesis (1)

#### Language

- English (425)

#### Keywords

- AG-RESY (6)
- Case-Based Reasoning (6)
- HANDFLEX (5)
- Location Theory (5)
- PARO (5)
- case-based problem solving (5)
- Abstraction (4)
- Knowledge Acquisition (4)
- resolution (4)
- Fallbasiertes Schließen (3)
- Internet (3)
- Knowledge acquisition (3)
- Multicriteria Optimization (3)
- Requirements/Specifications (3)
- case-based reasoning (3)
- distributed software development (3)
- distributed software development process (3)
- explanation-based learning (3)
- problem solving (3)
- theorem proving (3)
- Algebraic Optimization (2)
- Brillouin light scattering spectroscopy (2)
- CAPlan (2)
- Combinatorial Optimization (2)
- Deduction (2)
- Fallbasiertes Schliessen (2)
- Geometrical Algorithms (2)
- Kinetic Schemes (2)
- MOLTKE-Projekt (2)
- Network Protocols (2)
- Partial functions (2)
- SDL (2)
- Software Engineering (2)
- Wannier-Stark systems (2)
- Wissensakquisition (2)
- World Wide Web (2)
- analogy (2)
- application (2)
- average density (2)
- building automation (2)
- case based reasoning (2)
- conservative extension (2)
- consistency (2)
- design patterns (2)
- entropy (2)
- formal specification (2)
- frames (2)
- incompressible Navier-Stokes equations (2)
- lattice Boltzmann method (2)
- learning system (2)
- localization (2)
- low Mach number limit (2)
- many-valued logic (2)
- problem formulation (2)
- quantum mechanics (2)
- requirements engineering (2)
- resonances (2)
- reuse (2)
- spin wave quantization (2)
- tactics (2)
- 90° orientation (1)
- Abelian groups (1)
- Abstract ODE (1)
- Ad-hoc workflow (1)
- Adaption (1)
- Agents (1)
- Algebraic optimization (1)
- Analysis (1)
- Analytic semigroup (1)
- Applications (1)
- Approximation (1)
- Approximation Algorithms (1)
- Automated Reasoning (1)
- Automated theorem proving (1)
- Autonomous mobile robots (1)
- Autoregression (1)
- Banach lattice (1)
- Bayes risk (1)
- Bisector (1)
- Blackboard architecture (1)
- Brillouin light scattering (1)
- Brownian motion (1)
- CNC-Maschine (1)
- COMOKIT (1)
- Case Study (1)
- Case-based problem solving (1)
- Causal Ordering (1)
- Causality (1)
- Chapman Enskog distributions (1)
- Chorin's projection scheme (1)
- Classification (1)
- Classification Tasks (1)
- CoMo-Kit (1)
- Collocation Method plus (1)
- Complexity and performance of numerical algorithms (1)
- Computational Fluid Dynamics (1)
- Computer Assisted Tomograp (1)
- Computer supported cooperative work (1)
- Concept mapping (1)
- Concept maps (1)
- Constraint Graphs (1)
- Contract net (1)
- Control Design Styles (1)
- Convexity (1)
- Cooperative decision making (1)
- Correlation (1)
- Cosine function (1)
- Coxeter groups (1)
- Crofton's intersection formulae (1)
- Curie temperature (1)
- Damon-Eshbach spin wave modes (1)
- Decision Making (1)
- Declarative and Procedural Knowledge (1)
- Design Patterns (1)
- Design Styles (1)
- Diagnosesystem (1)
- Difference Reduction (1)
- Differential Cross-Sections (1)
- Discrete decision problems (1)
- Discrete velocity models (1)
- Distributed Computation (1)
- Distributed Deb (1)
- Distributed Multimedia Applications (1)
- Distributed Software Development (1)
- Distributed Software Development Projects (1)
- Distributed System (1)
- Distributed software development support (1)
- Distributed systems (1)
- Dynamic capillary pressure (1)
- EBG (1)
- Ecommerce (1)
- Elastic properties (1)
- Equality reasoning (1)
- Equational Reasoning (1)
- Experience Database (1)
- Experimental Data (1)
- Extensibility (1)
- Feature Technology (1)
- Ferromagnetism (1)
- Filter-Diagonalization (1)
- Forbidden Regions (1)
- Fredholm integral equation of the second kind (1)
- Fuzzy Programming (1)
- GPS-satellite-to-satellite tracking (1)
- Gauss-Manin connection (1)
- Generic Methods (1)
- Global Optimization (1)
- Global Predicate Detection (1)
- Global Software Highway (1)
- Global optimization (1)
- HOT (1)
- HTE (1)
- HTML (1)
- Hadwiger's recursive de nition of the Euler number (1)
- Hamiltonian groups (1)
- Hardy space (1)
- Helmholtz decomposition (1)
- High frequency switching (1)
- Homogeneous Relaxation (1)
- INRECA (1)
- Ill-posed Problems (1)
- Improperly posed problems (1)
- Impulse control (1)
- Intelligent Agents (1)
- Intelligent agents (1)
- Interacting Magnetic Dots and Wires (1)
- Interleaved Planning (1)
- Iterative Methods (1)
- Java (1)
- Jeffreys' prior (1)
- Kinetic Schems (1)
- Kinetic theory (1)
- Knuth-Bendix completion algorithm (1)
- Kullback Leibler distance (1)
- Lagrangian Functions (1)
- Language Constructs (1)
- Lattice Boltzmann Method (1)
- Lattice Boltzmann methods (1)
- Lexicographic Order (1)
- Lexicographic max-ordering (1)
- Linear membership function (1)
- Local completeness (1)
- Local existence uniqueness (1)
- Location theory (1)
- Logic Design (1)
- Logical Time (1)
- MHEG (1)
- MOO (1)
- Map Building (1)
- Markov process (1)
- Maturity of Software Engineering (1)
- Max-Ordering (1)
- Mechanical Engineering (1)
- Methods (1)
- Mie representation (1)
- Minkowski space (1)
- Mn-Si-C alloy films (1)
- Multicriteria Location (1)
- Multicriteria optimization (1)
- Multiple Criteria (1)
- Multiple Objective Programs (1)
- NP-completeness (1)
- Navier-Stokes equations (1)
- Nonstationary processes (1)
- Numerical Simulation (1)
- Object-Relational DataBase Management Systems (ORDBMS) (1)
- Object-Relational Database Systems (1)
- Open-Source (1)
- Optimization (1)
- PATDEX (1)
- Palm distribution (1)
- Palm distributions (1)
- Pareto Optimality (1)
- Pareto Points (1)
- Planning and Verification (1)
- Polynomial Eigenfunctions (1)
- Porous flow (1)
- Position- and Orientation Estimation (1)
- Potential transform (1)
- Problem Solvers (1)
- Process Management (1)
- Process support (1)
- Produktionsdesign (1)
- Project Management (1)
- Pullen Edmonds system (1)
- Quality Improvement Paradigm (QIP) (1)
- Quantum Chaos (1)
- Quantum mechanics (1)
- Random Errors (1)
- Rarefied Polyatomic Gases (1)
- Rectifiability (1)
- Repositories (1)
- Requirements engineering (1)
- Resonant tunneling diode (1)
- Reuse (1)
- SDL-pattern a (1)
- SKALP (1)
- SQUID magnetometry (1)
- Saddle Points (1)
- Sandercock-type multipath tandem Fabry-Perot interferometer (1)
- Scalar type operator (1)
- Self-Referencing (1)
- Semantics of Programming Languages (1)
- Shannon capacity (1)
- Shannon optimal priors (1)
- Shock Wave Problem (1)
- Similarity Assessment (1)
- Smalltalk (1)
- Software Agents (1)
- Software development (1)
- Software development environment (1)
- Software engineering (1)
- Spatial Binary Images (1)
- Spectral Analysis (1)
- Square-mean Convergence (1)
- Stark systems (1)
- Stoner-like magnetic particles (1)
- Structural Adaptation (1)
- Structuring Approach (1)
- Tactics (1)
- Term rewriting systems (1)
- Theorem of Plemelj-Privalov (1)
- Topology Preserving Networks (1)
- Translation planes (1)
- Triangular fuzzy number (1)
- Vector Time (1)
- Vector-valued holomorphic function (1)
- Vetor optimization (1)
- Virtual Corporation (1)
- Virtual Software Projects (1)
- Voronoi diagram (1)
- Wannier-Bloch states (1)
- Wide Area Multimedia Group Interaction (1)
- Wissenserwerb (1)
- Word problem (1)
- Workflow Replication (1)
- World-Wide Web (1)
- abstract description (1)
- adaption (1)
- analogical reasoning (1)
- anisotropic coupling between magnetic i (1)
- anisotropic coupling mechanism (1)
- approximation methods (1)
- arbitrary function (1)
- arrays of magnetic dots and wires (1)
- artificial intelligence (1)
- assembly sequence design (1)
- automated code generation (1)
- automated computer learning (1)
- automated synchronization (1)
- autonomes Lernen (1)
- autonomous learning (1)
- average densities (1)
- bcc-Fe(001) (1)
- bicriterion path problems (1)
- bipolar quantum drift diffusion model (1)
- biquadratic interlayer coupling (1)
- bootstrap (1)
- brillouin light scattering (1)
- business process modelling (1)
- cancer (1)
- case-based planner (1)
- case-based planning (1)
- cash management (1)
- center hyperplane (1)
- centrally symmetric polytope (1)
- chaotic dynamics (1)
- co-learning (1)
- combined systems with sha (1)
- common transversal (1)
- communication architectures (1)
- communication protocols (1)
- communication subsystem (1)
- compilation (1)
- complete presentations (1)
- completeness (1)
- complex energ (1)
- complex energy resonances (1)
- comprehensive reuse (1)
- compressible Navier Stokes equations (1)
- computer aided planning (1)
- computer control (1)
- computer-supported cooperative work (1)
- concept representation (1)
- conceptual representation (1)
- concurrent software (1)
- confluence (1)
- constraint satisfaction problem (CSP) (1)
- continuous media (1)
- convex distance funtion (1)
- convex operator (1)
- cooperative problem solving (1)
- critical thickness (1)
- cross-correlation (1)
- customization of communication protocols (1)
- decision support (1)
- decisions (1)
- decrease direction (1)
- deficiency (1)
- density distribution (1)
- dependency management (1)
- deposition temperature (1)
- design processes (1)
- diagnostic problems (1)
- dipole-exchange surface (1)
- direct product (1)
- directional derivative (1)
- discrete element method (1)
- discrete equilibrium distributions (1)
- discrete velocity models (1)
- discretization (1)
- disjoint union (1)
- distributed (1)
- distributed c (1)
- distributed deduction (1)
- distributed document management (1)
- distributed enterprise (1)
- distributed groupware environment (1)
- distributed multi-platform software development (1)
- distributed multi-platform software development projects (1)
- distributed software configuration management (1)
- distributed softwaredevelopment tools (1)
- dynamical systems (1)
- enhanced coercivity (1)
- epitaxial growth (1)
- evolutionary spectrum (1)
- exchange coupling (1)
- exchange rate (1)
- exchange-bias bilayer Fe/MnPd (1)
- exchange-coupled rare-earth (1)
- experience base (1)
- experimental software engineering (1)
- exponential rate (1)
- f-dissimilarity (1)
- fallbasiertes Schliessen (1)
- fallbasiertes planen (1)
- final prediction error (1)
- finite difference method (1)
- flexible workflows (1)
- formal description techniques (1)
- formal reasoning (1)
- formulation as integral equation (1)
- frequency splitting betwe (1)
- function of bounded variation (1)
- gauge (1)
- general multidimensional moment problem (1)
- generalized Gummel itera (1)
- generic design of a customized communication subsystem (1)
- geodetic (1)
- geomagnetic field modelling from MAGSAT data (1)
- geometric measure theory (1)
- geometrical algorithms (1)
- geopotential determination (1)
- global optimization (1)
- goal oriented completion (1)
- granular flow (1)
- growth optimal portfolios (1)
- harmonic WFT (1)
- head-on collisions (1)
- heterogeneous large-scale distributed DBMS (1)
- high-level caching of potentially shared networked documents (1)
- higher order logic (1)
- higher-order anisotropies (1)
- higher-order tableaux calculus (1)
- higher-order theorem prover (1)
- hybrid knowledge representation (1)
- hyperbolic systems of conservation laws (1)
- hyperplane transversal (1)
- industrial supervision (1)
- inelastic light scattering (1)
- information (1)
- information systems engineering (1)
- innermost termination (1)
- instanton method (1)
- intelligent agents (1)
- interest oriented portfolios (1)
- internal approximation (1)
- internet event synchronizer (1)
- intersection local time (1)
- inverse Fourier transform (1)
- inverse mathematical models (1)
- isochronous streams (1)
- knowledge space (1)
- lacunarity distribution (1)
- large deviations (1)
- layered magnetic systems (1)
- learning (1)
- level splitting (1)
- lifetime statistics (1)
- lifetimes (1)
- linked abstraction workflows (1)
- locally maximal clone (1)
- locally stationary process (1)
- location (1)
- location problem (1)
- logarithmic average (1)
- logarithmic utility (1)
- macroscopic quantum coherence (1)
- magnetic Ni80Fe20 wires (1)
- magnetization reversal process (1)
- magneto-optical Kerr effect (1)
- magnetostatic surface spin waves (1)
- martingale measu (1)
- mathematical concept (1)
- maximum-entropy (1)
- metastable Pd(001) (1)
- middleware (1)
- minimal paths (1)
- minimax rate (1)
- minimax risk (1)
- mobile agents (1)
- mobile agents approach (1)
- modelling time (1)
- modularity (1)
- moment realizability (1)
- monitoring and managing distributed development processes (1)
- monodromy (1)
- morphism (1)
- motion planning (1)
- multi-agent architecture (1)
- multicriteria minimal path problem is presented (1)
- multicriteria optimization (1)
- multidimensional Kohonen algorithm (1)
- multimedia (1)
- multiple objective linear programming problem (1)
- multiple-view product modeling (1)
- multiresolution analysis (1)
- narrowing (1)
- negotiation (1)
- neural networks (1)
- non-convex optimization (1)
- noninformative prior (1)
- nonlinear thresholding (1)
- nonlinear wavelet thresholding (1)
- norm (1)
- normal cone (1)
- numeraire portfolios (1)
- numerical computation (1)
- object frameworks (1)
- order selection (1)
- order-sorted logic (1)
- order-two densities (1)
- order-two density (1)
- ovoids (1)
- paramodulation (1)
- patterned magnetic permalloy films (1)
- phase space (1)
- phase-space (1)
- plan enactment (1)
- polyhedral norm (1)
- portfolio optimisation (1)
- preservation of relations (1)
- problem solvers (1)
- process model (1)
- process modelling (1)
- process support system (PROSYT) (1)
- process-centred environments (1)
- profiles (1)
- programmable client-server systems (1)
- project coordination (1)
- projected quasi-gradient method (1)
- proof plans (1)
- protocol (1)
- pseudo-compressibility method (1)
- qauntum mechanis (1)
- quadratic forms (1)
- quasi-one-dimensional spin wave envelope solitons (1)
- quasienergy (1)
- radiation therapy (1)
- rate control (1)
- reactive systems (1)
- real time (1)
- real-time (1)
- real-time temporal logic (1)
- receptive safety properties (1)
- reference prior (1)
- regularization by wavelets (1)
- rela (1)
- reliability (1)
- requirements (1)
- reuse repositories (1)
- robustness (1)
- scaled translates (1)
- search algorithms (1)
- second order logic (1)
- shape aniso-tropies (1)
- shear flow (1)
- short magnetic fieldpulses (1)
- short-time periodogram (1)
- shortest sequence (1)
- similarity measure (1)
- single domain uniaxial magnetic particles (1)
- singularities (1)
- software agents (1)
- software project (1)
- software project management (1)
- sorted logic (1)
- soundness (1)
- spin wave excitations (1)
- spinwaves (1)
- squares (1)
- statistical experiment (1)
- stochastic stability (1)
- structured permalloy films (1)
- switching properties (1)
- system behaviour (1)
- tableau (1)
- tangent measure distributions (1)
- temperature dependence (1)
- temporal logic (1)
- termination (1)
- theorem prover (1)
- thin h-BN films (1)
- time series (1)
- time-varying autoregression (1)
- topology preserving maps (1)
- traceability (1)
- transition rates (1)
- transition-metal (1)
- translation (1)
- transverse bias field (1)
- traveling salesman problem (1)
- treatment planning (1)
- trial systems (1)
- triple layer stacks (1)
- two-dimensional self-focused spin wave packets (1)
- typical examples (1)
- typical instance (1)
- uncertainty principle (1)
- uniform ergodicity (1)
- uniqueness (1)
- value preserving portfolios (1)
- vector measure (1)
- vector wavelets (1)
- virtual market place (1)
- viscosity solutions (1)
- visual process modelling environment (1)
- wall energy (1)
- wall thickness (1)
- wavelet estimators (1)
- wavelet transform (1)
- weak termination (1)
- windowed Fourier transform (1)
- work coordination (1)
- yttrium-iron garnet (YIG) fi (1)

Abstract: We present experimental and theoretical results of a detailed study of laser-induced continuum structures (LICS) in the photoionization continuum of helium out of the metastable state 2s^1 S_0. Dressing the continuum with a 1064 nm laser couples the same region of the continuum to the 4s^1 S_0 state. The experimental data, presented for a range of intensities, show pronounced ionization suppression (by as much as 70% with respect to the far-from-resonance value) as well as enhancement, in a Beutler-Fano resonance profile. This ionization suppression is a clear indication of population trapping mediated by coupling to a continuum. We present experimental results demonstrating the effect of pulse delay upon the LICS, and the behavior of LICS for both weak and strong probe pulses. Simulations based upon numerical solution of the Schrödinger equation model the experimental results. The atomic parameters (Rabi frequencies and Stark shifts) are calculated using a simple model-potential method for the computation of the needed wavefunctions. The simulations of the LICS profiles are in excellent agreement with experiment. We also present an analytic formulation of pulsed LICS. We show that in the case of a probe pulse shorter than the dressing one, the LICS profile is the convolution of the power spectrum of the probe pulse with the usual Fano profile of stationary LICS. We discuss some consequences of deviation from steady-state theory.
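The stationary Fano profile referred to above has a standard form; the symbols below (asymmetry parameter \(q\), reduced detuning \(\epsilon\)) are the conventional ones and are not taken from this abstract:

\[ \sigma(\epsilon) \propto \frac{(q+\epsilon)^2}{1+\epsilon^2} \]

For a probe pulse shorter than the dressing pulse, the observed LICS profile is then this stationary line shape convolved with the power spectrum of the probe pulse.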

A new problem for the automated off-line programming of industrial robot applications is investigated. The multi-goal path planning task is to find a collision-free path connecting a set of goal poses while minimizing, e.g., the total path length. Our solution is based on an earlier reported path planner for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. To control the path planner, four different goal selection methods are introduced and compared. While the Random and the Nearest Pair Selection methods can be used with any path planner, the Nearest Goal and the Adaptive Pair Selection methods are favorable for our planner. With the latter two goal selection methods, the multi-goal path planning task can be significantly accelerated, because they are able to automatically solve the simplest path planning problems first. In summary, compared to Random or Nearest Pair Selection, this new multi-goal path planning approach results in a further cost reduction of the programming phase.
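The Nearest Goal selection idea above can be made concrete with a minimal greedy sketch. This works in plain Euclidean coordinates with illustrative names; the actual planner measures distance between 6-degrees-of-freedom robot poses:

```python
import math

def nearest_goal_order(start, goals):
    """Greedy Nearest Goal selection: always plan next to the goal pose
    closest to the current one. Illustrative sketch only; a real system
    would measure distance in the robot's configuration space."""
    order, current, remaining = [], start, list(goals)
    while remaining:
        nxt = min(remaining, key=lambda g: math.dist(current, g))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order
```

Ordering the goals this way tends to surface the easiest planning subproblems first, which is the acceleration effect described above.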

An interrupter for use in a daisy-chained VME bus interrupt system has been designed and implemented as an asynchronous sequential circuit. The concurrency of the processes posed a design problem that was solved by means of a systematic design procedure that uses Petri nets for specifying system and interrupter behaviour, and for deriving a primitive flow table. Classical design and additional measures to cope with non-fundamental mode operation yielded a coded state-machine representation. This was implemented on a GAL 22V10, chosen for its hazard-preventing structure and for rapid prototyping in student laboratories.

Hexagonal BN films have been deposited by rf-magnetron sputtering with simultaneous ion plating. The elastic properties of the films grown on silicon substrates under identical coating conditions have been determined by Brillouin light scattering from thermally excited surface phonons. Four of the five independent elastic constants of the deposited material are found to be c11 = 65 GPa, c13 = 7 GPa, c33 = 92 GPa and c44 = 53 GPa, exhibiting an elastic anisotropy c11/c33 of 0.7. The Young's modulus determined by load indentation is distinctly larger than the corresponding value obtained from Brillouin light scattering. This discrepancy is attributed to the specific morphology of the material, with nanocrystallites embedded in an amorphous matrix.

We present an inference system for clausal theorem proving w.r.t. various kinds of inductive validity in theories specified by constructor-based positive/negative-conditional equations. The reduction relation defined by such equations has to be (ground) confluent, but need not be terminating. Our constructor-based approach is well-suited for inductive theorem proving in the presence of partially defined functions. The proposed inference system provides explicit induction hypotheses and can be instantiated with various well-founded induction orderings. While emphasizing a well-structured, clear design of the inference system, our fundamental design goal is user-orientation and practical usefulness rather than theoretical elegance. The resulting inference system is comprehensive and relatively powerful, but requires a sophisticated concept of proof guidance, which is not treated in this paper. This research was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (D4-Projekt).

INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis, and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for its intended kinds of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.

We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions to be made by special-purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.

We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain-dependent reasoners, a feature-based CAD system, and domain-independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem-solving knowledge both for optimizing plan execution costs and for minimizing search, the case-based control component CAPlan/CbC has been realized, which allows incremental acquisition and reuse of strategic problem-solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.

While most approaches to similarity assessment are oblivious to knowledge and goals, there is ample evidence that these elements of problem solving play an important role in similarity judgements. This paper is concerned with an approach for integrating the assessment of similarity into a framework of problem solving that embodies central notions of problem solving such as goals, knowledge, and learning.

Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question of whether there are any differences concerning the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
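The implicit (CB, sim) concept representation described above can be sketched in a few lines; the similarity measure and the feature vectors used here are hypothetical, chosen only to make the idea concrete:

```python
def cb_classify(case_base, sim, query):
    """Classify a query by the label of the most similar stored case:
    the concept is represented implicitly by the pair (CB, sim)."""
    best_description, best_label = max(case_base, key=lambda c: sim(c[0], query))
    return best_label

def sim(a, b):
    """Hypothetical similarity: negative Hamming distance on
    fixed-length boolean feature vectors (higher means more similar)."""
    return -sum(x != y for x, y in zip(a, b))
```

Changing either the case base CB or the measure sim changes the concept that is represented, which is exactly the interdependency the article studies.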

One of the problems of autonomous mobile systems is the continuous tracking of position and orientation. In most cases, this problem is solved by dead reckoning, based on measurements of wheel rotations or of step counts and step width. Unfortunately, dead reckoning leads to an accumulation of drift errors and is very sensitive to slippage. In this paper an algorithm for tracking position and orientation is presented that is nearly independent of odometry and its slippage problems. To achieve this, a rotating range-finder is used, delivering scans of the environmental structure. The properties of this structure are used to match scans from different locations in order to find their translational and rotational displacement. For this purpose, derivatives of the range-finder scans are calculated, which can be used to find position and orientation by cross-correlation.
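The rotational part of the matching step described above can be sketched as a discrete cross-correlation over all cyclic shifts of two scan signatures. This is a simplified sketch: the real input would be the derivatives of range over scan angle, and the function name is illustrative:

```python
def rotation_by_crosscorrelation(sig_a, sig_b):
    """Return the cyclic shift (in scan indices) of sig_b that maximizes
    its cross-correlation with sig_a, i.e. the estimated rotational
    displacement between the two scans."""
    n = len(sig_a)
    best_shift, best_score = 0, float("-inf")
    for shift in range(n):
        score = sum(sig_a[i] * sig_b[(i + shift) % n] for i in range(n))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift
```

Because the signature is periodic in the scan angle, a brute-force search over all shifts suffices here; a production system would typically compute the correlation via an FFT.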

A map for an autonomous mobile robot (AMR) in an indoor environment, for the purpose of continuous position and orientation estimation, is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown where the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans into statistical representations of the environment, so that a cross-correlation of an actual converted scan with such a representation yields the actual position and orientation in a global coordinate system. The map itself is built from representative scans for the positions where the AMR has been, so that the robot can find its position and orientation by comparing the actual scan with a scan stored in the map.

We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity; therefore the orientation must be calculated by integration of the angular velocity over time. The device's output is a voltage proportional to the angular velocity and relative to a reference. The tests were done to find out under which conditions it is possible to use this device for estimation of orientation.
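The integration step mentioned above amounts to accumulating (voltage offset times scale factor) over sample intervals. The parameter names below are illustrative and not taken from the device's datasheet:

```python
def integrate_orientation(voltages, v_ref, scale, dt, theta0=0.0):
    """Estimate orientation by Euler-integrating angular velocity:
    each output sample v maps to omega = (v - v_ref) * scale, which
    is accumulated over the sampling interval dt."""
    theta = theta0
    for v in voltages:
        omega = (v - v_ref) * scale  # angular velocity from voltage offset
        theta += omega * dt
    return theta
```

Note that any error in the reference voltage is integrated as well, so the orientation estimate drifts over time; this is the kind of condition such tests need to probe.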

Starting from the uniqueness question for mixtures of distributions, this review centers on the question under which formally weaker assumptions one can prove the existence of SPLIFs, in other words of perfect statistics and tests. We mention a couple of positive and negative results that complement the basic contribution of David Blackwell in 1980. Typically, the answers depend on the choice of set-theoretic axioms and on the particular concepts of measurability.

We study a model for learning periodic signals in recurrent neural networks proposed by Doya and Yoshizawa [7], which can be considered a model for temporal pattern memory in animal motor systems. A network receives an external oscillatory input and adjusts its weights so that this signal can be reproduced approximately as the network output after some time. We use tools from adaptive control theory to derive an algorithm for weight matrices with a special structure. If the input is generated by a network of the same structure, the algorithm converges globally under a persistency-of-excitation condition and does not exhibit the deficiencies of the back-propagation based approach of Doya and Yoshizawa. This simple algorithm can also be used for open-loop identification under quite restrictive assumptions. The persistency-of-excitation condition cannot be proven even for the matrices with special structure, except for a 3-dimensional system. For higher-dimensional systems we give connections to the theory of linear time-varying systems, where this condition is generically true (under assumptions which are also needed in the time-invariant case). However, we cannot show that the linearized system related to the nonlinear neural network fulfills these generic assumptions.

Problem specifications for classical planners based on a STRIPS-like representation typically consist of an initial situation and a partially defined goal state. Hierarchical planning approaches, e.g. Hierarchical Task Network (HTN) planning, have richer representations not only for actions but also for planning problems. The latter are defined by giving an initial state and an initial task network in which the goals can be ordered with respect to each other. However, studies with a specification of the process planning domain for the plan-space planner CAPlan (an extension of SNLP) have shown that, even without a hierarchical domain representation, typical properties called goal orderings can be identified in this domain that allow more efficient and correct case retrieval strategies for the case-based planner CAPlan/CbC. Motivated by this, this report describes an extension of the classical problem specifications for plan-space planners like SNLP and its descendants. These extended problem specifications allow the definition of a partial order on the planning goals, which can be interpreted as an order in which the solution plan should achieve the goals. These goal orderings can be shown, theoretically and empirically, to improve planning performance not only for case-based but also for generative planning. As a second, different use, we show how goal orderings can be used to address the control problem of partial-order planners. These improvements can best be understood with a refinement of Barrett's and Weld's extended taxonomy of subgoal collections.

Real-world planning tasks like manufacturing process planning often do not allow all of the relevant knowledge to be formalized. In particular, preferences between alternatives are hard to acquire but have a high influence on the efficiency of the planning process and the quality of the solution. We describe the essential features of the CAPlan planning architecture, which supports cooperative problem solving to narrow the gap caused by absent preference and control knowledge. The architecture combines an SNLP-like base planner with mechanisms for explicit representation and maintenance of dependencies between planning decisions. The flexible control interface of CAPlan allows a combination of autonomous and interactive planning in which a user can participate in the problem-solving process. In particular, the rejection of arbitrary decisions by a user and dependency-directed backtracking mechanisms are supported by CAPlan.

About the approach: The TOPO approach was originally developed in the FABEL project [1] to support architects in designing buildings with complex installations. Supplementing knowledge-based design tools, which are available only for selected subtasks, TOPO aims to cover the whole design process. To that end, it relies almost exclusively on archived plans. Input to TOPO is a partial plan, and output is an elaborated plan. The input plan constitutes the query case, and the archived plans form the case base with the source cases. A plan is a set of design objects. Each design object is defined by some semantic attributes and by its bounding box in a 3-dimensional coordinate system. TOPO supports the elaboration of plans by adding design objects.

Software development is becoming a more and more distributed process, which urgently needs supporting tools in the fields of configuration management, software process/workflow management, communication, and problem tracking. In this paper we present a new distributed software configuration management framework, COMAND. It offers high availability through replication and a mechanism to easily change and adapt the project structure to new business needs. To better understand and formally prove some properties of COMAND, we have modeled it in a formal technique based on distributed graph transformations. This formalism provides an intuitive rule-based description technique, mainly for the dynamic behavior of the system on an abstract level. We use it here to model the replication subsystem.

If \(A\) generates a bounded cosine function on a Banach space \(X\) then the negative square root \(B\) of \(A\) generates a holomorphic semigroup, and this semigroup is the conjugate potential transform of the cosine function. This connection is studied in detail, and it is used for a characterization of cosine function generators in terms of growth conditions on the semigroup generated by \(B\). This characterization relies on new results on the inversion of the vector-valued conjugate potential transform.

Let \(X\) be a Banach lattice. Necessary and sufficient conditions for a linear operator \(A:D(A) \to X\), \(D(A)\subseteq X\), to be of positive \(C^0\)-scalar type are given. In addition, the question is discussed which conditions on the Banach lattice imply that every operator of positive \(C^0\)-scalar type is necessarily of positive scalar type.

In the scalar case one knows that a complex normalized function of bounded variation \(\phi\) on \([0,1]\) defines a unique complex regular Borel measure \(\mu\) on \([0,1]\). In this note we show that this is no longer true in general in the vector-valued case, even if \(\phi\) is assumed to be continuous. Moreover, the functions \(\phi\) which determine a countably additive vector measure \(\mu\) are characterized.

The following two norms for holomorphic functions \(F\), defined on the right complex half-plane \(\{z \in C : \Re(z) > 0\}\) with values in a Banach space \(X\), are equivalent:
\[\begin{eqnarray*} \lVert F \rVert _{H_p(C_+)} &=& \sup_{a>0}\left( \int_{-\infty}^\infty \lVert F(a+ib) \rVert ^p \, db \right)^{1/p}
\mbox{, and} \\ \lVert F \rVert_{H_p(\Sigma_{\pi/2})} &=& \sup_{\lvert \theta \rvert < \pi/2}\left( \int_0^\infty \left \lVert F(re^{i \theta}) \right \rVert ^p \, dr \right)^{1/p}.\end{eqnarray*}\] As a consequence, we derive a description of boundary values of sectorial holomorphic functions, and a theorem of Paley-Wiener type for sectorial holomorphic functions.

The thermal equilibrium state of a bipolar, isothermal quantum fluid confined to a bounded domain \(\Omega\subset I\!\!R^d,d=1,2\) or \( d=3\) is the minimizer of the total energy \({\mathcal E}_{\epsilon\lambda}\); \({\mathcal E}_{\epsilon\lambda}\) involves the squares of the scaled Planck's constant \(\epsilon\) and the scaled minimal Debye length \(\lambda\). In applications one frequently has \(\lambda^2\ll 1\). In these cases the zero-space-charge approximation is rigorously justified. As \(\lambda \to 0 \), the particle densities converge to the minimizer of a limiting quantum zero-space-charge functional exactly in those cases where the doping profile satisfies some compatibility conditions. Under natural additional assumptions on the internal energies one gets a differential-algebraic system for the limiting \((\lambda=0)\) particle densities, namely the quantum zero-space-charge model. The analysis of the subsequent limit \(\epsilon \to 0\) exhibits the importance of quantum gaps. The semiclassical zero-space-charge model is, for small \(\epsilon\), a reasonable approximation of the quantum model if and only if the quantum gap vanishes. The simultaneous limit \(\epsilon =\lambda \to 0\) is analyzed.

Convex Operators in Vector Optimization: Directional Derivatives and the Cone of Decrease Directions
(1999)

The paper is devoted to the investigation of directional derivatives and the cone of decrease directions for convex operators on Banach spaces. We prove a condition for the existence of directional derivatives which does not assume regularity of the ordering cone K. This result is then used to prove that for continuous convex operators the cone of decrease directions can be represented in terms of the directional derivatives. Decrease directions are those for which the directional derivative lies in the negative interior of the ordering cone K. Finally, we show that the continuity of the convex operator can be replaced by its K-boundedness.

We present detailed studies of the enhanced coercivity of the exchange-bias bilayer Fe/MnPd, both experimentally and theoretically. We have demonstrated that the existence of large higher-order anisotropies due to exchange coupling between the Fe and MnPd layers can account for the large increase of coercivity in the Fe/MnPd system. The linear dependence of coercivity on inverse Fe thickness is well explained by a phenomenological model introducing higher-order anisotropy terms into the total free energy of the system.

The mathematical modelling of problems in science and engineering often leads to partial differential equations in time and space with boundary and initial conditions. The boundary value problems can be written as extremal problems (principle of minimal potential energy), as variational equations (principle of virtual power) or as classical boundary value problems. There are connections concerning existence and uniqueness results between these formulations, which will be investigated using the powerful tools of functional analysis. The first part of the lecture is devoted to the analysis of linear elliptic boundary value problems given in a variational form. The second part deals with the numerical approximation of the solutions of the variational problems. Galerkin methods such as FEM and BEM are the main tools. The h-version will be discussed, and an error analysis will be done. Examples, especially from elasticity theory, demonstrate the methods.

The asymptotic behaviour of a singular-perturbed two-phase Stefan problem due to slow diffusion in one of the two phases is investigated. In the limit the model equations reduce to a one-phase Stefan problem. A boundary layer at the moving interface makes it necessary to use a corrected interface condition obtained from matched asymptotic expansions. The approach is validated by numerical experiments using a front-tracking method.

The asymptotic analysis of IBVPs for the singularly perturbed parabolic PDE ... in the limit epsilon to zero motivates investigations of certain recursively defined approximative series ("ping-pong expansions"). The recursion formulae rely on operators assigning to a boundary condition at the left or the right boundary a solution of the parabolic PDE. Sufficient conditions for uniform convergence of ping-pong expansions are derived and a detailed analysis for the model problem ... is given.

An important property and also a crucial point of a term rewriting system is its termination. Transformation orderings, developed by Bellegarde and Lescanne and strongly based on work of Bachmair and Dershowitz, represent a general technique for extending orderings. The main characteristics of this method are two rewriting relations, one for transforming terms and the other for ensuring the well-foundedness of the ordering. The central problem of this approach concerns the choice of the two relations such that the termination of a given term rewriting system can be proved. In this communication, we present a heuristic-based algorithm that partially solves this problem. Furthermore, we show how to simulate well-known orderings on strings by transformation orderings.

Orderings on polynomial interpretations of operators represent a powerful technique for proving the termination of rewriting systems. One of the main problems of polynomial orderings concerns the choice of the right interpretation for a given rewriting system. It is very difficult to develop techniques for solving this problem. Here, we present three new heuristic approaches: (i) guidelines for dealing with special classes of rewriting systems, (ii) an algorithm for choosing appropriate special polynomials, as well as (iii) an extension of the original polynomial ordering which supports the generation of suitable interpretations. All these heuristics will be applied to examples in order to illustrate their practical relevance.

High frequency switching of single domain, uniaxial magnetic particles is discussed in terms of transition rates controlled by a small transverse bias field. It is shown that fast switching times can be achieved using bias fields an order of magnitude smaller than the effective anisotropy field. Analytical expressions for the switching time are derived in special cases and general configurations of practical interest are examined using numerical simulations.

It is generally agreed that one of the most challenging issues facing the case-based reasoning community is that of adaptation. To date the lion's share of CBR research has concentrated on the retrieval of similar cases, and the result is a wide range of quality retrieval techniques. However, retrieval is just the first part of the CBR equation, because once a similar case has been retrieved it must be adapted. Adaptation research is still in its earliest stages, and researchers are still trying to properly understand and formulate the important issues. In this paper I describe a treatment of adaptation in the context of a case-based reasoning system for software design, called Deja Vu. Deja Vu is particularly interesting, not only because it performs automatic adaptation of retrieved cases, but also because it uses a variety of techniques to try to reduce and predict the degree of adaptation necessary.

We compare different notions of differentiability of a measure along a vector field on a locally convex space. We consider in the L2-space of a differentiable measure the analogues of the classical concepts of gradient, divergence and Laplacian (which coincides with the Ornstein-Uhlenbeck operator in the Gaussian case). We use these operators for the extension of the basic results of Malliavin and Stroock on the smoothness of finite dimensional image measures under certain nonsmooth mappings to the case of non-Gaussian measures. The proof of this extension is quite direct and does not use any chaos decomposition. Finally, the role of this Laplacian in the procedure of quantization of anharmonic oscillators is discussed.

The multiple-view modeling of a product in a design context is discussed in this paper. We study the existing approaches for multiple-view modeling of a product and give a brief analysis of them. Then we propose our approach, which incorporates the multiple-model approach into current STEP standard work based on a single model. We propose a meta-model inspired by this approach for a multiple-view design environment. Next, we validate this meta-model with a case study. Finally, we conclude and give some perspectives of this work. Keywords: product data modeling, multiple-view modeling, product data integration, STEP, functional model.

We present a mathematical knowledge base containing the factual knowledge of the first of three parts of a textbook on semi-groups and automata, namely "P. Deussen: Halbgruppen und Automaten". Like almost all mathematical textbooks, this textbook is not self-contained; some algebraic and set-theoretical concepts are used without being explained. These concepts are added to the knowledge base. Furthermore, there is knowledge about the natural numbers, which is formalized following the first paragraph of "E. Landau: Grundlagen der Analysis". The knowledge base is written in a sorted higher-order logic, a variant of POST, the working language of the proof development environment Omega-MKRP. We distinguish three different types of knowledge: axioms, definitions, and theorems. Up to now, there are only 2 axioms (natural numbers and cardinality), 149 definitions (like that for a semi-group), and 165 theorems. The consistency of such knowledge bases cannot be proved in general, but inconsistencies may be imported only by the axioms. Definitions and theorems should not lead to any inconsistency, since definitions form conservative extensions and theorems are proved to be consequences.

Algorithms in Singular
(1999)

In this survey we deal with the location of hyperplanes in n-dimensional normed spaces, i.e., we present all known results and a unifying approach to the so-called median hyperplane problem in Minkowski spaces. We describe how to find a hyperplane H minimizing the weighted sum f(H) of distances to a given, finite set of demand points. In robust statistics and operations research such an optimal hyperplane is called a median hyperplane. After summarizing the known results for the Euclidean and rectangular situation, we show that for all distance measures d derived from norms one of the hyperplanes minimizing f(H) is the affine hull of n of the demand points and, moreover, that each median hyperplane is a halving one (in a sense defined below) with respect to the given point set. Also an independence-of-norm result for finding optimal hyperplanes with fixed slope will be given. Furthermore we discuss how these geometric criteria can be used for algorithmic approaches to median hyperplanes, with an extra discussion for the case of polyhedral norms. Finally, a characterization of all smooth norms by a sharpened incidence criterion for median hyperplanes is mentioned.

In this paper we deal with the location of hyperplanes in n-dimensional normed spaces. If d is a distance measure, our objective is to find a hyperplane H which minimizes f(H) = sum_{m=1}^{M} w_{m}d(x_m,H), where w_m >= 0 are non-negative weights, x_m in R^n, m=1,...,M are demand points and d(x_m,H) = min_{z in H} d(x_m,z) is the distance from x_m to the hyperplane H. In robust statistics and operations research such an optimal hyperplane is called a median hyperplane. We show that for all distance measures d derived from norms, one of the hyperplanes minimizing f(H) is the affine hull of n of the demand points and, moreover, that each median hyperplane is (in a certain sense) a halving one with respect to the given point set.
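The incidence result reduces the problem to a finite search: it suffices to evaluate f(H) on the hyperplanes spanned by n demand points. A minimal sketch for the planar Euclidean case (n = 2, so candidates are lines through two demand points; all function names are illustrative, not from the paper):

```python
import itertools
import math

def dist_to_line(x, p, q):
    # Perpendicular Euclidean distance from point x to the line through p and q.
    num = abs((q[0] - p[0]) * (x[1] - p[1]) - (q[1] - p[1]) * (x[0] - p[0]))
    return num / math.hypot(q[0] - p[0], q[1] - p[1])

def best_median_line(points, weights):
    # Search only the finite candidate set: lines through two demand points.
    best_f, best_pair = math.inf, None
    for p, q in itertools.combinations(points, 2):
        if p == q:
            continue
        f = sum(w * dist_to_line(x, p, q) for x, w in zip(points, weights))
        if f < best_f:
            best_f, best_pair = f, (p, q)
    return best_f, best_pair
```

For the four corners of the unit square with unit weights, a diagonal is optimal with f = sqrt(2).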

In this paper we deal with locating a line in the plane. If d is a distance measure, our objective is to find a straight line l which minimizes f(l) or g(l) (see the paper for the definition of these functions). We show that for all distance measures d derived from norms, one of the lines minimizing f(l) contains at least two of the existing facilities. For the center objective we always get an optimal line which is at maximum distance from at least three of the existing facilities. If all weights are equal, there is an optimal line which is parallel to one facet of the convex hull of the existing facilities.

In line location problems the objective is to find a straight line which minimizes the sum of distances, or the maximum distance, respectively, to a given set of existing facilities in the plane. These problems have been well solved. In this paper we deal with restricted line location problems, i.e. we are given a set in the plane which the line is not allowed to pass through. With the help of a geometric duality we solve such problems for the vertical distance and then extend these results to block norms and some of them even to arbitrary norms. For all norms we give a finite candidate set for the optimal line.

We consider a multiple objective linear program (MOLP) max{Cx | Ax = b, x in N_{0}^{n}}, where C = (c_ij) is the p x n matrix of p different objective functions z_i(x) = c_{i1}x_1 + ... + c_{in}x_n, i = 1,...,p, and A is the m x n matrix of a system of m linear equations a_{k1}x_1 + ... + a_{kn}x_n = b_k, k = 1,...,m, which form the set of constraints of the problem. All coefficients are assumed to be natural numbers or zero. Let M denote the set of admissible solutions. An efficient solution {hat x} is an admissible solution such that there exists no other admissible solution x' with Cx' >= C{hat x} and Cx' != C{hat x}. The efficient solutions play the role of optimal solutions for the MOLP, and it is our aim to determine the set of all efficient solutions.
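The efficiency notion above is a componentwise dominance test, which can be made concrete with a small sketch (helper names are illustrative; admissibility via Ax = b is assumed to have been checked already):

```python
def objective_vector(C, x):
    # Compute Cx for a p x n matrix C (list of rows) and an n-vector x.
    return tuple(sum(c * xi for c, xi in zip(row, x)) for row in C)

def dominates(w, v):
    # w dominates v: componentwise >= with at least one strict inequality.
    return all(a >= b for a, b in zip(w, v)) and any(a > b for a, b in zip(w, v))

def efficient_solutions(solutions, C):
    # Keep exactly those admissible solutions whose objective vector
    # is not dominated by that of any other solution.
    vals = [objective_vector(C, x) for x in solutions]
    return [x for x, v in zip(solutions, vals)
            if not any(dominates(w, v) for w in vals)]
```

Note that two efficient solutions may have incomparable objective vectors, which is why the MOLP in general has a whole set of efficient solutions rather than a single optimum.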

The paper shows that characterizing the causal relationship between significant events is an important but non-trivial aspect of understanding the behavior of distributed programs. An introduction to the notion of causality and its relation to logical time is given; some fundamental results concerning the characterization of causality are presented. Recent work on the detection of causal relationships in distributed computations is surveyed. The relative merits and limitations of the different approaches are discussed, and their general feasibility is analyzed.

In order to reduce the elapsed time of a computation, a popular approach is to decompose the program into a collection of largely independent subtasks which are executed in parallel. Unfortunately, it is often observed that tightly-coupled parallel programs run considerably slower than initially expected. In this paper, a framework for the analysis of parallel programs and their potential speedup is presented. Two parameters which strongly affect the scalability of parallelism are identified, namely the grain of synchronization, and the degree to which the target hardware is available. It is shown that for certain classes of applications speedup is inherently poor, even if the program runs under the idealized conditions of perfect load balance, unbounded communication bandwidth and negligible communication and parallelization overhead. Upper bounds are derived for the speedup that can be obtained in three different types of computations. An example illustrates the main findings.
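The paper's own bounds are not reproduced here, but the flavor of such results can be illustrated by the classical Amdahl bound (an illustration, not the paper's derivation): a serial fraction s caps the speedup at 1/s no matter how many processors are available.

```python
def amdahl_speedup(serial_fraction, processors):
    # Amdahl's law: normalized run time is s + (1 - s)/P,
    # so the speedup over one processor is the reciprocal.
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / processors)
```

With s = 0.1, the speedup never exceeds 10 however large the processor count, mirroring the abstract's point that speedup can be inherently poor even under idealized conditions.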

Nonlinear dissipativity, asymptotical stability, and contractivity of (ordinary) stochastic differential equations (SDEs) with some dissipative structure and their discretizations are studied in terms of their moments, in the spirit of Pliss (1977). For this purpose, we introduce the notions and discuss related concepts of dissipativity, growth-bounded and monotone coefficient systems, asymptotical stability and contractivity in the wide and narrow sense, and nonlinear A-stability, AN-stability, B-stability and BN-stability for stochastic dynamical systems, more or less as stochastic counterparts to deterministic concepts. The test class of dissipative SDEs, interpreted in a broad sense as the natural analogue of dissipative deterministic differential systems, is suggested for stochastic-numerical methods. Then, in particular, a kind of mean square calculus is developed, although most of the ideas and analysis can be carried over to the general "stochastic Lp-case" (p >= 1). By this natural restriction, the new stochastic concepts are theoretically meaningful, as in deterministic analysis. Since the choice of step sizes then plays no essential role in the related proofs, we even obtain nonlinear A-stability, AN-stability, B-stability and BN-stability in the mean square sense for this implicit method with respect to appropriate test classes of moment-dissipative SDEs.

Nonlinear stochastic dynamical systems as ordinary stochastic differential equations and stochastic difference methods are in the center of this presentation in view of the asymptotical behaviour of their moments. We study the exponential p-th mean growth behaviour of their solutions as integration time tends to infinity. For this purpose, the concepts of nonlinear contractivity and stability exponents for moments are introduced as generalizations of well-known moment Lyapunov exponents of linear systems. Under appropriate monotonicity assumptions we gain uniform estimates of these exponents from above and below. Eventually, these concepts are generalized to describe the exponential growth behaviour along certain Lyapunov-type functionals.

We consider a scale discrete wavelet approach on the sphere based on spherical radial basis functions. If the generators of the wavelets have a compact support, the scale and detail spaces are finite-dimensional, so that the detail information of a function is determined by only finitely many wavelet coefficients for each scale. We describe a pyramid scheme for the recursive determination of the wavelet coefficients from level to level, starting from an initial approximation of a given function. Basic tools are integration formulas which are exact for functions up to a given polynomial degree and spherical convolutions.

Many interesting problems arise from the study of the behavior of fluids. From a theoretical point of view, fluid dynamics works with a well defined set of equations for which one expects to get a clear description of the solutions. Unfortunately, in general this is not easy, even if the many experiments performed in the field seem to indicate which path to follow. Some of the basic questions are still either partially or widely open. For example, we would like to have a better understanding of: 1. questions, for both bounded and unbounded domains, on regularity, uniqueness, and the long time behavior of the solutions; 2. how well solutions to the fluid equations fit the real flow. Depending on the type of data, most of the answers to these questions are known when we work in two dimensions. For solutions in three dimensions, in general, we have only partial answers.

In 1979, J.M. Bernardo argued heuristically that in the case of regular product experiments his information theoretic reference prior is equal to Jeffreys' prior. In this context, B.S. Clarke and A.R. Barron showed in 1994, that in the same class of experiments Jeffreys' prior is asymptotically optimal in the sense of Shannon, or, in Bayesian terms, Jeffreys' prior is asymptotically least favorable under Kullback Leibler risk. In the present paper, we prove, based on Clarke and Barron's results, that every sequence of Shannon optimal priors on a sequence of regular iid product experiments converges weakly to Jeffreys' prior. This means that for increasing sample size Kullback Leibler least favorable priors tend to Jeffreys' prior.
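In the simplest regular case, a Bernoulli experiment, Jeffreys' prior is proportional to the square root of the Fisher information, which gives the Beta(1/2, 1/2) density. The sketch below (an illustration of the definition, not taken from the paper) computes the unnormalized density and checks its normalizing constant, B(1/2, 1/2) = pi, numerically:

```python
import math

def fisher_info_bernoulli(theta):
    # Fisher information of a single Bernoulli(theta) observation.
    return 1.0 / (theta * (1.0 - theta))

def jeffreys_unnormalized(theta):
    # Jeffreys' prior density up to a constant: sqrt of the Fisher information,
    # i.e. theta^(-1/2) * (1 - theta)^(-1/2), the Beta(1/2, 1/2) kernel.
    return math.sqrt(fisher_info_bernoulli(theta))

def normalizing_constant(n=200_000):
    # Midpoint rule on (0, 1); the singularities at 0 and 1 are integrable.
    h = 1.0 / n
    return sum(jeffreys_unnormalized((k + 0.5) * h) for k in range(n)) * h
```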

We consider regularizing iterative procedures for ill-posed problems with random and nonrandom additive errors. The rate of mean-square convergence for iterative procedures with random errors is studied. A comparison theorem is established for the convergence of procedures with and without additive errors.

We utilize the generation of large atomic coherence to enhance the resonant nonlinear magneto-optic effect by several orders of magnitude, thereby eliminating power broadening and improving the fundamental signal-to-noise ratio. A proof-of-principle experiment is carried out in a dense vapor of Rb atoms. Detailed numerical calculations are in good agreement with the experimental results. Applications such as optical magnetometry or the search for violations of parity and time reversal symmetry are feasible.

We first show that ground term-rewriting systems can be completed in a polynomial number of rewriting steps, if the appropriate data structure for terms is used. We then apply this result to study the lengths of critical pair proofs in non-ground systems, and obtain bounds on the lengths of critical pair proofs in the non-ground case. We show how these bounds depend on the types of inference steps that are allowed in the proofs.

We will answer a question posed in [DJK91] and show that Huet's completion algorithm [Hu81] becomes incomplete, i.e. it may generate a term rewriting system that is not confluent, if it is modified so that the reduction ordering used for completion can be changed during completion, provided that the new ordering is compatible with the actual rules. In particular, we will show that this problem may not only arise if the modified completion algorithm does not terminate: even if the algorithm terminates without failure, the generated finite noetherian term rewriting system may be non-confluent. Most existing implementations of the Knuth-Bendix algorithm help the user choose a reduction ordering: if an unorientable equation is encountered, the user has many options, in particular the option to orient the equation manually. The integration of this feature is based on the widespread assumption that, if equations are oriented by hand during completion and the completion process terminates with success, then the generated finite system is a possibly non-terminating but locally confluent system (see e.g. [KZ89]). Our examples will show that this assumption is not true.

Here we consider the Kohonen algorithm with a constant learning rate as a Markov process evolving in a topological space. It is shown that the process is an irreducible and aperiodic T-chain, regardless of the dimension of both data space and network and of the special shape of the neighborhood function. Moreover, the validity of Doeblin's condition is proved. These results imply the convergence in distribution of the process to a finite invariant measure with a uniform geometric rate. In addition, we show the process is positive Harris recurrent, which enables us to use statistical devices to measure its centrality and variability as time goes to infinity.
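The process in question is the standard Kohonen update with a constant learning rate. A minimal sketch for a one-dimensional chain of units with a Gaussian neighborhood (parameter names are illustrative):

```python
import math

def kohonen_step(weights, x, eps=0.1, sigma=1.0):
    # One constant-learning-rate Kohonen update on a 1-D chain of units:
    # every unit moves toward the input x, weighted by a Gaussian
    # neighborhood centered at the best-matching (winner) unit.
    winner = min(range(len(weights)), key=lambda j: abs(weights[j] - x))
    for j in range(len(weights)):
        h = math.exp(-((j - winner) ** 2) / (2.0 * sigma ** 2))
        weights[j] += eps * h * (x - weights[j])
    return weights
```

Because eps stays constant, the weight configuration never freezes; the Markov chain keeps moving, which is exactly the setting in which convergence in distribution to an invariant measure, rather than almost sure convergence, is the natural question.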

We consider wavelet estimation of the time-dependent (evolutionary) power spectrum of a locally stationary time series. Allowing for departures from stationarity proves useful for modelling, e.g., transient phenomena, quasi-oscillating behaviour or spectrum modulation. In our work wavelets are used to provide an adaptive local smoothing of a short-time periodogram in the time-frequency plane. For this, in contrast to classical nonparametric (linear) approaches, we use nonlinear thresholding of the empirical wavelet coefficients of the evolutionary spectrum. We show how these techniques allow both adaptive reconstruction of the local structure in the time-frequency plane and denoising of the resulting estimates. To this end a threshold choice is derived which is motivated by minimax properties with respect to the integrated mean squared error. Our approach is based on a 2-d orthogonal wavelet transform modified by using a cardinal Lagrange interpolation function on the finest scale. As an example, we apply our procedure to a time-varying spectrum motivated from mobile radio propagation.
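The nonlinear thresholding step can be sketched as follows, here with soft thresholding for illustration (the paper's actual threshold choice is the minimax-motivated one described above):

```python
import math

def soft_threshold(coeffs, lam):
    # Soft thresholding: coefficients below lam in magnitude are set to
    # zero; the remaining ones are shrunk toward zero by lam.
    return [math.copysign(max(abs(c) - lam, 0.0), c) for c in coeffs]
```

Applied to the empirical wavelet coefficients of the periodogram, this kills small coefficients attributed to noise while keeping (shrunken) large coefficients that carry the local time-frequency structure.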

Rules are an important knowledge representation formalism in constructive problem solving. On the other hand, object orientation is an essential key technology for maintaining large knowledge bases as well as software applications. Trying to take advantage of the benefits of both paradigms, we integrated Prolog and Smalltalk to build a common base architecture for problem solving. This approach has proven to be useful in the development of two knowledge-based systems for planning and configuration design (CAPlan and Idax). Both applications use Prolog as an efficient computational source for the evaluation of knowledge represented as rules.

Locally Maximal Clones II
(1999)

Epitaxial growth of metastable Pd(001) at high deposition temperatures up to a critical thickness of 6 monolayers on bcc-Fe(001) is reported, the critical thickness depending dramatically on the deposition temperature. For larger thicknesses the Pd film undergoes a roughening transition with strain relaxation by forming a top polycrystalline layer. These results allow a correlation to be made between previously reported unusual magnetic properties of Fe/Pd double layers and the crystallographic structure of the Pd overlayer.

A new approach for modelling time that does not rely on the concept of a clock is proposed. In order to establish a notion of time, system behaviour is represented as a joint progression of multiple threads of control, which satisfies a certain set of axioms. We show that the clock-independent time model is related to the well-known concept of a global clock and argue that both approaches establish the same notion of time.

Several activities around the world aim at integrating object-oriented data models with relational ones in order to improve database management systems. As a first result of these activities, object-relational database management systems (ORDBMS) are already commercially available and, simultaneously, are subject to several research projects. This (position) paper reports on our activities in exploiting object-relational database technology for establishing repository manager functionality supporting software engineering (SE) processes. We argue that some of the key features of ORDBMS can directly be exploited to fulfill many of the needs of SE processes. Thus, ORDBMS, as we think, are much better suited to support SE applications than any others. Nevertheless, additional functionality, e. g., providing adequate version management, is required in order to gain a completely satisfying SE repository. In order to remain flexible, we have developed a generative approach for providing this additional functionality. It remains to be seen whether this approach, in turn, can effectively exploit ORDBMS features. This paper, therefore, wants to show that ORDBMS can substantially contribute to both establishing and running SE repositories.

Patdex is an expert system which carries out case-based reasoning for the fault diagnosis of complex machines. It is integrated in the Moltke workbench for technical diagnosis, which was developed at the University of Kaiserslautern over the past years. Moltke contains other parts as well, in particular a model-based approach; Patdex is where essentially the heuristic features are located. The use of cases also plays an important role for knowledge acquisition. In this paper we describe Patdex from a principal point of view and embed its main concepts into a theoretical framework.

The background of this paper is the area of case-based reasoning. This is a reasoning technique where one tries to use the solution of some problem which has been solved earlier in order to obtain a solution to a given problem. An example of a type of problem where this kind of reasoning occurs very often is the diagnosis of diseases or faults in technical systems. In abstract terms this reduces to a classification task. A difficulty arises when one has not just one solved problem but very many. These are called "cases" and they are stored in the case base. One then has to select an appropriate case, which means finding one that is "similar" to the actual problem. The notion of similarity has raised much interest in this context. We will first introduce a mathematical framework and define some basic concepts. Then we will study some abstract phenomena in this area and finally present some methods developed and realized in a system at the University of Kaiserslautern.

This paper deals with the robust manipulation of deformable linear objects such as hoses or wires. We propose manipulation based on the qualitative contact state between the deformable workpiece and a rigid environment. First, we give an enumeration of possible contact states and discuss the main characteristics of each state. Second, we investigate the transitions which are possible between the contact states and derive criteria and conditions for each of them. Finally, we apply the concept of contact states and state transitions to the description of a typical assembly task.

This paper deals with the problem of picking up deformable linear workpieces such as cables or ropes with an industrial robot. First, we give a motivation and problem definition. Based on a brief conceptual discussion of possible approaches, we derive an algorithm for picking up hanging deformable linear objects using two light barriers as sensor system. For this hardware, a skill-based approach is described and the parameters and major influence factors are discussed. In an experimental study, the feasibility and reliability under diverse conditions are investigated. The algorithm is found to be very reliable, if certain boundary conditions are met.

In this paper, we investigate the efficient simulation of deformable linear objects. Based on the state of the art, we extend the principle of minimizing the potential energy by considering plastic deformation and describe a novel approach for treating workpiece dynamics. The major influence factors on precision and computation time are identified and investigated experimentally. Finally, we discuss the usage of parallel processing in order to reduce the computation time.

In this report, we first propose a dichotomy of topology preserving network models based on the degree to which the structure of a network is determined by the given task. We then take a closer look at one of these groups and investigate the information that is contained in the graph structure of a topology preserving neural network. The task we have in mind is the use of the network's topology for the retrieval of nearest neighbors of a neuron or a query, as is of importance, e.g., in medical diagnosis systems. In general considerations, we propose certain properties of the structure and formulate the results that can be expected from network interpretation. From the results we conclude that both topology preservation and neuron distribution are highly influential for the network semantics. After a short survey of hierarchical models for data analysis, we propose a new network model that fits both needs. This so-called SplitNet model dynamically constructs a hierarchically structured network that provides interpretability by neuron distribution, network topology and hierarchy of the network layers. We present empirical results for this new model and demonstrate its application in the medical domain of nerve lesion diagnosis. Finally, we explain how the interpretation of the hierarchy in models like SplitNet can be understood in the context of the integration of symbolic and connectionist learning.

This technical report is a compilation of several papers on the task of solving diagnostic problems with the help of topology preserving maps. It first reviews the application of Kohonen's Self-Organizing Feature Map (SOFM) to a technical diagnosis task, namely fault detection in CNC machines with the KoDiag system [RW93], [RW94]. To address problems that arise in coding attribute values, we then introduce fuzzy coding, similarity assignment and weight updating schemes for three crucial data types (continuous values, ordered and unordered symbols). These techniques result in a SOFM-type network based on user-defined local similarities, thus being able to incorporate a priori knowledge about the domain [Rah95].
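The basic SOFM training step underlying this work can be sketched as follows; KoDiag's fuzzy coding and user-defined local similarities are not reproduced here, and the learning rate, neighborhood width, and toy data are illustrative:

```python
# Minimal Kohonen SOFM on a 1-d neuron chain: each input pulls its best
# matching unit (and, damped by a Gaussian neighborhood, its chain
# neighbors) toward itself.
import math, random

random.seed(0)
n_neurons = 10
weights = [[random.random(), random.random()] for _ in range(n_neurons)]

def bmu(x):
    """Index of the best matching unit for input x."""
    return min(range(n_neurons),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def quant_error(data):
    """Mean distance from each input to its best matching unit."""
    return sum(math.dist(weights[bmu(x)], x) for x in data) / len(data)

def train(data, epochs=50, lr=0.3, sigma=2.0):
    for _ in range(epochs):
        for x in data:
            b = bmu(x)
            for i in range(n_neurons):
                h = math.exp(-((i - b) ** 2) / (2 * sigma ** 2))  # neighborhood
                weights[i] = [w + lr * h * (v - w)
                              for w, v in zip(weights[i], x)]

# toy 2-d sensor readings forming two fault clusters
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
qe_before = quant_error(data)
train(data)
qe_after = quant_error(data)
print(qe_after < qe_before)   # training moves the neurons toward the data
```

The report's contribution starts where this sketch ends: replacing the Euclidean comparison in `bmu` with user-defined local similarities for continuous, ordered, and unordered attributes.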

Wall energy and wall thickness of exchange-coupled rare-earth transition-metal triple layer stacks
(1999)

The room-temperature wall energy σw = 4.0×10⁻³ J/m² of an exchange-coupled Tb19.6Fe74.7Co5.7/Dy28.5Fe43.2Co28.3 double layer stack can be reduced by introducing, in between both layers, a soft magnetic intermediate layer exhibiting a significantly smaller anisotropy compared to Tb-FeCo and Dy-FeCo. σw will decrease linearly with increasing intermediate layer thickness, dIL, until the wall is completely located within the intermediate layer for dIL ≥ dw, where dw denotes the wall thickness. Thus, dw can be obtained from the plot of σw versus dIL. We determined σw and dw on Gd-FeCo intermediate layers with different anisotropy behavior (perpendicular and in-plane easy axis) and compared the results with data obtained from Brillouin light-scattering measurements, from which the exchange stiffness, A, and the uniaxial anisotropy, Ku, could be determined. With the knowledge of A and Ku, wall energy and thickness were calculated and showed excellent agreement with the magnetic measurements. A ten times smaller perpendicular anisotropy of Gd28.1Fe71.9 in comparison to Tb-FeCo and Dy-FeCo resulted in a much smaller σw = 1.1×10⁻³ J/m² and dw = 24 nm at 300 K. A Gd34.1Fe61.4Co4.5 layer with in-plane anisotropy at room temperature showed a further reduced σw = 0.3×10⁻³ J/m² and dw = 17 nm. The smaller wall energy was a result of a different wall structure compared to perpendicular layers.

We present a framework for the integration of the Knuth-Bendix completion algorithm with narrowing methods, compiled rewrite rules, and a heuristic difference reduction mechanism for paramodulation. The possibility of embedding theory unification algorithms into this framework is outlined. Results are presented and discussed for several examples of equality reasoning problems in the context of an actual implementation of an automated theorem proving system (the Mkrp-system) and a fast C implementation of the completion procedure. The Mkrp-system is based on the clause graph resolution procedure. The thesis shows the indispensability of the constraining effects of completion and rewriting for equality reasoning in general and quantifies the amount of speed-up caused by various enhancements of the basic method. The simplicity of the superposition inference rule allows the construction of an abstract machine for completion, which is presented together with computation times for a concrete implementation.

Accelerating the maturation process within the software engineering discipline may result in boosts of development productivity. One way to enable this acceleration is to develop tools and processes to mimic evolution of traditional engineering disciplines. Principles established in traditional engineering disciplines represent high-level guidance to constructing these tools and processes. This paper discusses two principles found in the traditional engineering disciplines and how these principles can apply to mature the software engineering discipline. The discussion is concretized through description of the Collaborative Management Environment, a software system under collaborative development among several national laboratories.

Recent studies on planning, comparing plan re-use and plan generation, have shown that both the above tasks may have the same degree of computational complexity, even if we deal with very similar problems. The aim of this paper is to show that the same kind of results apply also for diagnosis. We propose a theoretical complexity analysis coupled with some experimental tests, intended to evaluate the adequacy of adaptation strategies which re-use the solutions of past diagnostic problems in order to build a solution to the problem to be solved. Results of such analysis show that, even if diagnosis re-use falls into the same complexity class as diagnosis generation (they are both NP-complete problems), practical advantages can be obtained by exploiting a hybrid architecture combining case-based and model-based diagnostic problem solving in a unifying framework.

Comparison of kinetic theory and discrete element schemes for modelling granular Couette flows
(1999)

Discrete element based simulations of granular flow in a 2d velocity space are compared with a particle code that solves kinetic granular flow equations in two and three dimensions. The binary collisions of the latter are governed by the same forces as for the discrete elements. Both methods are applied to a granular shear flow of equally sized discs and spheres. The two dimensional implementation of the kinetic approach shows excellent agreement with the results of the discrete element simulations. When changing to a three dimensional velocity space, the qualitative features of the flow are maintained. However, some flow properties change quantitatively.

Abstract: We aim to establish a link between path-integral formulations of quantum and classical field theories via diagram expansions. This link should result in an independent constructive characterisation of the measure in Feynman path integrals in terms of a stochastic differential equation (SDE) and also in the possibility of applying methods of quantum field theory to classical stochastic problems. As a first step we derive in the present paper a formal solution to an arbitrary c-number SDE in a form which coincides with that of Wick's theorem for interacting bosonic quantum fields. We show that the choice of stochastic calculus in the SDE may be regarded as a result of regularisation, which in turn removes ultraviolet divergences from the corresponding diagram series.

We show that the solution to an arbitrary c-number stochastic differential equation (SDE) can be represented as a diagram series. Both the diagram rules and the properties of the graphical elements reflect causality properties of the SDE and this series is therefore called a causal diagram series. We also discuss the converse problem, i.e. how to construct an SDE of which a formal solution is a given causal diagram series. This then allows for a nonperturbative summation of the diagram series by solving this SDE, numerically or analytically.

This paper is concerned with numerical algorithms for the bipolar quantum drift diffusion model. For the thermal equilibrium case a quasi-gradient method minimizing the energy functional is introduced and strong convergence is proven. The computation of current-voltage characteristics is performed by means of an extended Gummel iteration. It is shown that the involved fixed point mapping is a contraction for small applied voltages. In this case the model equations are uniquely solvable and convergence of the proposed iteration scheme follows. Numerical simulations of a one-dimensional resonant tunneling diode are presented. The computed current-voltage characteristics are in good qualitative agreement with experimental measurements. The appearance of negative differential resistances is verified for the first time in a quantum drift diffusion model.
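The convergence argument above rests on the Banach fixed-point theorem: the decoupled Gummel map is a contraction for small bias, so plain Picard iteration converges. A toy scalar stand-in (the update rule below is invented for illustration, not the actual model equations) shows the mechanism:

```python
# A contraction mapping iterated to its fixed point, mimicking the structure
# of a decoupled Gummel loop (potential update from the previous iterate).
import math

def gummel_step(phi, bias=0.05):
    # hypothetical decoupled update; |d/dphi| <= 0.3 < 1, so it contracts
    return bias + 0.3 * math.tanh(phi)

def fixed_point(step, x0=0.0, tol=1e-12, max_iter=200):
    x = x0
    for k in range(max_iter):
        x_new = step(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    raise RuntimeError("no convergence")

phi_star, iters = fixed_point(gummel_step)
print(abs(gummel_step(phi_star) - phi_star) < 1e-10)  # a genuine fixed point
```

The geometric error decay (factor 0.3 per sweep here) is what a small-bias contraction estimate buys: the iteration count stays modest and independent of the starting guess.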

Integrated project management means that design and planning are interleaved with plan execution, allowing both the design and plan to be changed as necessary. This requires that the right effects of change are propagated through the plan and design. When this is distributed among designers and planners, no one may have all of the information to perform such propagation and it is important to identify what effects should be propagated to whom when. We describe a set of dependencies among plan and design elements that allow such notification by a set of message-passing software agents. The result is to provide a novel level of computer support for complex projects.

Cooperative decision making involves a continuous process of assessing the validity of data, information and knowledge acquired and inferred by the colleagues; that is, the shared knowledge space must be transparent. The ACCORD methodology provides an interpretation framework for the mapping of domain facts - constituting the world model of the expert - onto conceptual models, which can be expressed in formal representations. The ACCORD-BPM framework allows a stepwise reconstruction, in arbitrary order, of the problem solving competence of BPM experts as a prerequisite for an appropriate architecture of both BPM knowledge bases and the BPM "reasoning device".

An a posteriori stopping rule connected with monitoring the norm of the second residual is introduced for Brakhage's implicit nonstationary iteration method, applied to ill-posed problems involving linear operators with closed range. It is also shown that for some classes of equations with such operators the algorithm consisting in the combination of Brakhage's method with a new discretization scheme is order optimal in the sense of Information Complexity.
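Brakhage's implicit method itself is not reproduced here; as a generic illustration of a residual-based a posteriori stopping rule, the sketch below applies the simpler Landweber iteration and stops as soon as the residual norm drops below tau * delta (a discrepancy-principle rule), with delta the data noise level. The 2x2 system and noise level are invented for illustration:

```python
# Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k) with an
# a posteriori stopping rule based on the residual norm.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def landweber(A, y, delta, tau=1.1, omega=0.4, max_iter=10000):
    n = len(A[0])
    x = [0.0] * n
    At = [list(col) for col in zip(*A)]          # transpose of A
    for k in range(max_iter):
        r = [yi - ri for yi, ri in zip(y, matvec(A, x))]
        if sum(ri * ri for ri in r) ** 0.5 <= tau * delta:   # stopping rule
            return x, k
        g = matvec(At, r)
        x = [xi + omega * gi for xi, gi in zip(x, g)]
    return x, max_iter

# mildly ill-conditioned diagonal example with noisy right-hand side
A = [[1.0, 0.0], [0.0, 0.1]]
y = [1.0, 0.1 + 0.01]        # exact data for x = (1, 1) plus noise, delta = 0.01
x, stopped_at = landweber(A, y, delta=0.01)
print(stopped_at < 10000)     # the rule halts the iteration early
```

Stopping when the residual reaches the noise level is what prevents the iteration from semiconverging into the noise, which is the role the monitored residual norm plays in the report's rule as well.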

In this paper we discuss a special class of regularization methods for solving the satellite gravity gradiometry problem in a spherical framework based on band-limited spherical regularization wavelets. Considering such wavelets as a result of a combination of some regularization methods with a Galerkin discretization based on the spherical harmonic system, we obtain error estimates for the regularized solutions as well as estimates for the regularization parameters and the parameters of band-limitation.

Tomorrow's ways of doing business are likely to be far more challenging and interesting than today's due to technological advances that allow people to operate or cooperate anytime, anywhere. Today's workers are becoming mobile without the need of a work home base. Organizations are evolving from the hierarchical lines of control and information flow into more dynamic and flexible structures, where "teams" and individuals are the building blocks for forming task forces and work groups to deal with short and long term project tasks, issues and opportunities. Those individuals and teams will collaborate from their mobile desktops, whether at their offices, home or on the road. A revised paradigm for conducting small and large-scale development and integration is emerging, sometimes called the "virtual enterprise", both in the military and industrial environments. This new paradigm supports communication, cooperation and collaboration of geographically dispersed teams. In this paper we discuss experiences with specific technologies that were investigated by TRW's Infrastructure for Collaboration among Distributed Teams (ICaDT) project; an Independent Research and Development (IR&D) effort.

Wicksell's corpuscle problem deals with the estimation of the size distribution of a population of particles, all having the same shape, using a lower-dimensional sampling probe. This problem was originally formulated for particle systems occurring in the life sciences, but its solution is of current and increasing interest in materials science. From a mathematical point of view, Wicksell's problem is an inverse problem where the size distribution of interest is the unknown part of a Volterra equation. The problem is often regarded as ill-posed, because the structure of the integrand implies unstable numerical solutions. The accuracy of the numerical solutions is considered here using the condition number, which allows one to compare different numerical methods with different (equidistant) class sizes and which indicates, as one result, that a finite section thickness of the probe reduces the numerical problems. Furthermore, the relative error of estimation is computed, which can be split into two parts. One part consists of the relative discretization error, which increases with increasing class size; the second part is the relative statistical error, which increases with decreasing class size. For both parts, upper bounds can be given, and their sum indicates an optimal class width depending on some specific constants.
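The discretized problem and its conditioning can be sketched with a standard Saltykov-type discretization of the sphere-to-circle kernel (this is a generic construction, not necessarily the exact scheme analyzed in the paper):

```python
# Discretized Wicksell kernel: an upper-triangular system linking sphere size
# classes to observed section-circle size classes, and a crude infinity-norm
# condition number showing that finer classes worsen the conditioning.
import math

def wicksell_matrix(n):
    """A[i][j]: probability that a random planar section of a sphere in size
    class j yields a circle in size class i (unit maximum radius)."""
    h = 1.0 / n
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        R = (j + 1) * h
        for i in range(j + 1):
            a = math.sqrt(max(R * R - (i * h) ** 2, 0.0))
            b = math.sqrt(max(R * R - ((i + 1) * h) ** 2, 0.0))
            A[i][j] = (a - b) / R
    return A

def inv_upper(A):
    """Explicit inverse of an upper-triangular matrix by back substitution."""
    n = len(A)
    ident = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    X = [[0.0] * n for _ in range(n)]
    for c in range(n):
        for i in reversed(range(n)):
            s = ident[i][c] - sum(A[i][k] * X[k][c] for k in range(i + 1, n))
            X[i][c] = s / A[i][i]
    return X

def cond_inf(A):
    norm = lambda M: max(sum(abs(v) for v in row) for row in M)
    return norm(A) * norm(inv_upper(A))

# halving the class width noticeably increases the condition number
print(cond_inf(wicksell_matrix(20)) > cond_inf(wicksell_matrix(5)))
```

This mirrors the trade-off stated in the abstract: small classes inflate the condition number (statistical instability), while large classes inflate the discretization error.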

The paper explores the role of artificial intelligence techniques in the development of an enhanced software project management tool, which takes account of the emerging requirement for support systems to address the increasing trend towards distributed, multi-platform software development projects. In addressing these aims, this research devised a novel architecture and framework to serve as the basis of an intelligent assistance system for software project managers in the planning and managing of a software project. This paper also describes the construction of a prototype system implementing this architecture and the results of a series of user trials on this prototype system.

An agent-based approach to managing distributed, multi-platform software development projects
(1999)

This paper describes work undertaken within the context of the P3 (Project and Process Prompter) Project which aims to develop the Prompter tool, a 'decision-support tool to assist in the planning and managing of a software development project'. Prompter will have the ability to help software project managers to assimilate best practice and 'know how' in the field of software project management and incorporate expert critiquing to assist with solving the complex problems associated with software project management. This paper focuses on Prompter's agent-based approach to tackling the problems of distributed, platform-independent support.

Starting from the Hamiltonian operator of the noncompensated two-sublattice model of a small antiferromagnetic particle, we derive the effective Lagrangian of a biaxial antiferromagnetic particle in an external magnetic field with the help of spin-coherent-state path integrals. Two unequal level shifts induced by tunneling through two types of barriers are obtained using the instanton method. The energy spectrum is found from Bloch theory, regarding the periodic potential as a superlattice. The external magnetic field indeed removes Kramers' degeneracy; however, a new quenching of the energy splitting depending on the applied magnetic field is observed for both integer and half-integer spins due to the quantum interference between transitions through the two types of barriers.

An approach to generating all efficient solutions of multiple objective programs with piecewise linear objective functions and linear constraints is presented. The approach is based on the decomposition of the feasible set into subsets, referred to as cells, so that the original problem reduces to a series of linear multiple objective programs over the cells. The concepts of cell-efficiency and complex-efficiency are introduced and their relationship with efficiency is examined. A generic algorithm for finding efficient solutions is proposed. Applications in location theory as well as in worst case analysis are highlighted.
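The cell idea can be illustrated on a toy one-dimensional bi-objective instance: the kinks of the piecewise linear objectives induce the cells (here, intervals), and on each cell both objectives are linear, so efficient candidates can be read off at cell vertices. (Whole segments may also be efficient; for brevity only the vertices are inspected here, and the objectives are invented for illustration.)

```python
# Two piecewise linear objectives on [0, 10]; cells are the intervals between
# consecutive kinks, and the vertices are Pareto-filtered.
f1 = lambda x: abs(x - 2) + abs(x - 6)
f2 = lambda x: abs(x - 8)

cells = [0, 2, 6, 8, 10]       # cell boundaries: domain ends plus all kinks

def dominated(x, pts):
    """Pareto dominance check of x against a candidate set."""
    return any(f1(y) <= f1(x) and f2(y) <= f2(x) and
               (f1(y) < f1(x) or f2(y) < f2(x)) for y in pts)

efficient = [x for x in cells if not dominated(x, cells)]
print(efficient)   # → [6, 8]: vertex representatives of the Pareto set
```

Here the full Pareto set is the segment [6, 8] between the two surviving vertices, which is exactly the kind of cell-wise structure the decomposition exploits.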

In this paper we consider the problem of optimizing a piecewise-linear objective function over a non-convex domain. In particular we do not allow the solution to lie in the interior of a prespecified region R. We discuss the geometrical properties of this problem and present algorithms based on combinatorial arguments. In addition we show how to construct sets R with quite complicated shapes while maintaining the combinatorial properties.

In continuous location problems we are given a set of existing facilities and we are looking for the location of one or several new facilities. In the classical approaches, weights are assigned to existing facilities expressing the importance of the new facilities for the existing ones. In this paper, we consider a pointwise defined objective function where the weights are assigned to the existing facilities depending on the location of the new facility. This approach is shown to be a generalization of the median, center and centdian objective functions. In addition, this approach allows the formulation of completely new location models. Efficient algorithms as well as structural results for this algebraic approach to location problems are presented. Extensions to the multifacility and restricted case are also considered.
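The claimed generalization can be made concrete in a small sketch: with constant weights the pointwise objective reduces to the classical median function, while location-dependent weights that single out the farthest existing facility recover the center function. The facility coordinates below are invented for illustration:

```python
# Pointwise location objective f(x) = sum_i w_i(x) * d(x, a_i), where the
# weight of each existing facility a_i may depend on the new location x.
import math

facilities = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]

def dist(x, a):
    return math.hypot(x[0] - a[0], x[1] - a[1])

def objective(x, weight):
    return sum(weight(x, a) * dist(x, a) for a in facilities)

# constant weights recover the classical 1-median objective ...
median_w = lambda x, a: 1.0
# ... while weights selecting the farthest facility recover the center
# objective (assuming the maximum is attained by a single facility)
center_w = lambda x, a: 1.0 if dist(x, a) == max(dist(x, b) for b in facilities) else 0.0

x = (1.0, 1.0)
print(round(objective(x, median_w), 3))                       # sum of distances
print(objective(x, center_w) == max(dist(x, a) for a in facilities))
```

The centdian function fits the same pattern as a convex combination of the two weight rules, which is why all three classical objectives appear as special cases of the pointwise formulation.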

In this paper we deal with the determination of the whole set of Pareto-solutions of location problems with respect to Q general criteria. These criteria include as particular instances median, center or cent-dian objective functions. The paper characterizes the set of Pareto-solutions of all these multicriteria problems. An efficient algorithm for the planar case is developed and its complexity is established. The proposed approach is more general than the previously published approaches to multicriteria location problems and includes almost all of them as particular instances.

In this paper we deal with the determination of the whole set of Pareto-solutions of location problems with respect to Q general criteria. These criteria include median, center or cent-dian objective functions as particular instances. The paper characterizes the set of Pareto-solutions of these multicriteria problems. An efficient algorithm for the planar case is developed and its complexity is established. Extensions to higher dimensions as well as to the non-convex case are also considered. The proposed approach is more general than the previously published approaches to multi-criteria location problems and includes almost all of them as particular instances.