### Refine

#### Year of publication

- 1999 (425)

#### Document Type

- Preprint (351)
- Article (66)
- Report (4)
- Diploma Thesis (1)
- Lecture (1)
- Periodical Part (1)
- Study Thesis (1)

#### Language

- English (425)

#### Keywords

- AG-RESY (6)
- Case-Based Reasoning (6)
- HANDFLEX (5)
- Location Theory (5)
- PARO (5)
- case-based problem solving (5)
- Abstraction (4)
- Knowledge Acquisition (4)
- resolution (4)
- Fallbasiertes Schließen (3)
- Internet (3)
- Knowledge acquisition (3)
- Multicriteria Optimization (3)
- Requirements/Specifications (3)
- case-based reasoning (3)
- distributed software development (3)
- distributed software development process (3)
- explanation-based learning (3)
- problem solving (3)
- theorem proving (3)
- Algebraic Optimization (2)
- Brillouin light scattering spectroscopy (2)
- CAPlan (2)
- Combinatorial Optimization (2)
- Deduction (2)
- Fallbasiertes Schliessen (2)
- Geometrical Algorithms (2)
- Kinetic Schemes (2)
- MOLTKE-Projekt (2)
- Network Protocols (2)
- Partial functions (2)
- SDL (2)
- Software Engineering (2)
- Wannier-Stark systems (2)
- Wissensakquisition (2)
- World Wide Web (2)
- analogy (2)
- application (2)
- average density (2)
- building automation (2)
- case based reasoning (2)
- conservative extension (2)
- consistency (2)
- design patterns (2)
- entropy (2)
- formal specification (2)
- frames (2)
- incompressible Navier-Stokes equations (2)
- lattice Boltzmann method (2)
- learning system (2)
- localization (2)
- low Mach number limit (2)
- many-valued logic (2)
- problem formulation (2)
- quantum mechanics (2)
- requirements engineering (2)
- resonances (2)
- reuse (2)
- spin wave quantization (2)
- tactics (2)
- 90° orientation (1)
- Abelian groups (1)
- Ad-hoc workflow (1)
- Adaption (1)
- Agents (1)
- Algebraic optimization (1)
- Analysis (1)
- Analytic semigroup (1)
- Applications (1)
- Approximation (1)
- Approximation Algorithms (1)
- Automated Reasoning (1)
- Automated theorem proving (1)
- Autonomous mobile robots (1)
- Autoregression (1)
- Banach lattice (1)
- Bayes risk (1)
- Bisector (1)
- Blackboard architecture (1)
- Brillouin light scattering (1)
- Brownian motion (1)
- CNC-Maschine (1)
- COMOKIT (1)
- Case Study (1)
- Case-based problem solving (1)
- Causal Ordering (1)
- Causality (1)
- Chapman Enskog distributions (1)
- Chorin's projection scheme (1)
- Classification (1)
- Classification Tasks (1)
- CoMo-Kit (1)
- Collocation Method plus (1)
- Complexity and performance of numerical algorithms (1)
- Computational Fluid Dynamics (1)
- Computer Assisted Tomograp (1)
- Computer supported cooperative work (1)
- Concept mapping (1)
- Concept maps (1)
- Constraint Graphs (1)
- Contract net (1)
- Control Design Styles (1)
- Convexity (1)
- Cooperative decision making (1)
- Correlation (1)
- Cosine function (1)
- Coxeter groups (1)
- Crofton's intersection formulae (1)
- Curie temperature (1)
- Damon-Eshbach spin wave modes (1)
- Decision Making (1)
- Declarative and Procedural Knowledge (1)
- Design Patterns (1)
- Design Styles (1)
- Diagnosesystem (1)
- Difference Reduction (1)
- Differential Cross-Sections (1)
- Discrete decision problems (1)
- Discrete velocity models (1)
- Distributed Computation (1)
- Distributed Deb (1)
- Distributed Multimedia Applications (1)
- Distributed Software Development (1)
- Distributed Software Development Projects (1)
- Distributed System (1)
- Distributed software development support (1)
- Distributed systems (1)
- EBG (1)
- Ecommerce (1)
- Elastic properties (1)
- Equality reasoning (1)
- Equational Reasoning (1)
- Experience Database (1)
- Experimental Data (1)
- Extensibility (1)
- Feature Technology (1)
- Ferromagnetism (1)
- Filter-Diagonalization (1)
- Forbidden Regions (1)
- Fredholm integral equation of the second kind (1)
- Fuzzy Programming (1)
- GPS-satellite-to-satellite tracking (1)
- Gauss-Manin connection (1)
- Generic Methods (1)
- Global Optimization (1)
- Global Predicate Detection (1)
- Global Software Highway (1)
- Global optimization (1)
- HOT (1)
- HTE (1)
- HTML (1)
- Hadwiger's recursive definition of the Euler number (1)
- Hamiltonian groups (1)
- Hardy space (1)
- Helmholtz decomposition (1)
- High frequency switching (1)
- Homogeneous Relaxation (1)
- INRECA (1)
- Ill-posed Problems (1)
- Improperly posed problems (1)
- Impulse control (1)
- Intelligent Agents (1)
- Intelligent agents (1)
- Interacting Magnetic Dots and Wires (1)
- Interleaved Planning (1)
- Iterative Methods (1)
- Java (1)
- Jeffreys' prior (1)
- Kinetic Schemes (1)
- Kinetic theory (1)
- Knuth-Bendix completion algorithm (1)
- Kullback Leibler distance (1)
- Lagrangian Functions (1)
- Language Constructs (1)
- Lattice Boltzmann Method (1)
- Lattice Boltzmann methods (1)
- Lexicographic Order (1)
- Lexicographic max-ordering (1)
- Linear membership function (1)
- Local completeness (1)
- Location theory (1)
- Logic Design (1)
- Logical Time (1)
- MHEG (1)
- MOO (1)
- Map Building (1)
- Markov process (1)
- Maturity of Software Engineering (1)
- Max-Ordering (1)
- Mechanical Engineering (1)
- Methods (1)
- Mie representation (1)
- Minkowski space (1)
- Mn-Si-C alloy films (1)
- Multicriteria Location (1)
- Multicriteria optimization (1)
- Multiple Criteria (1)
- Multiple Objective Programs (1)
- NP-completeness (1)
- Navier-Stokes equations (1)
- Nonstationary processes (1)
- Numerical Simulation (1)
- Object-Relational DataBase Management Systems (ORDBMS) (1)
- Object-Relational Database Systems (1)
- Open-Source (1)
- Optimization (1)
- PATDEX (1)
- Palm distribution (1)
- Palm distributions (1)
- Pareto Optimality (1)
- Pareto Points (1)
- Planning and Verification (1)
- Polynomial Eigenfunctions (1)
- Position- and Orientation Estimation (1)
- Potential transform (1)
- Problem Solvers (1)
- Process Management (1)
- Process support (1)
- Produktionsdesign (1)
- Project Management (1)
- Pullen Edmonds system (1)
- Quality Improvement Paradigm (QIP) (1)
- Quantum Chaos (1)
- Quantum mechanics (1)
- Random Errors (1)
- Rarefied Polyatomic Gases (1)
- Rectifiability (1)
- Repositories (1)
- Requirements engineering (1)
- Resonant tunneling diode (1)
- Reuse (1)
- SDL-pattern a (1)
- SKALP (1)
- SQUID magnetometry (1)
- Saddle Points (1)
- Sandercock-type multipath tandem Fabry-Perot interferometer (1)
- Scalar type operator (1)
- Self-Referencing (1)
- Semantics of Programming Languages (1)
- Shannon capacity (1)
- Shannon optimal priors (1)
- Shock Wave Problem (1)
- Similarity Assessment (1)
- Smalltalk (1)
- Software Agents (1)
- Software development (1)
- Software development environment (1)
- Software engineering (1)
- Spatial Binary Images (1)
- Spectral Analysis (1)
- Square-mean Convergence (1)
- Stark systems (1)
- Stoner-like magnetic particles (1)
- Structural Adaptation (1)
- Structuring Approach (1)
- Tactics (1)
- Term rewriting systems (1)
- Theorem of Plemelj-Privalov (1)
- Topology Preserving Networks (1)
- Translation planes (1)
- Triangular fuzzy number (1)
- Vector Time (1)
- Vector-valued holomorphic function (1)
- Vector optimization (1)
- Virtual Corporation (1)
- Virtual Software Projects (1)
- Voronoi diagram (1)
- Wannier-Bloch states (1)
- Wide Area Multimedia Group Interaction (1)
- Wissenserwerb (1)
- Word problem (1)
- Workflow Replication (1)
- World-Wide Web (1)
- abstract ODE (1)
- abstract description (1)
- adaption (1)
- analogical reasoning (1)
- anisotropic coupling between magnetic i (1)
- anisotropic coupling mechanism (1)
- approximation methods (1)
- arbitrary function (1)
- arrays of magnetic dots and wires (1)
- artificial intelligence (1)
- assembly sequence design (1)
- automated code generation (1)
- automated computer learning (1)
- automated synchronization (1)
- autonomes Lernen (1)
- autonomous learning (1)
- average densities (1)
- bcc-Fe(001) (1)
- bicriterion path problems (1)
- bipolar quantum drift diffusion model (1)
- biquadratic interlayer coupling (1)
- bootstrap (1)
- brillouin light scattering (1)
- business process modelling (1)
- cancer (1)
- case-based planner (1)
- case-based planning (1)
- cash management (1)
- center hyperplane (1)
- centrally symmetric polytope (1)
- chaotic dynamics (1)
- co-learning (1)
- combined systems with sha (1)
- common transversal (1)
- communication architectures (1)
- communication protocols (1)
- communication subsystem (1)
- compilation (1)
- complete presentations (1)
- completeness (1)
- complex energ (1)
- complex energy resonances (1)
- comprehensive reuse (1)
- compressible Navier Stokes equations (1)
- computer aided planning (1)
- computer control (1)
- computer-supported cooperative work (1)
- concept representation (1)
- conceptual representation (1)
- concurrent software (1)
- confluence (1)
- constraint satisfaction problem (CSP) (1)
- continuous media (1)
- convex distance function (1)
- convex operator (1)
- cooperative problem solving (1)
- critical thickness (1)
- cross-correlation (1)
- customization of communication protocols (1)
- decision support (1)
- decisions (1)
- decrease direction (1)
- deficiency (1)
- density distribution (1)
- dependency management (1)
- deposition temperature (1)
- design processes (1)
- diagnostic problems (1)
- dipole-exchange surface (1)
- direct product (1)
- directional derivative (1)
- discrete element method (1)
- discrete equilibrium distributions (1)
- discrete velocity models (1)
- discretization (1)
- disjoint union (1)
- distributed (1)
- distributed c (1)
- distributed deduction (1)
- distributed document management (1)
- distributed enterprise (1)
- distributed groupware environment (1)
- distributed multi-platform software development (1)
- distributed multi-platform software development projects (1)
- distributed software configuration management (1)
- distributed softwaredevelopment tools (1)
- dynamic capillary pressure (1)
- dynamical systems (1)
- enhanced coercivity (1)
- epitaxial growth (1)
- evolutionary spectrum (1)
- exchange coupling (1)
- exchange rate (1)
- exchange-bias bilayer Fe/MnPd (1)
- exchange-coupled rare-earth (1)
- experience base (1)
- experimental software engineering (1)
- exponential rate (1)
- f-dissimilarity (1)
- fallbasiertes Schliessen (1)
- fallbasiertes planen (1)
- final prediction error (1)
- finite difference method (1)
- flexible workflows (1)
- formal description techniques (1)
- formal reasoning (1)
- formulation as integral equation (1)
- frequency splitting betwe (1)
- function of bounded variation (1)
- gauge (1)
- general multidimensional moment problem (1)
- generalized Gummel itera (1)
- generic design of a customized communication subsystem (1)
- geodetic (1)
- geomagnetic field modelling from MAGSAT data (1)
- geometric measure theory (1)
- geometrical algorithms (1)
- geopotential determination (1)
- global optimization (1)
- goal oriented completion (1)
- granular flow (1)
- growth optimal portfolios (1)
- harmonic WFT (1)
- head-on collisions (1)
- heterogeneous large-scale distributed DBMS (1)
- high-level caching of potentially shared networked documents (1)
- higher order logic (1)
- higher-order anisotropies (1)
- higher-order tableaux calculus (1)
- higher-order theorem prover (1)
- hybrid knowledge representation (1)
- hyperbolic systems of conservation laws (1)
- hyperplane transversal (1)
- industrial supervision (1)
- inelastic light scattering (1)
- information (1)
- information systems engineering (1)
- innermost termination (1)
- instanton method (1)
- intelligent agents (1)
- interest oriented portfolios (1)
- internal approximation (1)
- internet event synchronizer (1)
- intersection local time (1)
- inverse Fourier transform (1)
- inverse mathematical models (1)
- isochronous streams (1)
- knowledge space (1)
- lacunarity distribution (1)
- large deviations (1)
- layered magnetic systems (1)
- learning (1)
- level splitting (1)
- lifetime statistics (1)
- lifetimes (1)
- linked abstraction workflows (1)
- local existence, uniqueness (1)
- locally maximal clone (1)
- locally stationary process (1)
- location (1)
- location problem (1)
- logarithmic average (1)
- logarithmic utility (1)
- macroscopic quantum coherence (1)
- magnetic Ni80Fe20 wires (1)
- magnetization reversal process (1)
- magneto-optical Kerr effect (1)
- magnetostatic surface spin waves (1)
- martingale measu (1)
- mathematical concept (1)
- maximum-entropy (1)
- metastable Pd(001) (1)
- middleware (1)
- minimal paths (1)
- minimax rate (1)
- minimax risk (1)
- mobile agents (1)
- mobile agents approach (1)
- modelling time (1)
- modularity (1)
- moment realizability (1)
- monitoring and managing distributed development processes (1)
- monodromy (1)
- morphism (1)
- motion planning (1)
- multi-agent architecture (1)
- multicriteria minimal path problem is presented (1)
- multicriteria optimization (1)
- multidimensional Kohonen algorithm (1)
- multimedia (1)
- multiple objective linear programming problem (1)
- multiple-view product modeling (1)
- multiresolution analysis (1)
- narrowing (1)
- negotiation (1)
- neural networks (1)
- non-convex optimization (1)
- noninformative prior (1)
- nonlinear thresholding (1)
- nonlinear wavelet thresholding (1)
- norm (1)
- normal cone (1)
- numeraire portfolios (1)
- numerical computation (1)
- object frameworks (1)
- order selection (1)
- order-sorted logic (1)
- order-two densities (1)
- order-two density (1)
- ovoids (1)
- paramodulation (1)
- patterned magnetic permalloy films (1)
- phase space (1)
- phase-space (1)
- plan enactment (1)
- polyhedral norm (1)
- porous flow (1)
- portfolio optimisation (1)
- preservation of relations (1)
- problem solvers (1)
- process model (1)
- process modelling (1)
- process support system (PROSYT) (1)
- process-centred environments (1)
- profiles (1)
- programmable client-server systems (1)
- project coordination (1)
- projected quasi-gradient method (1)
- proof plans (1)
- protocol (1)
- pseudo-compressibility method (1)
- quantum mechanics (1)
- quadratic forms (1)
- quasi-one-dimensional spin wave envelope solitons (1)
- quasienergy (1)
- radiation therapy (1)
- rate control (1)
- reactive systems (1)
- real time (1)
- real-time (1)
- real-time temporal logic (1)
- receptive safety properties (1)
- reference prior (1)
- regularization by wavelets (1)
- rela (1)
- reliability (1)
- requirements (1)
- reuse repositories (1)
- robustness (1)
- scaled translates (1)
- search algorithms (1)
- second order logic (1)
- shape anisotropies (1)
- shear flow (1)
- short magnetic field pulses (1)
- short-time periodogram (1)
- shortest sequence (1)
- similarity measure (1)
- single domain uniaxial magnetic particles (1)
- singularities (1)
- software agents (1)
- software project (1)
- software project management (1)
- sorted logic (1)
- soundness (1)
- spin wave excitations (1)
- spinwaves (1)
- squares (1)
- statistical experiment (1)
- stochastic stability (1)
- structured permalloy films (1)
- switching properties (1)
- system behaviour (1)
- tableau (1)
- tangent measure distributions (1)
- temperature dependence (1)
- temporal logic (1)
- termination (1)
- theorem prover (1)
- thin h-BN films (1)
- time series (1)
- time-varying autoregression (1)
- topology preserving maps (1)
- traceability (1)
- transition rates (1)
- transition-metal (1)
- translation (1)
- transverse bias field (1)
- traveling salesman problem (1)
- treatment planning (1)
- trial systems (1)
- triple layer stacks (1)
- two-dimensional self-focused spin wave packets (1)
- typical examples (1)
- typical instance (1)
- uncertainty principle (1)
- uniform ergodicity (1)
- uniqueness (1)
- value preserving portfolios (1)
- vector measure (1)
- vector wavelets (1)
- virtual market place (1)
- viscosity solutions (1)
- visual process modelling environment (1)
- wall energy (1)
- wall thickness (1)
- wavelet estimators (1)
- wavelet transform (1)
- weak termination (1)
- windowed Fourier transform (1)
- work coordination (1)
- yttrium-iron garnet (YIG) fi (1)

A fundamental variance reduction technique for Monte Carlo integration in the framework of integro-approximation problems is
presented. Using the method of dependent tests a successive hierarchical function approximation algorithm is developed, which
captures discontinuities and exploits smoothness in the target function. The general mathematical scheme and its highly efficient
implementation are illustrated for image generation by ray tracing,
yielding new and much faster image synthesis algorithms.

We study the global solution of Fredholm integral equations of the second kind with the help of Monte Carlo methods. Global solution means that we seek to approximate the full solution function. This is opposed to the usual applications of Monte Carlo, where one only wants to approximate a functional of the solution. In recent years several researchers developed Monte Carlo methods also for the global problem. In this paper we present a new Monte Carlo algorithm for the global solution of integral equations. We use multiwavelet expansions to approximate the solution. We study the behaviour of the variance on increasing levels, and based on this, develop a new variance reduction technique. For classes of smooth kernels and right-hand sides we determine the convergence rate of this algorithm and show that it is higher than that of previously developed algorithms for the global problem. Moreover, an information-based complexity analysis shows that our algorithm is optimal among all stochastic algorithms of the same computational cost and that no deterministic algorithm of the same cost can reach its convergence rate.
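
As a rough illustration of the classical setting (not the authors' multiwavelet algorithm), a truncated Neumann-series Monte Carlo estimator for \(u(x) = f(x) + \int_0^1 k(x,y)\,u(y)\,dy\) on a grid of points might look as follows; the kernel, right-hand side, walk depth, and sample count are made-up toy choices:

```python
import random

def mc_global_solve(f, k, xs, n_walks=2000, depth=8):
    """Crude global Monte Carlo solution of u = f + Ku on [0,1]:
    u(x) is estimated by averaging truncated Neumann-series scores
    f(x) + k(x,y1)f(y1) + k(x,y1)k(y1,y2)f(y2) + ...,  yi ~ U(0,1).
    Meaningful only if the integral operator K is a contraction."""
    u = []
    for x in xs:
        total = 0.0
        for _ in range(n_walks):
            score, weight, y = f(x), 1.0, x
            for _ in range(depth):
                y_next = random.random()      # sampling density 1 on [0,1]
                weight *= k(y, y_next)
                score += weight * f(y_next)
                y = y_next
            total += score
        u.append(total / n_walks)
    return u

# toy problem: f(x) = 1, k(x,y) = 0.5*x*y (a contraction); exact u(x) = 1 + 0.3x
approx = mc_global_solve(lambda x: 1.0, lambda x, y: 0.5 * x * y,
                         xs=[i / 10 for i in range(11)])
```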

Approximation properties of the underlying estimator are used to improve the efficiency of the method of dependent tests. A multilevel approximation procedure is developed such that in each level the number of samples is balanced with the level-dependent variance, resulting in a considerable reduction of the overall computational cost. The new technique is applied to the Monte Carlo estimation of integrals depending on a parameter.
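
A minimal sketch of how such a balancing might look (illustrative only; the integrand, level grids, and sample allocation are invented, with common random numbers playing the role of the dependent tests):

```python
import numpy as np

def estimate_on(grid, x, f):
    """Method of dependent tests: the SAME sample points x serve every
    parameter value, so the Monte Carlo error varies smoothly in lam."""
    return np.array([f(x, lam).mean() for lam in grid])

def multilevel(f, lams, L=4, n0=4096, rng=None):
    """Telescoping estimator of I(lam) = int_0^1 f(x, lam) dx: a coarse
    interpolated estimate plus per-level corrections, where the number of
    samples per level is balanced against the (decreasing) level variance."""
    rng = rng or np.random.default_rng(0)
    nodes = lambda l: np.linspace(lams[0], lams[-1], 2**l + 1)
    approx = np.interp(lams, nodes(0),
                       estimate_on(nodes(0), rng.random(n0), f))
    for l in range(1, L + 1):
        nl = max(1, n0 // 4**l)   # fewer samples where corrections are small
        x = rng.random(nl)
        approx += (np.interp(lams, nodes(l), estimate_on(nodes(l), x, f))
                   - np.interp(lams, nodes(l - 1), estimate_on(nodes(l - 1), x, f)))
    return approx

# I(lam) = int_0^1 exp(-lam x^2) dx on a fine parameter grid
lams = np.linspace(0.0, 2.0, 201)
I = multilevel(lambda x, lam: np.exp(-lam * x**2), lams)
```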

We consider a Darcy flow model with a saturation-pressure relation extended with a dynamic term, namely, the time derivative of the saturation. This model was proposed in works of J. Hulshof and J. R. King (1998), S. M. Hassanizadeh and W. G. Gray (1993), and F. Stauffer (1978). We restrict ourselves to one spatial dimension and strictly positive initial saturation. For this case we transform the initial-boundary value problem into a combination of an elliptic boundary-value problem and an initial-value problem for an abstract ordinary differential equation. This splitting is rather helpful both for theoretical aspects and for numerical methods.
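
Schematically (my notation; sign and constitutive conventions vary across the cited works), the extended relation and the announced splitting read

\[ p = p_{\mathrm{stat}}(S) + \tau\,\partial_t S, \qquad \partial_t S = \partial_x\bigl(K(S)\,\partial_x p\bigr). \]

Writing \(v := \partial_t S\), each fixed time yields an elliptic boundary-value problem \(v - \tau\,\partial_x\bigl(K(S)\,\partial_x v\bigr) = \partial_x\bigl(K(S)\,\partial_x p_{\mathrm{stat}}(S)\bigr)\) for \(v\), and the evolution is the abstract ODE \(dS/dt = v\).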

Abstract: Random matrix theory (RMT) is a powerful statistical tool to model spectral fluctuations. In addition, RMT provides efficient means to separate different scales in spectra. Recently RMT has found application in quantum chromodynamics (QCD). In mesoscopic physics, the Thouless energy sets the universal scale for which RMT applies. We try to identify the equivalent of a Thouless energy in complete spectra of the QCD Dirac operator with staggered fermions and SU(2) lattice gauge fields. Comparing lattice data with RMT predictions, we find deviations which allow us to give an estimate for this scale.

Beyond the Thouless energy
(1999)

Abstract: The distribution and the correlations of the small eigenvalues of the Dirac operator are described by random matrix theory (RMT) up to the Thouless energy E_c ~ 1/sqrt(V), where V is the physical volume. For somewhat larger energies, the same quantities can be described by chiral perturbation theory (chPT). For most quantities there is an intermediate energy regime, roughly 1/V < E < 1/sqrt(V), where the results of RMT and chPT agree with each other. We test these predictions by constructing the connected and disconnected scalar susceptibilities from Dirac spectra obtained in quenched SU(2) and SU(3) simulations with staggered fermions for a variety of lattice sizes and coupling constants. In deriving the predictions of chPT, it is important to take into account only those symmetries which are exactly realized on the lattice.

Abstract: Recently, the chiral logarithms predicted by quenched chiral perturbation theory have been extracted from lattice calculations of hadron masses. We argue that the deviations of lattice results from random matrix theory starting around the so-called Thouless energy can be understood in terms of chiral perturbation theory as well. Comparison of lattice data with chiral perturbation theory formulae allows us to compute the pion decay constant. We present results from a calculation for quenched SU(2) with Kogut-Susskind fermions at β = 2.0 and 2.2.

Abstract: Recently, the contributions of chiral logarithms predicted by quenched chiral perturbation theory have been extracted from lattice calculations of hadron masses. We argue that a detailed comparison of random matrix theory and lattice calculations allows for a precise determination of such corrections. We estimate the relative size of the m log(m), m, and m^2 corrections to the chiral condensate for quenched SU(2).

Abstract: We describe a general technique that allows for an ideal transfer of quantum correlations between light fields and metastable states of matter. The technique is based on trapping quantum states of photons in coherently driven atomic media, in which the group velocity is adiabatically reduced to zero. We discuss possible applications such as quantum state memories, generation of squeezed atomic states, preparation of entangled atomic ensembles and quantum information processing.

Abstract: We show that it is possible to "store" quantum states of single-photon fields by mapping them onto collective meta-stable states of an optically dense, coherently driven medium inside an optical resonator. An adiabatic technique is suggested which allows the transfer of non-classical correlations from traveling-wave single-photon wave-packets into atomic states and vice versa with nearly 100% efficiency. In contrast to previous approaches involving single atoms, the present technique does not require the strong coupling regime corresponding to high-Q micro-cavities. Instead, intracavity Electromagnetically Induced Transparency is used to achieve a strong coupling between the cavity mode and the atoms.

Mirrorless oscillation based on resonantly enhanced 4-wave mixing: All-order analytic solutions
(1999)

Abstract: The phase transition to mirrorless oscillation in resonantly enhanced four-wave mixing in double-Λ systems is studied analytically for the ideal case of infinite lifetimes of ground-state coherences. The stationary susceptibilities are obtained in all orders of the generated fields, and analytic solutions of the coupled nonlinear differential equations for the field amplitudes are derived and discussed.

Abstract: We utilize the generation of large atomic coherence to enhance the resonant nonlinear magneto-optic effect by several orders of magnitude, thereby eliminating power broadening and improving the fundamental signal-to-noise ratio. A proof-of-principle experiment is carried out in a dense vapor of Rb atoms. Detailed numerical calculations are in good agreement with the experimental results. Applications such as optical magnetometry or the search for violations of parity and time reversal symmetry are feasible.

Abstract: Spontaneous emission and Lamb shift of atoms in absorbing dielectrics are discussed. A Green's function approach is used based on the multipolar interaction Hamiltonian of a collection of atomic dipoles with the quantised radiation field. The rate of decay and the level shifts are determined by the retarded Green's function of the interacting electric displacement field, which is calculated from a Dyson equation describing multiple scattering. The positions of the atomic dipoles forming the dielectrics are assumed to be uncorrelated and a continuum approximation is used. The associated unphysical interactions between different atoms at the same location are eliminated by removing the point-interaction term from the free-space Green's function (local-field correction). For the case of an atom in a purely dispersive medium the spontaneous emission rate is altered by the well-known Lorentz local-field factor. In the presence of absorption a result different from previously suggested expressions is found and nearest-neighbour interactions are shown to be important.
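
For orientation (a textbook formula, not quoted from the paper): with the virtual-cavity (Lorentz) local-field correction, the decay rate of an atom embedded in a non-absorbing medium of refractive index \(n\) is commonly written as

\[ \Gamma = n \left( \frac{n^2 + 2}{3} \right)^{2} \Gamma_0 , \]

where \(\Gamma_0\) is the free-space rate; the paper's point is that absorption modifies this picture.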

Abstract: We aim to establish a link between path-integral formulations of quantum and classical field theories via diagram expansions. This link should result in an independent constructive characterisation of the measure in Feynman path integrals in terms of a stochastic differential equation (SDE) and also in the possibility of applying methods of quantum field theory to classical stochastic problems. As a first step we derive in the present paper a formal solution to an arbitrary c-number SDE in a form which coincides with that of Wick's theorem for interacting bosonic quantum fields. We show that the choice of stochastic calculus in the SDE may be regarded as a result of regularisation, which in turn removes ultraviolet divergences from the corresponding diagram series.

We show that the solution to an arbitrary c-number stochastic differential equation (SDE) can be represented as a diagram series. Both the diagram rules and the properties of the graphical elements reflect causality properties of the SDE and this series is therefore called a causal diagram series. We also discuss the converse problem, i.e. how to construct an SDE of which a formal solution is a given causal diagram series. This then allows for a nonperturbative summation of the diagram series by solving this SDE, numerically or analytically.

Abstract: We propose a simple method for measuring the populations and the relative phase in a coherent superposition of two atomic states. The method is based on coupling the two states to a third common (excited) state by means of two laser pulses, and measuring the total fluorescence from the third state for several choices of the excitation pulses.

Abstract: We present experimental and theoretical results of a detailed study of laser-induced continuum structures (LICS) in the photoionization continuum of helium out of the metastable state 2s^1 S_0. The continuum dressing with a 1064 nm laser couples the same region of the continuum to the 4s^1 S_0 state. The experimental data, presented for a range of intensities, show pronounced ionization suppression (by as much as 70% with respect to the far-from-resonance value) as well as enhancement, in a Beutler-Fano resonance profile. This ionization suppression is a clear indication of population trapping mediated by coupling to a continuum. We present experimental results demonstrating the effect of pulse delay upon the LICS, and for the behavior of LICS for both weak and strong probe pulses. Simulations based upon numerical solution of the Schrödinger equation model the experimental results. The atomic parameters (Rabi frequencies and Stark shifts) are calculated using a simple model-potential method for the computation of the needed wavefunctions. The simulations of the LICS profiles are in excellent agreement with experiment. We also present an analytic formulation of pulsed LICS. We show that in the case of a probe pulse shorter than the dressing one, the LICS profile is the convolution of the power spectrum of the probe pulse with the usual Fano profile of stationary LICS. We discuss some consequences of deviation from steady-state theory.
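
The Beutler-Fano profile mentioned above has the standard textbook form (not a formula quoted from the paper; \(q\) is the asymmetry parameter and \(\epsilon\) the reduced detuning)

\[ \sigma(\epsilon) \;\propto\; \frac{(q+\epsilon)^2}{1+\epsilon^2}, \qquad \epsilon = \frac{2(\omega-\omega_0)}{\Gamma}, \]

so that ionization is suppressed near \(\epsilon = -q\) and enhanced near \(\epsilon = 1/q\).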

We present results from a study of the coherence properties of a system involving three discrete states coupled to each other by two-photon processes via a common continuum. This tripod linkage is an extension of the standard laser-induced continuum structure (LICS) which involves two discrete states and two lasers. We show that in the tripod scheme, there exist two population trapping conditions; in some cases these conditions are easier to satisfy than the single trapping condition in two-state LICS. Depending on the pulse timing, various effects can be observed. We derive some basic properties of the tripod scheme, such as the solution for coincident pulses, the behaviour of the system in the adiabatic limit for delayed pulses, the conditions for no ionization and for maximal ionization, and the optimal conditions for population transfer between the discrete states via the continuum. In the case when one of the discrete states is strongly coupled to the continuum, the population dynamics reduces to a standard two-state LICS problem (involving the other two states) with modified parameters; this provides the opportunity to customize the parameters of a given two-state LICS system.

Abstract: In this paper we present a renormalizability proof for spontaneously broken SU(2) gauge theory. It is based on Flow Equations, i.e. on the Wilson renormalization group adapted to perturbation theory. The power counting part of the proof, which is conceptually and technically simple, follows the same lines as that for any other renormalizable theory. The main difficulty stems from the fact that the regularization violates gauge invariance. We prove that there exists a class of renormalization conditions such that the renormalized Green functions satisfy the Slavnov-Taylor identities of SU(2) Yang-Mills theory on which the gauge invariance of the renormalized theory is based.

Magnetic anisotropies of MBE-grown fcc Co(110) films on Cu(110) single crystal substrates have been determined by using Brillouin light scattering (BLS) and have been correlated with the structural properties determined by low energy electron diffraction (LEED) and scanning tunneling microscopy (STM). Three regimes of film growth and associated anisotropy behavior are identified: coherent growth in the Co film thickness regime of up to 13 Å, in-plane anisotropic strain relaxation between 13 Å and about 50 Å, and in-plane isotropic strain relaxation above 50 Å. The structural origin of the transition between anisotropic and isotropic strain relaxation was studied using STM. In the regime of anisotropic strain relaxation long Co stripes with a preferential [110]-orientation are observed, which in the isotropic strain relaxation regime are interrupted in the perpendicular in-plane direction to form isotropic islands. In the Co film thickness regime below 50 Å an unexpected suppression of the magnetocrystalline anisotropy contribution is observed. A model calculation based on a crystal field formalism and discussed within the context of band theory, which explicitly takes tetragonal misfit strains into account, reproduces the experimentally observed anomalies despite the fact that the thick Co films are quite rough.

Abstract: We report on measurements of the two-dimensional intensity distribution of linear and non-linear spin wave excitations in a LuBiFeO film. The spin wave intensity was detected with a high-resolution Brillouin light scattering spectroscopy setup. The observed snake-like structure of the spin wave intensity distribution is understood as a mode beating between modes with different lateral spin wave intensity distributions. The theoretical treatment of the linear regime is performed analytically, whereas the propagation of non-linear spin waves is simulated by a numerical solution of a non-linear Schrödinger equation with suitable boundary conditions.

Abstract: The periodic bounce configurations responsible for quantum tunneling are obtained explicitly and are extended to the finite energy case for minisuperspace models of the Universe. As a common feature of the tunneling models at finite energy considered here we observe that the period of the bounce increases monotonically with energy. The periodic bounces do not have bifurcations and make no contribution to the nucleation rate except the one with zero energy. The sharp first order phase transition from quantum tunneling to thermal activation is verified with the general criteria.

We consider a (2 + 1)-dimensional mechanical system with the Lagrangian linear in the torsion of a light-like curve. We give Hamiltonian formulation of this system and show that its mass and spin spectra are defined by one-dimensional nonrelativistic mechanics with a cubic potential. Consequently, this system possesses the properties typical of resonance-like particles.

Starting from the Hamiltonian operator of the noncompensated two-sublattice model of a small antiferromagnetic particle, we derive the effective Lagrangian of a biaxial antiferromagnetic particle in an external magnetic field with the help of spin-coherent-state path integrals. Two unequal level shifts induced by tunneling through two types of barriers are obtained using the instanton method. The energy spectrum is found from Bloch theory, regarding the periodic potential as a superlattice. The external magnetic field indeed removes Kramers' degeneracy; however, a new quenching of the energy splitting depending on the applied magnetic field is observed for both integer and half-integer spins due to the quantum interference between transitions through the two types of barriers.

The development of complex software systems is driven by many diverse and sometimes contradictory requirements such as correctness and maintainability of resulting products, development costs, and time-to-market. To alleviate these difficulties, we propose a development method for distributed systems that integrates different basic approaches. First, it combines the use of the formal description technique SDL with software reuse concepts. This results in the definition of a use-case driven, incremental development method with SDL-patterns as the main reusable artifacts. Experience with this approach has shown that there are several other factors of influence, such as the quality of reuse artifacts or the experience of the development team. Therefore, we further combined our SDL-pattern approach with an improvement methodology known from the area of experimental software engineering. In order to demonstrate the validity of this integrating approach, we sketch some representative results of a case study.

We consider three applications of impulse control in financial mathematics, a cash management problem, optimal control of an exchange rate, and portfolio optimisation under transaction costs. We sketch the different ways of solving these problems with the help of quasi-variational inequalities. Further, some viscosity solution results are presented.

Continuous and discrete superselection rules induced by the interaction with the environment are investigated for a class of exactly soluble Hamiltonian models. The environment is given by a Boson field. Stable superselection sectors can only emerge if the low frequencies dominate and the ground state of the Boson field disappears due to infrared divergence. The models allow uniform estimates of all transition matrix elements between different superselection sectors.

The paper studies quantum states of a Bloch particle in the presence of external ac and dc fields. Provided the period of the ac field and the Bloch period are commensurate, an effective scattering matrix is introduced, whose complex poles give the quasienergy spectrum of the system. The statistics of the resonance widths and the Wigner delay times shows a close relation of the problem to the random matrix theory of chaotic scattering.

A novel method is presented which allows a fast computation of complex energy resonance states in Stark systems, i.e. systems in a homogeneous field. The technique is based on the truncation of a shift-operator in momentum space. Numerical results for space periodic and non-periodic systems illustrate the extreme simplicity of the method.

The paper studies metastable states of a Bloch electron in the presence of external ac and dc fields. Provided the period of the driving field and the Bloch period satisfy a resonance condition, the complex quasienergies are numerically calculated for two qualitatively different regimes (quasiregular and chaotic) of the system dynamics. For the chaotic regime an effect of quantum stabilization, which suppresses the classical decay mechanism, is found. This effect is demonstrated to be a kind of quantum interference phenomenon sensitive to the resonance condition.

A new method for calculating Stark resonances is presented and applied for illustration to the simple case of a one-particle, one-dimensional model Hamiltonian. The method is applicable for weak and strong dc fields. The only required inputs, also for the case of many particles in multi-dimensional space, are either the short-time evolution matrix elements or the eigenvalues and Fourier components of the eigenfunctions of the field-free Hamiltonian.

We present an entropy concept measuring quantum localization in dynamical systems based on time averaged probability densities. The suggested entropy concept is a generalization of a recently introduced [PRL 75, 326 (1995)] phase-space entropy to any representation chosen according to the system and the physical question under consideration. In this paper we inspect the main characteristics of the entropy and the relation to other measures of localization. In particular the classical correspondence is discussed and the statistical properties are evaluated within the framework of random vector theory. In this way we show that the suggested entropy is a suitable method to detect quantum localization phenomena in dynamical systems.
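
A minimal rendering of such a representation entropy (my notation, assuming a discrete basis \(\{|n\rangle\}\) of the chosen representation) is

\[ S = -\sum_n \bar{p}_n \ln \bar{p}_n , \qquad \bar{p}_n = \lim_{T\to\infty} \frac{1}{T} \int_0^T \bigl|\langle n \mid \psi(t)\rangle\bigr|^2 \, dt , \]

i.e. the Shannon entropy of the time-averaged occupation probabilities; small \(S\) signals localization onto few basis states.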

The Filter-Diagonalization Method is applied to time-periodic Hamiltonians and used to find selectively the regular and chaotic quasienergies of a driven 2D rotor. The use of N cross-correlation probability amplitudes enables a selective calculation of the quasienergies from short-time propagation up to the time T(N). Compared to the propagation time T(1) which is required for resolving the quasienergy spectrum with the same accuracy from auto-correlation calculations, the cross-correlation time T(N) is shorter by the factor N, that is, T(1) = N T(N).

The global dynamical properties of a quantum system can be conveniently visualized in phase space by means of a quantum phase space entropy in analogy to a Poincare section in classical dynamics for two-dimensional time independent systems. Numerical results for the Pullen Edmonds systems demonstrate the properties of the method for systems with mixed chaotic and regular dynamics.

In this paper we deal with the determination of the whole set of Pareto solutions of location problems with respect to Q general criteria. These criteria include median, center or cent-dian objective functions as particular instances. The paper characterizes the set of Pareto solutions of these multicriteria problems. An efficient algorithm for the planar case is developed and its complexity is established. Extensions to higher dimensions as well as to the non-convex case are also considered. The proposed approach is more general than the previously published approaches to multicriteria location problems and includes almost all of them as particular instances.
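
As a toy illustration of the Pareto notion used here (not the paper's planar algorithm, whose efficiency comes from geometric arguments), a brute-force filter over candidate locations can be sketched as follows; the existing facilities, candidate grid, and the median/center objectives are invented examples:

```python
import math

def pareto_solutions(candidates, criteria):
    """Return the candidates not dominated in all Q criteria (minimization)."""
    values = [(c, tuple(g(c) for g in criteria)) for c in candidates]
    def dominated(v):
        return any(all(wi <= vi for wi, vi in zip(w, v)) and w != v
                   for _, w in values)
    return [c for c, v in values if not dominated(v)]

existing = [(0, 0), (4, 0), (2, 3)]                   # existing facilities
dist = lambda a, b: math.dist(a, b)
median = lambda x: sum(dist(x, e) for e in existing)  # sum of distances
center = lambda x: max(dist(x, e) for e in existing)  # maximum distance
grid = [(i / 2, j / 2) for i in range(9) for j in range(7)]
front = pareto_solutions(grid, [median, center])      # Q = 2 criteria
```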

In a discrete-time financial market setting, the paper relates various concepts introduced for dynamic portfolios (both in discrete and in continuous time). These concepts are: value preserving portfolios, numeraire portfolios, interest oriented portfolios, and growth optimal portfolios. It will turn out that these concepts are all associated with a unique martingale measure which agrees with the minimal martingale measure only for complete markets.

Facility Location Problems are concerned with the optimal location of one or several new facilities, with respect to a set of existing ones. The objectives involve the distance between new and existing facilities, usually a weighted sum or weighted maximum. Since the various stakeholders (decision makers) will have different opinions of the importance of the existing facilities, a multicriteria problem with several sets of weights, and thus several objectives, arises. In our approach, we assume the decision makers to make only fuzzy comparisons of the different existing facilities. A geometric mean method is used to obtain the fuzzy weights for each facility and each decision maker. The resulting multicriteria facility location problem is solved using fuzzy techniques again. We prove that the final compromise solution is weakly Pareto optimal, and Pareto optimal if it is unique or under certain assumptions on the estimates of the Nadir point. A numerical example is considered to illustrate the methodology.
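
A minimal sketch of the geometric mean step in the crisp (non-fuzzy) case, assuming a pairwise comparison matrix as in the analytic hierarchy process (the paper's fuzzy version would carry triangular fuzzy numbers through the same formula):

```python
import math

def geometric_mean_weights(M):
    """Weights from a pairwise comparison matrix M, where M[i][j] rates the
    importance of facility i relative to facility j: take the geometric
    mean of each row, then normalize the means to sum to 1."""
    g = [math.prod(row) ** (1.0 / len(row)) for row in M]
    s = sum(g)
    return [gi / s for gi in g]

# one decision maker comparing three existing facilities (toy numbers)
M = [[1, 3, 1/2],
     [1/3, 1, 1/5],
     [2, 5, 1]]
print(geometric_mean_weights(M))
```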

Value Preserving Strategies and a General Framework for Local Approaches to Optimal Portfolios
(1999)

We present some new general results on the existence and form of value preserving portfolio strategies in a general semimartingale setting. The concept of value preservation will be derived via a mean-variance argument. It will also be embedded into a framework for local approaches to the problem of portfolio optimisation.

Discretizations for the Incompressible Navier-Stokes Equations based on the Lattice Boltzmann Method
(1999)

A discrete velocity model with spatial and velocity discretization based on a lattice Boltzmann method is considered in the low Mach number limit. A uniform numerical scheme for this model is investigated. In the limit, the scheme reduces to a finite difference scheme for the incompressible Navier-Stokes equation which is a projection method with a second order spatial discretization on a regular grid. The discretization is analyzed and the method is compared to Chorin's original spatial discretization. Numerical results supporting the analytical statements are presented.
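
A minimal D2Q9 lattice Boltzmann sketch of the kind of scheme being analyzed (a generic BGK collision-and-streaming step on a periodic grid; the relaxation time and initialization are illustrative, and this is not the paper's specific discretization):

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Standard second-order equilibrium distribution (lattice speed 1)."""
    cu = np.einsum('ai,xyi->xya', c, u)
    usq = np.einsum('xyi,xyi->xy', u, u)[..., None]
    return rho[..., None] * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbm_step(f, tau=0.6):
    """One BGK collision + periodic streaming step; f has shape (nx, ny, 9)."""
    rho = f.sum(-1)
    u = np.einsum('xya,ai->xyi', f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau          # collision
    for a, (cx, cy) in enumerate(c):                 # streaming
        f[..., a] = np.roll(np.roll(f[..., a], cx, axis=0), cy, axis=1)
    return f

f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))  # fluid at rest
for _ in range(100):
    f = lbm_step(f)
```

In the low Mach number limit the moments rho = sum_a f_a and rho*u = sum_a c_a f_a of such a scheme approximate the incompressible Navier-Stokes dynamics, which is the regime the paper analyzes.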

In this paper we derive fluid dynamic equations by performing asymptotic analysis for the generalized Boltzmann equation for polyatomic gases. In particular, we consider the steady state, one-dimensional Boltzmann equation with one additional internal energy and different relaxation times. Moreover, we present a new approach to define coupling procedures for the Boltzmann equation and the Navier-Stokes equations based on the 14-moments expansion of Levermore. These coupled models are validated by numerical simulations.

We consider a scale discrete wavelet approach on the sphere based on spherical radial basis functions. If the generators of the wavelets have a compact support, the scale and detail spaces are finite-dimensional, so that the detail information of a function is determined by only finitely many wavelet coefficients for each scale. We describe a pyramid scheme for the recursive determination of the wavelet coefficients from level to level, starting from an initial approximation of a given function. Basic tools are integration formulas which are exact for functions up to a given polynomial degree and spherical convolutions.

Moment inequalities for the Boltzmann equation and applications to spatially homogeneous problems
(1999)

Some inequalities for the Boltzmann collision integral are proved. These inequalities can be considered as a generalization of the well-known Povzner inequality. The inequalities are used to obtain estimates of moments of solution to the spatially homogeneous Boltzmann equation for a wide class of intermolecular forces. We obtained simple necessary and sufficient conditions (on the potential) for the uniform boundedness of all moments. For potentials with compact support the following statement is proved. .....

Using an experience factory is one possible concept for supporting and improving reuse in software development (i.e., reuse of products, processes, quality models, ...). In the context of the Sonderforschungsbereich 501 "Development of Large Systems with Generic Methods" (SFB 501), the Software Engineering Laboratory (SE Lab) runs such an experience factory as part of the infrastructure services it offers. The SE Lab also provides several tools to support the planning, developing, measuring, and analyzing activities of software development processes. Among these tools, the SE Lab runs and maintains an experience base, the SFB-EB. When an experience factory is utilized, support for experience base maintenance is an important issue. Furthermore, it might be interesting to evaluate experience base usage with regard to the number of accesses to certain experience elements stored in the database. The same holds for the usage of the tools provided by the SE Lab. This report presents a set of supporting tools that were designed to aid in these tasks. These supporting tools check the experience base's consistency and gather information on the usage of SFB-EB and the tools installed in the SE Lab. The results are processed periodically and displayed as HTML result reports (consistency checking) or bar charts (usage profiles).

Manipulating deformable linear objects - Vision-based recognition of contact state transitions -
(1999)

A new and systematic approach to machine vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation of the object with respect to an obstacle and are derived from the object image and its features. Therefore, the object is segmented from a standard video frame using a fast segmentation algorithm. Several object features are presented which allow the state recognition of the object while being manipulated by the robot.

Comprehensive reuse and systematic evolution of reuse artifacts as proposed by the Quality Improvement Paradigm (QIP) do not only require tool support for mere storage and retrieval. Rather, an integrated management of (potentially reusable) experience data as well as project-related data is needed. This paper presents an approach exploiting object-relational database technology to implement the QIP-driven reuse repository of the SFB 501. Requirements, concepts, and implementational aspects are discussed and illustrated through a running example, namely the reuse and continuous improvement of SDL patterns for developing distributed systems. Based on this discussion, we argue that object-relational database management systems (ORDBMS) are best suited to implement such a comprehensive reuse repository. It is demonstrated how this technology can be used to support all phases of a reuse process and the accompanying improvement cycle. Although the discussions of this paper are strongly related to the requirements of the SFB 501 experience base, the basic realization concepts, and, thereby, the applicability of ORDBMS, can easily be extended to similar applications, i.e., reuse repositories in general.

The paper shows that characterizing the causal relationship between significant events is an important but non-trivial aspect for understanding the behavior of distributed programs. An introduction to the notion of causality and its relation to logical time is given; some fundamental results concerning the characterization of causality are presented. Recent work on the detection of causal relationships in distributed computations is surveyed. The relative merits and limitations of the different approaches are discussed, and their general feasibility is analyzed.
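
A minimal vector clock sketch (a standard mechanism for the logical time discussed here, not code from the paper; the process count and event encoding are illustrative):

```python
def tick(v, i):
    """Local event at process i: increment its own component."""
    v = list(v); v[i] += 1
    return tuple(v)

def receive(v_local, v_msg, i):
    """Receive event: componentwise maximum, then tick the local component."""
    return tick(tuple(max(a, b) for a, b in zip(v_local, v_msg)), i)

def happened_before(a, b):
    """Causal precedence: a -> b iff a <= b componentwise and a != b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# three processes; P0 sends after one event, P1 receives the message
v0 = tick((0, 0, 0), 0)          # P0: local event, then send with timestamp v0
v1 = receive((0, 0, 0), v0, 1)   # P1: receive event
assert happened_before(v0, v1)
```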

The Hamiltonian of the \(N\)-particle Calogero model can be expressed in terms of generators of a Lie algebra for a definite class of representations. Maintaining this Lie algebra, its representations, and the flatness of the Riemannian metric belonging to the second order differential operator, the set of all possible quadratic Lie algebra forms is investigated. For \(N = 3\) and \(N = 4\) such forms are constructed explicitly and shown to correspond to exactly solvable Sutherland models. The results can be carried over easily to all \(N\).

Trigonometric invariants are defined for each Weyl group orbit on the root lattice. They are real and periodic on the coroot lattice. Their polynomial algebra is spanned by a basis which is calculated by means of an algorithm. The invariants of the basis can be used as coordinates in any cell of the coroot space and lead to an exactly solvable model of Sutherland type. We apply this construction to the \(F_4\) case.

The task of handling non-rigid one-dimensional objects by a robot manipulation system is investigated. To distinguish between different non-rigid object behaviors, five classes of deformable objects from a robotic point of view are proposed. Additionally, an enumeration of all possible contact states of one-dimensional objects with polyhedral obstacles is provided. Finally, the qualitative motion behavior of linear objects is analyzed for stable point contacts. Experiments with different materials validate the analytical results.

We present an approach to learning cooperative behavior of agents. Our approach is based on classifying situations with the help of the nearest-neighbor rule. In this context, learning amounts to evolving a set of good prototypical situations. With each prototypical situation an action is associated that should be executed in that situation. A set of prototypical situation/action pairs together with the nearest-neighbor rule represent the behavior of an agent. We demonstrate the utility of our approach in the light of variants of the well-known pursuit game. To this end, we present a classification of variants of the pursuit game, and we report on the results of our approach obtained for variants regarding several aspects of the classification. A first implementation of our approach that utilizes a genetic algorithm to conduct the search for a set of suitable prototypical situation/action pairs was able to handle many different variants.
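
A minimal sketch of the nearest-neighbor behavior representation (the situation encoding and action names are invented; the genetic algorithm that evolves the prototype set is omitted):

```python
import math

def nearest_action(prototypes, situation):
    """A behavior is a set of (prototype situation, action) pairs; the agent
    executes the action attached to the nearest prototype (1-NN rule)."""
    _, action = min(prototypes,
                    key=lambda pa: math.dist(pa[0], situation))
    return action

# hypothetical pursuit-game encoding: situation = (dx, dy) offset to the prey
behavior = [((1, 0), "move_east"), ((-1, 0), "move_west"),
            ((0, 3), "move_north"), ((0, -2), "move_south")]
print(nearest_action(behavior, (2, 1)))   # -> "move_east"
```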

The common wisdom that goal orderings can be used to improve planning performance is nearly as old as planning itself. During the last decades of research several approaches emerged that computed goal orderings for different planning paradigms, mostly in the area of state-space planning. For partial-order, plan-space planners goal orderings have not been investigated in much detail. Mechanisms developed for state-space planning are not directly applicable because partial-order planners do not have a current (world) state. Further, it is not completely clear how plan-space planners should make use of goal orderings. This paper describes an approach to extract goal orderings to be used by the plan-space planner CAPlan. The extraction of goal orderings is based on the analysis of an extended version of operator graphs which previously have been found useful for the analysis of interactions and recursion of plan-space planners.

We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions made by special-purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.

We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain-dependent reasoners, a feature-based CAD system and domain-independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem solving knowledge for both optimizing plan execution costs and minimizing search, the case-based control component CAPlan/CbC has been realized that allows incremental acquisition and reuse of strategic problem solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.

The feature interaction problem in telecommunications systems increasingly obstructs the evolution of such systems. We develop formal detection criteria which render a necessary (but less than sufficient) condition for feature interactions. It can be checked mechanically and points out all potentially critical spots. These have to be analyzed manually. The resulting resolution decisions are incorporated formally. Some prototype tool support is already available. A prerequisite for formal criteria is a formal definition of the problem. Since the notions of feature and feature interaction are often used in a rather fuzzy way, we attempt a formal definition first and discuss which aspects can be included in a formalization (and therefore in a detection method). This paper describes on-going work.

We present two techniques for reasoning from cases to solve classification tasks: Induction and case-based reasoning. We contrast the two technologies (that are often confused) and show how they complement each other. Based on this, we describe how they are integrated in one single platform for reasoning from cases: The Inreca system.

Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question of whether there are any differences concerning the learning power of the two approaches. In this article we will study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
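
A minimal sketch of the implicit concept representation (CB, sim) (toy similarity measure and cases; not the transformed version space algorithm itself):

```python
def classify(case_base, sim, query):
    """The pair (CB, sim) defines the concept implicitly: a query is judged
    by the class label of its most similar stored case (1-NN)."""
    _, label = max(case_base, key=lambda case: sim(case[0], query))
    return label

sim = lambda a, b: -sum((x - y) ** 2 for x, y in zip(a, b))  # neg. sq. distance
CB = [((0, 0), False), ((1, 1), True)]    # two cases with class labels
print(classify(CB, sim, (0.9, 0.8)))      # -> True
```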

Collecting Experience on the Systematic Development of CBR Applications using the INRECA Methodology
(1999)

This paper presents an overview of the INRECA methodology for building and maintaining CBR applications. This methodology supports the collection and reuse of experience on the systematic development of CBR applications. It is based on the experience factory and the software process modeling approach from software engineering. CBR development experience is documented using software process models and stored in different levels of generality in a three-layered experience base. Up to now, experience from 9 industrial projects enacted by all INRECA II partners has been collected.

Automata-Theoretic vs. Property-Oriented Approaches for the Detection of Feature Interactions in IN
(1999)

The feature interaction problem in Intelligent Networks obstructs more and more the rapid introduction of new features. Detecting such feature interactions turns out to be a big problem. The size of the systems and the sheer computational complexity prevents the system developer from checking manually any feature against any other feature. We give an overview on current (verification) approaches and categorize them into property-oriented and automata-theoretic approaches. A comparison shows that each approach complements the other in a certain sense. We propose to apply both approaches together in order to solve the feature interaction problem.

Planning means constructing a course of action to achieve a specified set of goals when starting from an initial situation. For example, determining a sequence of actions (a plan) for transporting goods from an initial location to some destination is a typical planning problem in the transportation domain. Many planning problems are of practical interest.

MOLTKE is a research project dealing with a complex technical application. After describing the domain of CNC machining centers and the applied KA methods, we summarize the concrete KA problems which we have to handle. Then we describe a KA mechanism which supports an engineer in developing a diagnosis system. In chapter 6 we introduce learning techniques operating on diagnostic cases and domain knowledge for improving the diagnostic procedure of MOLTKE. In the last section of this chapter we outline some essential aspects of organizational knowledge which is heavily applied by engineers for analysing such technical systems (Qualitative Engineering). Finally we give a short overview of the actual state of realization and our future plans.

Most automated theorem provers suffer from the problem that they can produce proofs only in formalisms that are difficult to understand even for experienced mathematicians. Efforts have been made to transform such machine-generated proofs into natural deduction (ND) proofs. Although the single steps are then easy to understand, the entire proof is usually at a low level of abstraction, containing too many tedious steps. Therefore, it is not adequate as input to natural language generation systems. To overcome these problems, we propose a new intermediate representation, called ND style proofs at the assertion level. After illustrating the notion intuitively, we show that the assertion-level steps can be justified by domain-specific inference rules, and that these rules can be represented compactly in a tree structure. Finally, we describe a procedure which substantially shortens ND proofs by abstracting them to the assertion level, and report our experience with further transformation into natural language.
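
A minimal sketch of the abstraction step, under illustrative assumptions about the proof encoding (the set-inclusion example is the standard one for assertion-level reasoning): several low-level ND steps that merely unpack the definition of ⊆ are collapsed into a single step justified by the assertion itself.

```python
# Each step: (label, formula, justification).
nd_proof = [
    ("1", "a ∈ U",              "hypothesis"),
    ("2", "U ⊆ V",              "hypothesis"),
    ("3", "∀x. x ∈ U → x ∈ V",  "Def-⊆ applied to 2"),
    ("4", "a ∈ U → a ∈ V",      "∀-elimination on 3"),
    ("5", "a ∈ V",              "→-elimination on 1 and 4"),
]

def abstract_to_assertion_level(proof):
    """Collapse the low-level unpacking of one definition into a single step."""
    hypotheses = [s for s in proof if s[2] == "hypothesis"]
    conclusion = proof[-1][1]
    return hypotheses + [("3", conclusion, "by the assertion U ⊆ V from 1, 2")]

for step in abstract_to_assertion_level(nd_proof):
    print(step)  # three steps instead of five
```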

In this paper we show that distributing the theorem proving task to several experts is a promising idea. We describe the team work method, which allows the experts to compete for a while and then to cooperate. In the cooperation phase the best results derived in the competition phase are collected and the less important results are forgotten. We describe some useful experts and explain in detail how they work together. We establish fairness criteria and thereby prove the distributed system to be both complete and correct. We have implemented our system and show by non-trivial examples that drastic speed-ups are possible for a cooperating team of experts compared to the time needed by the best expert in the team.
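
The competition/cooperation cycle can be pictured with a small sketch; the experts, the scoring function, and the "results" as strings are all illustrative assumptions standing in for inference heuristics and derived facts.

```python
import random

def score(result):
    return -len(result)  # e.g. prefer short derived equations

def run_team(experts, problem, rounds=3):
    shared = set()  # best results surviving between rounds
    for _ in range(rounds):
        # Competition phase: every expert works independently.
        outcomes = [expert(problem, set(shared)) for expert in experts]
        # Cooperation phase: collect everything, keep the best, forget the rest.
        pooled = sorted({r for out in outcomes for r in out},
                        key=score, reverse=True)
        shared = set(pooled[: max(1, len(pooled) // 2)])
    return shared

def expert_prefix(problem, known):   # stand-in for one heuristic
    return known | {problem[: random.randint(1, len(problem))]}

def expert_suffix(problem, known):   # stand-in for another heuristic
    return known | {problem[random.randint(0, len(problem) - 1):]}

print(run_team([expert_prefix, expert_suffix], "f(g(x))=g(f(x))"))
```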

Constructing an analogy between a known and already proven theorem (the base case) and another theorem yet to be proven (the target case) often amounts to finding the appropriate representation at which the base and the target are similar. This is a well-known fact in mathematics, and it was corroborated by our empirical study of a mathematical textbook, which showed that a reformulation of the representation of a theorem and its proof is indeed more often than not a necessary prerequisite for an analogical inference. Thus machine-supported reformulation becomes an important component of automated analogy-driven theorem proving too. The reformulation component proposed in this paper is embedded into a proof plan methodology based on methods and meta-methods, where the latter are used to change and appropriately adapt the methods. A theorem and its proof are both represented as a method and then reformulated by the set of meta-methods presented in this paper. Our approach supports analogy-driven theorem proving at various levels of abstraction and in principle makes it independent of the given and often accidental representation of the given theorems. Different methods can represent fully instantiated proofs, subproofs, or general proof methods, and hence our approach also supports these three kinds of analogy, respectively. By attaching appropriate justifications to meta-methods, the analogical inference can often be justified in the sense of Russell. This paper presents a model of analogy-driven proof plan construction and focuses on empirically extracted meta-methods. It classifies and formally describes these meta-methods and shows how to use them for an appropriate reformulation in automated analogy-driven theorem proving.

Following Buchberger's approach to computing a Gröbner basis of a polynomial ideal in polynomial rings, a completion procedure for finitely generated right ideals in Z[H] is given, where H is an ordered monoid presented by a finite, convergent semi-Thue system (Σ, T). Taking a finite set F ⊆ Z[H], we get a (possibly infinite) basis of the right ideal generated by F, such that using this basis we have unique normal forms for all p ∈ Z[H] (in particular, the normal form is 0 in case p is an element of the right ideal generated by F). As the ordering and multiplication on H need not be compatible, reduction has to be defined carefully in order to make it Noetherian. Further, we no longer have p · x →_p 0 for p ∈ Z[H], x ∈ H. Similar to Buchberger's s-polynomials, confluence criteria are developed and a completion procedure is given. In case T = ∅, or (Σ, T) is a convergent, 2-monadic presentation of a group providing inverses of length 1 for the generators, or (Σ, T) is a convergent presentation of a commutative monoid, termination can be shown. So in these cases finitely generated right ideals admit finite Gröbner bases. The connection to the subgroup problem is discussed.

The hallmark of traditional Artificial Intelligence (AI) research is the symbolic representation and processing of knowledge. This is in sharp contrast to many forms of human reasoning, which, to an extraordinary extent, rely on cases and (typical) examples. Although these examples could themselves be encoded into logic, this raises the problem of restricting the corresponding model classes to include only the intended models. There are, however, more compelling reasons to argue for a hybrid representation based on assertions as well as examples. The problems of adequacy, availability of information, compactness of representation, processing complexity, and, last but not least, results from the psychology of human reasoning all point to the same conclusion: Common sense reasoning requires different knowledge sources and hybrid reasoning principles that combine symbolic as well as semantic-based inference. In this paper we address the problem of integrating semantic representations of examples into automated deduction systems. The main contribution is a formal framework for combining sentential with direct representations. The framework consists of a hybrid knowledge base, made up of logical formulae on the one hand and direct representations of examples on the other, and of a hybrid reasoning method based on the resolution calculus. The resulting hybrid resolution calculus is shown to be sound and complete.

This case study examines in detail the theorems and proofs that are shown by analogy in a mathematical textbook on semigroups and automata, which is widely used as an undergraduate textbook in theoretical computer science at German universities (P. Deussen, Halbgruppen und Automaten, Springer 1971). The study shows the important role of restructuring a proof for finding analogous subproofs, and of reformulating a proof for the analogical transformation. It also emphasizes the importance of the relevant assumptions of a known proof, i.e., of those assumptions actually used in the proof. In this document we show the theorems, the proof structure, the subproblems and the proofs of subproblems and their analogues, with the purpose of providing an empirical test set of cases for automated analogy-driven theorem proving. In the studied textbook, theorems and their proofs are given in natural language augmented by the usual set of mathematical symbols. As a first step we encode the theorems in logic and show the actual restructuring. Secondly, we code the proofs in a Natural Deduction calculus such that a formal analysis becomes possible, and we mention the reformulations that are necessary in order to reveal the analogy.

We provide an overview of UNICOM, an inductive theorem prover for equational logic which is based on refined rewriting and completion techniques. The architecture of the system as well as its functionality are described. Moreover, an insight into the most important aspects of the internal proof process is provided. This knowledge about how the central inductive proof component of the system essentially works is crucial for human users who want to solve non-trivial proof tasks with UNICOM and thoroughly analyse potential failures. The presentation is focused on practical aspects of understanding and using UNICOM. A brief but complete description of the command interface, an installation guide, an example session, a detailed extended example illustrating various special features, and a collection of successfully handled examples are also included.

While most approaches to similarity assessment are oblivious of knowledge and goals, there is ample evidence that these elements of problem solving play an important role in similarity judgements. This paper is concerned with an approach for integrating assessment of similarity into a framework of problem solving that embodies central notions of problem solving like goals, knowledge and learning.

To prove difficult theorems in a mathematical field requires substantial knowledge of that field. In this thesis a frame-based knowledge representation formalism including higher-order sorted logic is presented, which supports a conceptual representation and to a large extent guarantees the consistency of the built-up knowledge bases. In order to operationalize this knowledge, for instance in an automated theorem proving system, a class of sound morphisms from higher-order into first-order logic is given; in addition, a sound and complete translation is presented. The translations are bijective and hence compatible with a later proof presentation. In order to prove certain theorems the comprehension axioms are necessary (but difficult to handle in an automated system); such theorems are called truly higher-order. Many apparently higher-order theorems (i.e. theorems that are stated in higher-order syntax), however, are essentially first-order in the sense that they can be proved without the comprehension axioms: for proving these theorems the translation technique presented in this thesis is well suited.

We transform a user-friendly formulation of a problem into a machine-friendly one, exploiting the variability of first-order logic to express facts. The usefulness of tactics to improve the presentation is shown with several examples. In particular, it is shown how tactical and resolution theorem proving can be combined.

There are well-known examples of monoids in the literature which do not admit a finite and canonical presentation by a semi-Thue system over a fixed alphabet, not even over an arbitrary alphabet. We introduce conditional Thue and semi-Thue systems similar to the conditional term rewriting systems defined by Kaplan. Using these conditional semi-Thue systems we give finite and canonical presentations of the examples mentioned above. Furthermore, we show that each finitely generated monoid with decidable word problem is embeddable in a monoid which has a finite canonical conditional presentation.
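
A minimal sketch of conditional string rewriting in this spirit, with Kaplan-style conditions read as "both sides of each condition have the same normal form"; the rule format and the depth bound are illustrative assumptions.

```python
def normalize(word, rules, depth=0):
    """Rewrite word with the first applicable conditional rule, to normal form."""
    assert depth < 1000, "no termination guarantee for arbitrary systems"
    for lhs, rhs, conditions in rules:
        i = word.find(lhs)
        if i != -1 and all(normalize(s, rules, depth + 1) ==
                           normalize(t, rules, depth + 1)
                           for s, t in conditions):
            reduct = word[:i] + rhs + word[i + len(lhs):]
            return normalize(reduct, rules, depth + 1)
    return word  # no rule applies: normal form reached

# Unconditional rules: free cancellation aA -> empty word, Aa -> empty word.
rules = [("aA", "", []), ("Aa", "", [])]
print(normalize("aaAAaA", rules))  # "" (the empty word)

# A conditional rule: b -> c, applicable only if aA and the empty word
# have equal normal forms (trivially true here).
rules_cond = rules + [("b", "c", [("aA", "")])]
print(normalize("bb", rules_cond))  # "cc"
```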

Typical examples, that is, examples that are representative for a particular situation or concept, play an important role in human knowledge representation and reasoning. In real-life situations, more often than not, a typical example is used to describe a situation instead of a lengthy abstract characterization. This well-known observation has been the motivation for various investigations in experimental psychology, which also motivate our formal characterization of typical examples, based on a partial order for their typicality. Reasoning by typical examples is then developed as a special case of analogical reasoning using the semantic information contained in the corresponding concept structures. We derive new inference rules by replacing the explicit information about connections and similarity, which is normally used to formalize analogical inference rules, by information about the relationship to typical examples. Using these inference rules, analogical reasoning proceeds by checking a related typical example; this is a form of reasoning based on semantic information from cases.

This paper concerns a knowledge structure called method, within a computational model for human-oriented deduction. With human-oriented theorem proving cast as an interleaving process of planning and verification, the body of all methods reflects the reasoning repertoire of a reasoning system. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This is achieved by postulating some standard types of knowledge we have identified, such as inference rules, assertions, and proof schemata, together with corresponding knowledge interpreters. Our approach in effect changes the way deductive knowledge is encoded: a new compound declarative knowledge structure, the proof schema, takes the place of complicated procedures for modeling specific proof strategies. This change of paradigm not only leads to representations that are easier to understand, it also enables us to model the even more important activity of formulating meta-methods, that is, operators that adapt existing methods to suit novel situations. In this paper, we first briefly introduce the general framework for describing methods. Then we turn to several types of knowledge with their interpreters. Finally, we briefly illustrate some meta-methods.

We present a framework for the integration of the Knuth-Bendix completion algorithm with narrowing methods, compiled rewrite rules, and a heuristic difference reduction mechanism for paramodulation. The possibility of embedding theory unification algorithms into this framework is outlined. Results are presented and discussed for several examples of equality reasoning problems in the context of an actual implementation of an automated theorem proving system (the Mkrp-system) and a fast C implementation of the completion procedure. The Mkrp-system is based on the clause graph resolution procedure. The thesis shows the indispensability of the constraining effects of completion and rewriting for equality reasoning in general and quantifies the speed-up gained by various enhancements of the basic method. The simplicity of the superposition inference rule allows the construction of an abstract machine for completion, which is presented together with computation times for a concrete implementation.
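
For readers unfamiliar with completion, the following stripped-down sketch shows the basic Knuth-Bendix loop for string rewriting (the thesis works with terms, narrowing and compiled rules, which a sketch cannot do justice to); the length-lexicographic ordering and the example are illustrative assumptions, and the loop need not terminate in general.

```python
def normalize(w, rules):
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in w:
                i = w.index(lhs)
                w = w[:i] + rhs + w[i + len(lhs):]
                changed = True
    return w

def greater(u, v):
    return (len(u), u) > (len(v), v)  # length-lexicographic reduction ordering

def complete(equations):
    rules, pending = [], list(equations)
    while pending:
        s, t = pending.pop()
        s, t = normalize(s, rules), normalize(t, rules)
        if s == t:
            continue  # equation is joinable, nothing to do
        rules.append((s, t) if greater(s, t) else (t, s))
        # Superposition: overlapping left-hand sides yield critical pairs.
        for l1, r1 in rules:
            for l2, r2 in rules:
                for k in range(1, min(len(l1), len(l2))):
                    if l1[-k:] == l2[:k]:
                        pending.append((r1 + l2[k:], l1[:-k] + r2))
    return rules

print(complete([("aa", "a")]))  # [('aa', 'a')]: already confluent
```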

This report presents the main ideas underlying the Ω-Mkrp system, an environment for the development of mathematical proofs. The motivation for the development of this system comes from our extensive experience with traditional first-order theorem provers and aims to overcome some of their shortcomings. After comparing the benefits and drawbacks of existing systems, we propose a system architecture that combines the positive features of different types of theorem-proving systems, most notably the advantages of human-oriented systems based on methods (our version of tactics) and the deductive strength of traditional automated theorem provers. In Ω-Mkrp a user first states a problem to be solved in a typed and sorted higher-order language (called POST) and then applies natural deduction inference rules in order to prove it. He can also insert a mathematical fact from an integrated database into the current partial proof, apply a domain-specific problem-solving method, or call an integrated automated theorem prover to solve a subproblem. The user can also pass control to a planning component that supports and partially automates his long-range planning of a proof. Toward the important goal of user-friendliness, machine-generated proofs are transformed in several steps into much shorter, better-structured proofs that are finally translated into natural language. This work was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (D2, D3).

An important property and also a crucial point of a term rewriting system is its termination. Transformation orderings, developed by Bellegarde and Lescanne and strongly based on work of Bachmair and Dershowitz, represent a general technique for extending orderings. The main characteristics of this method are two rewriting relations, one for transforming terms and the other for ensuring the well-foundedness of the ordering. The central problem of this approach concerns the choice of the two relations such that the termination of a given term rewriting system can be proved. In this communication, we present a heuristic-based algorithm that partially solves this problem. Furthermore, we show how to simulate well-known orderings on strings by transformation orderings.

Unification in an Extensional Lambda Calculus with Ordered Function Sorts and Constant Overloading
(1999)

We develop an order-sorted higher-order calculus suitable for automatic theorem proving applications by extending the extensional simply typed lambda calculus with a higher-order ordered sort concept and constant overloading. Huet's well-known techniques for unifying simply typed lambda terms are generalized to arrive at a complete transformation-based unification algorithm for this sorted calculus. Consideration of an order-sorted logic with functional base sorts and arbitrary term declarations was originally proposed by the second author in a 1991 paper; we give here a corrected calculus which supports constant rather than arbitrary term declarations, as well as a corrected unification algorithm, and prove in this setting results corresponding to those claimed there.

An important research problem is the incorporation of "declarative" knowledge into an automated theorem prover such that it can be utilized in the search for a proof. An interesting proposal in this direction is Alan Bundy's approach of using explicit proof plans that encapsulate the general form of a proof and are instantiated into a particular proof for the case at hand. We give some examples that show how a "declarative" high-level description of a proof can be used to find proofs of apparently "similar" theorems by analogy. This "analogical" information is used to select the appropriate axioms from the database so that the theorem can be proved. It is also used to adjust some options of a resolution theorem prover. In order to get a powerful tool it is necessary to develop an epistemologically appropriate language to describe proofs, for which a large set of examples should be used as a testbed. We present some ideas in this direction.

This report presents a methodology to guide equational reasoning in a goal-directed way. Suggested by rippling methods developed in the field of inductive theorem proving, we use attributes of terms and heuristics to determine bridge lemmas, i.e. lemmas which have to be used during the proof of the theorem. Once we have found such a bridge lemma, we use the techniques of difference unification and rippling to enable its use.

This paper develops a sound and complete transformation-based algorithm for unification in an extensional order-sorted combinatory logic supporting constant overloading and a higher-order sort concept. Appropriate notions of order-sorted weak equality and extensionality, reflecting order-sorted βη-equality in the corresponding lambda calculus given by Johann and Kohlhase, are defined, and the typed combinator-based higher-order unification techniques of Dougherty are modified to accommodate unification with respect to the theory they generate. The algorithm presented here can thus be viewed as a combinatory logic counterpart to that of Johann and Kohlhase, as well as a refinement of that of Dougherty, and provides evidence that combinatory logic is well suited to serve as a framework for incorporating order-sorted higher-order reasoning into deduction systems aiming to capitalize on both the expressiveness of extensional higher-order logic and the efficiency of order-sorted calculi.

We consider the problem of verifying confluence and termination of conditional term rewriting systems (TRSs). For unconditional TRSs the critical pair lemma holds, which enables a finite test for confluence of (finite) terminating systems. And for ensuring termination of unconditional TRSs, a couple of methods for constructing appropriate well-founded term orderings are known. If, however, termination is not guaranteed, then proving confluence is much more difficult. Recently we have obtained some interesting results for unconditional TRSs which provide sufficient criteria for termination plus confluence in terms of restricted termination and confluence properties. In particular, we have shown that any innermost terminating and locally confluent overlay system is complete, i.e. terminating and confluent. Here we generalize our approach to the conditional case and show how to solve the additional complications due to the presence of conditions in the rules. Our main result can be stated as follows: any conditional TRS which is an innermost terminating semantical overlay system such that all (conditional) critical pairs are joinable is complete.

We will answer a question posed in [DJK91] and show that Huet's completion algorithm [Hu81] becomes incomplete, i.e. it may generate a term rewriting system that is not confluent, if it is modified in such a way that the reduction ordering used for completion can be changed during completion, provided the new ordering is compatible with the actual rules. In particular, we will show that this problem may not only arise if the modified completion algorithm does not terminate: even if the algorithm terminates without failure, the generated finite noetherian term rewriting system may be non-confluent. Most existing implementations of the Knuth-Bendix algorithm provide the user with help in choosing a reduction ordering: if an unorientable equation is encountered, then the user has many options, in particular the one to orient the equation manually. The integration of this feature is based on the widespread assumption that, if equations are oriented by hand during completion and the completion process terminates with success, then the generated finite system is a possibly non-terminating but locally confluent system (see e.g. [KZ89]). Our examples will show that this assumption is not true.

Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene already gave a semantic account of partial functions using three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a resolution calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
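
For orientation, the three-valued connectives of strong Kleene logic mentioned above can be written down directly. This sketch covers only the truth-functional propositional part; the paper's contribution, the restricted quantification handled via sorts, is exactly what such a table cannot express.

```python
T, F, U = "t", "f", "u"  # true, false, undefined

def k_not(a):
    return {T: F, F: T, U: U}[a]

def k_and(a, b):
    if F in (a, b):
        return F  # a false conjunct decides the result even beside "undefined"
    return U if U in (a, b) else T

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))  # by De Morgan duality

print(k_and(U, F), k_or(U, T))  # f t
```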

The team work method is a concept for distributing automated theorem provers and thus for setting several experts to work on a given problem. We have implemented this for pure equational logic using the unfailing Knuth-Bendix completion procedure as the basic prover. In this paper we present three classes of experts working in a goal-oriented fashion. In general, goal-oriented experts perform their job "unfairly" and so are often unable to solve a given problem alone. However, as team members in the team work method they perform highly efficiently, even in comparison with such respected provers as Otter 3.0 or REVEAL, as we demonstrate by examples, some of which can only be proved using team work. The reason for these achievements is that the team work method forces the experts to compete for a while and then to cooperate by exchanging their best results. This allows one to collect "good" intermediate results and to forget "useless" ones. Completion-based proof methods are frequently regarded as having the disadvantage of not being goal-oriented. We believe that our approach overcomes this disadvantage to a large extent.

In this paper we are interested in using a first-order theorem prover to prove theorems that are formulated in some higher-order logic. To this end we present translations of higher-order logics into first-order logic with flat sorts and equality, and give a sufficient criterion for the soundness of these translations. In addition, translations are introduced that are sound and complete with respect to L. Henkin's general model semantics. Our higher-order logics are based on a restricted type structure in the sense of A. Church; they have typed function symbols and predicate symbols, but no sorts.

In 1978, Klop demonstrated that a rewrite system constructed by adding the untyped lambda calculus, which has the Church-Rosser property, to a Church-Rosser first-order algebraic rewrite system may not be Church-Rosser. In contrast, Breazu-Tannen recently showed that augmenting any Church-Rosser first-order algebraic rewrite system with the simply-typed lambda calculus results in a Church-Rosser rewrite system. In addition, Breazu-Tannen and Gallier have shown that the second-order polymorphic lambda calculus can be added to such rewrite systems without compromising the Church-Rosser property (for terms which can be provably typed). There are other systems for which a Church-Rosser result would be desirable, among them λ^t+SP+FIX, the simply-typed lambda calculus extended with surjective pairing and fixed points. This paper will show that Klop's untyped counterexample can be lifted to a typed system to demonstrate that λ^t+SP+FIX is not Church-Rosser.

Over the past thirty years there have been significant achievements in the field of automated theorem proving with respect to the reasoning power of the inference engines. Although some effort has also been spent to facilitate more user-friendliness in deduction systems, most of them have failed to benefit from more recent developments in the related fields of artificial intelligence (AI), such as natural language generation and user modeling. In particular, no model is available which accounts both for human deductive activities and for human proof presentation. In this thesis, a reconstructive architecture is suggested which substantially abstracts, reorganizes and finally translates machine-found proofs into natural language. Both the procedures and the intermediate representations of our architecture find their basis in computational models for informal mathematical reasoning and for proof presentation. User modeling is not incorporated into the current theory, although we plan to do so later.

In this article we formally describe a declarative approach for encoding plan operators in proof planning, the so-called methods. The notion of method evolves from the much-studied concept of tactic and was first used by Bundy. While significant deductive power has been achieved with the planning approach towards automated deduction, the procedural character of the tactic part of methods hinders mechanical modification. Although the strength of a proof planning system largely depends on powerful general procedures which solve a large class of problems, mechanical or even automated modification of methods is nevertheless necessary for at least two reasons. Firstly, methods designed for a specific type of problem will never be general enough: for instance, it is very difficult to encode a general method which solves all problems a human mathematician might intuitively consider as a case of homomorphy. Secondly, the cognitive ability of adapting existing methods to suit novel situations is a fundamental part of human mathematical competence, and we believe it is extremely valuable to account computationally for this kind of reasoning. The main part of this article is devoted to a declarative language for encoding methods, composed of a tactic and a specification. The major feature of our approach is that the tactic part of a method is split into a declarative and a procedural part in order to enable a tractable adaptation of methods. The applicability of a method in a planning situation is formulated in the specification, essentially consisting of an object-level formula schema and a meta-level formula of a declarative constraint language. After setting up our general framework, we mainly concentrate on this constraint language. Furthermore, we illustrate how our methods can be used in a Strips-like planning framework. Finally, we briefly illustrate the mechanical modification of declaratively encoded methods by so-called meta-methods.
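
A minimal sketch of a method record with this split, under illustrative assumptions (the names and the string-based schema are not the paper's actual constraint language): the specification states applicability, the declarative tactic part is a schematic proof fragment, and the procedural part is a small interpreter.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Method:
    name: str
    specification: dict    # object-level schema plus meta-level constraints
    proof_schema: list     # declarative tactic part: schematic proof lines
    interpreter: Callable  # procedural tactic part: executes the schema

def schema_interpreter(schema, bindings):
    return [line.format(**bindings) for line in schema]

push_hom = Method(
    name="push-through-homomorphism",
    specification={"goal": "P(h({x} {op} {y}))",
                   "constraint": "h is a homomorphism"},
    proof_schema=["h({x} {op} {y}) = h({x}) {op} h({y})",
                  "reduce goal to P(h({x}) {op} h({y}))"],
    interpreter=schema_interpreter,
)

print(push_hom.interpreter(push_hom.proof_schema,
                           {"x": "a", "y": "b", "op": "*"}))
```

Because the schema is data rather than code, a meta-method can adapt the method by rewriting the schema alone, without touching the interpreter.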

This paper presents a new way to use planning in automated theorem proving by means of distribution. To overcome the problem that subtasks of a proof problem often cannot be detected a priori (which prevents the use of the known planning and distribution techniques), we use a team of experts that work independently on the problem with different heuristics. After a certain amount of time, referees judge their results using the impact of the results on the behaviour of the expert, and a supervisor combines the selected results into a new starting point. This supervisor also selects the experts that can work on the problem in the next round. This selection is a reactive planning task. We outline which information the supervisor can use to fulfill this task and how this information is processed to result in a plan or to revise a plan. We also show that the use of planning for the assignment of experts to the team allows the system to solve many different examples in an acceptable time with the same start configuration and without any consultation of the user. "Plans are always subject to change" (Shin'a'in proverb)

The background of this paper is the area of case-based reasoning. This is a reasoning technique where one tries to use the solution of some problem which has been solved earlier in order to obtain a solution of a given problem. An example of the type of problem where this kind of reasoning occurs very often is the diagnosis of diseases or of faults in technical systems. In abstract terms this reduces to a classification task. A difficulty arises when one has not just one solved problem but very many. These are called "cases" and they are stored in the case base. Then one has to select an appropriate case, which means finding one that is "similar" to the actual problem. The notion of similarity has raised much interest in this context. We will first introduce a mathematical framework and define some basic concepts. Then we will study some abstract phenomena in this area and finally present some methods developed and realized in a system at the University of Kaiserslautern.

The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing the search space. It is therefore promising to treat sorts in higher-order theorem proving as well. In this paper we present a generalization of Huet's Constrained Resolution to an order-sorted type theory ΣT with term declarations. This system builds certain taxonomic axioms into the unification and conducts reasoning with them in a controlled way. We make this notion precise by giving a relativization operator that totally and faithfully encodes ΣT into simple type theory.

In this report we present a case study of employing goal-oriented heuristics when proving equational theorems with the (unfailing) Knuth-Bendix completion procedure. The theorems are taken from the domain of lattice-ordered groups. It will be demonstrated that goal-oriented (heuristic) criteria for selecting the next critical pair can in many cases significantly reduce the search effort and hence increase the performance of the proving system considerably. The heuristic, goal-oriented criteria are on the one hand based on so-called "measures" measuring occurrences and nesting of function symbols, and on the other hand based on matching subterms. We also deal with the property of goal-oriented heuristics to be particularly helpful in certain stages of a proof. This fact can be addressed by using them in a framework for distributed (equational) theorem proving, namely the "teamwork method".
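
The flavour of such a measure-based criterion can be shown in a few lines; the symbol-counting distance below is an illustrative assumption in the spirit of the report, not its exact definition.

```python
from collections import Counter

def symbol_profile(term):
    """Count occurrences of lower-case letters (function symbols and variables)."""
    return Counter(c for c in term if c.isalpha() and c.islower())

def goal_distance(pair, goal):
    have = symbol_profile(pair[0]) + symbol_profile(pair[1])
    want = symbol_profile(goal[0]) + symbol_profile(goal[1])
    # Size of the symmetric difference of the two occurrence multisets.
    return sum((have - want).values()) + sum((want - have).values())

def select_next(critical_pairs, goal):
    return min(critical_pairs, key=lambda pair: goal_distance(pair, goal))

goal = ("f(g(x))", "g(f(x))")
candidates = [("h(h(x))", "x"), ("f(g(x))", "x")]
print(select_next(candidates, goal))  # the second pair matches the goal better
```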

A straightforward formulation of a mathematical problem is mostly not adequate for resolution theorem proving. We present a method to optimize such formulations by exploiting the variability of first-order logic. The optimizing transformation is described as logic morphisms, whose operationalizations are tactics. The different behaviour of a resolution theorem prover on the source and target formulations is demonstrated by several examples. It is shown how tactical and resolution-style theorem proving can be combined.

We show how to build up mathematical knowledge bases using frames. We distinguish three different types of knowledge: axioms, definitions (for introducing concepts like "set" or "group") and theorems (for relating the concepts). The consistency of such knowledge bases cannot be proved in general, but we can restrict the possibilities where inconsistencies may be imported to very few cases, namely to the occurrence of axioms. Definitions and theorems should not lead to any inconsistencies because definitions form conservative extensions and theorems are proved to be consequences.
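
A minimal sketch of such a three-sorted knowledge base, with illustrative names; the point it mirrors is that only axiom frames need a consistency warning, while definition and theorem frames are safe by construction.

```python
frames = [
    {"kind": "axiom", "name": "infinity",
     "content": "there exists an infinite set"},
    {"kind": "definition", "name": "group",
     "content": "a monoid in which every element has an inverse"},
    {"kind": "theorem", "name": "unique-inverse",
     "content": "inverses in a group are unique", "proof": "..."},
]

def admit(frame, knowledge_base):
    if frame["kind"] == "axiom":
        print("warning: axiom", frame["name"], "may import inconsistency")
    elif frame["kind"] == "theorem":
        assert "proof" in frame, "a theorem frame must carry a proof"
    # definitions form conservative extensions: nothing to check
    knowledge_base.append(frame)

kb = []
for f in frames:
    admit(f, kb)
```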