In this paper we deal with the location of hyperplanes in n-dimensional normed spaces. If d is a distance measure, our objective is to find a hyperplane H which minimizes f(H) = sum_{m=1}^{M} w_m d(x_m,H), where w_m ≥ 0 are non-negative weights, x_m in R^n, m=1,...,M, are demand points, and d(x_m,H) = min_{z in H} d(x_m,z) is the distance from x_m to the hyperplane H. In robust statistics and operations research such an optimal hyperplane is called a median hyperplane. We show that for all distance measures d derived from norms, one of the hyperplanes minimizing f(H) is the affine hull of n of the demand points and, moreover, that each median hyperplane is (in a certain sense) a halving one with respect to the given point set.
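Taking the Euclidean norm as one concrete example of a distance derived from a norm, the objective f(H) can be sketched in a few lines (the function names are illustrative, not from the paper):

```python
import numpy as np

def dist_to_hyperplane(x, a, b):
    # Euclidean distance from point x to the hyperplane H = {z : a.z = b}
    return abs(a @ x - b) / np.linalg.norm(a)

def f(points, weights, a, b):
    # weighted sum of distances; a median hyperplane minimizes this over all (a, b)
    return sum(w * dist_to_hyperplane(x, a, b)
               for w, x in zip(weights, points))
```

For two unit-weight demand points at heights 0 and 2, the line y = 1 is halving: both points contribute distance 1 to the sum.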
There are several good reasons to introduce classification schemes for optimization problems including, for instance, the ability to state problems concisely as opposed to verbal, often ambiguous, descriptions, or simple data encoding and information retrieval in bibliographical information systems or software libraries. In some branches like scheduling and queuing theory classification is therefore a widely accepted and appreciated tool. The aim of this paper is to propose a 5-position classification which can be used to cover all location problems. We provide a list of currently available symbols and indicate their usefulness in a - necessarily non-comprehensive - list of classical location problems. The classification scheme has been in use since 1992 and has since proved useful in research, software development, the classroom, and for overview articles.
In order to improve the quality of software systems and to set up a more effective process for their development, many attempts have been made in the field of software engineering. Reuse of existing knowledge is seen as a promising way to solve the outstanding problems in this field. In previous work we have integrated the design pattern concept with the formal design language SDL, resulting in a certain kind of pattern formalization. For the domain of communication systems we have also developed a pool of SDL patterns with an accompanying process model for pattern application. In this paper we present an extension that combines the SDL pattern approach with the experience base concept. This extension supports a systematic method for empirical evaluation and continuous improvement of the SDL pattern approach; the experience base serves as the repository necessary for effective reuse of the captured knowledge. A comprehensive usage scenario is described which shows the advantages of the combined approach. To demonstrate its feasibility, first results of a research case study are given.
The goal of this work is to provide a method for modelling a simulator for building-specific tasks. The modelling must be designed so that both simple and very complex simulators for particular buildings can be devised. From the resulting model it is then possible to generate a program automatically with the help of generators. This allows a designer without special knowledge in the field of simulation to develop a building simulator. For the modelling, a domain-specific catalogue of design patterns was created, whose individual patterns can be used directly for modelling and code generation.
The purpose of this exposé is to explain the generic design of a customized communication subsystem, addressing both functional and non-functional aspects. The starting point is a real-time requirement from the application area of building automation. We show how this application requirement and some background information about the application area lead to a system architecture, a communication service, a protocol architecture, and to the selection, adaptation, and composition of protocol functionalities. The reader will probably be surprised how much effort is necessary to implement this innocuous-looking application requirement. Formal description techniques (FDTs) are used in all design phases.
Today's communication systems are typically structured into several layers, where each layer realizes a fixed set of protocol functionalities. These functionalities have been carefully chosen such that a wide range of applications can be supported and protocols work in a general environment of networks. However, due to evolving network technologies as well as the increased and varying demands of modern applications, general-purpose protocol stacks are not always adequate. To improve this situation, new flexible communication architectures have been developed which enable the configuration of customized communication subsystems by composing a proper set of reusable building blocks. In particular, several approaches to the automatic configuration of communication subsystems have been reported in the literature. This report gives an overview of these approaches (F-CCS, Da CaPo, x-Kernel, and ADAPTIVE) and, in particular, defines a framework which identifies common architectural issues and configuration tasks.
A new approach for modelling time that does not rely on the concept of a clock is proposed. In order to establish a notion of time, system behaviour is represented as a joint progression of multiple threads of control, which satisfies a certain set of axioms. We show that the clock-independent time model is related to the well-known concept of a global clock and argue that both approaches establish the same notion of time.
Due to the large variety of modern applications and evolving network technologies, a small number of general-purpose protocol stacks will no longer be sufficient. Rather, customization of communication protocols will play a major role. In this paper, we present an approach that has the potential to substantially reduce the effort for designing customized protocols. Our approach is based on the concept of design patterns, which is well-established in object-oriented software development. We specialize this concept to communication protocols, and - in addition - use formal description techniques (FDTs) to specify protocol design patterns as well as rules for their instantiation and composition. The FDTs of our choice are SDL-92 and MSCs, which offer suitable language support. We propose an SDL pattern description template and relate pattern-based configuring of communication protocols to existing SDL methodologies. Particular SDL patterns and the configuring of a customized resource reservation protocol are presented in detail.
A non-trivial real-time requirement obeying a pattern that can be found in various instantiations in the application domain of building automation, and which is therefore called generic, is investigated in detail. The starting point is a description of a real-time problem in natural language augmented by a diagram, in a style often found in requirements documents. Step by step, this description is made more precise and finally transformed into a surprisingly concise formal specification, written in real-time temporal logic with customized operators. We reason why this formal specification precisely captures the original description, as far as this is feasible given the lack of precision of natural language.
A Tailored Real Time Temporal Logic for Specifying Requirements of Building Automation Systems
(1999)
A tailored real time temporal logic for specifying requirements of building automation systems is introduced and analyzed. The logic features several new real time operators, which are chosen with regard to the application area. The new operators improve the conciseness and readability of requirements as compared to a general-purpose real time temporal logic. In addition, some of the operators also enhance the expressiveness of the logic. A number of properties of the new operators are presented and proven.
A generic approach to the formal specification of system requirements is presented. It is based on a pool of requirement patterns, which are related to design patterns well-known in object-oriented software development. The application of such patterns enhances the reusability and genericity as well as the intelligibility of the formal requirement specification. The approach is instantiated by a tailored real-time temporal logic and by selecting building automation systems as application domain. With respect to this domain, the pattern discovery and reuse tasks are explained and illustrated, and a set of typical requirement patterns is presented. Finally, the results of a case study where the approach has been applied are summarized.
We consider wavelet estimation of the time-dependent (evolutionary) power spectrum of a locally stationary time series. Allowing for departures from stationarity proves useful for modelling, e.g., transient phenomena, quasi-oscillating behaviour or spectrum modulation. In our work wavelets are used to provide an adaptive local smoothing of a short-time periodogram in the time-frequency plane. For this, in contrast to classical nonparametric (linear) approaches, we use nonlinear thresholding of the empirical wavelet coefficients of the evolutionary spectrum. We show how these techniques allow for both adaptively reconstructing the local structure in the time-frequency plane and for denoising the resulting estimates. To this end a threshold choice is derived which is motivated by minimax properties w.r.t. the integrated mean squared error. Our approach is based on a 2-d orthogonal wavelet transform modified by using a cardinal Lagrange interpolation function on the finest scale. As an example, we apply our procedure to a time-varying spectrum motivated from mobile radio propagation.
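The nonlinear thresholding step can be illustrated by the standard soft-threshold rule; this is a generic sketch, and the paper's minimax-motivated threshold choice is more refined than the classical universal threshold shown here:

```python
import numpy as np

def soft_threshold(coeffs, t):
    # shrink empirical wavelet coefficients toward zero;
    # coefficients below the threshold t (treated as noise) are set to zero
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def universal_threshold(sigma, n):
    # classical choice t = sigma * sqrt(2 log n) for n noisy coefficients
    return sigma * np.sqrt(2.0 * np.log(n))
```

Large coefficients (local structure) survive slightly shrunken, while small ones (noise) vanish, which is what makes the smoothing adaptive.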
Today, the worlds and terminologies of mechanical engineering and software engineering coexist, but they do not always work together seamlessly. Both worlds have developed their own separate formal vocabulary for expressing their concepts as well as for capturing and communicating their respective domain knowledge. But these two vocabularies are not unified, interwoven, or at least interconnected in a reasonable manner. Thus, the subject of this paper is a comparison of the vocabularies of the two fields, namely feature technology from the area of mechanical engineering and software design patterns from the software engineering domain. To this end, definitions, history, and examples are presented for features as well as for design patterns. After this, an analysis is carried out to identify analogies and differences. The main intention of this paper is to inform both worlds - mechanical and software engineering - about the other side's terminology and to start a discussion about potential mutual benefits and possibilities to bridge the gap between these two worlds, e.g. to improve the manageability of CAx product development processes.
We consider nonparametric estimation of the coefficients a_i(.), i=1,...,p, of a time-varying autoregressive process. Choosing an orthonormal wavelet basis representation of the functions a_i(.), the empirical wavelet coefficients are derived from the time series data as the solution of a least squares minimization problem. In order to allow the a_i(.) to be functions of inhomogeneous regularity, we apply nonlinear thresholding to the empirical coefficients and obtain locally smoothed estimates of the a_i(.). We show that the resulting estimators attain the usual minimax L_2-rates up to a logarithmic factor, simultaneously in a large scale of Besov classes. The finite-sample behaviour of our procedure is demonstrated by application to two typical simulated examples.
Several activities around the world aim at integrating object-oriented data models with relational ones in order to improve database management systems. As a first result of these activities, object-relational database management systems (ORDBMS) are already commercially available and, simultaneously, are the subject of several research projects. This (position) paper reports on our activities in exploiting object-relational database technology for establishing repository manager functionality supporting software engineering (SE) processes. We argue that some of the key features of ORDBMS can be exploited directly to fulfill many of the needs of SE processes. Thus, we think, ORDBMS are much better suited to support SE applications than any others. Nevertheless, additional functionality, e.g., adequate version management, is required in order to gain a completely satisfying SE repository. In order to remain flexible, we have developed a generative approach for providing this additional functionality. It remains to be seen whether this approach, in turn, can effectively exploit ORDBMS features. This paper, therefore, wants to show that ORDBMS can substantially contribute to both establishing and running SE repositories.
The background of this paper is the area of case-based reasoning. This is a reasoning technique where one tries to use the solution of some problem which has been solved earlier in order to obtain a solution to a given problem. Examples of problem types where this kind of reasoning occurs very often are the diagnosis of diseases or of faults in technical systems; in abstract terms, this reduces to a classification task. A difficulty arises when one has not just one solved problem but very many. These are called "cases" and they are stored in the case base. One then has to select an appropriate case, which means finding one which is "similar" to the actual problem. The notion of similarity has raised much interest in this context. We first introduce a mathematical framework and define some basic concepts. Then we study some abstract phenomena in this area and finally present some methods developed and realized in a system at the University of Kaiserslautern.
The development of software products has become a highly cooperative and distributed activity involving working groups at geographically distinct places. These groups show an increasing mobility and a very flexible organizational structure; process methodology and technology have to take such evolutions into account. A possible direction for the emergence of new process technology and methodology is to benefit from recent advances in multiagent systems engineering: innovative methodologies for adaptable and autonomous architectures, which exhibit interesting features to support distributed software processes.
Coordinating distributed software development projects becomes more difficult as software becomes more complex, team sizes and organisational overheads increase, and software components are sourced from disparate places. We describe the development of a range of software tools to support coordination of such projects. Techniques we use include asynchronous and semi-synchronous editing, software process modelling and enactment, developer-specified coordination agents, and component-based tool integration.
SmallSync, an internet event synchronizer, is intended to provide a monitoring and visualization methodology for permitting simultaneous analysis and control of multiple remote processes on the web. The current SmallSync includes: (1) a mechanism to multicast web window-based commands, message passing events and process execution events among processes; (2) an event synchronizer to allow concurrent execution of some functions on multiple machines; (3) a means to report when these events cause errors in the processes; and (4) ad hoc visualization of process states using existing visualizers.
Geographically distributed software development holds much promise for increasing market penetration and speeding up development cycles. However, it also comes with a set of new challenges for those developing the software, brought about by the distance among colleagues. This paper outlines a new research project underway to explore those issues and their implications for organizing geographically distributed software development efforts. We also describe the approaches we are taking towards providing solutions - in the form of processes and technology - to address the challenges of working remotely.
We present a new software architecture in which all concepts necessary to achieve fault tolerance can be added to an application automatically without any source code changes. As a case study, we consider the problem of providing a reliable service despite node failures by executing a group of replicated servers. Replica creation and management as well as failure detection and recovery are performed automatically by a separate fault tolerance layer (ft-layer) which is inserted between the server application and the operating system kernel. The layer is invisible to the application since it provides the same functional interface as the operating system kernel, thus making the fault tolerance property of the service completely transparent to the application. A major advantage of our architecture is that the layer encapsulates both fault tolerance mechanisms and policies. This allows for maximum flexibility in the choice of appropriate methods for fault tolerance without any changes in the application code.
PANDA is a run-time package based on a very small operating system kernel which supports distributed applications written in C++. It provides powerful abstractions such as very efficient user-level threads, a uniform global address space, object and thread mobility, garbage collection, and persistent objects. The paper discusses the design rationales underlying the PANDA system. The fundamental features of PANDA are surveyed, and their implementation in the current prototype environment is outlined.
Requirements engineering (RE) is a necessary part of the software development process, as it helps customers and designers identify necessary system requirements. If these stakeholders are separated by distance, we argue that a distributed groupware environment supporting a cooperative requirements engineering process must be supplied that allows them to negotiate software requirements. Such a groupware environment must support aspects of joint work relevant to requirements negotiation: synchronous and asynchronous collaboration, telepresence, and teledata. It should also add explicit support for a structured RE process, which includes the team's ability to discuss multiple perspectives during requirements acquisition and traceability. We chose the TeamWave software platform as an environment that supplied the basic collaboration capabilities, and tailored it to fit the specific needs of RE.
Accelerating the maturation process within the software engineering discipline may result in boosts of development productivity. One way to enable this acceleration is to develop tools and processes to mimic evolution of traditional engineering disciplines. Principles established in traditional engineering disciplines represent high-level guidance to constructing these tools and processes. This paper discusses two principles found in the traditional engineering disciplines and how these principles can apply to mature the software engineering discipline. The discussion is concretized through description of the Collaborative Management Environment, a software system under collaborative development among several national laboratories.
Magnetic anisotropies of MBE-grown fcc Co(110)-films on Cu(110) single crystal substrates have been determined by using Brillouin light scattering (BLS) and have been correlated with the structural properties determined by low energy electron diffraction (LEED) and scanning tunneling microscopy (STM). Three regimes of film growth and associated anisotropy behavior are identified: coherent growth in the Co film thickness regime of up to 13 Å, in-plane anisotropic strain relaxation between 13 Å and about 50 Å, and in-plane isotropic strain relaxation above 50 Å. The structural origin of the transition between anisotropic and isotropic strain relaxation was studied using STM. In the regime of anisotropic strain relaxation long Co stripes with a preferential [110]-orientation are observed, which in the isotropic strain relaxation regime are interrupted in the perpendicular in-plane direction to form isotropic islands. In the Co film thickness regime below 50 Å an unexpected suppression of the magnetocrystalline anisotropy contribution is observed. A model calculation based on a crystal field formalism and discussed within the context of band theory, which explicitly takes tetragonal misfit strains into account, reproduces the experimentally observed anomalies despite the fact that the thick Co films are quite rough.
An a posteriori stopping rule connected with monitoring the norm of the second residual is introduced for Brakhage's implicit nonstationary iteration method, applied to ill-posed problems involving linear operators with closed range. It is also shown that for some classes of equations with such operators, the algorithm consisting of a combination of Brakhage's method with a new discretization scheme is order optimal in the sense of Information Complexity.
We show that the intersection local times \(\mu_p\) on the intersection of \(p\) independent planar Brownian paths have an average density of order three with respect to the gauge function \(r^2\pi\cdot (log(1/r)/\pi)^p\), more precisely, almost surely, \[ \lim\limits_{\varepsilon\downarrow 0} \frac{1}{log |log\ \varepsilon|} \int_\varepsilon^{1/e} \frac{\mu_p(B(x,r))}{r^2\pi\cdot (log(1/r)/\pi)^p} \frac{dr}{r\ log (1/r)} = 2^p \mbox{ at $\mu_p$-almost every $x$.} \] We also show that the lacunarity distributions of \(\mu_p\), at \(\mu_p\)-almost every point, is given as the distribution of the product of \(p\) independent gamma(2)-distributed random variables. The main tools of the proof are a Palm distribution associated with the intersection local time and an approximation theorem of Le Gall.
It is proved that if a finite non-trivial quasi-order is not a linear order then there exist continuum many clones which consist of functions preserving the quasi-order and contain all unary functions with this property. It is shown that, for a linear order on a three-element set, there are only 7 such clones.
In this paper we show that for each prime p ≥ 7 there exists a translation plane of order p^2 of Mason-Ostrom type. These planes occur as 6-dimensional ovoids which are projections of the 8-dimensional binary ovoids of Conway, Kleidman and Wilson. In order to verify the existence of such projections we prove certain properties of two particular quadratic forms using classical methods from number theory.
Tangent measure distributions are a natural tool to describe the local geometry of arbitrary measures of any dimension. We show that for every measure on a Euclidean space and every s, at almost every point, all s-dimensional tangent measure distributions define statistically self-similar random measures. Consequently, the local geometry of general measures is not different from the local geometry of self-similar sets. We illustrate the strength of this result by showing how it can be used to improve recently proved relations between ordinary and average densities.
We present an entropy concept measuring quantum localization in dynamical systems based on time averaged probability densities. The suggested entropy concept is a generalization of a recently introduced [PRL 75, 326 (1995)] phase-space entropy to any representation chosen according to the system and the physical question under consideration. In this paper we inspect the main characteristics of the entropy and the relation to other measures of localization. In particular the classical correspondence is discussed and the statistical properties are evaluated within the framework of random vector theory. In this way we show that the suggested entropy is a suitable method to detect quantum localization phenomena in dynamical systems.
The Filter-Diagonalization Method is applied to time-periodic Hamiltonians and used to find selectively the regular and chaotic quasienergies of a driven 2D rotor. The use of N cross-correlation probability amplitudes enables a selective calculation of the quasienergies from short-time propagation up to the time T(N). Compared to the propagation time T(1) which is required for resolving the quasienergy spectrum with the same accuracy from auto-correlation calculations, the cross-correlation time T(N) is shorter by the factor N, that is, T(1) = N T(N).
The global dynamical properties of a quantum system can be conveniently visualized in phase space by means of a quantum phase-space entropy, in analogy to a Poincaré section in classical dynamics for two-dimensional time-independent systems. Numerical results for the Pullen-Edmonds system demonstrate the properties of the method for systems with mixed chaotic and regular dynamics.
A novel method is presented which allows a fast computation of complex energy resonance states in Stark systems, i.e. systems in a homogeneous field. The technique is based on the truncation of a shift-operator in momentum space. Numerical results for space periodic and non-periodic systems illustrate the extreme simplicity of the method.
Quantum Chaos
(1999)
The study of dynamical quantum systems, which are classically chaotic, and the search for quantum manifestations of classical chaos, require large scale numerical computations. Special numerical techniques developed and applied in such studies are discussed: The numerical solution of the time-dependent Schrödinger equation, the construction of quantum phase space densities, quantum dynamics in phase space, the use of phase space entropies for characterizing localization phenomena, etc. As an illustration, the dynamics of a driven one-dimensional anharmonic oscillator is studied, both classically and quantum mechanically. In addition, spectral properties and chaotic tunneling are addressed.
We report on measurements of the two-dimensional intensity distribution of linear and non-linear spin wave excitations in a LuBiFeO film. The spin wave intensity was detected with a high-resolution Brillouin light scattering spectroscopy setup. The observed snake-like structure of the spin wave intensity distribution is understood as a mode beating between modes with different lateral spin wave intensity distributions. The theoretical treatment of the linear regime is performed analytically, whereas the propagation of non-linear spin waves is simulated by a numerical solution of a non-linear Schrödinger equation with suitable boundary conditions.
The paper studies metastable states of a Bloch electron in the presence of external ac and dc fields. Provided that a resonance condition between the period of the driving field and the Bloch period is fulfilled, the complex quasienergies are numerically calculated for two qualitatively different regimes (quasiregular and chaotic) of the system dynamics. For the chaotic regime an effect of quantum stabilization, which suppresses the classical decay mechanism, is found. This effect is demonstrated to be a kind of quantum interference phenomenon sensitive to the resonance condition.
Two possible substitutes for the Fourier transform in geopotential determination are the windowed Fourier transform (WFT) and the wavelet transform (WT). In this paper we introduce harmonic WFT and WT and show how they can be used to give information about the geopotential simultaneously in the space domain and the frequency (angular momentum) domain. The counterparts of the inverse Fourier transform are derived, which allow us to reconstruct the geopotential from its WFT and WT, respectively. Moreover, we derive a necessary and sufficient condition that an otherwise arbitrary function of space and frequency has to satisfy to be the WFT or WT of a potential. Finally, least-squares approximation and minimum-norm (i.e. least-energy) representation, which will play a particular role in geodetic applications of both WFT and WT, are discussed in more detail.
This paper deals with the characterization of microscopically heterogeneous, but macroscopically homogeneous spatial structures. A new method is presented which is strictly based on integral-geometric formulae such as Crofton's intersection formulae and Hadwiger's recursive definition of the Euler number. The corresponding algorithms have clear advantages over other techniques. As an example of application we consider the analysis of spatial digital images produced by means of Computer Assisted Tomography.
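For intuition, the Euler number of a 2-d binary image can be computed from the cell complex of its foreground pixels via the elementary formula chi = V - E + F. This is only the textbook definition, not the Crofton-based method of the paper:

```python
def euler_number(img):
    # chi = V - E + F for the union of closed unit pixels of a binary image;
    # V, E collect the distinct corner points and unit edges of filled pixels
    V, E, F = set(), set(), 0
    for i, row in enumerate(img):
        for j, filled in enumerate(row):
            if filled:
                F += 1
                V.update([(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)])
                E.update([((i, j), (i, j + 1)), ((i + 1, j), (i + 1, j + 1)),
                          ((i, j), (i + 1, j)), ((i, j + 1), (i + 1, j + 1))])
    return len(V) - len(E) + F
```

A filled square gives chi = 1, while a square ring (one hole) gives chi = 0.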
For some decades radiation therapy has proved successful in cancer treatment. The major task of clinical radiation treatment planning is to realise, on the one hand, a high-level dose of radiation in the cancer tissue in order to obtain maximum tumour control. On the other hand it is absolutely necessary to keep the unavoidable radiation in the tissue outside the tumour, particularly in organs at risk, as low as possible. No doubt, these two objectives of treatment planning (a high-level dose in the tumour, low radiation outside it) are of a basically contradictory nature. Therefore, it is no surprise that inverse mathematical models with dose distribution bounds tend to be infeasible in most cases. Thus, there is a need for approximations compromising between overdosing the organs at risk and underdosing the target volume. Differing from the currently used time-consuming iterative approach, which measures deviation from an ideal (non-achievable) treatment plan by recursively applying trial-and-error weights to the organs of interest, we go a new way, avoiding a priori weight choices, and consider the treatment planning problem as a multiple objective linear programming problem: with each organ of interest, target tissue as well as organs at risk, we associate an objective function measuring the maximal deviation from the prescribed doses. We build up a database of relatively few efficient solutions representing and approximating the variety of Pareto solutions of the multiple objective linear programming problem. This database can easily be scanned by physicians looking for an adequate treatment plan with the aid of an appropriate online tool.
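The database of efficient solutions rests on the notion of Pareto dominance between candidate plans. A minimal sketch (with made-up deviation pairs, not clinical data or the paper's LP formulation) filters a candidate set down to its non-dominated plans:

```python
def pareto_efficient(plans):
    # plans: list of (target_underdose, risk_overdose) deviation pairs;
    # smaller is better in both objectives. A plan is efficient if no other
    # plan is at least as good in both objectives and strictly better in one.
    def dominated(p):
        return any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in plans)
    return [p for p in plans if not dominated(p)]
```

A physician scanning the database trades off the two columns directly, instead of tuning a priori weights.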
A general approach to the construction of discrete equilibrium distributions is presented. Such distribution functions can be used to set up Kinetic Schemes as well as Lattice Boltzmann methods. The general principles are also applied to the construction of Chapman-Enskog distributions which are used in Kinetic Schemes for the compressible Navier-Stokes equations.
The relation between the Lattice Boltzmann Method, which has recently become popular, and the Kinetic Schemes, which are routinely used in Computational Fluid Dynamics, is explored. A new discrete velocity model for the numerical solution of the Navier-Stokes equations for incompressible fluid flow is presented by combining both approaches. The new scheme can be interpreted as a pseudo-compressibility method and, for a particular choice of parameters, this interpretation carries over to the Lattice Boltzmann Method.
A class of regularization methods using unbounded regularizing operators is considered for obtaining stable approximate solutions of ill-posed operator equations. With an a posteriori as well as an a priori parameter choice strategy, it is shown that the method yields the optimal order. Error estimates have also been obtained under stronger assumptions on the generalized solution. The results of the paper unify and simplify many of the results available in the literature. For example, the optimal results of the paper include, as particular cases for Tikhonov regularization, the main result of Mair (1994) with an a priori parameter choice and a result of Nair (1999) with an a posteriori parameter choice. Thus the observations of Mair (1994) on Tikhonov regularization of ill-posed problems involving finitely and infinitely smoothing operators are applicable to various other regularization procedures as well. Subsequent results on error estimates include, as special cases, an optimal result of Vainikko (1987) and also recent results of Tautenhahn (1996) in the setting of Hilbert scales.
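As a purely illustrative numerical sketch, here is ordinary Tikhonov regularization, the best-known member of the class of methods discussed; the toy operator, noise level, and parameter grid are our own choices, not taken from the paper:

```python
# Minimal sketch of Tikhonov regularization for a discretized ill-posed
# problem A x = y with noisy data y_delta:
#   x_alpha = (A^T A + alpha I)^{-1} A^T y_delta.
# The regularization parameter alpha trades data fit against stability.
import numpy as np

rng = np.random.default_rng(0)
n = 40
t = np.linspace(0.0, 1.0, n)
# A mildly ill-conditioned "operator": a discretized smoothing kernel
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(2.0 * np.pi * t)
y_delta = A @ x_true + 1e-4 * rng.standard_normal(n)

def tikhonov(A, y, alpha):
    """Regularized solution x_alpha for the linear problem A x = y."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(m), A.T @ y)

# Reconstruction error for a few (arbitrary) parameter values; an a
# priori or a posteriori rule would pick alpha from the noise level.
errors = {alpha: np.linalg.norm(tikhonov(A, y_delta, alpha) - x_true)
          for alpha in (1e-12, 1e-8, 1e-4)}
```

Too small an alpha amplifies the data noise, too large an alpha oversmooths; the parameter choice strategies analysed in the paper are precisely rules for resolving this trade-off with optimal order.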
A multiscale method is introduced using spherical (vector) wavelets for the computation of the earth's magnetic field within source regions of ionospheric and magnetospheric currents. The considerations are essentially based on two geomathematical keystones, namely (i) the Mie representation of solenoidal vector fields in terms of toroidal and poloidal parts and (ii) the Helmholtz decomposition of spherical (tangential) vector fields. Vector wavelets are shown to provide adequate tools for multiscale geomagnetic modelling in the form of a multiresolution analysis, thereby completely circumventing the numerical obstacles caused by vector spherical harmonics. The applicability and efficiency of the multiresolution technique are tested with real satellite data.
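The two keystones admit compact standard statements; the notation below is ours and not necessarily that of the paper:

```latex
% (i) Mie representation: every sufficiently smooth solenoidal field b
%     on a spherical shell splits into a toroidal and a poloidal part,
%     generated by two scalar fields Q and P:
\nabla \cdot b = 0
\quad\Longrightarrow\quad
b \;=\; \underbrace{\nabla \times (x\,Q)}_{\text{toroidal}}
   \;+\; \underbrace{\nabla \times \nabla \times (x\,P)}_{\text{poloidal}} .

% (ii) Helmholtz decomposition of a spherical vector field f at the
%      point \xi on the unit sphere, with surface gradient \nabla^{*}
%      and surface curl gradient L^{*} = \xi \times \nabla^{*}:
f(\xi) \;=\; F_{1}(\xi)\,\xi \;+\; \nabla^{*} F_{2}(\xi) \;+\; L^{*} F_{3}(\xi) .
```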
We report numerical results on the switching properties of Stoner-like magnetic particles subjected to short magnetic field pulses. We discuss the switching properties as a function of the external field pulse strength and direction, the pulse length, and the pulse shape. For field pulses long compared to the ferromagnetic resonance precession time the switching behavior is governed by the magnetic damping term, whereas in the limit of short field pulses the switching properties are dominated by the details of the precession of the magnetic moment. In the latter case, by choosing the right field pulse parameters, the magnetic damping term is of minor importance and ultrafast switching can be achieved. Switching can then be obtained in an enlarged angular range of the applied field direction compared to the case of long pulses.
An unusual interlayer coupling, recently discovered in layered magnetic systems, is analysed from the experimental and theoretical points of view. This coupling favours the 90° orientation of the magnetizations of the adjacent magnetic films. It can be phenomenologically described by a term in the energy expression which is biquadratic with respect to the magnetizations of the two films. The main experimental findings, as well as the theoretical models explaining the phenomenon, are discussed.
The static and spin wave properties of regular square lattices of magnetic dots of 0.5-2 microm dot diameter and 1-4 microm periodicity patterned in permalloy films have been investigated by Brillouin light scattering. The samples have been structured using x-ray lithography and ion beam etching. The Brillouin light scattering spectra reveal both surface and bulk spin wave modes. The spin wave frequencies can be well described taking into account the demagnetization factor of each single dot. For the samples with the smallest dot separation of 0.1 microm a fourfold in-plane magnetic anisotropy with the easy axis directed along the pattern diagonal is observed, indicating anisotropic coupling between the dots.
A computer control for a Sandercock-type multipath tandem Fabry-Perot interferometer is described, which offers many advantages over the conventionally used analog control: The range of stability is increased due to active control of the laser light intensity and the mirror dither amplitude. The alignment is fully automated, enabling the start of a measurement within a minute after the start of alignment, including optionally finding the optimum focus on the sample. The software control enables a programmable series of measurements with control of, e.g., the position and rotation of the sample, the angle of light incidence, the sample temperature, or the strength and direction of an applied magnetic field. Built-in fitting routines allow for a precise determination of the frequency positions of excitation peaks, combined with increased frequency accuracy due to a correction of a residual nonlinearity of the mirror stage drive.
An experimental study of spin wave quantization in arrays of micron size magnetic Ni80Fe20 islands (dots and wires) by means of Brillouin light scattering spectroscopy is reported. Dipolar-dominated spin wave modes laterally quantized in a single island with quantized wavevector values determined by the size of the island are studied. In the case of wires the frequencies of the modes and the transferred wavevector interval, where each mode is observed, are calculated. The results of the calculations are in good agreement with the experimental data. In the case of circular dots the frequencies of the lowest observed modes decrease with increasing distance between the dots, thus indicating an essential dynamic magnetic dipole interaction between the dots with small interdot distances.
In this paper we present a renormalizability proof for spontaneously broken SU(2) gauge theory. It is based on Flow Equations, i.e. on the Wilson renormalization group adapted to perturbation theory. The power counting part of the proof, which is conceptually and technically simple, follows the same lines as that for any other renormalizable theory. The main difficulty stems from the fact that the regularization violates gauge invariance. We prove that there exists a class of renormalization conditions such that the renormalized Green functions satisfy the Slavnov-Taylor identities of SU(2) Yang-Mills theory on which the gauge invariance of the renormalized theory is based.
A new method for calculating Stark resonances is presented and applied for illustration to the simple case of a one-particle, one-dimensional model Hamiltonian. The method is applicable for weak and strong dc fields. The only inputs needed, also for the case of many particles in multi-dimensional space, are either the short-time evolution matrix elements or the eigenvalues and Fourier components of the eigenfunctions of the field-free Hamiltonian.
Hexagonal BN films have been deposited by rf-magnetron sputtering with simultaneous ion plating. The elastic properties of the films grown on silicon substrates under identical coating conditions have been determined by Brillouin light scattering from thermally excited surface phonons. Four of the five independent elastic constants of the deposited material are found to be c11 = 65 GPa, c13 = 7 GPa, c33 = 92 GPa and c44 = 53 GPa, exhibiting an elastic anisotropy c11/c33 of 0.7. The Young's modulus determined with load indentation is distinctly larger than the corresponding value taken from Brillouin light scattering. This discrepancy is attributed to the specific morphology of the material with nanocrystallites embedded in an amorphous matrix.
We report on the observation of spin wave quantization in square arrays of micron size circular magnetic Ni80Fe20 dots by means of Brillouin light scattering spectroscopy. For a large wavevector interval several discrete, dispersionless modes with a frequency splitting of up to 2.5 GHz were observed. The modes are identified as magnetostatic surface spin waves laterally quantized due to in-plane confinement in each single dot. The frequencies of the lowest observed modes decrease with increasing distance between the dots, thus indicating an essential dynamic magnetic dipole interaction between the dots with small interdot distances.
Epitaxial growth of metastable Pd(001) on bcc-Fe(001) at high deposition temperatures up to a critical thickness of 6 monolayers is reported, the critical thickness depending dramatically on the deposition temperature. For larger thicknesses the Pd film undergoes a roughening transition with strain relaxation by forming a polycrystalline top layer. These results allow a correlation to be made between previously reported unusual magnetic properties of Fe/Pd double layers and the crystallographic structure of the Pd overlayer.
It is shown that recently constructed PST Lagrangians for chiral supergravities follow directly from the earlier Kavalov-Mkrtchyan Lagrangians by an Ansatz for the ' tensor, expressing it in terms of the PST scalar. The susy algebra, which earlier included ff-symmetry in the commutator of supersymmetry transformations, is now shown to include both PST symmetries, which arise from the single ff-symmetry term. The Lagrangian for the 5-brane is not described by this correspondence and can probably be obtained from more general Lagrangians possessing ff-symmetry.
An overview of the current status of the study of spin wave excitations in arrays of magnetic dots and wires is given. We describe both the status of theory and recent inelastic light scattering experiments addressing the three most important issues: the modification of magnetic properties by patterning due to shape anisotropies, anisotropic coupling between magnetic islands, and the quantization of spin waves due to the in-plane confinement of spin waves in islands.
We investigate the temperature dependence of the magnetization reversal process and of spin waves in epitaxially grown (001)-oriented [Fem/Aun]30 multilayers (m = 1, 2; n = 1-6). Both polar magneto-optic Kerr effect and Brillouin light scattering measurements reveal that all investigated multilayers, apart from the [Fe2/Au1]30 sample, are magnetized perpendicular to the film plane. The out-of-plane anisotropy constants are obtained. At high temperature, the magnetization curves are well described by an alternating stripe domain structure with freely mobile domain walls, and at low temperature by a thermal activation model for the domain wall motion.
An experimental study of spin wave quantization in arrays of micron size magnetic Ni80Fe20 wires by means of Brillouin light scattering spectroscopy is reported. Dipolar-dominated Damon-Eshbach spin wave modes laterally quantized in a single wire with quantized wavevector values determined by the width of the wire are studied. The frequency splitting between quantized modes, which decreases with increasing mode number, depends on the wire sizes and is up to 1.5 GHz. The transferred wavevector interval, where each mode is observed, is calculated using a light scattering theory for confined geometries. The frequencies of the modes are calculated, taking into account finite size effects. The results of the calculations are in good agreement with the experimental data.
Collisions of Spin Wave Envelope Solitons and Self-Focused Spin Wave Packets in Magnetic Films
(1999)
Head-on collisions between two-dimensional self-focused spin wave packets and between quasi-one-dimensional spin wave envelope solitons have been directly observed for the first time in yttrium-iron garnet (YIG) films by means of a space- and time-resolved Brillouin light scattering technique. We show that quasi-one-dimensional envelope solitons formed in narrow film strips ("waveguides") retain their shapes after collision, while the two-dimensional self-focused spin wave packets formed in wide YIG films are destroyed in collision.
High frequency switching of single domain, uniaxial magnetic particles is discussed in terms of transition rates controlled by a small transverse bias field. It is shown that fast switching times can be achieved using bias fields an order of magnitude smaller than the effective anisotropy field. Analytical expressions for the switching time are derived in special cases and general configurations of practical interest are examined using numerical simulations.
We present detailed experimental and theoretical studies of the enhanced coercivity of exchange-biased Fe/MnPd bilayers. We demonstrate that the existence of large higher-order anisotropies due to exchange coupling between the Fe and MnPd layers can account for the large increase of coercivity in the Fe/MnPd system. The linear dependence of the coercivity on the inverse Fe thickness is well explained by a phenomenological model that introduces higher-order anisotropy terms into the total free energy of the system.
Static and dynamic properties of patterned magnetic permalloy films are investigated. In square lattices of circular shaped permalloy dots an anisotropic coupling mechanism has been found, which is identified as being due to intrinsically unsaturated parts of the dots caused by spatial variations of the demagnetizing field. In arrays of magnetic wires a quantization of the surface spin wave mode into several dispersionless modes is observed and quantitatively described. For large wavevectors the frequency separation between the modes becomes smaller and the frequencies converge to the dispersion of the dipole-exchange surface mode of a continuous film.
Wall energy and wall thickness of exchange-coupled rare-earth transition-metal triple layer stacks
(1999)
The room-temperature wall energy σ_w = 4.0×10⁻³ J/m² of an exchange-coupled Tb19.6Fe74.7Co5.7/Dy28.5Fe43.2Co28.3 double layer stack can be reduced by introducing a soft magnetic intermediate layer between both layers, exhibiting a significantly smaller anisotropy compared to Tb-FeCo and Dy-FeCo. σ_w decreases linearly with increasing intermediate layer thickness d_IL until the wall is completely located within the intermediate layer for d_IL ≥ d_w, where d_w denotes the wall thickness. Thus, d_w can be obtained from a plot of σ_w versus d_IL. We determined σ_w and d_w for Gd-FeCo intermediate layers with different anisotropy behavior (perpendicular and in-plane easy axis) and compared the results with data obtained from Brillouin light-scattering measurements, from which the exchange stiffness A and the uniaxial anisotropy K_u could be determined. With the knowledge of A and K_u, wall energy and wall thickness were calculated and showed excellent agreement with the magnetic measurements. A ten times smaller perpendicular anisotropy of Gd28.1Fe71.9 in comparison to Tb-FeCo and Dy-FeCo resulted in a much smaller σ_w = 1.1×10⁻³ J/m² and d_w = 24 nm at 300 K. A Gd34.1Fe61.4Co4.5 layer with in-plane anisotropy at room temperature showed a further reduced σ_w = 0.3×10⁻³ J/m² and d_w = 17 nm. The smaller wall energy is a result of a different wall structure compared to perpendicular layers.
Mn-Si-C alloy films are prepared by e-beam coevaporation onto a Si substrate held at 600 °C. Ferromagnetism is observed below T_C = (360 +/- 5) K with SQUID magnetometry and the magneto-optical Kerr effect. This is the highest Curie temperature T_C yet observed for a Mn-based alloy. Although the composition determined by Auger depth profiling varies appreciably for different films, their T_C is the same, indicating that the ferromagnetism is caused by an alloy of well-defined composition independent of precipitations.
Collecting Experience on the Systematic Development of CBR Applications using the INRECA Methodology
(1999)
This paper presents an overview of the INRECA methodology for building and maintaining CBR applications. This methodology supports the collection and reuse of experience on the systematic development of CBR applications. It is based on the experience factory and the software process modeling approach from software engineering. CBR development experience is documented using software process models and stored in different levels of generality in a three-layered experience base. Up to now, experience from 9 industrial projects enacted by all INRECA II partners has been collected.
Automata-Theoretic vs. Property-Oriented Approaches for the Detection of Feature Interactions in IN
(1999)
The feature interaction problem in Intelligent Networks obstructs more and more the rapid introduction of new features. Detecting such feature interactions turns out to be a big problem. The size of the systems and the sheer computational complexity prevent the system developer from manually checking any feature against any other feature. We give an overview of current (verification) approaches and categorize them into property-oriented and automata-theoretic approaches. A comparison shows that each approach complements the other in a certain sense. We propose to apply both approaches together in order to solve the feature interaction problem.
Planning means constructing a course of actions to achieve a specified set of goals when starting from an initial situation. For example, determining a sequence of actions (a plan) for transporting goods from an initial location to some destination is a typical planning problem in the transportation domain. Many planning problems are of practical interest.
Integrated project management means that design and planning are interleaved with plan execution, allowing both the design and plan to be changed as necessary. This requires that the right effects of change are propagated through the plan and design. When this is distributed among designers and planners, no one may have all of the information to perform such propagation and it is important to identify what effects should be propagated to whom when. We describe a set of dependencies among plan and design elements that allow such notification by a set of message-passing software agents. The result is to provide a novel level of computer support for complex projects.
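The dependency-based notification idea can be sketched minimally; the element names and the deliberately simplified propagation rule below are our own illustration, not the paper's agent framework:

```python
# Schematic sketch: plan/design elements register dependencies on each
# other, and a change is propagated only along the recorded dependencies,
# so each participant is notified exactly of what concerns them.

class Element:
    def __init__(self, name):
        self.name = name
        self.dependents = []      # elements to notify when we change
        self.notified = False

    def depends_on(self, other):
        other.dependents.append(self)

    def change(self):
        for d in self.dependents:  # propagate along dependencies only
            d.notify(self)

    def notify(self, source):
        if not self.notified:      # guard against duplicate messages
            self.notified = True
            self.change()          # transitive propagation

# Hypothetical project fragment: a design change invalidates a task,
# which in turn invalidates its slot in the schedule.
design = Element("floor-plan")
task = Element("pour-foundation")
schedule = Element("week-3-slot")
task.depends_on(design)
schedule.depends_on(task)
design.change()
# task and schedule are now flagged; design itself needs no notification
```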
Recent studies on planning, comparing plan re-use and plan generation, have shown that both tasks may have the same degree of computational complexity, even if we deal with very similar problems. The aim of this paper is to show that the same kind of result also applies to diagnosis. We propose a theoretical complexity analysis coupled with some experimental tests, intended to evaluate the adequacy of adaptation strategies which re-use the solutions of past diagnostic problems in order to build a solution to the problem to be solved. The results of this analysis show that, even though diagnosis re-use falls into the same complexity class as diagnosis generation (both are NP-complete problems), practical advantages can be obtained by exploiting a hybrid architecture combining case-based and model-based diagnostic problem solving in a unifying framework.
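The control flow of such a hybrid architecture can be sketched schematically; the similarity measure, the threshold, and the domain data below are invented for illustration and are not taken from the paper:

```python
# Schematic hybrid diagnostic loop: try to re-use the solution of the
# most similar past case; fall back to model-based generation (candidate
# search) only when no stored case is close enough.

def similarity(a, b):
    """Fraction of matching symptom values (toy measure)."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def diagnose(symptoms, case_base, model_candidates, threshold=0.75):
    best = max(case_base, key=lambda c: similarity(symptoms, c["symptoms"]),
               default=None)
    if best and similarity(symptoms, best["symptoms"]) >= threshold:
        return best["diagnosis"], "case-based"      # cheap re-use
    # model-based fallback: test each fault hypothesis against symptoms
    for fault, predicted in model_candidates:
        if predicted == symptoms:
            return fault, "model-based"
    return None, "unsolved"

case_base = [{"symptoms": {"temp": "high", "pressure": "low"},
              "diagnosis": "leaky valve"}]
model = [("blocked pipe", {"temp": "high", "pressure": "high"})]

d1 = diagnose({"temp": "high", "pressure": "low"}, case_base, model)
d2 = diagnose({"temp": "high", "pressure": "high"}, case_base, model)
# d1 -> ("leaky valve", "case-based"); d2 -> ("blocked pipe", "model-based")
```

Both routes may be NP-complete in the worst case, but the cheap case-based path handles the frequent, familiar problems, which is exactly the practical advantage the paper argues for.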
Machine learning methods have now reached a level of maturity that has led to the first successful industrial applications. In process diagnosis and control, learning methods enable the classification and assessment of operating states, i.e. a coarse modelling of a process when it cannot, or can only partially, be described mathematically. In addition, learning methods allow the automatic generation of classification procedures which are executed deterministically and may therefore be more time-efficient for real-time diagnosis and control than inference mechanisms based on logic or production rules, since the latter always involve time-consuming search processes.
We present the adaptation process in a CBR application for decision support in the domain of industrial supervision. Our approach uses explanations to approximate relations between a problem description and its solution, and the adaptation process is guided by these explanations (a more detailed presentation has been done in [4]).
The CBR team of the LISA is involved in several applied research projects based on the CBR paradigm. These applications use adaptation to solve the specific problems they face. We have thus accumulated some experience about how adaptation processes can be expressed and formalized. The literature on the subject is quite extensive but reveals a lack of formalism. At most, there exist some classifications of different types of adaptation.
Cooperative decision making involves a continuous process of assessing the validity of data, information, and knowledge acquired and inferred by colleagues; that is, the shared knowledge space must be transparent. The ACCORD methodology provides an interpretation framework for the mapping of domain facts (constituting the world model of the expert) onto conceptual models, which can be expressed in formal representations. The ACCORD-BPM framework allows a stepwise and non-arbitrary reconstruction of the problem solving competence of BPM experts as a prerequisite for an appropriate architecture of both BPM knowledge bases and the BPM "reasoning device".
This paper describes how knowledge-based techniques can be used to overcome problems of workflow management in engineering applications. Using explicit process and product models as the basis for a workflow interpreter allows planning and execution steps to be interleaved, resulting in increased flexibility of project coordination and enactment. To gain the full advantages of this flexibility, change processes have to be supported by the system. These require improved traceability of decisions and have to be based on dependency management and change notification mechanisms. Our methods and techniques are illustrated by two applications: urban land-use planning and software process modeling.
Information technology support for complex, dynamic, and distributed business processes as they occur in engineering domains requires an advanced process management system which enhances currently available workflow management services with respect to integration, flexibility, and adaptation. We present a uniform and flexible framework for advanced process management on an abstract level which uses and adapts agent technology from distributed artificial intelligence for both modelling and enacting processes. We identify two different frameworks for applying agent technology to process management: first, as a multi-agent system with process management as its domain; second, as a key infrastructure technology for building a process management system. We then follow the latter approach and introduce different agent types for managing activities, products, and resources which capture specific views on the process.
It is generally agreed that one of the most challenging issues facing the case-based reasoning community is that of adaptation. To date the lion's share of CBR research has concentrated on the retrieval of similar cases, and the result is a wide range of high-quality retrieval techniques. However, retrieval is just the first part of the CBR equation, because once a similar case has been retrieved it must be adapted. Adaptation research is still in its earliest stages, and researchers are still trying to properly understand and formulate the important issues. In this paper I describe a treatment of adaptation in the context of a case-based reasoning system for software design, called Deja Vu. Deja Vu is particularly interesting, not only because it performs automatic adaptation of retrieved cases, but also because it uses a variety of techniques to try to reduce and predict the degree of adaptation necessary.
The approach of TOPO was originally developed in the FABEL project [1] to support architects in designing buildings with complex installations. Supplementing knowledge-based design tools, which are available only for selected subtasks, TOPO aims to cover the whole design process. To that aim, it relies almost exclusively on archived plans. Input to TOPO is a partial plan, and output is an elaborated plan. The input plan constitutes the query case and the archived plans form the case base with the source cases. A plan is a set of design objects. Each design object is defined by some semantic attributes and by its bounding box in a 3-dimensional coordinate system. TOPO supports the elaboration of plans by adding design objects.
INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain independent tool for its destined kind of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge that is used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.
Knowledge acquisition has so far frequently hampered the use of knowledge-based systems for process planning in industrial practice. Most applications only allow the capture and editing of domain-specific planning knowledge obtained through laborious elicitation, systematization, and formulation. Within a DFG project, the applicability of known machine learning methods to technological sequencing and assignment problems in generative process planning for parts manufacturing in mechanical engineering is to be demonstrated. For this purpose, a prototype is being developed with the help of an available software tool; it is to enable machine learning from given examples and to communicate with an existing prototype for knowledge-based process planning. The following contribution gives an overview of the planning knowledge to be handled with learning methods and puts possible representations of this knowledge up for discussion.
Learning from examples is a field of research in machine learning where class descriptions, like decision trees or implications (production rules or Horn clauses), are produced using positive and negative examples as information. Many different heuristic search strategies have been developed to solve this task. Search by specialization is the most widely used search strategy, whereas other approaches use search by generalization only. JoJo is an algorithm that combines both search directions into one search procedure. According to the estimated quality of the currently considered rule, either a generalization or a specialization step is carried out by deleting or adding one premise in the conjunctive part of the rule. To create an even more flexible (and faster) algorithm, however, it should be possible to delete or add more than just one premise at a time. Relaxing this restriction of JoJo led to the new, highly flexible algorithm Frog, which additionally uses a third search direction.
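The bidirectional search step can be sketched in a few lines; the quality score and the toy examples below are our own illustration, not the published JoJo or Frog implementation:

```python
# Toy sketch of bidirectional rule search: a rule is a set of attribute
# tests (premises); depending on the estimated quality of the candidate
# moves, one step either specializes (adds a premise) or generalizes
# (drops a premise).

def quality(rule, pos, neg):
    """Covered positives minus covered negatives (toy score)."""
    covers = lambda ex: rule <= ex       # rule premises subset of example
    return sum(covers(e) for e in pos) - sum(covers(e) for e in neg)

def step(rule, candidates, pos, neg):
    """One JoJo-style move: the best single add OR delete of a premise."""
    neighbours = [rule | {c} for c in candidates - rule]  # specialize
    neighbours += [rule - {p} for p in rule]              # generalize
    return max(neighbours, key=lambda r: quality(r, pos, neg))

pos = [frozenset({"a", "b"}), frozenset({"a", "b", "c"})]
neg = [frozenset({"a"})]
rule = frozenset({"a"})      # too general: also covers the negative
rule = step(rule, frozenset({"a", "b", "c"}), pos, neg)
# -> frozenset({'a', 'b'}): specialization wins, excluding the negative
```

Frog's relaxation corresponds to allowing `neighbours` to differ from the current rule by more than one premise per step.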
The goal of this project was to use empirical studies to compare classical statistical methods and current machine learning methods with an approach that was designed and theoretically analysed in our research group. Five methods were implemented, some of them in several variants: feed-forward neural networks, decision trees, Bayes decisions based on Chow expansions, harmonic analysis, and the nearest-neighbour method. As a baseline, predictions were used that forecast the trend or the mean of the most recent observations. The data consisted of 16 time series of stock prices and exchange rates. Each time series comprised 2000 data points, of which the first 1500 served for training and the remaining 500 for the comparison of the methods. It turned out that the naive baseline methods constituted quite a good touchstone. The Bayes decisions and the decision trees proved particularly strong and almost always outperformed the baseline methods. Neural networks and the nearest-neighbour method were about as good, while harmonic analysis was worse for short-term and better for long-term predictions. For decision trees and neural networks it was notable that small trees and networks delivered better results than large ones.
Reusing Proofs
(1999)
We develop a learning component for a theorem prover designed for verifying statements by mathematical induction. If the prover has found a proof, it is analyzed yielding a so-called catch. The catch provides the features of the proof which are relevant for reusing it in subsequent verification tasks and may also suggest useful lemmata. Proof analysis techniques for computing the catch are presented. A catch is generalized in a certain sense for increasing the reusability of proofs. We discuss problems arising when learning from proofs and illustrate our method by several examples.
Inductive Logic Programming (ILP) is a research field that combines techniques from machine learning and logic programming. It studies the classical problem of inductive learning from classified examples in the framework of first-order Horn logic. By now there is a large number of different approaches to this learning problem, which differ mainly in the search direction in the hypothesis space, the generalization and specialization operators, and the non-logical constraints (bias) used. The comparison and integration of these different approaches was the main motivation for the development of the MILES system. MILES is a programming environment for ILP which, besides mechanisms for representing and managing examples, background knowledge, and hypotheses, contains a toolbox with a large share of the known generalization, specialization, and reformulation operators. A generic control allows different operators to be integrated into a specific ILP algorithm. This contribution gives a short overview of the representation, the operators, and the control of MILES.
The task of Inductive Logic Programming (ILP) methods [Mug93] is to learn, from a set of positive examples E+, a set of negative examples E-, and background knowledge B, a logic program P consisting of a set of definite clauses C: l0 <- l1, ..., ln. Since the hypothesis space for Horn logic is infinite, many methods restrict the hypothesis language to a finite one. Often the hypothesis language is also restricted so that only programs can be learned for which consistency is decidable. Another motivation for restricting the hypothesis language is that available knowledge about the target program should be exploited: for certain applications function-free hypothesis clauses are sufficient, or the target program is known to be functional.
In this contribution, connectionist learning methods for the knowledge-based diagnosis of technical systems are presented. Two problems are examined: the prediction of the signal curves of technical state variables, and the diagnostic classification of system states; the results of the investigations are presented.
Formalismen und Anschauung
(1999)