Using the data made available by the expert appraisal committee (Gutachterausschuss) of the city of Kaiserslautern, we investigate which factors influence the market value of a developed property. From these findings, a formula that is as simple as possible is to be derived which yields an estimate of the market value while taking the purchase prices achieved in the past into account. Multiple linear regression is a natural method for this task. The theoretical foundations are not discussed here; they can be found in any book on mathematical statistics, or in [1]. The analysis of the data essentially follows the approach described by Angelika Schwarz in [1]. Her results cannot be transferred directly, however, since the properties considered there were undeveloped. Since the statistical evaluation of large amounts of data involves an immense computational effort, it is indispensable to use professional statistical software. The program S-Plus 2.0 (PC version for Windows) was available; all computations and all figures in this report were produced with S-Plus.
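As a minimal, hedged sketch of the approach described above (an ordinary least-squares fit of a simple price formula), the following Python fragment illustrates the idea; the predictors, prices and data are hypothetical and do not come from the Kaiserslautern data set, and the original analysis was carried out in S-Plus rather than Python.

```python
# Minimal sketch of a multiple linear regression for property values.
# The predictors (living area, lot size, building age) and all data are
# hypothetical; the original analysis was carried out in S-Plus 2.0.
import numpy as np

# Each row: [living_area_m2, lot_size_m2, building_age_years]
X = np.array([
    [120.0, 450.0, 25.0],
    [ 95.0, 380.0, 40.0],
    [150.0, 600.0, 10.0],
    [110.0, 500.0, 30.0],
])
# Observed purchase prices (hypothetical)
y = np.array([310000.0, 245000.0, 420000.0, 295000.0])

# Add an intercept column and solve the least-squares problem
A = np.hstack([np.ones((X.shape[0], 1)), X])
coeffs, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

def estimate_price(living_area, lot_size, age):
    """Estimate the market value from the fitted linear formula."""
    return coeffs @ np.array([1.0, living_area, lot_size, age])

print(coeffs)
print(estimate_price(130.0, 520.0, 20.0))
```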
Programming involves the identification of individuals in many forms: storage locations, data types, values, classes, objects, functions and the like must be identified either by definition or by selection. The remarks on identification by pointing or naming are kept comparatively short, whereas identification by description is given much more room. The reason is that pointing and naming require no structured language forms, whereas describing does. The discussion of the various forms of functional descriptions is so detailed because of their importance for the conceptual world of functional programming. One could also have treated the forms of functional descriptions in the companion piece "Programmzweck versus Programmform", in the context of the concept of functional programs presented there, but the author believes that the present essay is the more appropriate place for them.
We have developed a middleware framework for workgroup environments that can support distributed software development and a variety of other application domains requiring document management and change management for distributed projects. The framework enables hypermedia-based integration of arbitrary legacy and new information resources available via a range of protocols, not necessarily known in advance to us as the general framework developers nor even to the environment instance designers. The repositories in which such information resides may be dispersed across the Internet and/or an organizational intranet. The framework also permits a range of client models for user and tool interaction, and applies an extensible suite of collaboration services, including but not limited to multi-participant workflow and coordination, to their information retrievals and updates. That is, the framework is interposed between clients, services and repositories - thus "middleware". We explain how our framework makes it easy to realize a comprehensive collection of workgroup and workflow features we culled from a requirements survey conducted by NASA.
In recent years several computational systems and techniques for theorem proving by analogy have been developed. The obvious practical question, however, as to whether and when to use analogy has been neglected badly in these developments. This paper addresses this question, identifies situations where analogy is useful, and discusses the merits of theorem proving by analogy in these situations. The results can be generalized to other domains.
Given a finite set of points in the plane and a forbidden region R, we want to find a point X not in the interior of R such that the weighted sum of distances to all given points is minimized. This location problem is a variant of the well-known Weber Problem, in which we measure distance by polyhedral gauges and allow each of the weights to be positive or negative. The unit ball of a polyhedral gauge may be any convex polyhedron containing the origin. This large class of distance functions allows very general (practical) settings, such as asymmetry, to be modeled. Each given point is allowed to have its own gauge, and the forbidden region R enables us to include negative information in the model. Additionally, the use of positive and negative weights makes it possible to model the level of attraction to, or dislike of, a new facility. Polynomial algorithms and structural properties for this global optimization problem (d.c. objective function and a non-convex feasible set), based on combinatorial and geometrical methods, are presented.
A multiscale method is introduced using spherical (vector) wavelets for the computation of the earth's magnetic field within source regions of ionospheric and magnetospheric currents. The considerations are essentially based on two geomathematical keystones, namely (i) the Mie representation of solenoidal vector fields in terms of toroidal and poloidal parts and (ii) the Helmholtz decomposition of spherical (tangential) vector fields. Vector wavelets are shown to provide adequate tools for multiscale geomagnetic modelling in the form of a multiresolution analysis, thereby completely circumventing the numerical obstacles caused by vector spherical harmonics. The applicability and efficiency of the multiresolution technique are tested with real satellite data.
Vigenère Encryption
(1999)
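Since the entry above is only a title, a brief hedged sketch of the classical Vigenère encryption scheme is given below as a reminder; it is illustrative only and is not taken from the cited report.

```python
# Minimal sketch of Vigenère encryption/decryption over A-Z
# (illustrative only; not taken from the cited report).
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    result = []
    key = key.upper()
    j = 0
    for ch in text.upper():
        if not ch.isalpha():
            result.append(ch)          # leave non-letters unchanged
            continue
        shift = ord(key[j % len(key)]) - ord('A')
        if decrypt:
            shift = -shift
        result.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        j += 1
    return ''.join(result)

ciphertext = vigenere("KAISERSLAUTERN", "KEY")
assert vigenere(ciphertext, "KEY", decrypt=True) == "KAISERSLAUTERN"
```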
Methods of Inductive Logic Programming (ILP) [Mug93] have the task of learning, from a set of positive examples E+, a set of negative examples E-, and background knowledge B, a logic program P consisting of a set of definite clauses C: l0 ← l1, ..., ln. Since the hypothesis space for Horn logic is infinite, many methods restrict the hypothesis language to a finite one. Often one also tries to restrict the hypothesis language so that only programs can be learned for which consistency is decidable. Another motivation for restricting the hypothesis language is to exploit the knowledge about the target program that is already available. For certain applications, for instance, function-free hypothesis clauses are sufficient, or it is known that the target program is functional.
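For orientation, the standard correctness conditions behind the ILP setting sketched above can be recalled (a textbook formulation, not taken from the cited report): the learned program \(P\) is required to satisfy
\[
B \cup P \models e \ \text{ for all } e \in E^{+}, \qquad B \cup P \not\models e \ \text{ for all } e \in E^{-},
\]
where each hypothesis clause has the form \(C : l_0 \leftarrow l_1, \ldots, l_n\).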
The mathematical modelling of problems in science and engineering often leads to partial differential equations in time and space with boundary and initial conditions. The boundary value problems can be written as extremal problems (principle of minimal potential energy), as variational equations (principle of virtual power) or as classical boundary value problems. There are connections concerning existence and uniqueness results between these formulations, which will be investigated using the powerful tools of functional analysis. The first part of the lecture is devoted to the analysis of linear elliptic boundary value problems given in a variational form. The second part deals with the numerical approximation of the solutions of the variational problems. Galerkin methods such as FEM and BEM are the main tools. The h-version will be discussed, and an error analysis will be done. Examples, especially from elasticity theory, demonstrate the methods.
Value Preserving Strategies and a General Framework for Local Approaches to Optimal Portfolios
(1999)
We present some new general results on the existence and form of value preserving portfolio strategies in a general semimartingale setting. The concept of value preservation will be derived via a mean-variance argument. It will also be embedded into a framework for local approaches to the problem of portfolio optimisation.
Rules are an important knowledge representation formalism in constructive problem solving. On the other hand, object orientation is an essential key technology for maintaining large knowledge bases as well as software applications. Trying to take advantage of the benefits of both paradigms, we integrated Prolog and Smalltalk to build a common base architecture for problem solving. This approach has proven to be useful in the development of two knowledge-based systems for planning and configuration design (CAPlan and Idax). Both applications use Prolog as an efficient computational source for the evaluation of knowledge represented as rules.
We present the adaptation process in a CBR application for decision support in the domain of industrial supervision. Our approach uses explanations to approximate relations between a problem description and its solution, and the adaptation process is guided by these explanations (a more detailed presentation is given in [4]).
The paper explores the role of artificial intelligence techniques in the development of an enhanced software project management tool, which takes account of the emerging requirement for support systems to address the increasing trend towards distributed multi-platform software development projects. In addressing these aims this research devised a novel architecture and framework for use as the basis of an intelligent assistance system for use by software project managers, in the planning and managing of a software project. This paper also describes the construction of a prototype system to implement this architecture and the results of a series of user trials on this prototype system.
Requirements engineering (RE) is a necessary part of the software development process, as it helps customers and designers identify necessary system requirements. If these stakeholders are separated by distance, we argue that a distributed groupware environment supporting a cooperative requirements engineering process must be supplied that allows them to negotiate software requirements. Such a groupware environment must support aspects of joint work relevant to requirements negotiation: synchronous and asynchronous collaboration, telepresence, and teledata. It should also add explicit support for a structured RE process, which includes the team's ability to discuss multiple perspectives during requirements acquisition and traceability. We chose the TeamWave software platform as an environment that supplied the basic collaboration capabilities, and tailored it to fit the specific needs of RE.
To prove difficult theorems in a mathematical field requires substantial knowledge of that field. In this paper a frame-based knowledge representation formalism is presented, which supports a conceptual representation and to a large extent guarantees the consistency of the built-up knowledge bases. We define a semantics of the representation by giving a translation into the underlying logic.
We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity; therefore the orientation must be calculated by integrating the angular velocity over time. The device's output is a voltage proportional to the angular velocity and relative to a reference. The tests were done to find out under which conditions it is possible to use this device for estimating orientation.
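A minimal, hedged sketch of the estimation principle described above (converting the output voltage to an angular rate and integrating it over time); the reference voltage, scale factor and sampling rate used here are hypothetical placeholders, not calibration values of the ENV-05S.

```python
# Sketch: estimate orientation by integrating an angular-rate signal.
# Sensor scale factor, reference offset and sample rate are hypothetical.
def integrate_orientation(voltages, dt, v_ref=2.5, scale_deg_per_s_per_v=90.0):
    """Trapezoidal integration of angular velocity (deg/s) to heading (deg)."""
    heading = 0.0
    rates = [(v - v_ref) * scale_deg_per_s_per_v for v in voltages]
    for r0, r1 in zip(rates, rates[1:]):
        heading += 0.5 * (r0 + r1) * dt
    return heading

# Example: a 1-second turn sampled at 100 Hz (synthetic data)
samples = [2.5 + 0.1] * 101            # constant rate of 9 deg/s
print(integrate_orientation(samples, dt=0.01))   # ~9 degrees
```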
This paper addresses two modi of analogical reasoning. The first modus is based on the explicit representation of the justification for the analogical inference. The second modus is based on the representation of typical instances by concept structures. The two kinds of analogical inferences rely on different forms of relevance knowledge that cause non-monotonicity. While the uncertainty and non-monotonicity of analogical inferences is not questioned, a semantic characterization of analogical reasoning has not been given yet. We introduce a minimal model semantics for analogical inference with typical instances.
In the paper we discuss the transition from kinetic theory to macroscopic fluid equations, where the macroscopic equations are defined as asymptotic limits of a kinetic equation. This relation can be used to derive computationally efficient domain decomposition schemes for the simulation of rarefied gas flows close to the continuum limit. Moreover, we present some basic ideas for the derivation of kinetically induced numerical schemes for macroscopic equations, namely kinetic schemes for general conservation laws as well as Lattice-Boltzmann methods for the incompressible Navier-Stokes equations.
In order to improve the quality of software systems and to set up a more effective process for their development, many attempts have been made in the field of software engineering. Reuse of existing knowledge is seen as a promising way to solve the outstanding problems in this field. In previous work we have integrated the design pattern concept with the formal design language SDL, resulting in a certain kind of pattern formalization. For the domain of communication systems we have also developed a pool of SDL patterns with an accompanying process model for pattern application. In this paper we present an extension that combines the SDL pattern approach with the experience base concept. This extension supports a systematic method for empirical evaluation and continuous improvement of the SDL pattern approach. Thereby the experience base serves as a repository necessary for effective reuse of the captured knowledge. A comprehensive usage scenario is described which shows the advantages of the combined approach. To demonstrate its feasibility, first results of a research case study are given.
This paper focuses on the issues involved when multiple mobile agents interact in multiagent systems. The application is an intelligent agent market place, where buyer and seller agents cooperate and compete to process sales transactions for their owners. The market place manager acts as a facilitator by giving necessary information to agents and managing communication between agents, and also as a mediator by proposing solutions to agents or stopping them from getting into infinite loops of bargaining back and forth. The buyer and seller agents range from using hard-coded logic to rule-based inferencing in their negotiation strategies. However, these agents must support some communication skills using KQML or FIPA-ACL. So, in contrast with other approaches to multiagent negotiation, we introduce an explicit mediator (the market place manager) into the negotiation, and we propose a negotiation strategy based on dependence theory [1] implemented by our best buyers and best sellers.
In this paper we are interested in an algebraic specification language that (1) allows for sufficient expressiveness, (2) admits a well-defined semantics, and (3) allows for formal proofs. To that end we study clausal specifications over built-in algebras. To keep things simple, we consider only built-in algebras that are given as the initial model of a Horn clause specification. On top of this Horn clause specification new operators are (partially) defined by positive/negative conditional equations. In the first part of the paper we define three types of semantics for such a hierarchical specification: model-theoretic, operational, and rewrite-based semantics. We show that all these semantics coincide, provided some restrictions are met. We associate a distinguished algebra A_spec with a hierarchical specification spec. This algebra is initial in the class of all models of spec. In the second part of the paper we study how to prove a theorem (a clause) valid in the distinguished algebra A_spec. We first present an abstract framework for inductive theorem provers. Then we instantiate this framework for proving inductive validity. Finally we give some examples to show how concrete proofs are carried out. This report was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (D4-Projekt).
This paper shows how a new approach to theorem proving by analogy is applicable to real maths problems. This approach works at the level of proof-plans and employs reformulation that goes beyond symbol mapping. The Heine-Borel theorem is a widely known result in mathematics. It is usually stated in \(\mathbb{R}^1\), and similar versions are also true in \(\mathbb{R}^2\), in topology, and in metric spaces. Its analogical transfer was proposed as a challenge example and could not be solved by previous approaches to theorem proving by analogy. We use a proof-plan of the Heine-Borel theorem in \(\mathbb{R}^1\) as a guide in automatically producing a proof-plan of the Heine-Borel theorem in \(\mathbb{R}^2\) by analogy-driven proof-plan construction.
This paper is concerned with numerical algorithms for the bipolar quantum drift diffusion model. For the thermal equilibrium case a quasi-gradient method minimizing the energy functional is introduced and strong convergence is proven. The computation of current-voltage characteristics is performed by means of an extended Gummel iteration. It is shown that the involved fixed point mapping is a contraction for small applied voltages. In this case the model equations are uniquely solvable and convergence of the proposed iteration scheme follows. Numerical simulations of a one-dimensional resonant tunneling diode are presented. The computed current-voltage characteristics are in good qualitative agreement with experimental measurements. The appearance of negative differential resistances is verified for the first time in a quantum drift diffusion model.
Continuous and discrete superselection rules induced by the interaction with the environment are investigated for a class of exactly soluble Hamiltonian models. The environment is given by a Boson field. Stable superselection sectors can only emerge if the low frequencies dominate and the ground state of the Boson field disappears due to infrared divergence. The models allow uniform estimates of all transition matrix elements between different superselection sectors.
The thermal equilibrium state of a bipolar, isothermal quantum fluid confined to a bounded domain \(\Omega\subset I\!\!R^d\), \(d=1,2\) or \(3\), is the minimizer of the total energy \({\mathcal E}_{\epsilon\lambda}\); \({\mathcal E}_{\epsilon\lambda}\) involves the squares of the scaled Planck's constant \(\epsilon\) and the scaled minimal Debye length \(\lambda\). In applications one frequently has \(\lambda^2\ll 1\). In these cases the zero-space-charge approximation is rigorously justified. As \(\lambda \to 0\), the particle densities converge to the minimizer of a limiting quantum zero-space-charge functional exactly in those cases where the doping profile satisfies some compatibility conditions. Under natural additional assumptions on the internal energies one gets a differential-algebraic system for the limiting \((\lambda=0)\) particle densities, namely the quantum zero-space-charge model. The analysis of the subsequent limit \(\epsilon \to 0\) exhibits the importance of quantum gaps. The semiclassical zero-space-charge model is, for small \(\epsilon\), a reasonable approximation of the quantum model if and only if the quantum gap vanishes. The simultaneous limit \(\epsilon =\lambda \to 0\) is analyzed.
Most automated theorem provers suffer from the problem that they can produce proofs only in formalisms difficult to understand even for experienced mathematicians. Efforts have been made to transform such machine-generated proofs into natural deduction (ND) proofs. Although the single steps are now easy to understand, the entire proof is usually at a low level of abstraction, containing too many tedious steps. Therefore, it is not adequate as input to natural language generation systems. To overcome these problems, we propose a new intermediate representation, called ND style proofs at the assertion level. After illustrating the notion intuitively, we show that the assertion level steps can be justified by domain-specific inference rules, and that these rules can be represented compactly in a tree structure. Finally, we describe a procedure which substantially shortens ND proofs by abstracting them to the assertion level, and report our experience with further transformation into natural language.
We present an empirical study of mathematical proofs by diagonalization; the aim is their mechanization based on proof planning techniques. We show that these proofs can be constructed according to a strategy that (i) finds an indexing relation, (ii) constructs a diagonal element, and (iii) makes the implicit contradiction of the diagonal element explicit. Moreover we suggest how diagonal elements can be represented.
Trigonometric invariants are defined for each Weyl group orbit on the root lattice. They are real and periodic on the coroot lattice. Their polynomial algebra is spanned by a basis which is calculated by means of an algorithm. The invariants of the basis can be used as coordinates in any cell of the coroot space and lead to an exactly solvable model of Sutherland type. We apply this construction to the \(F_4\) case.
In this report we present a case study of employing goal-oriented heuristics when proving equational theorems with the (unfailing) Knuth-Bendix completion procedure. The theorems are taken from the domain of lattice ordered groups. It will be demonstrated that goal-oriented (heuristic) criteria for selecting the next critical pair can in many cases significantly reduce the search effort and hence increase the performance of the proving system considerably. The heuristic, goal-oriented criteria are on the one hand based on so-called "measures" measuring occurrences and nesting of function symbols, and on the other hand based on matching subterms. We also deal with the property of goal-oriented heuristics to be particularly helpful in certain stages of a proof. This fact can be addressed by using them in a framework for distributed (equational) theorem proving, namely the "teamwork method".
Orderings on polynomial interpretations of operators represent a powerful technique for proving the termination of rewriting systems. One of the main problems of polynomial orderings concerns the choice of the right interpretation for a given rewriting system. It is very difficult to develop techniques for solving this problem. Here, we present three new heuristic approaches: (i) guidelines for dealing with special classes of rewriting systems, (ii) an algorithm for choosing appropriate special polynomials as well as (iii) an extension of the original polynomial ordering which supports the generation of suitable interpretations. All these heuristics will be applied to examples in order to illustrate their practical relevance.
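As a brief worked illustration of the technique (a standard textbook example, not one of the heuristics proposed in the paper): termination of the rewrite rule \(\mathrm{add}(s(x), y) \to s(\mathrm{add}(x, y))\) can be shown with the polynomial interpretation
\[
[0] = 1, \qquad [s](x) = x + 1, \qquad [\mathrm{add}](x, y) = 2x + y .
\]
The left-hand side is then interpreted as \(2(x+1)+y = 2x+y+2\) and the right-hand side as \((2x+y)+1 = 2x+y+1\), so every application of the rule strictly decreases the interpretation over the positive integers, which proves termination.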
Tangent measure distributions are a natural tool to describe the local geometry of arbitrary measures of any dimension. We show that for every measure on a Euclidean space and every s, at almost every point, all s-dimensional tangent measure distributions define statistically self-similar random measures. Consequently, the local geometry of general measures is not different from the local geometry of self-similar sets. We illustrate the strength of this result by showing how it can be used to improve recently proved relations between ordinary and average densities.
We transform a user-friendly formulation of a problem to a machine-friendly one exploiting the variability of first-order logic to express facts. The usefulness of tactics to improve the presentation is shown with several examples. In particular it is shown how tactical and resolution theorem proving can be combined.
Machine learning methods have today reached a maturity that has led to the first successful industrial applications. In process diagnosis and control, learning methods allow the classification and assessment of operating states, i.e. a coarse modelling of a process when it cannot, or can only partially, be described mathematically. Moreover, learning methods permit the automatic generation of classification procedures that are executed deterministically and may therefore, for the purposes of real-time diagnosis and control, be more time-efficient than inference mechanisms based on logic or production rules, since the latter always involve time-consuming search processes.
We report results of the switching properties of Stoner-like magnetic particles subject to short magnetic field pulses, obtained by numerical investigations. We discuss the switching properties as a function of the external field pulse strength and direction, the pulse length and the pulse shape. For field pulses long compared to the ferromagnetic resonance precession time the switching behavior is governed by the magnetic damping term, whereas in the limit of short field pulses the switching properties are dominated by the details of the precession of the magnetic moment. In the latter case, by choosing the right field pulse parameters, the magnetic damping term is of minor importance and ultrafast switching can be achieved. Switching can be obtained in an enlarged angular range of the direction of the applied field compared to the case of long pulses.
A growing share of all software development project work is being done by geographically distributed teams. To satisfy shorter product design cycles, expert team members for a development project may need to be recruited globally. Yet to avoid extensive travelling or replacement costs, distributed project work is preferred. Current-generation software engineering tools and associated systems, processes, and methods were for the most part developed to be used within a single enterprise. Major innovations have lately been introduced to enable groupware applications on the Internet to support global collaboration. However, their deployment for distributed software projects requires further research. In particular, groupware methods must seamlessly be integrated with project and product management systems to make them attractive for industry. In this position paper we outline the major challenges concerning distributed (virtual) software projects. Based on our experiences with software process modeling and enactment environments, we then propose approaches to solve those challenges.
Abstract: Spontaneous emission and Lamb shift of atoms in absorbing dielectrics are discussed. A Green's-function approach is used based on the multipolar interaction Hamiltonian of a collection of atomic dipoles with the quantised radiation field. The rate of decay and level shifts are determined by the retarded Green's function of the interacting electric displacement field, which is calculated from a Dyson equation describing multiple scattering. The positions of the atomic dipoles forming the dielectrics are assumed to be uncorrelated and a continuum approximation is used. The associated unphysical interactions between different atoms at the same location are eliminated by removing the point-interaction term from the free-space Green's function (local field correction). For the case of an atom in a purely dispersive medium the spontaneous emission rate is altered by the well-known Lorentz local-field factor. In the presence of absorption a result different from previously suggested expressions is found and nearest-neighbour interactions are shown to be important.
We report on the observation of spin wave quantization in square arrays of micron size circular magnetic Ni80Fe20 dots by means of Brillouin light scattering spectroscopy. For a large wavevector interval several discrete, dispersionless modes with a frequency splitting of up to 2.5 GHz were observed. The modes are identified as magnetostatic surface spin waves laterally quantized due to in-plane confinement in each single dot. The frequencies of the lowest observed modes decrease with increasing distance between the dots, thus indicating an essential dynamic magnetic dipole interaction between the dots with small interdot distances.
An experimental study of spin wave quantization in arrays of micron size magnetic Ni80Fe20 islands (dots and wires) by means of Brillouin light scattering spectroscopy is reported. Dipolar-dominated spin wave modes laterally quantized in a single island with quantized wavevector values determined by the size of the island are studied. In the case of wires the frequencies of the modes and the transferred wavevector interval, where each mode is observed, are calculated. The results of the calculations are in good agreement with the experimental data. In the case of circular dots the frequencies of the lowest observed modes decrease with increasing distance between the dots, thus indicating an essential dynamic magnetic dipole interaction between the dots with small interdot distances.
Spectral Sequences
(1999)
A non-trivial real-time requirement obeying a pattern that can be found in various instantiations in the application domain of building automation, and which is therefore called generic, is investigated in detail. The starting point is a description of a real-time problem in natural language augmented by a diagram, in a style often found in requirements documents. Step by step, this description is made more precise and finally transformed into a surprisingly concise formal specification, written in real-time temporal logic with customized operators. We reason why this formal specification precisely captures the original description, as far as this is feasible given the lack of precision of natural language.
A new advanced space- and time-resolved Brillouin light scattering technique is used to study diffraction of two-dimensional beams and pulses of dipolar spin waves excited by strip-line antennas in tangentially magnetized garnet films. The technique is an effective tool for investigations of two-dimensional spin wave propagation with high spatial and temporal resolution. Nonlinear effects such as stationary and nonstationary self-focusing are investigated in detail. It is shown that nonlinear diffraction of a stationary backward volume magnetostatic wave (BVMSW) beam, having a finite transverse aperture, leads to self-focusing of the beam at one spatial point. Diffraction of a finite-duration (non-stationary) BVMSW pulse leads to space-time self-focusing and formation of a strongly localized two-dimensional wave packet (spin wave bullet).
Starting from the uniqueness question for mixtures of distributions this review centers around the question under which formally weaker assumptions one can prove the existence of SPLIFs, in other words perfect statistics and tests. We mention a couple of positive and negative results which complement the basic contribution of David Blackwell in 1980. Typically the answers depend on the choice of the set theoretic axioms and on the particular concepts of measurability.
We consider three applications of impulse control in financial mathematics, a cash management problem, optimal control of an exchange rate, and portfolio optimisation under transaction costs. We sketch the different ways of solving these problems with the help of quasi-variational inequalities. Further, some viscosity solution results are presented.
In line location problems the objective is to find a straight line which minimizes the sum of distances, or the maximum distance, respectively, to a given set of existing facilities in the plane. These problems are well solved. In this paper we deal with restricted line location problems, i.e. we are given a set in the plane through which the line is not allowed to pass. With the help of a geometric duality we solve such problems for the vertical distance and then extend these results to block norms, and some of them even to arbitrary norms. For all norms we give a finite candidate set for the optimal line.
The development of software products has become a highly cooperative and distributed activity involving working groups at geographically distinct places. These groups show an increasing mobility and a very flexible organizational structure. Process methodology and technology have to take such evolutions into account. A possible direction for the emergence of new process technology and methodology is to benefit from recent advances within multiagent systems engineering: innovative methodologies for adaptable and autonomous architectures, which exhibit interesting features to support distributed software processes.
We compare different notions of differentiability of a measure along a vector field on a locally convex space. We consider in the L2-space of a differentiable measure the analogues of the classical concepts of gradient, divergence and Laplacian (which coincides with the Ornstein-Uhlenbeck operator in the Gaussian case). We use these operators to extend the basic results of Malliavin and Stroock on the smoothness of finite dimensional image measures under certain non-smooth mappings to the case of non-Gaussian measures. The proof of this extension is quite direct and does not use any chaos decomposition. Finally, the role of this Laplacian in the procedure of quantization of anharmonic oscillators is discussed.
SmallSync, an internet event synchronizer, is intended to provide a monitoring and visualization methodology for permitting simultaneous analysis and control of multiple remote processes on the web. The current SmallSync includes: (1) a mechanism to multicast web window-based commands, message passing events and process execution events among processes; (2) an event synchronizer to allow concurrent execution of some functions on multiple machines; (3) a means to report when these events cause errors in the processes; and (4) ad hoc visualization of process states using existing visualizers.
Patdex is an expert system which carries out case-based reasoning for the fault diagnosis of complex machines. It is integrated into the Moltke workbench for technical diagnosis, which was developed at the University of Kaiserslautern over the past years. Moltke contains other parts as well, in particular a model-based approach; in Patdex essentially the heuristic features are located. The use of cases also plays an important role for knowledge acquisition. In this paper we describe Patdex from a principal point of view and embed its main concepts into a theoretical framework.
In 1979, J.M. Bernardo argued heuristically that in the case of regular product experiments his information-theoretic reference prior is equal to Jeffreys' prior. In this context, B.S. Clarke and A.R. Barron showed in 1994 that in the same class of experiments Jeffreys' prior is asymptotically optimal in the sense of Shannon, or, in Bayesian terms, Jeffreys' prior is asymptotically least favorable under Kullback-Leibler risk. In the present paper, we prove, based on Clarke and Barron's results, that every sequence of Shannon optimal priors on a sequence of regular iid product experiments converges weakly to Jeffreys' prior. This means that for increasing sample size Kullback-Leibler least favorable priors tend to Jeffreys' prior.
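For reference, Jeffreys' prior mentioned above is the standard objective prior defined, for a regular model with Fisher information matrix \(I(\theta)\), by
\[
\pi_J(\theta) \propto \sqrt{\det I(\theta)} ;
\]
this textbook definition is recalled here only for orientation and is not a result of the paper.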
In this paper relationships between Pareto points and saddle points in multiple objective programming are investigated. Convex and nonconvex problems are considered, and the equivalence between Pareto points and saddle points is proved in both cases. The results are based on scalarizations of multiple objective programs and related linear and augmented Lagrangian functions. Partitions of the index sets of objectives and constraints are introduced to reduce the size of the problems. The relevance of the results in the context of decision making is also discussed.
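For orientation, the notion of Pareto point used above can be recalled in its standard form (a textbook definition, not specific to this paper): a feasible point \(x^*\) is Pareto-optimal for objectives \(f_1, \ldots, f_k\) if there is no feasible \(x\) with
\[
f_i(x) \le f_i(x^*) \ \text{for all } i \quad \text{and} \quad f_j(x) < f_j(x^*) \ \text{for some } j .
\]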
In this paper a new trend is introduced into the field of multicriteria location problems. We combine the robustness approach using the minmax regret criterion together with Pareto-optimality. We consider the multicriteria Weber location problem which consists of simultaneously minimizing a number of weighted sum-distance functions and the set of Pareto-optimal locations as its solution concept. For this problem, we characterize the Pareto-optimal solutions within the set of robust locations for the original weighted sum-distance functions. These locations have both the properties of stability and non-domination which are required in robust and multicriteria programming.
Reusing Proofs
(1999)
We develop a learning component for a theorem prover designed for verifying statements by mathematical induction. If the prover has found a proof, it is analyzed yielding a so-called catch. The catch provides the features of the proof which are relevant for reusing it in subsequent verification tasks and may also suggest useful lemmata. Proof analysis techniques for computing the catch are presented. A catch is generalized in a certain sense for increasing the reusability of proofs. We discuss problems arising when learning from proofs and illustrate our method by several examples.
This paper describes a declarative approach for encoding the plan operators in proof planning, the so-called methods. The notion of method evolves from the much studied concept of a tactic and was first used by A. Bundy. Significant deductive power has been achieved with the planning approach towards automated deduction; however, the procedural character of the tactic part of methods hinders mechanical modification. Although the strength of a proof planning system largely depends on powerful general procedures which solve a large class of problems, mechanical or even automated modification of methods is necessary, since methods designed for a specific type of problems will never be general enough. After introducing the general framework, we exemplify the mechanical modification of methods via a particular meta-method which modifies methods by transforming connectives to quantifiers.
We examine an approach for demand-driven cooperative theorem proving. We briefly point out the problems arising from the use of common success-driven cooperation methods, and we propose the application of our approach of requirement-based cooperative theorem proving. This approach allows for a better orientation on current needs of provers in comparison with conventional cooperation concepts. We introduce an abstract framework for requirement-based cooperation and describe two instantiations of it: requirement-based exchange of facts, and sub-problem division and transfer via requests. Finally, we report on experimental studies conducted in the areas of superposition and unfailing completion. The author was supported by the Deutsche Forschungsgemeinschaft (DFG).
Knowledge acquisition has so far often hindered the use of knowledge-based systems for process planning in industrial practice. Most applications only allow the recording and editing of domain-specific planning knowledge obtained through laborious elicitation, systematization and formulation. Within a DFG project, the applicability of known machine learning methods to technological sequencing and assignment problems within generative process planning for part manufacturing processes in mechanical engineering is to be demonstrated. To this end, a prototype is being developed with an available software tool; it is to enable machine learning from given examples and to communicate with an existing prototype for knowledge-based process planning. The following contribution gives an overview of the planning knowledge to be handled with learning methods and puts possible representations of this knowledge up for discussion.
Emerging technologies such as the Internet, the World Wide Web, JavaTM technology, and software components, are changing the software business. Activities that have in the past been constrained by the need for intense information management increasingly involve cooperating organizations. Information management tools and techniques do not scale well in the face of this organizational complexity. An informal approach to information sharing, based largely on manual copying of information, cannot meet the demands of the task as size and complexity increase. Formal approaches to sharing information are based on groupware tools, but cooperating organizations do not always enjoy the trust or commonality of sophisticated infrastructure, methods, and skills that this approach requires. Bridging the gap requires a simple, loosely coupled, highly flexible strategy for information sharing. Extensive information relevant to different parts of the software life cycle should be interconnected in a simple, easily described way; such connections should permit selective information sharing by a variety of tools and in a variety of collaboration modes that vary in the amount of organizational coupling they require.
In this paper we present a renormalizability proof for spontaneously broken SU (2) gauge theory. It is based on Flow Equations, i.e. on the Wilson renormalization group adapted to perturbation theory. The power counting part of the proof, which is conceptually and technically simple, follows the same lines as that for any other renormalizable theory. The main difficulty stems from the fact that the regularization violates gauge invariance. We prove that there exists a class of renormalization conditions such that the renormalized Green functions satisfy the Slavnov-Taylor identities of SU (2) Yang-Mills theory on which the gauge invariance of the renormalized theory is based.
Abstract: Recently, the chiral logarithms predicted by quenched chiral perturbation theory have been extracted from lattice calculations of hadron masses. We argue that the deviations of lattice results from random matrix theory starting around the so-called Thouless energy can be understood in terms of chiral perturbation theory as well. Comparison of lattice data with chiral perturbation theory formulae allows us to compute the pion decay constant. We present results from a calculation for quenched SU(2) with Kogut-Susskind fermions at β = 2.0 and 2.2.
Abstract: Recently, the contributions of chiral logarithms predicted by quenched chiral perturbation theory have been extracted from lattice calculations of hadron masses. We argue that a detailed comparison of random matrix theory and lattice calculations allows for a precise determination of such corrections. We estimate the relative size of the m log(m), m, and m^2 corrections to the chiral condensate for quenched SU(2).
Ramsey Numbers of K_m versus (n,k)-graphs and the Local Density of Graphs not Containing a K_m
(1999)
In this paper generalized Ramsey numbers of complete graphs K_m versus the set \(\langle n,k\rangle\) of (n,k)-graphs are investigated. The value of \(r(K_m,\langle n,k\rangle)\) is given in general for values of k that are small relative to n, using a correlation with Turán numbers. These generalized Ramsey numbers can be used to determine the local densities of graphs not containing a subgraph K_m.
We report on the observation of spin wave quantization in tangentially magnetized Ni80Fe20 discs by means of Brillouin light scattering spectroscopy. For a large wave vector interval several discrete, dispersionless modes with a frequency splitting of up to 2.5 GHz were observed. The modes are identified as being magnetostatic surface spin wave modes quantized by their lateral confinement in the disc. For the lowest modes dynamic magnetic dipolar coupling between the discs is found for a disc spacing of 0.1 µm.
A lot of the human ability to prove hard mathematical theorems can be ascribed to a problem-specific problem solving know-how. Such knowledge is intrinsically incomplete. In order to prove related problems human mathematicians, however, can go beyond the acquired knowledge by adapting their know-how to new related problems. These two aspects, having rich experience and extending it by need, can be simulated in a proof planning framework: the problem-specific reasoning knowledge is represented in form of declarative planning operators, called methods; since these are declarative, they can be mechanically adapted to new situations by so-called meta-methods. In this contribution we apply this framework to two prominent proofs in theorem proving: first, we present methods for proving the ground completeness of binary resolution, which essentially correspond to key lemmata, and then we show how these methods can be reused for the proof of the ground completeness of lock resolution.
This paper addresses analogy-driven automated theorem proving that employs a source proof-plan to guide the search for a proof-plan of the target problem. The approach presented uses reformulations that go beyond symbol mappings and that incorporate frequently used re-representations and abstractions. Several realistic math examples were successfully processed by our analogy-driven proof-plan construction. One challenge example, a Heine-Borel theorem, is discussed here. For this example the reformulations are shown step by step and the modifying actions are demonstrated.
This paper outlines an implemented system called PROVERB that explains machine-found natural deduction proofs in natural language. Different from earlier works, we pursue a reconstructive approach. Based on the observation that natural deduction proofs are at a too low level of abstraction compared with proofs found in mathematical textbooks, we define first the concept of so-called assertion level inference rules. Derivations justified by these rules can intuitively be understood as the application of a definition or a theorem. Then an algorithm is introduced that abstracts machine-found ND proofs using the assertion level inference rules. Abstracted proofs are then verbalized into natural language by a presentation module. The most significant feature of the presentation module is that it combines standard hierarchical text planning and techniques that locally organize argumentative texts based on the derivation relation under the guidance of a focus mechanism. The behavior of the system is demonstrated with the help of a concrete example throughout the paper.
This paper outlines the linguistic part of an implemented system named PROVERB [3] that transforms, abstracts, and verbalizes machine-found proofs in natural language. It aims to illustrate that state-of-the-art techniques of natural language processing are necessary to produce coherent texts that resemble those found in typical mathematical textbooks, in contrast to the belief that mathematical texts are only schematic and mechanical. The verbalization module consists of a content planner, a sentence planner, and a syntactic generator. Intuitively speaking, the content planner first decides the order in which proof steps should be conveyed. It also produces messages to highlight global proof structures. Subsequently, the sentence planner combines and rearranges linguistic resources associated with messages produced by the content planner in order to produce connected text. The syntactic generator finally produces the surface text.
This paper describes the linguistic part of a system called PROVERB, which transforms, abstracts, and verbalizes machine-found proofs into formatted texts. Linguistically, the architecture of PROVERB follows most application-oriented systems and is a pipelined control of three components. Its macroplanner linearizes a proof and plans mediating communicative acts by employing a combination of hierarchical planning and focus-guided navigation. The microplanner then maps communicative acts and domain concepts into linguistic resources, and paraphrases and aggregates such resources to produce the final Text Structure. A Text Structure contains all necessary syntactic information and can be executed by our realizer into grammatical sentences. The system works fully automatically and performs particularly well for textbook size examples.
We first show that ground term-rewriting systems can be completed in a polynomial number of rewriting steps, if the appropriate data structure for terms is used. We then apply this result to study the lengths of critical pair proofs in non-ground systems, and obtain bounds on the lengths of critical pair proofs in the non-ground case. We show how these bounds depend on the types of inference steps that are allowed in the proofs.
Proof planning is an alternative methodology to classical automated theorem proving based on exhaustive search that was first introduced by Bundy [8]. The goal of this paper is to extend the current realm of proof planning to cope with genuinely mathematical problems such as the well-known limit theorems, first investigated for automated theorem proving by Bledsoe. The report presents a general methodology and contains ideas that are new for proof planning and theorem proving, most importantly ideas for search control and for the integration of domain knowledge into a general proof planning framework. We extend proof planning by employing explicit control-rules and supermethods. We combine proof planning with constraint solving. Experiments show the influence of these mechanisms on the performance of a proof planner. For instance, the proofs of LIM+ and LIM* have been automatically proof planned in the extended proof planner OMEGA. In a general proof planning framework we rationally reconstruct the proofs of limit theorems for real numbers (\(\mathbb{R}\)) that were first computed by the special-purpose program reported in [6]. Compared with this program, the rational reconstruction has several advantages: it relies on a general-purpose problem solver; it provides high-level, hierarchical representations of proofs that can be expanded to checkable ND-proofs; it employs declarative control knowledge that is modularly organized.
This essay is about a classification of programs according to two orthogonal criteria. Program and software are not regarded as synonyms here; being a program is equated with being executable, i.e. something is a program if and only if one can answer the question what it should mean for this something to be executed. There certainly are software artifacts for which this question makes no sense and which are therefore not programs, for example a function library or a class library. Classification is useful when it makes diversity easier to survey: the diversity of the pupils of a large school becomes easier to survey when the pupils are "classified", i.e. when they are sitting in their classrooms. The classification presented in the following is intended to make the diversity of programs easier to survey.
Many interesting problems arise from the study of the behavior of fluids. From a theoretical point of view, fluid dynamics works with a well-defined set of equations for which one expects to obtain a clear description of the solutions. Unfortunately, in general this is not easy, even if the many experiments performed in the field seem to indicate which path to follow. Some of the basic questions are still either partially or widely open. For example, we would like to have a better understanding of: 1. questions, for both bounded and unbounded domains, on regularity, uniqueness, and long time behavior of the solutions; 2. how well solutions to the fluid equations fit the real flow. Depending on the type of data, most of the answers to these questions are known when we work in two dimensions. For solutions in three dimensions, in general, we have only partial answers.
This paper outlines an implemented system named PROVERB that transforms and abstracts machine-found proofs to natural deduction style proofs at an adequate level of abstraction and then verbalizes them in natural language. The abstracted proofs, originally employed only as an intermediate representation, also prove to be useful for proof planning and proving by analogy.
Planning Methods
(1999)
We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions made by special purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.
This paper presents a new way to use planning in automated theorem proving by means of distribution. To overcome the problem that often subtasks for a proof problem cannot be detected a priori (which prevents the use of the known planning and distribution techniques), we use a team of experts that work independently with different heuristics on the problem. After a certain amount of time referees judge their results using the impact of the results on the behaviour of the expert, and a supervisor combines the selected results into a new starting point. This supervisor also selects the experts that can work on the problem in the next round. This selection is a reactive planning task. We outline which information the supervisor can use to fulfill this task and how this information is processed to result in a plan or to revise a plan. We also show that the use of planning for the assignment of experts to the team allows the system to solve many different examples in an acceptable time with the same start configuration and without any consultation of the user. "Plans are always subject to change" (Shin'a'in proverb).
This report is a first attempt at formalizing the diagonalization proof technique. We give a strategy for systematically constructing diagonalization proofs: (i) finding an indexing relation, (ii) constructing a diagonal element, and (iii) making the implicit contradiction of the diagonal element explicit. We suggest a declarative representation of the strategy and describe how it can be realized in a proof planning environment.
The Weber Problem for a given finite set of existing facilities \({\mathcal E}x = \{Ex_1, Ex_2, \ldots, Ex_M\} \subset \mathbb{R}^2\) with positive weights \(w_m\) (\(m = 1, \ldots, M\)) is to find a new facility \(X^*\) such that \(\sum_{m=1}^{M} w_m d(X,Ex_m)\) is minimized for some distance function \(d\). A variation of this problem is obtained if the existing facilities are situated on two sides of a linear barrier. Such barriers, like rivers, highways, borders or mountain ranges, are frequently encountered in practice. Structural results as well as algorithms for this non-convex optimization problem, depending on the distance function and on the number and location of passages through the barrier, are presented. A reduction to convex optimization problems is used to derive efficient algorithms.
The Weber problem for a given finite set of existing facilities \({\mathcal E}x = \{Ex_1, Ex_2, \ldots, Ex_M\} \subset \mathbb{R}^2\) with positive weights \(w_m\) (\(m = 1, \ldots, M\)) is to find a new facility \(X^* \in \mathbb{R}^2\) such that \(\sum_{m=1}^{M} w_m d(X,Ex_m)\) is minimized for some distance function \(d\). In this paper we consider distances defined by polyhedral gauges. A variation of this problem is obtained if barriers are introduced which are convex polygonal subsets of the plane where neither location of new facilities nor traveling is allowed. Such barriers, like lakes, military regions, national parks or mountains, are frequently encountered in practice. From a mathematical point of view barrier problems are difficult, since the presence of barriers destroys the convexity of the objective function. Nevertheless, this paper establishes a discretization result: one of the grid points in the grid defined by the existing facilities and the fundamental directions of the gauge distances can be proved to be an optimal location. Thus the barrier problem can be solved with a polynomial algorithm.
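A hedged sketch of how such a discretization result can be exploited in practice: build the candidate grid from the facility coordinates and evaluate the objective at each feasible grid point. The Manhattan distance and the rectangular barrier below are illustrative placeholders, not the polyhedral gauges or barrier distances of the paper.

```python
# Sketch: exploit a discretization result by evaluating the Weber
# objective on a finite candidate grid.  The Manhattan distance and the
# axis-aligned rectangular barrier are illustrative placeholders only.
from itertools import product

facilities = [((0.0, 0.0), 1.0), ((4.0, 1.0), 2.0), ((1.0, 5.0), 1.5)]
barrier = (0.5, 1.5, 0.5, 1.5)   # box: x_min, x_max, y_min, y_max

def inside_barrier(p):
    x, y = p
    x0, x1, y0, y1 = barrier
    return x0 < x < x1 and y0 < y < y1

def weber_objective(p):
    # placeholder: weighted Manhattan (l1) distance, ignoring barrier detours
    return sum(w * (abs(p[0] - ex[0]) + abs(p[1] - ex[1])) for ex, w in facilities)

# Candidate grid induced by the facility coordinates (fundamental directions of l1)
xs = sorted({ex[0] for ex, _ in facilities})
ys = sorted({ex[1] for ex, _ in facilities})
candidates = [p for p in product(xs, ys) if not inside_barrier(p)]

best = min(candidates, key=weber_objective)
print(best, weber_objective(best))
```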
Abstraction is one of the most promising approaches to improve the performance of problem solvers. Abstraction by dropping sentences of a domain description - as used in most hierarchical planners - is known to be very representation dependent. To overcome these drawbacks, we propose a more general view of abstraction involving the change of representation language. We have developed a new abstraction methodology and a related sound and complete learning algorithm that allows the complete change of representation language of planning cases from concrete to abstract.
Abstract: We present experimental and theoretical results of a detailed study of laser-induced continuum structures (LICS) in the photoionization continuum of helium out of the metastable state 2s^1 S_0. The continuum dressing with a 1064 nm laser couples the same region of the continuum to the 4s^1 S_0 state. The experimental data, presented for a range of intensities, show pronounced ionization suppression (by as much as 70% with respect to the far-from-resonance value) as well as enhancement, in a Beutler-Fano resonance profile. This ionization suppression is a clear indication of population trapping mediated by coupling to a continuum. We present experimental results demonstrating the effect of pulse delay upon the LICS, and for the behavior of LICS for both weak and strong probe pulses. Simulations based upon numerical solution of the Schrödinger equation model the experimental results. The atomic parameters (Rabi frequencies and Stark shifts) are calculated using a simple model-potential method for the computation of the needed wavefunctions. The simulations of the LICS profiles are in excellent agreement with experiment. We also present an analytic formulation of pulsed LICS. We show that in the case of a probe pulse shorter than the dressing one, the LICS profile is the convolution of the power spectrum of the probe pulse with the usual Fano profile of stationary LICS. We discuss some consequences of deviation from steady-state theory.
Abstract: The periodic bounce configurations responsible for quantum tunneling are obtained explicitly and are extended to the finite energy case for minisuperspace models of the Universe. As a common feature of the tunneling models at finite energy considered here we observe that the period of the bounce increases with energy monotonically. The periodic bounces do not have bifurcations and make no contribution to the nucleation rate except the one with zero energy. The sharp first order phase transition from quantum tunneling to thermal activation is verified with the general criteria.
Due to the large variety of modern applications and evolving network technologies, a small number of general-purpose protocol stacks will no longer be sufficient. Rather, customization of communication protocols will play a major role. In this paper, we present an approach that has the potential to substantially reduce the effort for designing customized protocols. Our approach is based on the concept of design patterns, which is well-established in object oriented software development. We specialize this concept to communication protocols, and - in addition - use formal description techniques (FDTs) to specify protocol design patterns as well as rules for their instantiation and composition. The FDTs of our choice are SDL-92 and MSCs, which offer suitable language support. We propose an SDL pattern description template and relate pattern-based configuring of communication protocols to existing SDL methodologies. Particular SDL patterns and the configuring of a customized resource reservation protocol are presented in detail.
In the field of expert systems, problem solving on the basis of known cases is currently a very active topic. The case-based approach is also gaining more and more importance for diagnostic tasks. In this paper we present the case-based problem solver Patdex/2, which was developed within the Moltke project at the University of Kaiserslautern. A first prototype, Patdex/1, was already developed in 1988.
Patdex is an expert system which carries out case-based reasoning for the fault diagnosis of complex machines. It is integrated into the Moltke workbench for technical diagnosis, which was developed at the University of Kaiserslautern over the past years. Moltke contains other parts as well, in particular a model-based approach; in Patdex essentially the heuristic features are located. The use of cases also plays an important role for knowledge acquisition. In this paper we describe Patdex from a principal point of view and embed its main concepts into a theoretical framework.
We consider a (2 + 1)-dimensional mechanical system with the Lagrangian linear in the torsion of a light-like curve. We give Hamiltonian formulation of this system and show that its mass and spin spectra are defined by one-dimensional nonrelativistic mechanics with a cubic potential. Consequently, this system possesses the properties typical of resonance-like particles.
We argue in this paper that sophisticated microplanning techniques are required even for mathematical proofs, in contrast to the belief that mathematical texts are only schematic and mechanical. We demonstrate why paraphrasing and aggregation significantly enhance the flexibility and the coherence of the text produced. To this end, we adopted the Text Structure of Meteer as our basic representation. The type checking mechanism of Text Structure allows us to achieve paraphrasing by building comparable combinations of linguistic resources. Specified in terms of concepts in a uniform ontological structure called the Upper Model, our semantic aggregation rules are more compact than similar rules reported in the literature.