We present a system based on a 3D scanner using the light-section principle, with which a person can be captured three-dimensionally within 1.5 seconds. Evolutionary algorithms drive a model-based interpretation of the measurement data, so that arbitrary body measurements can be determined. The result is an individualized CAD model of the person in the computer. Such a model can serve as a virtual tailor's dummy for the production of made-to-measure clothing.
This paper presents a new approach to parallel path planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on a best-first search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on polyhedral models of the robot and the obstacles. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel path planner on a workstation cluster with 9 PCs and tested the planner for several benchmark environments. With optimal discretisation, the new approach usually shows very good speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
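The cyclic mapping of configuration-space hypercubes onto processing units can be sketched in a few lines (a minimal illustration under our own naming, not the authors' implementation):

```python
def cyclic_map(num_cubes, num_procs):
    """Assign hypercube i to processing unit i mod num_procs (cyclic mapping)."""
    return {cube: cube % num_procs for cube in range(num_cubes)}

def loads(assignment, num_procs):
    """Count how many hypercubes each processing unit handles."""
    counts = [0] * num_procs
    for proc in assignment.values():
        counts[proc] += 1
    return counts

# E.g. 4096 hypercubes of a discretised 6D configuration space on 9 PCs:
balance = loads(cyclic_map(4096, 9), 9)
```

The cyclic assignment guarantees that the per-unit loads differ by at most one cube, which is what makes the load distribution insensitive to where in configuration space the search actually spends its time.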
The World Wide Web is a medium through which a manufacturer may allow Internet visitors to customize or compose his products. Due to missing or rapidly changing standards, these applications are often restricted to relatively simple CGI- or JAVA-based scripts. Usually, results like images or movies are stored in a database and transferred on demand to the web user. Viper (Visualisierung parametrisch editierbarer Raumkomponenten) is a Toolkit [VIP96] written in C++ and JAVA which provides 3D-modeling and visualization methods for developing complex web-based applications. The Toolkit has been designed to build a prototype which can be used to construct and visualize prefabricated homes on the Internet. Alternative applications are outlined in this paper. Within Viper, all objects are stored in a scene graph (VSSG), which is the basic data structure of the Toolkit. To illustrate the concept and structure of the Toolkit, the functionality and implementation of the prototype are described.
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture, which can easily be adapted to any industrial kinematics and application. Based on a client/server principle, the control unit establishes an open user interface for including application-specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This implies high accuracy for the path execution and improves the overall system performance.
We derive a new class of particle methods for conservation laws, which are based on numerical flux functions to model the interactions between moving particles. The derivation is similar to that of classical Finite-Volume methods, except that the fixed grid structure in the Finite-Volume method is substituted by so-called mass packets of particles. We give some numerical results on a shock wave solution for Burgers' equation as well as the well-known one-dimensional shock tube problem.
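For orientation, the classical finite-volume scheme that the particle method generalizes can be sketched for Burgers' equation (a textbook Godunov sketch under our own discretisation choices, on a fixed grid rather than with mass packets):

```python
def godunov_flux(ul, ur):
    """Exact Riemann (Godunov) numerical flux for Burgers' equation, f(u) = u^2/2."""
    if ul >= ur:                     # shock case
        s = 0.5 * (ul + ur)          # Rankine-Hugoniot shock speed
        return 0.5 * ul * ul if s > 0 else 0.5 * ur * ur
    if ul > 0:                       # rarefaction entirely to the right
        return 0.5 * ul * ul
    if ur < 0:                       # rarefaction entirely to the left
        return 0.5 * ur * ur
    return 0.0                       # transonic rarefaction: flux at u = 0

def step(u, dt, dx):
    """One explicit conservative update (boundary cells held fixed)."""
    f = [godunov_flux(u[i], u[i + 1]) for i in range(len(u) - 1)]
    inner = [u[i] - dt / dx * (f[i] - f[i - 1]) for i in range(1, len(u) - 1)]
    return [u[0]] + inner + [u[-1]]

# Riemann data u_l = 1, u_r = 0: a single shock travelling with speed 1/2.
u = [1.0] * 50 + [0.0] * 50
dx, dt = 0.01, 0.004                 # CFL number 0.4
for _ in range(100):                 # evolve to t = 0.4; shock moves from x = 0.5 to x = 0.7
    u = step(u, dt, dx)
```

In the particle method of the paper, the cell averages and fixed interfaces above are replaced by moving mass packets that exchange mass through the same kind of numerical flux function.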
In the present paper multilane models for vehicular traffic are considered. A microscopic multilane model based on reaction thresholds is developed. Based on this model, an Enskog-like kinetic model is developed. In particular, care is taken to incorporate the correlations between the vehicles. From the kinetic model a fluid dynamic model is derived, and the macroscopic coefficients are deduced from the underlying kinetic model. Numerical simulations are presented for all three levels of description in [10], where a comparison of the results is also given.
In this paper the work presented in [6] is continued. The present paper contains detailed numerical investigations of the models developed there. A numerical method to treat the kinetic equations obtained in [6] is presented, and results of the simulations are shown. Moreover, the stochastic correlation model used in [6] is described and investigated in more detail.
In this paper the kinetic model for vehicular traffic developed in [3,4] is considered and theoretical results for the space homogeneous kinetic equation are presented. Existence and uniqueness results for the time dependent equation are stated. An investigation of the stationary equation leads to a boundary value problem for an ordinary differential equation. Existence of the solution and some properties are proved. A numerical investigation of the stationary equation is included.
Groups can be studied using methods from different fields such as combinatorial group theory or string rewriting. Recently, techniques from Gröbner basis theory for free monoid rings (non-commutative polynomial rings) and free group rings, respectively, have been added to the set of methods, due to the fact that monoid and group presentations (in terms of string rewriting systems) can be linked to special polynomials called binomials. In the same spirit, the aim of this paper is to discuss the relation between Nielsen reduced sets of generators and the Todd-Coxeter coset enumeration procedure on the one side and the Gröbner basis theory for free group rings on the other. While it is well known that there is a strong relationship between Buchberger's algorithm and the Knuth-Bendix completion procedure, and there are interpretations of the Todd-Coxeter coset enumeration procedure using the Knuth-Bendix procedure for special cases, our aim is to show how a verbatim interpretation of the Todd-Coxeter procedure can be obtained by linking recent Gröbner techniques like prefix Gröbner bases and the FGLM algorithm as a tool to study the duality of ideals. As a side product, our procedure computes Nielsen reduced generating sets for subgroups in finitely generated free groups.
We present a parallel control architecture for industrial robot cells. It is based on closed functional components arranged in a flat communication hierarchy. The components may be executed by different processing elements, and each component itself may run on multiple processing elements. The system is driven by the instructions of a central cell control component. We set up necessary requirements for industrial robot cells and possible parallelization levels. These are met by the suggested robot control architecture. As an example we present a robot work cell and a component for motion planning, which fits well in this concept.
The term enterprise modelling, synonymous with enterprise engineering, refers to methodologies developed for modelling activities, states, time, and cost within an enterprise architecture. They serve as a vehicle for evaluating and modelling activities, resources, etc. CIM-OSA (Computer Integrated Manufacturing Open Systems Architecture) is a methodology for modelling computer integrated environments, and its major objective is the appropriate integration of enterprise operations by means of efficient information exchange within the enterprise. PERA is another methodology for developing models of computer integrated manufacturing environments. The Department of Industrial Engineering in Toronto proposed the development of ontologies as a vehicle for enterprise integration. The paper reviews the work carried out by various researchers and computing departments in the area of enterprise modelling and points out other modelling problems related to enterprise integration.
We present a particle method for the numerical simulation of boundary value problems for the steady-state Boltzmann equation. Building on some recent results concerning steady-state schemes, the current approach may be used for multi-dimensional problems, where the collision scattering kernel is not restricted to Maxwellian molecules. The efficiency of the new approach is demonstrated by numerical results obtained from simulations of the (two-dimensional) Bénard instability in a rarefied gas flow.
In this paper we present a domain decomposition approach for the coupling of Boltzmann and Euler equations. Particle methods are used for both equations. This leads to a simple implementation of the coupling procedure and to natural interface conditions between the two domains. Adaptive time and space discretizations and a direct coupling procedure lead to considerable gains in CPU time compared to a solution of the full Boltzmann equation. Several test cases involving a large range of Knudsen numbers are numerically investigated.
For the determination of the earth's gravity field many types of observations are available nowadays, e.g., terrestrial gravimetry, airborne gravimetry, satellite-to-satellite tracking, satellite gradiometry, etc. The mathematical connection between these observables on the one hand and the gravity field and shape of the earth on the other hand is called the integrated concept of physical geodesy. In this paper harmonic wavelets are introduced by which the gravitational part of the gravity field can be approximated progressively better and better, reflecting an increasing flow of observations. An integrated concept of physical geodesy in terms of harmonic wavelets is presented. Essential tools for approximation are integration formulas relating an integral over an internal sphere to suitable linear combinations of observation functionals, i.e., linear functionals representing the geodetic observables. A scale discrete version of multiresolution is described for approximating the gravitational potential outside and on the earth's surface. Furthermore, an exact fully discrete wavelet approximation is developed for the case of band-limited wavelets. A method for combined global outer harmonic and local harmonic wavelet modelling is proposed corresponding to realistic earth models. As examples, the role of wavelets is discussed for the classical Stokes problem, the oblique derivative problem, satellite-to-satellite tracking, satellite gravity gradiometry, and combined satellite-to-satellite tracking and gradiometry.
The paper presents a process-oriented view of knowledge management in software development. We describe requirements on knowledge management systems from a process-oriented perspective and introduce the process modeling language MILOS and its use for knowledge management. We then explain how a process-oriented knowledge management system can be implemented using advanced but readily available information technologies.
A natural extension of SLD-resolution is introduced as a goal-directed proof procedure for the full first-order implicational fragment of intuitionistic logic. Its intuitionistic semantics fits a procedural interpretation of logic programming. By allowing arbitrarily nested implications, it can be used for implementing modularity in logic programs. With adequate negation axioms it gives an alternative to negation as failure and leads to a proof procedure for full first-order predicate logic.
Annual Report 1997
(1998)
Anwendungen effizienter Verfahren in Automation - Universität Karlsruhe auf der SPS97 in Nürnberg -
(1998)
Application of Moment Realizability Criteria for Coupling of the Boltzmann and Euler Equations
(1998)
The moment realizability criteria have been used to test the domains of validity of the Boltzmann and Euler equations. With the help of these criteria, the coupling of the Boltzmann and Euler equations has been performed in two-dimensional physical space. The time evolution of the domain decomposition for these equations is presented at different time steps. The numerical results obtained from the coupling code have been compared with those from the pure Boltzmann code.
This paper discusses the problem of automatic off-line programming and motion planning for industrial robots. First, a new concept consisting of three steps is proposed. In the first step, a new method for on-line motion planning is introduced. The motion planning method is based on the A*-search algorithm and works in the implicit configuration space. During the search, collisions are detected in the explicitly represented Cartesian workspace by hierarchical distance computation. In the second step, the trajectory planner has to transform the path into a time- and energy-optimal robot program. The practical application of these two steps strongly depends on a method for robot calibration with high accuracy, thus mapping the virtual world onto the real world, which is discussed in the third step.
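A* search over an implicitly represented discrete space — where neighbours are generated on the fly and collision-checked only when reached — can be sketched on a 2D grid stand-in (our own toy; the paper works in a 6D configuration space with hierarchical distance computation):

```python
import heapq

def a_star(start, goal, collides, width, height):
    """A* on an implicit grid: states are generated on demand and
    tested for collision only when first reached."""
    def h(p):  # admissible Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    seen = {start}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in seen and not collides(nxt)):
                seen.add(nxt)
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # goal unreachable

# A wall at x == 2 with a gap at y == 4 forces a detour.
wall = lambda p: p[0] == 2 and p[1] != 4
path = a_star((0, 0), (4, 0), wall, 5, 5)
```

The collision predicate here is a trivial lambda; in the planner it is the expensive hierarchical distance computation between polyhedral models, which is exactly why it pays to call it only for states the search actually visits.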
On the one hand, in the world of Product Data Technology (PDT), the ISO standard STEP (STandard for the Exchange of Product model data) gains more and more importance. STEP includes the information model specification language EXPRESS and its graphical notation EXPRESS-G. On the other hand, in the Software Engineering world in general, mainly other modelling languages are in use - particularly the Unified Modeling Language (UML), recently adopted to become a standard by the Object Management Group, will probably achieve broad acceptance. Despite a strong interconnection of PDT with the Software Engineering area, there is a lack of bridging elements concerning the modelling language level. This paper introduces a mapping between EXPRESS-G and UML in order to define a linking bridge and bring the best of both worlds together. Hereby the feasibility of a mapping is shown with representative examples; several problematic cases are discussed as well as possible solutions presented.
In order to improve the distribution system for the Nordic countries, BASF AG considered 13 alternative scenarios to the existing system. These involved the construction of warehouses at various locations. For every scenario, the transportation, storage, and handling cost incurred was to be as low as possible, where restrictions on the delivery time were given. The scenarios were evaluated according to (minimal) total cost and weighted average delivery time. The results led to a restriction to only three cases, involving only one new warehouse each. For these, a more accurate cost model was developed and evaluated, yielding results similar to those of a simple linear model. Since there were no clear preferences between cost and delivery time, the final decision was chosen as a compromise between the two criteria.
Knowledge about the distribution of a statistical estimator is important for various purposes, for example the construction of confidence intervals for model parameters or the determination of critical values of tests. A widely used method to estimate this distribution is the so-called bootstrap, which is based on an imitation of the probabilistic structure of the data generating process on the basis of the information provided by a given set of random observations. In this paper we investigate this classical method in the context of artificial neural networks used for estimating a mapping from input to output space. We establish consistency results for bootstrap estimates of the distribution of parameter estimates.
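The resampling principle behind the bootstrap can be sketched in a few lines (illustrated here for the sample mean of a toy data set, not for the neural-network estimators studied in the paper):

```python
import random

def bootstrap(data, estimator, n_rep=2000, seed=0):
    """Approximate the sampling distribution of `estimator` by repeatedly
    resampling the observed data with replacement."""
    rng = random.Random(seed)
    n = len(data)
    return [estimator([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_rep)]

def mean(xs):
    return sum(xs) / len(xs)

data = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.1, 2.0, 2.2]
boot_means = sorted(bootstrap(data, mean))
# A simple 90% percentile confidence interval for the mean:
ci = (boot_means[100], boot_means[1899])
```

Each bootstrap replicate plays the role of a fresh sample from the unknown data-generating process; the empirical spread of the replicated estimates then stands in for the unknown sampling distribution.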
In this paper domain decomposition methods for radiative transfer problems including conductive heat transfer are treated. The paper focuses on semi-transparent materials, like glass, and the associated conditions at the interface between the materials. Using asymptotic analysis we derive conditions for the coupling of the radiative transfer equations and a diffusion approximation. Several test cases are treated and a problem appearing in glass manufacturing processes is computed. The results clearly show the advantages of a domain decomposition approach. Accuracy equivalent to that of the global radiative transfer solution is achieved, whereas computation time is strongly reduced.
The paper discusses the metastable states of a quantum particle in a periodic potential under a constant force (the model of a crystal electron in a homogeneous electric field), which are known as the Wannier-Stark ladder of resonances. An efficient procedure to find the positions and widths of the resonances is suggested and illustrated by a numerical calculation for a cosine potential.
Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question of whether there are any differences concerning the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
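The pair (CB, sim) can be sketched as a minimal nearest-neighbour classifier over boolean attribute vectors (a toy illustration of implicit concept representation under our own similarity measure; the paper's transformation of the version space algorithm is more specific):

```python
def sim(x, y):
    """Similarity measure: number of matching attribute values."""
    return sum(a == b for a, b in zip(x, y))

def classify(case_base, query):
    """Concept membership is decided implicitly by the most similar stored case."""
    best = max(case_base, key=lambda case: sim(case[0], query))
    return best[1]

# CB: pairs of (attribute vector, belongs-to-concept?)
cb = [((1, 1, 0), True), ((1, 0, 0), True), ((0, 1, 1), False)]
```

Nothing in `cb` states the concept explicitly; changing either the stored cases or the function `sim` changes which concept the pair represents, which is exactly the interdependency the abstract refers to.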
The Wannier-Bloch resonance states are metastable states of a quantum particle in a space-periodic potential plus a homogeneous field. Here we analyze the states of a quantum particle in a space- and time-periodic potential. In this case the dynamics of the classical counterpart of the quantum system is either quasiregular or chaotic depending on the driving frequency. It is shown that both the quasiregular and the chaotic motion can also support quantum resonances. The relevance of the obtained results to the problem of a crystal electron under the simultaneous influence of d.c. and a.c. electric fields is briefly discussed. PACS: 73.20Dx, 73.40Gk, 05.45.+b
Abstract: We show that the physical mechanism of population transfer in a 3-level system with a closed loop of coherent couplings (loop-STIRAP) is not equivalent to an adiabatic rotation of the dark state of the Hamiltonian but corresponds to a rotation of a higher-order trapping state in a generalized adiabatic basis. The concept of generalized adiabatic basis sets is used as a constructive tool to design pulse sequences for stimulated Raman adiabatic passage (STIRAP) which give maximum population transfer even under conditions when the usual adiabaticity condition is only poorly fulfilled. Under certain conditions for the pulses (generalized matched pulses) there exists a higher-order trapping state which is an exact constant of motion, and analytic solutions for the atomic dynamics can be derived.
Convex Analysis
(1998)
Preface. Convex analysis is one of the mathematical tools used both explicitly and implicitly in many mathematical disciplines. However, there are not many courses that have convex analysis as their main topic; more often, parts of convex analysis are taught in courses on linear or nonlinear optimization, probability theory, geometry, location theory, etc. This manuscript gives a systematic introduction to the concepts of convex analysis. A focus is placed on the geometrical interpretation of convex analysis. This focus was one of the reasons why I decided to restrict myself to the finite-dimensional case. Another reason for this restriction is that in the infinite-dimensional case many proofs become more difficult and more technical. Therefore it would not have been possible (for me) to cover all the topics I wanted to discuss in this introductory text in the infinite-dimensional case as well. Nevertheless, I am convinced that even for someone interested in the infinite-dimensional case this manuscript will be a good starting point. When I offered a course in convex analysis in the winter semester 1997/1998 (on which this manuscript is based), many students asked me how this course fits into their own studies. Because this manuscript will (hopefully) be used by some students in the future, I give here some possible answers to this question.
- Convex analysis can be seen as an extension of classical analysis, in which we still obtain many of the results, such as a mean-value theorem, under weaker smoothness assumptions on the function.
- Convex analysis can be seen as a foundation of linear and nonlinear optimization which provides many tools to handle concepts in optimization much more easily (for example the Lemma of Farkas).
- Finally, convex analysis can be seen as a link between abstract geometry and the very algorithmically oriented field of computational geometry.
As explained above, this manuscript is based on a one-semester course and therefore cannot cover all topics and discuss all aspects of convex analysis in detail. To guide the interested reader I have included a list of good books on this subject at the end of the manuscript. It should be noted that the philosophy of this course follows [3], [4] and THE BOOK of modern convex analysis [6]. The geometrical emphasis, however, is also related to the intentions of [1].
The greybody factors in BTZ black holes are evaluated from 2D CFT in the spirit of the AdS3/CFT correspondence. The initial state of black holes in the usual calculation of greybody factors by effective CFT is described as the Poincaré vacuum state in 2D CFT. The normalization factor, which cannot be fixed in the effective CFT without appealing to string theory, is shown to be determined by the normalized bulk-to-boundary Green function. The relation among the greybody factors in different-dimensional black holes is exhibited. Two kinds of (h, h̄) = (1, 1) operators which couple to the boundary value of the massless scalar field are discussed.
In this paper, a combined approach to damage diagnosis of rotors is proposed. The intention is to employ signal-based as well as model-based procedures for an improved detection of size and location of the damage. In a first step, Hilbert transform signal processing techniques allow for a computation of the signal envelope and the instantaneous frequency, so that various types of non-linearities due to a damage may be identified and classified based on measured response data. In a second step, a multi-hypothesis bank of Kalman Filters is employed for the detection of the size and location of the damage based on the information of the type of damage provided by the results of the Hilbert transform.
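The first step — envelope extraction via the analytic signal — can be sketched with a plain DFT (a self-contained toy version of the Hilbert-transform idea; production code would use an FFT library, and the paper's second, Kalman-filter step is not shown):

```python
import cmath
import math

def analytic_signal(x):
    """Discrete analytic signal: zero negative frequencies, double positive ones.
    Uses a naive O(n^2) DFT so the example needs no external libraries."""
    n = len(x)
    X = [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
         for j in range(n)]
    for j in range(n):
        if 0 < j < n // 2:
            X[j] *= 2          # positive frequencies
        elif j > n // 2:
            X[j] = 0           # negative frequencies
    return [sum(X[j] * cmath.exp(2j * math.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

# Envelope of a 3-cycle cosine of amplitude 2 sampled at 64 points:
n = 64
signal = [2.0 * math.cos(2 * math.pi * 3 * k / n) for k in range(n)]
envelope = [abs(z) for z in analytic_signal(signal)]
```

For a damage signal, departures of this envelope (and of the instantaneous frequency, the derivative of the analytic signal's phase) from constancy are what flag and classify the non-linearity.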
Interoperability between different CAx systems involved in the development process of cars is presently one of the most critical issues in the automotive industry. None of the existing CAx systems meets all requirements of the very complex process network of the lifecycle of a car. With this background, industrial engineers have to use various CAx systems to get optimal support for their daily work. Today, the communication between different CAx systems is done via data files using special direct converters or neutral system-independent standards like IGES, VDAFS, and recently STEP, the international standard for product data description. To reduce the dependency on individual CAx system vendors, the German automotive industry developed an open CAx system architecture based on STEP as a guiding principle for CAx system development. The central component of this architecture is a common, system-independent access interface to CAx functions and data of all involved CAx systems, which is under development in the project ANICA. Within this project, a CAx object bus has been developed based on a STEP data description using CORBA as an integration platform. This new approach allows transparent access to data and functions of the integrated CAx systems without file-based data exchange. The product development process with various CAx systems concerns objects from different CAx systems. Thus, mechanisms are needed to handle the persistent storage of the CAx objects distributed over the CAx object bus to give the developing engineers a consistent view of the data model of their product. The following paper discusses several possibilities to guarantee consistent data management and storage of distributed CAx models. One of the most promising approaches is the enhancement of the CAx object bus by a STEP-based object-oriented data server to realise a central data management.
The problem of integrating heterogeneous software systems also arises in the field of CAx systems, which are used in many forms, for instance in the automotive industry for vehicle development. First, the solutions practiced in this area today and the problems they entail are briefly presented. Then the new standard for product data, STEP, and the standard for the interoperability of heterogeneous software systems, CORBA, as well as some CORBA design patterns, are explained. Next, a CAx integration architecture based on these two standards, developed in the project ANICA, is presented, and the principal approach to its realization is described. Subsequently, a first practical implementation of this architecture is reported. Finally, the lessons learned are briefly discussed and an outlook on future developments is given.
The paper studies differential and related properties of functions of a real variable with values in the space of signed measures. In particular, the connections between different definitions of differentiability, corresponding to different topologies on the measures, are described. Some conditions are given for the equivalence of the measures in the range of such a function. These conditions are in terms of so-called logarithmic derivatives and yield a generalization of the Cameron-Martin-Maruyama-Girsanov formula. Questions of this kind appear both in the theory of differentiable measures on infinite-dimensional spaces and in the theory of statistical experiments.
As is well known, there is no satisfactory infinite-dimensional substitute for the Lebesgue measure. On the other hand, many techniques of classical analysis carry over to infinite-dimensional situations. One way to do this is provided by the theory of differentiable measures: directional derivatives are defined for measures in a way similar to that for functions. One of the central examples is the Wiener measure. Stochastic integration with respect to Brownian motion, in particular the Skorokhod integral, arises naturally from this approach, and the basic ideas of the Malliavin calculus can also be explained easily in this framework. The lectures give most of the proofs.
The pure-Skyrme limit of a scale-breaking Skyrmed O(3) sigma model in 1+1 dimensions is employed to study the effect of the Skyrme term on the semiclassical analysis of a field theory with instantons. The instantons of this model are self-dual and can be evaluated explicitly. They are also localised to an absolute scale, and their fluctuation action can be reduced to a scalar subsystem. This permits the explicit calculation of the fluctuation determinant and the shift in vacuum energy due to instantons. The model also illustrates the semiclassical quantisation of a Skyrmed field theory.
This article continues the series presenting the results of FEMEX, which began with the presentation of a general feature definition in [BWE-96]. FEMEX (Feature Modelling Experts) is an international and interdisciplinary group of researchers, developers, and users from universities, research institutes, and industry whose goal is to work out the foundations of feature-based product development. The user is at the center of these efforts: the task of feature technology is to provide methods and tools with which users can work efficiently in the different phases of a complex process chain. Four working groups were formed, each dealing with different aspects of feature technology. This contribution presents the results of working group II, "Feature Modelling Methods and Application Areas". Its task is to examine the modelling methods and application areas of feature technology in the context of the product development process. The starting point for this work, besides the user-specific requirements, is the feature definition of working group I [BWE-96]. It is worth emphasizing that, by this definition, features are not physical elements and need not have physical counterparts; they exist only in the world of information-technology models. Furthermore, the properties of the processed objects that are relevant to the user, of whatever kind (for example the function of a component), form the actual basis of the definition. No property is given a higher priority from the outset, whereby the component geometry loses its dominant role in modelling (in contrast, most CAD/CAM systems offered today assume that the product geometry processed in a system is the basis of the entire product model).
A new method of determining some characteristics of binary images is proposed based on a special linear filtering. This technique enables the estimation of the area fraction, the specific line length, and the specific integral of curvature. Furthermore, the specific length of the total projection is obtained, which gives detailed information about the texture of the image. The influence of lateral and directional resolution depending on the size of the applied filter mask is discussed in detail. The technique includes a method of increasing directional resolution for texture analysis while keeping lateral resolution as high as possible.
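The core idea — deriving image characteristics from linearly filtered values — can be sketched for the simplest case, the area fraction (our own minimal version with a plain averaging mask; the paper's filter masks additionally yield line length, curvature, and total projection):

```python
def mean_filter(img, size=3):
    """Linear filtering of a binary image with a size x size averaging mask
    (interior pixels only, no border padding)."""
    h, w, r = len(img), len(img[0]), size // 2
    out = []
    for i in range(r, h - r):
        row = []
        for j in range(r, w - r):
            s = sum(img[i + di][j + dj]
                    for di in range(-r, r + 1)
                    for dj in range(-r, r + 1))
            row.append(s / (size * size))
        out.append(row)
    return out

def area_fraction(filtered):
    """The mean of the filtered values estimates the area fraction."""
    vals = [v for row in filtered for v in row]
    return sum(vals) / len(vals)

# 8x8 binary image: left half foreground, so the true area fraction is 1/2.
img = [[1] * 4 + [0] * 4 for _ in range(8)]
af = area_fraction(mean_filter(img))
```

Because the averaging mask is linear, its output mean reproduces the input mean over the filtered region; the more interesting estimators in the paper read off geometric quantities from the distribution of the filtered values, not just their mean.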
This work deals with one way of improving efficiency using the SNLP-based planning system CAPlan. New problems to be solved are first subjected to a preprocessing step in which certain properties are determined without actually solving the problem. The planning system is then given the new problem together with the additional knowledge in the form of the analyzed properties, and it uses this knowledge to find a solution more efficiently.
Ein verhaltensorientierter Ansatz zum flächendeckenden Fahren in a priori unbekannter Umgebung
(1998)
This paper describes a method for area-covering navigation in an initially unknown environment, as needed e.g. for cleaning applications in the home. While the cleaning task is being carried out, the environment is explored and mapped in parallel. The behavior-based approach enables a robust, goal-directed and yet resource-efficient implementation, and it makes it easy to replace individual behaviors with improved or specially learned versions. The presented method has been tested in simulation and will shortly be evaluated on a real robot.