### Refine

#### Year of publication

- 1996 (70)

#### Document Type

- Preprint (70)

#### Keywords

- COMOKIT (2)
- CoMo-Kit (2)
- Abstraction (1)
- Boltzmann Equation (1)
- CAx Technology (1)
- CAx-Technik (1)
- CORBA (1)
- Cantor sets (1)
- Case-Based Reasoning (1)
- Collision Operator (1)


In this paper we consider the problem of finding, in a given graph, a minimum-weight subtree or connected subgraph with a given number of edges. These NP-hard combinatorial optimization problems have various applications in the oil industry, in facility layout, and in graph partitioning. We present different heuristic approaches based on spanning-tree and shortest-path methods, as well as an exact algorithm that solves the problem in polynomial time if the underlying graph is a tree. Both the edge-weighted and the node-weighted case are investigated, and extensive numerical results on the behaviour of the heuristics compared to optimal solutions are presented. The best heuristic yielded results within an error margin of less than one percent of optimality in most cases; in a large percentage of tests, even optimal solutions were found.
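A spanning-tree heuristic of the kind mentioned in this abstract could look roughly like the following sketch (an illustrative reconstruction, not the authors' algorithm: build a minimum spanning tree with Prim's method, then repeatedly prune the heaviest leaf edge until the requested number of edges remains):

```python
import heapq

def k_subtree_heuristic(n, edges, k):
    """Heuristic for the k-edge minimum-weight subtree problem.
    edges: list of (u, v, w) tuples on vertices 0..n-1."""
    # Build adjacency lists for Prim's algorithm.
    adj = {v: [] for v in range(n)}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    # Prim's minimum spanning tree, starting from vertex 0.
    in_tree = {0}
    mst = []                       # MST edges as (w, u, v)
    heap = [(w, 0, v) for w, v in adj[0]]
    heapq.heapify(heap)
    while heap and len(in_tree) < n:
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue
        in_tree.add(v)
        mst.append((w, u, v))
        for w2, x in adj[v]:
            if x not in in_tree:
                heapq.heappush(heap, (w2, v, x))
    # Prune: repeatedly drop the heaviest edge incident to a leaf,
    # which keeps the remaining edge set a connected subtree.
    tree = set(mst)
    while len(tree) > k:
        deg = {}
        for w, u, v in tree:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        leaf_edges = [(w, u, v) for (w, u, v) in tree
                      if deg[u] == 1 or deg[v] == 1]
        tree.remove(max(leaf_edges))
    return sorted(tree)
```

Pruning only leaf edges guarantees connectivity at every step; the result is a feasible k-edge subtree, though in general not an optimal one.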

The learning of uniformly recursively enumerable families of languages from good examples is investigated, and differences and similarities to the learning of recursive families of languages and of recursive functions are pointed out. The underlying model is that of pupils learning from a teacher. Several variants are introduced, compared, and in part also characterized, and examples and other typical properties are used to convey a feeling for their power. Among other things, it is shown that "universal" good examples, with which a language class can be explained in all situations, do not always exist.

We present a similarity criterion based on feature weighting. Feature weights are recomputed dynamically according to the performance of cases during problem solving episodes. We also present a novel algorithm that analyzes and explains the performance of the retrieved cases and determines the features whose weights need to be recomputed. Experiments show that integrating our similarity criterion with our analysis algorithm in a feature weighting model improves the adaptability of the retrieved cases by converging to the best feature weights over multiple problem solving episodes.
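A minimal sketch of such a dynamically weighted similarity measure is given below (illustrative only; the reinforcement-style update rule is a placeholder, not necessarily the rule used in the paper, and feature values are assumed normalized to [0, 1]):

```python
def weighted_similarity(query, case, weights):
    """Weighted nearest-neighbour similarity over shared features;
    feature values are assumed normalized to [0, 1]."""
    total = sum(weights.values())
    return sum(weights[f] * (1.0 - abs(query[f] - case[f]))
               for f in weights) / total

def update_weights(weights, blamed_features, success, rate=0.1):
    """Reinforce or penalize the features identified by the analysis
    step, depending on whether adapting the retrieved case succeeded."""
    for f in blamed_features:
        factor = (1.0 + rate) if success else (1.0 - rate)
        weights[f] *= factor
    return weights
```

Repeating this retrieve-analyze-update cycle over many problem solving episodes is what lets the weights drift towards values that favour adaptable cases.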

Planning for manufacturing workpieces is a complex task that requires the interaction of a domain-specific reasoner and a generic planning mechanism. In this paper we present an architecture for organizing the case base that is based on the information provided by a generic problem solver. A retrieval procedure is then presented that uses the information provided by the domain-specific reasoner in order to improve the accuracy of the cases retrieved. However, it is not realistic to expect that the retrieved case will entirely fit the new problem. We present a replay procedure for obtaining a partial solution that replays not only the valid decisions taken for solving the case, but also the justifications of decisions rejected during the problem solving process. As a result, those completion alternatives of the partial solution that are already known from the case to be invalid are discarded.

Complete Eager Replay
(1996)

We present an algorithm for completely replaying previous problem solving experiences for plan-space planners. In our approach not only the solution trace is replayed, but also the explanations of failed attempts made by the first-principle planner. In this way, the capability of refitting previous solutions into new problems is improved.

Planning for realistic problems in a static and deterministic environment with complete information faces exponential search spaces and, more often than not, should produce plans that are comprehensible to the user. This article introduces new planning strategies inspired by proof planning examples in order to tackle the search-space problem and the structured-plan problem. Island planning and refinement as well as subproblem refinement are integrated into a general planning framework, and some exemplary control knowledge suitable for proof planning is given.

With the rapid spread of CAx techniques in the German automotive industry, the need for better integration of CAx systems into the process chains and for control over the product information flows is growing. As a consequence, a shift in CAx system architectures from closed, monolithic systems to open, integrated ones has become apparent in recent years. In the following, this process and its implications for users and for system vendors are analyzed. Starting from an initiative of the German automotive industry, the project ANICA (Analysis of Interfaces of various CAD/CAM-Systems) was launched. In this project, the interfaces to the system kernels of several CAx vendors are examined, and a concept for cooperating CAx systems in the automotive industry is developed.

In the past years, development and production processes in many companies have changed in a revolutionary way, leading to new demands on information and CAx technology. The R&D departments of the German automotive industry installed a working group to develop a common long-term CAD/CAM strategy. A preliminary result is the concept of an open CAx system architecture as a basis for realizing industrial requirements on CAD/CAM and for the cooperation with system vendors. The project ANICA was started in cooperation with five international CAD/CAM suppliers in order to show the feasibility of this architecture. The access interfaces of different system kernels are analysed with the aim of developing a concept for a cooperating CAx system network. The concept will be put into practice with a software prototype based on CORBA and OLE. The communication elements within such an architecture have to go far beyond conventional CAD data. This will lead to an extension of "feature" concepts to include CAx functionality and dynamic information about the process chain of a product. The impact on modern concepts for user interfaces, on reverse engineering methods, and on product data models is discussed, finally closing the loop to industrial CAx application.

Representations of activities dealing with the development or maintenance of software are called software process models. Process models allow for communication, reasoning, guidance, improvement, and automation. Two approaches for building, instantiating, and managing processes, namely CoMo-Kit and MVP-E, are combined to build a more powerful one. CoMo-Kit is based on AI/KE technology; it was developed for supporting complex design processes and is not specialized to software development processes. MVP-E is a process-sensitive software engineering environment for modeling and analyzing software development processes, and it guides software developers. Additionally, it provides services to establish and run measurement programmes in software organizations. Because both approaches were developed completely independently, major integration efforts are required to combine their respective advantages. This paper concentrates on the resulting language concepts and on the operationalization necessary for building automated process support.

A combination of a state-based formalism and a temporal logic is proposed to obtain an expressive language for various descriptions of reactive systems. This makes it possible to use a model-oriented as well as a property-oriented specification style in one description. The descriptions considered here are those of the environment, the specification, and the design of a reactive system. It is possible to express, e.g., the requirements of a reactive system by states and transitions between them, together with further temporal formulas restricting the behaviors of the statecharts. It is shown how this combined formalism can be used: the specification of a small example is given, and a designed controller is proven correct with respect to this specification. The combination of the languages is based on giving a temporal semantics of a state-based formalism (statecharts) using a temporal logic (TLA).

This article discusses a qualitative, topological, and robust world-modelling technique with special regard to navigation tasks for mobile robots operating in unknown environments. As a central aspect, reliability with regard to error tolerance and stability is emphasized. Benefits and problems involved in exploration, as well as in navigation tasks, are discussed. The proposed method places very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.

The purpose of this paper is to present the state of the art in singular optimal control. If the Hamiltonian is independent of the control on an interval \([t_1,t_2]\), we call the control singular on this interval. Singular optimal controls appear in many applications, which has motivated research since the 1950s. Optimal controls often consist of nonsingular and singular parts, where the junctions between these parts are usually very difficult to find. One section of this work presents the current knowledge about the location of the junctions and the behaviour of the control at the junctions. The definition and the properties of the orders (problem order and arc order), which are important in this context, are given as well. Another chapter considers multidimensional controls and how they can be treated. An alternative definition of the orders in the multidimensional case is proposed, and a counterexample is given that confirms a remark from the 1960s. A voluminous list of optimality conditions, collected from several publications, is added. A strategy for solving optimal control problems numerically is given, and the existing algorithms are compared with each other. Finally, conclusions and an outlook on future research are given.
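For reference, the notions used in this abstract can be written out in the standard textbook form (a generic formulation for a control-affine problem, not quoted from the paper):

```latex
% For a control-affine Hamiltonian
%   H(x,u,\lambda) = H_0(x,\lambda) + u \, H_1(x,\lambda),
% the switching function is \sigma(t) = H_1(x(t),\lambda(t)).
% The control is singular on [t_1,t_2] if
\sigma(t) = \frac{\partial H}{\partial u} \equiv 0
  \quad \text{for all } t \in [t_1, t_2].
% The (problem) order q is the smallest integer such that the
% control u appears explicitly in the 2q-th time derivative of \sigma.
```

On a singular arc the first-order condition \(\partial H/\partial u = 0\) gives no information about \(u\), which is why higher derivatives of the switching function, and hence the orders, become the central tool.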

This paper considers a transmission boundary-value problem for the time-harmonic Maxwell equations neglecting displacement currents, which is frequently used for the numerical computation of eddy currents. Across material boundaries the tangential components of the magnetic field H and the normal component of the magnetization μH are assumed to be continuous. This problem admits a hyperplane of solutions if the domains under consideration are multiply connected. Using integral equation methods and singular perturbation theory, it is shown that this hyperplane contains a unique point which is the limit of the classical electromagnetic transmission boundary-value problem for vanishing displacement currents. From the convergence proof, a simple constructive criterion for selecting this solution is immediately derived.
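The transmission conditions stated in this abstract can be written explicitly as follows (notation assumed for illustration: \(n\) is the unit normal on the material interface, and subscripts 1, 2 denote the two sides of the interface):

```latex
% Continuity of the tangential magnetic field and of the normal
% component of \mu H across a material interface:
n \times (H_1 - H_2) = 0, \qquad
n \cdot (\mu_1 H_1 - \mu_2 H_2) = 0 .
```
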

In the present paper a general criticism of kinetic equations for vehicular traffic is given. The necessity of introducing an Enskog-type correction into these equations is shown. An Enskog-like kinetic traffic flow equation is presented and fluid dynamic equations are derived. This derivation yields new coefficients for the standard fluid dynamic equations of vehicular traffic. Numerical simulations for inhomogeneous traffic flow situations are shown, together with a comparison between kinetic and fluid dynamic models.

The theory of multidimensional systems is a relatively young research area within systems theory; the first papers date from the 1970s. The main motivation for studying multidimensional systems was the need to extend the theory of digital filters, which are used in classical one-dimensional signal processing (time-dependent signals), to the field of image processing, i.e. to two-dimensional signals.

The first part of the lecture therefore deals with scalar two-dimensional systems and is essentially restricted to the linear case. Two-dimensional filters are studied, along with their most important properties, causality and stability, and their state-space realizations, such as the models of Roesser and Fornasini-Marchesini. Parallels and differences to one-dimensional systems theory are emphasized.

The second part of the lecture treats general higher-dimensional and multivariable systems. For these systems, the approach to systems theory founded by Jan Willems, the so-called behavioral approach, proves expedient. Basic ideas of this approach are presented, as well as one of the most important methods for computing with polynomials in several variables, the theory of Gröbner bases.

The paper presents some new estimates on the gain term of the Boltzmann collision operator. For Maxwellian molecules, it is shown that the \(L^\infty\)-norm of the gain term can be bounded in terms of the \(L^1\)- and \(L^\infty\)-norms of the density function f. In the case of more general collision kernels, like the hard-sphere interaction potential, the gain term is estimated pointwise by the \(L^\infty\)-norm of the density function and the loss term of the Boltzmann collision operator.
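Written out, the Maxwellian-molecule estimate described in this abstract has the following shape (a schematic form with an unspecified constant \(C\); the precise statement and the definition of the collision kernel are in the paper):

```latex
% Gain term of the Boltzmann collision operator, schematically:
%   Q^+(f,f)(v) = \int_{\mathbb{R}^3} \int_{S^2}
%       B(v - v_*, \omega) \, f(v') f(v_*') \, d\omega \, dv_* .
% Shape of the estimate for Maxwellian molecules:
\| Q^+(f,f) \|_{L^\infty} \le C \, \| f \|_{L^1} \, \| f \|_{L^\infty} .
```
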

Here the almost sure convergence of one-dimensional Kohonen's algorithm in its general form, namely a 2k-point neighbour setting with a non-uniform stimulus distribution, is proved. We show that the asymptotic behaviour of the algorithm is governed by a cooperative system of differential equations which is in general irreducible. This system of differential equations has an asymptotically stable fixed point, and a compact subset of its domain of attraction is visited by the state variable \(X_n\) infinitely often.
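A minimal sketch of the one-dimensional Kohonen update with a symmetric neighbourhood of k units on each side is given below (illustrative only; the step-size schedule and the non-uniform stimulus distribution are placeholder choices, not the ones analyzed in the paper):

```python
import random

def kohonen_step(weights, stimulus, k, rate):
    """One update of 1-D Kohonen's algorithm: move the winning unit
    and its k neighbours on each side towards the stimulus."""
    winner = min(range(len(weights)),
                 key=lambda i: abs(weights[i] - stimulus))
    lo, hi = max(0, winner - k), min(len(weights) - 1, winner + k)
    for i in range(lo, hi + 1):
        weights[i] += rate * (stimulus - weights[i])
    return weights

def train(n_units=10, k=1, steps=5000, seed=0):
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_units)]
    for t in range(steps):
        # Non-uniform stimuli: a triangular distribution on [0, 1].
        stimulus = rng.triangular(0.0, 1.0, 0.7)
        kohonen_step(w, stimulus, k, rate=0.1 / (1 + 0.001 * t))
    return w
```

Since every update is a convex combination of a weight and a stimulus, the weights stay inside the interval spanned by the initial weights and the stimuli.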

The paper presents some adaptive load balance techniques for the simulation of rarefied gas flows on parallel computers. It is shown that a static load balance is insufficient to obtain scalable parallel efficiency. Hence, two adaptive techniques based on simple algorithms are investigated. Numerical results show that with these heuristic techniques one can achieve a sufficiently high efficiency over a wide range of different hardware platforms.
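An adaptive rebalancing step of the general kind described here can be sketched as follows (illustrative only; the abstract does not specify the two algorithms, so this uses a generic greedy scheme: grid cells, weighted by their current particle counts, are reassigned so that per-processor loads equalize):

```python
def rebalance(cell_loads, n_procs):
    """Greedy adaptive load balance: assign grid cells (with their
    current particle counts as weights) to processors, always giving
    the next heaviest cell to the least loaded processor."""
    assignment = {p: [] for p in range(n_procs)}
    totals = [0] * n_procs
    for cell, load in sorted(cell_loads.items(),
                             key=lambda kv: -kv[1]):
        p = totals.index(min(totals))   # least loaded processor
        assignment[p].append(cell)
        totals[p] += load
    return assignment, totals
```

Re-running such a step whenever the measured per-processor loads drift apart is what makes the balance adaptive, as opposed to a static decomposition fixed at start-up.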