## Fachbereich Informatik

### Refine

#### Year of publication

- 1999 (228)
- 1994 (33)
- 1996 (32)
- 1998 (32)
- 1995 (31)
- 1997 (26)
- 2016 (23)
- 2015 (22)
- 2001 (20)
- 2019 (19)
- 2007 (18)
- 2018 (18)
- 2013 (17)
- 2014 (15)
- 1993 (14)
- 2009 (11)
- 2012 (11)
- 2000 (10)
- 2002 (10)
- 2003 (7)
- 2004 (7)
- 2006 (7)
- 2008 (7)
- 2017 (7)
- 2010 (6)
- 2005 (5)
- 1991 (4)
- 1992 (4)
- 2011 (4)
- 1979 (2)
- 1980 (1)
- 1990 (1)

#### Document Type

- Preprint (290)
- Doctoral Thesis (139)
- Report (111)
- Article (88)
- Conference Proceeding (8)
- Master's Thesis (7)
- Study Thesis (5)
- Bachelor Thesis (2)
- Habilitation (2)

#### Language

- English (652)

#### Keywords

- AG-RESY (47)
- PARO (25)
- SKALP (15)
- Visualisierung (13)
- Case-Based Reasoning (11)
- RODEO (11)
- HANDFLEX (9)
- Abstraction (6)
- Case Based Reasoning (6)
- Robotics (6)

Partitioned chain grammars
(1979)

This paper introduces a new class of grammars, the partitioned chain grammars, for which efficient parsers can be generated automatically. Besides being efficiently parsable, these grammars possess a number of other properties which make them very attractive for use in parser generators. For instance, they form a large grammar class and describe all deterministic context-free languages. The main advantage of partitioned chain grammars, however, is that, given a language, it is usually easier to describe it by a partitioned chain grammar than to construct a grammar of one of the other types commonly used in parser generators.

This article describes the basic concepts of an extensible, customizable, knowledge-based graphical editor and its adaptation to the DOCASE methodology and tool environment. One aspect in this field is the mapping of conceptual models (expressed in a specific language) to their graphical representations. This also has an impact on the semantics of the user actions in a graphical editor tool. The ability to extend and customize the editor can be used to build specific graphical interfaces to various kinds of tools in the software development process. Major aspects of ODE are semantics-directed editing in addition to normal syntax-directed editing, support of abstraction mechanisms, multiple modeless views to attack complexity, semantic analysis, and animation. The result is a highly customizable graphical editor construction set that meets the requirements of applications in many domains of system design.

Based on the experiences from an autonomous mobile robot project called MOBOT-III, we found hard real-time constraints for the operating-system design. ALBATROSS is "A flexible multi-tasking and realtime network-operating-system kernel". The focus in this article is on a communication scheme fulfilling the previously demanded assurances. The central chapters discuss the shared buffer management and the way to design the communication architecture. Some further aspects beside the strict real-time requirements, like the possibilities to control and observe a running system, are mentioned. ALBATROSS is currently implemented on a multi-processor VMEbus system.

Retrieval of cases is one important step within the case-based reasoning paradigm. We propose an improvement of this stage in the process model for finding the most similar cases with an average effort of O(log₂ n), where n is the number of cases. The basic idea of the algorithm is to use the heterogeneity of the search space for a density-based structuring and to employ this precomputed structure, a k-d tree, for efficient case retrieval according to a given similarity measure sim. In addition to illustrating the basic idea, we present experimental results of a comparison of four different k-d tree generating strategies and introduce the new notion of virtual bounds, which significantly reduces the retrieval effort from a more pragmatic perspective. The presented approach is fully implemented within the Patdex system, a case-based reasoning system for diagnostic applications in engineering domains.
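To make the retrieval idea concrete, here is a minimal k-d tree nearest-neighbour sketch in Python; it illustrates the general technique only, not the Patdex implementation, and the function names (`build_kdtree`, `nearest`) and the Euclidean distance used in place of a similarity measure are assumptions.

```python
# Minimal k-d tree retrieval sketch (illustrative only, not the Patdex algorithm).
# Cases are tuples of numeric features; "most similar" here means smallest Euclidean distance.
import math

def build_kdtree(cases, depth=0):
    """Recursively split the case base along alternating feature axes."""
    if not cases:
        return None
    axis = depth % len(cases[0])
    cases = sorted(cases, key=lambda c: c[axis])
    mid = len(cases) // 2
    return {
        "case": cases[mid],
        "axis": axis,
        "left": build_kdtree(cases[:mid], depth + 1),
        "right": build_kdtree(cases[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    """Backtracking nearest-neighbour search; prunes subtrees that cannot improve the best case."""
    if node is None:
        return best
    dist = math.dist(query, node["case"])
    if best is None or dist < best[1]:
        best = (node["case"], dist)
    axis, diff = node["axis"], query[node["axis"]] - node["case"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < best[1]:          # the splitting plane may still hide a closer case
        best = nearest(far, query, best)
    return best

cases = [(1.0, 2.0), (3.0, 4.0), (5.0, 1.0), (2.0, 7.0)]
tree = build_kdtree(cases)
print(nearest(tree, (2.5, 3.0)))     # -> closest stored case and its distance
```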

Trimming of surfaces and volumes, curve and surface modeling via Bézier's idea of distortion, segmentation, reparametrization, and geometric continuity are examples of applications of functional composition. This paper shows how to compose polynomial and rational tensor product Bézier representations. The problem of composing Bézier splines and B-spline representations is also addressed.

The use of non-volatile semiconductor memory within an extended storage hierarchy promises significant performance improvements for transaction processing. Although page-addressable semiconductor memories like extended memory, solid-state disks and disk caches have been commercially available for several years, no detailed investigation of their use for transaction processing has been performed so far. We present a comprehensive simulation study that compares the performance of these storage types and of different usage forms. The following usage forms are considered: allocation of entire log and database files in non-volatile semiconductor memory, using a so-called write buffer to perform disk writes asynchronously, and caching of database pages at intermediate storage levels (in addition to main memory caching). Our simulations are conducted with both synthetically generated workloads and traces from real-life database applications. In particular, simulation results are presented for the debit-credit workload frequently used in transaction processing benchmarks. As expected, the greatest performance improvements (but at the highest cost) can be achieved by storing log and database files completely in non-volatile semiconductor memory. For update-intensive workloads, a limited amount of non-volatile memory used as a write buffer also proved to be very effective. To reduce the number of disk reads, caching of database pages in addition to main memory is best supported by an extended memory buffer. In this respect, disk caches are found to be less effective as they are designed for one-level caching. Different storage costs suggest that it may be cost-effective to use two or even three of the intermediate storage types together. The performance improvements obtainable by the use of non-volatile semiconductor memory are also found to reduce the need for sophisticated DBMS buffer management to achieve high transaction processing performance.

For online collision detection with a multi-arm robot, a fast method for computing the so-called collision vector is presented. Manipulators and obstacles are modelled by sets of convex polytopes. Known distance algorithms serve as a foundation. To speed up the collision detection, dynamic obstacles are approximated by geometric primitives and organized in hierarchies. Online, the Dynamic Hierarchies introduced here are adjusted to the current arm configuration. A comparison with previous methods shows a considerable speed-up of the computations.
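As a rough illustration of how approximating primitives can prune exact distance computations (a generic sketch, not the Dynamic Hierarchies algorithm itself; `sphere_distance` is a hypothetical helper), a bounding-sphere test yields a conservative lower bound on the distance between the enclosed polytopes:

```python
# Sketch: conservative distance bound from bounding spheres (illustrative; the paper
# organises such primitives in hierarchies adjusted to the current arm configuration).
import math

def sphere_distance(c1, r1, c2, r2):
    """Lower bound on the distance between two objects enclosed by the given spheres."""
    return max(0.0, math.dist(c1, c2) - r1 - r2)

# If the bound already exceeds a safety margin, the exact polytope distance
# computation for this pair of links/obstacles can be skipped.
print(sphere_distance((0, 0, 0), 0.5, (3, 4, 0), 1.0))   # 3.5
```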

User interfaces for large distributed applications have to handle specific problems: the complexity of the application itself and the integration of online data into the user interface. A main task of the user interface architecture is to provide powerful tools to design and augment the end-user system easily, hence giving the designer more time to focus on user requirements. Our experience in developing a user interface system for a process control room showed that a lot of time during the development process is wasted on the integration of online data residing anywhere but in the user interface itself. Furthermore, external data may be kept by different kinds of programs, e.g. C programs running a numerical process model or PROLOG programs running a diagnosis system, both in parallel to the process and in parallel to the user interface. Facing these specific requirements, we developed a user interface architecture following two main goals: 1. integration of external information into high-level graphical objects, and 2. the system should be open to any program running as a separate process using its own problem-oriented language. The architecture is based on two approaches: an asynchronous, distributed and language-independent communication model, and an object model describing the problem domain and the interface using object-oriented techniques. Other areas like rule-based programming are involved, too. With this paper, we present the XAVIA user interface architecture, the (as far as we know) first user interface architecture that is consistently based on a distributed object model.

Gauss Frame Offsets
(1992)

Virtual Reality (VR) is to be seen as the superset of simulation and animation. Visualization is done by rendering. The fundamental model of VR accounts for all phenomena to be modelled with the help of a computer. Examples range from simple dragging actions with a mouse device to the complex simulation of physically based animation.

Based on experiences from an autonomous mobile robot project called MOBOT-III, we found hard real-time constraints for the operating-system design. ALBATROSS is "A flexible multi-tasking and realtime network-operating-system kernel", not limited to mobile-robot projects, but potentially useful wherever a high reliability of a real-time system has to be guaranteed. The focus in this article is on a communication scheme fulfilling the demanded (hard real-time) assurances without introducing time delays or jitter on the critical information channels. The central chapters discuss a locking-free shared buffer management without the need for interrupts, and a way to arrange the communication architecture so as to produce minimal protocol overhead and short cycle times. Most of the remaining communication capacity (if there is any) is used for redundant transfers, increasing the reliability of the whole system. ALBATROSS is currently implemented on a multi-processor VMEbus system.

Case-based problem solving can be significantly improved by applying domain knowledge (as opposed to problem-solving knowledge), which can be acquired with reasonable effort, to derive explanations of the correctness of a case. Such explanations, constructed on several levels of abstraction, can be employed as the basis for similarity assessment as well as for adaptation by solution refinement. The general approach of explanation-based similarity can be applied to different real-world problem-solving tasks such as diagnosis and planning in technical areas. This paper presents the general idea as well as two specific, completely implemented realizations for a diagnosis and a planning task.

SPIN-NFDS Learning and Preset Knowledge for Surface Fusion - A Neural Fuzzy Decision System -
(1993)

The problem to be discussed in this paper may be characterized in short by the question: "Do these two surface fragments belong together (i.e. belong to the same surface)?" The presented techniques try to benefit from some predefined knowledge as well as from the possibility to refine and adapt this knowledge according to a (changing) real environment, resulting in a combination of fuzzy decision systems and neural networks. The results are encouraging (fast convergence speed, high accuracy), and the model might be used for a wide range of applications. The general frame surrounding the work in this paper is the SPIN project, where the emphasis is on sub-symbolic abstractions based on a 3-d scanned environment.

This paper addresses the problem of adaptability over an infinite period of time with regard to dynamic networks. A never-ending flow of examples has to be clustered, based on a distance measure. The developed model is based on the self-organizing feature maps of Kohonen [6], [7] and some adaptations by Fritzke [3]. The problem of dynamic surface classification is embedded in the SPIN project, where sub-symbolic abstraction based on a 3-d scanned environment is performed.
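For orientation, a minimal online update in the spirit of Kohonen's self-organizing feature maps might look like the following Python sketch; it is purely illustrative, uses a fixed 1-D map rather than the dynamic network adaptations of Fritzke, and the name `som_step` and all parameter values are assumptions.

```python
# Minimal sketch of an online Kohonen-style update (illustrative; the paper's model
# additionally adapts the network structure itself, as in Fritzke's dynamic networks).
import random

def som_step(weights, sample, learning_rate=0.1, radius=1):
    """Move the best-matching unit and its grid neighbours towards the sample."""
    dists = [sum((w - s) ** 2 for w, s in zip(unit, sample)) for unit in weights]
    winner = dists.index(min(dists))
    for i, unit in enumerate(weights):
        if abs(i - winner) <= radius:                      # 1-D neighbourhood for brevity
            weights[i] = [w + learning_rate * (s - w) for w, s in zip(unit, sample)]
    return weights

# An endless stream of 2-D samples clustered online by a small 1-D map.
units = [[random.random(), random.random()] for _ in range(5)]
for _ in range(1000):
    units = som_step(units, [random.random(), random.random()])
print(units)
```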

In this paper we present an interpreter which supports the validation of conceptual models in early stages of the development. We compare hypermedia and expert system approaches to knowledge processing and show how an integrated approach eases the creation of expert systems. Our knowledge engineering tool CoMo-Kit allows a "smooth" transition from initial protocols via a semi-formal specification based on a typed hypertext up to a running expert system. The interpreter uses the intermediate hypertext representation for the interactive solution of problems. Thereby, tasks are distributed to agents via a local area network. This means that the specification of an expert system can be used directly to solve real-world problems. If formal (operational) specifications exist for subtasks, these are delegated to computers. Therefore, our approach makes it possible to specify and validate distributed, cooperative systems where some subtasks are solved by humans and other subtasks are solved automatically by computers.

This report contains a collection of abstracts for talks given at the "Deduktionstreffen" held at Kaiserslautern, October 6 to 8, 1993. The topics of the talks range from theoretical aspects of term rewriting systems and higher-order resolution to descriptions of practical proof systems in various applications. They are grouped according to the following classification: Distribution and Combination of Theorem Provers, Termination, Completion, Functional Programs, Inductive Theorem Proving, Automatic Theorem Proving, Proof Presentation. The Deduktionstreffen is the annual meeting of the Fachgruppe Deduktionssysteme in the Gesellschaft für Informatik (GI), the German association for computer science.

We study deterministic conditional rewrite systems, i.e. conditional rewrite systems where the extra variables are not totally free but 'input bounded'. If such a system R is quasi-reductive, then the rewrite relation →R is decidable and terminating. We develop a critical pair criterion to prove confluence if R is quasi-reductive and strongly deterministic. In this case we prove that R is logical, i.e. that the convertibility relation ↔*R coincides with =R. We apply our results to prove Horn clause programs to be uniquely terminating. This research was supported by the Deutsche Forschungsgemeinschaft, SFB 314, Project D4.

We investigate restricted termination and confluence properties of term rewriting systems, in particular weak termination and innermost termination, and their interrelation. New criteria are provided which are sufficient for the equivalence of innermost / weak termination and uniform termination of term rewriting systems. These criteria provide interesting possibilities to infer completeness, i.e. termination plus confluence, from restricted termination and confluence properties. Using these basic results we are also able to prove some new results about modular termination of rewriting. In particular, we show that termination is modular for some classes of innermost terminating and locally confluent term rewriting systems, namely for non-overlapping and even for overlay systems. As an easy consequence this latter result also entails a simplified proof of the fact that completeness is a decomposable property of so-called constructor systems. Furthermore we show how to obtain similar results for even more general cases of (non-disjoint) combined systems with shared constructors and of certain hierarchical combinations of systems with constructors. Interestingly, these modularity results are obtained by means of a proof technique which itself constitutes a modular approach.

Four different initialization methods for parallel branch-and-bound algorithms are described and compared with reference to several criteria. A formal analysis of their idle times and efficiency follows. It indicates that the efficiency of three methods depends on the branching factor of the search tree. Furthermore, the fourth method offers the best efficiency of the overall algorithm when a centralized OPEN set is used. Experimental results from a PRAM simulation support these statements.

Shadow-Mapping
(1993)

Most radiosity techniques store radiosities in certain sample points, typically the vertices of polyhedral scenes. As diffuse radiosities are view-independent, they can be used for an interactive 'walk-through'. This paper presents an algorithm for storing radiosities independently of the representation of the object. A distributed rendering system which uses this shadow-mapping technique is described. The basic thermophysical definitions needed to derive a sum formula for the form factor calculation of polygons are explained.
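For reference, the classical differential form factor that such sum formulas build on (standard radiosity theory, stated here for orientation rather than quoted from the paper) is

```latex
F_{dA_i \to dA_j} \;=\; \frac{\cos\theta_i \,\cos\theta_j}{\pi\, r^2}\; V_{ij}\; dA_j
```

where r is the distance between the two differential areas, \theta_i and \theta_j are the angles between the connecting line and the surface normals, and V_{ij} is the mutual visibility term.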

This paper describes some new algorithms for the accurate calculation of surface properties. In the first part an arithmetic on Bézier surfaces is introduced. Formulas are given which determine the Bézier points and weights of the resulting surface from the points and weights of the operand surfaces. An application of the arithmetic operations to surface interrogation methods is described in the second part. It turns out that the quality analysis can be reduced to a few numerically stable operations. Finally, the advantages and disadvantages of this method are discussed.
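As one example of the kind of formula in question, the product of two polynomials in Bernstein-Bézier form obeys the standard identity (given for orientation; it is not quoted from the paper)

```latex
\Bigl(\sum_{i=0}^{m} a_i B_i^m(t)\Bigr)\Bigl(\sum_{j=0}^{n} b_j B_j^n(t)\Bigr)
  \;=\; \sum_{k=0}^{m+n} \Bigl(\sum_{i+j=k} \frac{\binom{m}{i}\binom{n}{j}}{\binom{m+n}{k}}\, a_i b_j\Bigr) B_k^{m+n}(t)
```

so the control values of the product arise as weighted convolutions of the operand control values; tensor product and rational (weighted) versions follow the same pattern.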

The composition of Bézier curves and tensor product Bézier surfaces, polynomial as well as rational, is applied to represent trim curves of tensor product Bézier surfaces exactly and explicitly. Trimming curves are assumed to be defined as Bézier curves in the surface parameter domain. A Bézier spline approximation of lower polynomial degree is built up as well, based on the exact trim curve representation in coordinate space.

Termination of Rewriting
(1994)

More and more, term rewriting systems are applied in computer science as well as in mathematics. They are based on directed equations which may be used as non-deterministic functional programs. Termination is a key property for computing with term rewriting systems. In this thesis, we deal with different classes of so-called simplification orderings which are able to prove the termination of term rewriting systems. Above all, we focus on the problem of applying these termination methods to examples occurring in practice. We introduce a formalism that allows clear representations of orderings. The power of classical simplification orderings - namely recursive path orderings, path and decomposition orderings, Knuth-Bendix orderings and polynomial orderings - is improved. Further, we restrict these orderings such that they are compatible with underlying AC-theories by extending well-known methods as well as by developing new techniques. For automatically generating all these orderings, heuristic-based algorithms are given. A comparison of these orderings with respect to their power and their time complexity concludes the theoretical part of this thesis. Finally, not only a detailed statistical evaluation of examples but also a brief introduction to the design of a software tool representing the integration of the specified approaches is given.
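For orientation, the recursive path ordering with multiset status, one of the classical simplification orderings listed above, is usually defined as follows (generic textbook notation, not taken from the thesis):

```latex
f(s_1,\dots,s_m) \succ_{\mathrm{rpo}} t \quad\text{iff}\quad
\begin{cases}
  s_i \succeq_{\mathrm{rpo}} t \ \text{for some } i, \ \text{or}\\
  t = g(t_1,\dots,t_n),\ f \succ g,\ \text{and}\ f(s_1,\dots,s_m) \succ_{\mathrm{rpo}} t_j \ \text{for all } j, \ \text{or}\\
  t = f(t_1,\dots,t_n)\ \text{and}\ \{s_1,\dots,s_m\} \succ\!\!\succ_{\mathrm{rpo}} \{t_1,\dots,t_n\}
\end{cases}
```

where \succ is a precedence on the function symbols and \succ\!\!\succ_{\mathrm{rpo}} denotes the multiset extension of \succ_{\mathrm{rpo}}.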

A Case Study on Specification, Detection and Resolution of IN Feature Interactions with Estelle
(1994)

We present an approach for the treatment of Feature Interactions in Intelligent Networks. The approach is based on the formal description technique Estelle and consists of three steps. For the first step, a specification style supporting the integration of additional features into a basic service is introduced. As a result, feature integration is achieved by adding specification text, i.e. on a purely syntactical level. The second step is the detection of feature interactions resulting from the integration of additional features. A formal criterion is given that can be used for the automatic detection of a particular class of feature interactions. In the third step, previously detected feature interactions are resolved. An algorithm has been devised that allows the automatic incorporation of high-level design decisions into the formal specification. The presented approach is applied to the Basic Call Service and several supplementary interacting features.

Within the present paper we investigate case-based representability as well as case-based learnability of indexed families of uniformly recursive languages. Since we are mainly interested in case-based learning with respect to an arbitrary fixed similarity measure, case-based learnability of an indexed family requires its representability first. We show that every indexed family is case-based representable by positive and negative cases. If only positive cases are allowed, the class of representable families is comparatively small. Furthermore, we present results that provide some bounds concerning the necessary size of case bases. We study, in detail, how the choice of a case selection strategy influences the learning capabilities of a case-based learner. We define different case selection strategies and compare their learning power to one another. Furthermore, we elaborate the relations to Gold-style language learning from positive and from both positive and negative examples.

We present a convenient notation for positive/negative-conditional equations. The idea is to merge rules specifying the same function by using case-, if-, match-, and let-expressions. Based on the presented macro-rule construct, positive/negative-conditional equational specifications can be written on a higher level. A rewrite system translates the macro-rule constructs into positive/negative-conditional equations.

ALICE
(1994)

The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing search spaces. This suggests that sort information can be employed in higher-order theorem proving with similar results. This paper develops a sorted λ-calculus suitable for automatic theorem proving applications. It extends the simply typed λ-calculus by a higher-order sort concept that includes term declarations and functional base sorts. The term declaration mechanism studied here is powerful enough to subsume subsorting as a derived notion and therefore gives a justification for the special form of subsort inference. We present a set of transformations for sorted (pre-) unification and prove the nondeterministic completeness of the algorithm induced by these transformations.

Based on the idea of using topological feature maps instead of geometric environment maps in practical mobile robot tasks, we show an applicable way to navigate on such topological maps. The main features of this kind of navigation are: handling of very inaccurate position (and orientation) information as well as implicit modelling of complex kinematics during an adaptation phase. Due to the lack of proper a-priori knowledge, a reinforcement-based model is used for the translation of navigator commands to motor actions. Instead of employing a backpropagation network for the central associative memory module (attaching action probabilities to sensor situations and navigator commands, respectively), a much faster dynamic cell structure system based on dynamic feature maps is shown. Standard graph-search heuristics like A* are applied in the planning phase.

The problem to be discussed here is the use of neural network clustering techniques on a mobile robot in order to build qualitative topological environment maps. This has to be done in real time, i.e. the internal world model has to be adapted by the flow of sensor samples without the possibility to stop this data flow. Our experiments are done in a simulation environment as well as on a robot called ALICE.

Visual search has been investigated by many researchers, inspired by the biological fact that the sensory elements on the mammal retina are not equally distributed. Therefore the focus of attention (the area of the retina with the highest density of sensory elements) has to be directed in a way to efficiently gather data according to certain criteria. The work discussed in this article concentrates on applying a laser range finder instead of a silicon retina. The laser range finder is maximally focused at any time, but a low-resolution total-scene image, available with camera-like devices from the start, cannot be used here. By adapting a couple of algorithms, the edge-scanning module steering the laser range finder is able to trace a detected edge. Based on the data scanned so far, two questions have to be answered. First: "Should the actual (edge-) scanning be interrupted in order to give another area of interest a chance of being investigated?" and second: "Where should a new edge-scanning start after being interrupted?" These two decision problems might be solved by a range of decision systems. The correctness of the decisions depends largely on the actual environment, and the underlying rules may not be well initialized with a-priori knowledge. So we present a version of a reinforcement decision system together with an overall scheme for efficiently controlling highly focused devices.

While symbolic learning approaches encode the knowledge provided by the presentation of the cases explicitly into a symbolic representation of the concept, e.g. formulas, rules, or decision trees, case-based approaches describe learned concepts implicitly by a pair (CB, d), i.e. by a set CB of cases and a distance measure d. Given the same information, the symbolic as well as the case-based approach computes a classification when a new case is presented. This poses the question whether there are any differences concerning the learning power of the two approaches. In this work we study the relationship between the case base, the measure of distance, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the conjecture of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
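To illustrate the symbolic side of this comparison, the following Python sketch maintains only the specific boundary of the version space (Find-S style); the full version space algorithm additionally maintains a general boundary, and all names and the toy data are hypothetical.

```python
# Simplified sketch: only the specific boundary S of the version space (Find-S style);
# the full version-space (candidate elimination) algorithm also maintains a general boundary G.
def generalize(hypothesis, example):
    """Minimally generalize a conjunctive hypothesis to cover a positive example."""
    if hypothesis is None:                       # most specific start: nothing covered yet
        return list(example)
    return [h if h == e else "?" for h, e in zip(hypothesis, example)]

def covers(hypothesis, example):
    return hypothesis is not None and all(h in ("?", e) for h, e in zip(hypothesis, example))

examples = [(("sunny", "warm", "normal"), True),
            (("sunny", "warm", "high"), True),
            (("rainy", "cold", "high"), False)]

s = None
for attrs, positive in examples:
    if positive:                                 # Find-S only generalizes on positive cases
        s = generalize(s, attrs)

print(s)                                         # ['sunny', 'warm', '?']
print(covers(s, ("sunny", "warm", "high")))      # True
```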

In this paper we describe a framework for defining and operationalizing conceptual models of distributed knowledge-based systems which extends published approaches by the notion of "agents" and multiple task decompositions. The main part deals with techniques underlying our distributed interpreter. We show how a client-server architecture can be implemented which allows prototyping distributed knowledge-based systems. Furthermore, we describe our mechanism which manages task interactions and supports dependency-directed backtracking efficiently.

A method is presented for efficiently handling associativity and commutativity (AC) in implementations of (equational) theorem provers without incorporating AC as an underlying theory. The key to substantial efficiency gains lies in a more suitable representation of permutation equations (such as f(x,f(y,z)) = f(y,f(z,x)), for instance). By representing these permutation equations through permutations in the mathematical sense (i.e. bijective functions from {1,..,n} to {1,..,n}), and by applying adapted and specialized inference rules, we can cope more appropriately with the fact that permutation equations play a particular role. Moreover, a number of restrictions concerning the application and generation of permutation equations can be found that would not be possible to this extent when treating permutation equations just like any other equation. Thus, further improvements in efficiency can be achieved.
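A minimal Python sketch of this representation idea (illustrative only; the prover's actual data structures and inference rules are not shown, and the helper names are assumptions):

```python
# Sketch: a permutation equation such as f(x, f(y, z)) = f(y, f(z, x)) is represented by
# the argument permutation (x, y, z) -> (y, z, x) instead of by an explicit equation.
# Composing permutations then replaces chains of equational inference steps.
def compose(p, q):
    """Return the permutation 'first apply q, then p' on positions 0..n-1."""
    return tuple(p[q[i]] for i in range(len(p)))

def apply_perm(p, args):
    return tuple(args[p[i]] for i in range(len(p)))

rot = (1, 2, 0)                                            # (x, y, z) -> (y, z, x)
print(apply_perm(rot, ("x", "y", "z")))                    # ('y', 'z', 'x')
print(compose(rot, rot))                                   # rotating twice: (2, 0, 1)
print(apply_perm(compose(rot, rot), ("x", "y", "z")))      # ('z', 'x', 'y')
```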

Automatic proof systems are becoming more and more powerful. However, the proofs generated by these systems are not met with wide acceptance, because they are presented in a way inappropriate for human understanding. In this paper we pursue two different, but related, aims. First we describe methods to structure and transform equational proofs in a way that they conform to human reading conventions. We develop algorithms to impose a hierarchical structure on proof protocols from completion-based proof systems and to generate equational chains from them. Our second aim is to demonstrate the difficulties of obtaining such protocols from distributed proof systems and to present our solution to these problems for provers using the TEAMWORK method. We also show that proof systems using this method can give considerable help in structuring the proof listing in a way analogous to human behaviour. In addition to theoretical results we also include descriptions of algorithms, implementation notes, examples and data on a variety of examples.

This paper presents fill algorithms for boundary-defined regions in raster graphics. The algorithms require only a constant-size working memory. The methods presented are based on the so-called "seed fill" algorithms using the internal connectivity of the region with a given inner point. Basic methods as well as additional heuristics for speeding up the algorithm are described and verified. For different classes of regions, the time complexity of the algorithms is compared using empirical results.
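For comparison, the classic stack-based seed fill that such methods start from can be sketched in a few lines of Python (an illustrative baseline only; unlike the algorithms of the paper it needs a worklist whose size grows with the region, and all names are assumptions):

```python
# Baseline seed-fill sketch for a boundary-defined region (illustrative; not constant-memory).
def seed_fill(image, seed, fill, boundary):
    """Fill the 4-connected region around `seed` until the boundary colour is reached."""
    height, width = len(image), len(image[0])
    stack = [seed]
    while stack:
        x, y = stack.pop()
        if not (0 <= x < width and 0 <= y < height):
            continue
        if image[y][x] in (fill, boundary):
            continue
        image[y][x] = fill
        stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return image

raster = [[1, 1, 1, 1],
          [1, 0, 0, 1],
          [1, 0, 0, 1],
          [1, 1, 1, 1]]
seed_fill(raster, (1, 1), fill=2, boundary=1)
print(raster)       # interior pixels are now 2
```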

The main problem in computer graphics is to solve the global illumination problem, which is given by a Fredholm integral equation of the second kind, called the radiance equation (REQ). In order to achieve realistic images, a very complex kernel of the integral equation, modelling all physical effects of light, must be considered. Due to this complexity, Monte Carlo methods seem to be an appropriate approach to solve the REQ approximately. We show that replacing Monte Carlo by quasi-Monte Carlo in some steps of the algorithm results in a faster convergence.
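For orientation, a Fredholm integral equation of the second kind has the generic form

```latex
u(x) \;=\; f(x) \;+\; \int_{D} k(x,y)\, u(y)\, dy ,
```

and in the radiance equation u plays the role of the radiance, f that of the emitted radiance, and the kernel k collects the geometric and reflection terms (generic notation given for orientation; the paper's own symbols may differ).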

We introduce the concept of streamballs for fluid flow visualization. Streamballs are based upon implicit surface generation techniques adopted from the well-known metaballs. Their ability to split or merge automatically in areas of significant divergence or convergence makes them an ideal tool for the visualization of arbitrarily complex flow fields. Using convolution surfaces generated by continuous skeletons for streamball construction offers the possibility to visualize even tensor fields.
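As a reminder of the underlying implicit surface construction (the standard metaball formulation, stated for orientation rather than quoted from the paper), a streamball-like surface around skeleton points p_i is the level set

```latex
S \;=\; \bigl\{\, x \in \mathbb{R}^3 \;:\; \textstyle\sum_i f_i\bigl(\lVert x - p_i \rVert\bigr) = T \,\bigr\} ,
```

with decreasing field functions f_i and a threshold T; for convolution surfaces the sum is replaced by an integral of the field function along a continuous skeleton.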

The problem of interpolating Hermite-type data (i.e. two points with attached tangent vectors) with elastic curves of prescribed tension is known to have multiple solutions. A method is presented that finds all solutions whose length does not exceed one period of their curvature function. The algorithm is based on algebraic relations between discrete curvature information which allow the problem to be transformed into a univariate one. The method operates with curves that by construction partially interpolate the given data. Thereby, the objective function of the problem is drastically simplified. A bound on the maximum curvature value is established that provides an interval containing all solutions.

Best-Fit Pattern Matching
(1994)

This report shows that dispatching of methods in object-oriented languages is in principle the same as best-fit pattern matching. A general conceptual description of best-fit pattern matching is presented. Many object-oriented features are modelled by means of the general concept. This shows that simple methods, multi-methods, overloading of functions, pattern matching, dynamic and union types, and extendable records can be combined in a single comprehensive concept.

We study the complexity of local solution of Fredholm integral equations. This means that we want to compute not the full solution, but rather a functional (a weighted mean, the value at a point) of it. For certain Sobolev classes of multivariate periodic functions we prove matching upper and lower bounds and construct an algorithm of the optimal order, based on Fourier coefficients and a hyperbolic cross approximation.

Hardware / Software Codesign
(1994)

Monte Carlo integration is often used for antialiasing in rendering processes. Due to low sampling rates, only expected error estimates can be stated, and the variance can be high. In this article quasi-Monte Carlo methods are presented, achieving a guaranteed upper error bound and a convergence rate essentially as fast as usual Monte Carlo.
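To see the flavour of the replacement, the following Python sketch swaps pseudo-random samples for a two-dimensional Halton-type low-discrepancy pattern built from radical inverses; the integrand and all names are illustrative assumptions, not taken from the article.

```python
# Sketch: replacing pseudo-random samples by a low-discrepancy sequence
# (radical-inverse / van der Corput in bases 2 and 3, i.e. a 2-D Halton pattern).
import random

def radical_inverse(n, base):
    """Mirror the base-`base` digits of n around the radix point."""
    inv, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        inv += (n % base) / denom
        n //= base
    return inv

def estimate(samples, f):
    return sum(f(x, y) for x, y in samples) / len(samples)

f = lambda x, y: 1.0 if x * x + y * y < 1.0 else 0.0   # area of a quarter disc (= pi/4)

n = 4096
mc  = [(random.random(), random.random()) for _ in range(n)]
qmc = [(radical_inverse(i, 2), radical_inverse(i, 3)) for i in range(1, n + 1)]
print(estimate(mc, f), estimate(qmc, f))               # qmc is typically closer to pi/4
```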