## Fachbereich Informatik

Partitioned chain grammars
(1979)

This paper introduces a new class of grammars, the partitioned chain grammars, for which efficient parsers can be generated automatically. Besides being efficiently parsable, these grammars possess a number of other properties that make them very attractive for use in parser generators. For instance, they form a large grammar class and describe all deterministic context-free languages. The main advantage of partitioned chain grammars, however, is that, given a language, it is usually easier to describe it by a partitioned chain grammar than to construct a grammar of some other type commonly used in parser generators.

Trimming of surfaces and volumes, curve and surface modeling via Bézier's idea of distortion, segmentation, reparametrization, and geometric continuity are examples of applications of functional composition. This paper shows how to compose polynomial and rational tensor product Bézier representations. The problem of composing Bézier splines and B-spline representations is also addressed.
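
The paper's composition formulas for control nets are not reproduced here. As a rough illustration of what functional composition means for Bézier representations, the following Python sketch composes a polynomial Bézier curve with a scalar Bézier reparametrization and recovers the Bézier control points of the composite (which has degree n·m) by Bernstein interpolation of samples; function names and the toy data are illustrative only, and the interpolation approach, while exact in principle, is less robust than a direct composition algorithm.

```python
import numpy as np
from math import comb

def bernstein_matrix(n, ts):
    """Matrix B with B[k, i] = B_i^n(ts[k]), the Bernstein basis of degree n."""
    return np.array([[comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)]
                     for t in ts])

def de_casteljau(ctrl, t):
    """Evaluate a Bézier curve with control points ctrl (shape (n+1, d)) at t."""
    pts = np.array(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def compose_bezier(curve_ctrl, reparam_ctrl):
    """Control points of c(f(u)): c polynomial of degree n, f scalar of degree m;
    the composite has degree n*m and is recovered here by interpolation."""
    n, m = len(curve_ctrl) - 1, len(reparam_ctrl) - 1
    deg = n * m
    us = np.linspace(0.0, 1.0, deg + 1)
    fs = [float(de_casteljau(np.reshape(reparam_ctrl, (-1, 1)), u)[0]) for u in us]
    samples = np.array([de_casteljau(curve_ctrl, t) for t in fs])
    # Solve the Bernstein interpolation system for the composite control points.
    return np.linalg.solve(bernstein_matrix(deg, us), samples)

# Example: quadratic planar curve composed with a cubic reparametrization.
c = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
f = [0.0, 0.1, 0.9, 1.0]           # scalar Bézier, f(0)=0 and f(1)=1
print(compose_bezier(c, f))         # control points of the degree-6 composite
```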

The use of non-volatile semiconductor memory within an extended storage hierarchy promises significant performance improvements for transaction processing. Although page-addressable semiconductor memories like extended memory, solid-state disks and disk caches have been commercially available for several years, no detailed investigation of their use for transaction processing has been performed so far. We present a comprehensive simulation study that compares the performance of these storage types and of different usage forms. The following usage forms are considered: allocation of entire log and database files in non-volatile semiconductor memory, using a so-called write buffer to perform disk writes asynchronously, and caching of database pages at intermediate storage levels (in addition to main memory caching). Our simulations are conducted with both synthetically generated workloads and traces from real-life database applications. In particular, simulation results are presented for the debit-credit workload frequently used in transaction processing benchmarks. As expected, the greatest performance improvements (but at the highest cost) can be achieved by storing log and database files completely in non-volatile semiconductor memory. For update-intensive workloads, a limited amount of non-volatile memory used as a write buffer also proved to be very effective. To reduce the number of disk reads, caching of database pages in addition to main memory is best supported by an extended memory buffer. In this respect, disk caches are found to be less effective because they are designed for one-level caching. Different storage costs suggest that it may be cost-effective to use two or even three of the intermediate storage types together. The performance improvements obtainable through non-volatile semiconductor memory are also found to reduce the need for sophisticated DBMS buffer management to achieve high transaction processing performance.

Virtual Reality (VR) can be seen as the superset of simulation and animation. Visualization is done by rendering. The fundamental model of VR accounts for all phenomena to be modelled with the help of a computer. Examples range from simple dragging actions with a mouse device to the complex simulation of physically based animation.

Gauss Frame Offsets
(1992)

User interfaces for large distributed applications have to handle specific problems: the complexity of the application itself and the integration of online data into the user interface. A main task of the user interface architecture is to provide powerful tools to design and augment the end-user system easily, giving the designer more time to focus on user requirements. Our experience developing a user interface system for a process control room showed that much of the development time is spent on the integration of online data that resides anywhere but in the user interface itself. Furthermore, external data may be kept by different kinds of programs, e.g. C programs running a numerical process model or PROLOG programs running a diagnosis system, both in parallel to the process and in parallel to the user interface. Facing these specific requirements, we developed a user interface architecture following two main goals: 1. integration of external information into high-level graphical objects, and 2. openness to any program running as a separate process using its own problem-oriented language. The architecture is based on two approaches: an asynchronous, distributed and language-independent communication model, and an object model describing the problem domain and the interface using object-oriented techniques. Other areas, such as rule-based programming, are involved as well. In this paper we present the XAVIA user interface architecture, the (as far as we know) first user interface architecture that is consistently based on a distributed object model.

The composition of Bézier curves and tensor product Bézier surfaces, polynomial as well as rational, is applied to represent trim curves of tensor product Bézier surfaces exactly and explicitly. Trimming curves are assumed to be defined as Bézier curves in the surface parameter domain. A Bézier spline approximation of lower polynomial degree is built up as well, based on the exact trim curve representation in coordinate space.

This paper describes some new algorithms for the accurate calculation of surface properties. In the first part an arithmetic on Bézier surfaces is introduced. Formulas are given which determine the Bézier points and weights of the resulting surface from the points and weights of the operand surfaces. An application of the arithmetic operations to surface interrogation methods is described in the second part. It turns out that the quality analysis can be reduced to a few numerically stable operations. Finally, the advantages and disadvantages of this method are discussed.

Shadow-Mapping
(1993)

Most radiosity techniques store radiosities at certain sample points, typically the vertices of polyhedral scenes. As diffuse radiosities are view-independent, they can be used for an interactive 'walk-through'. This paper presents an algorithm for storing radiosities independently of the representation of the object. A distributed rendering system which uses this shadow-mapping technique is described. The basic thermophysical definitions needed to derive a sum formula for the form factor calculation of polygons are explained.

Visualization of large data sets, especially on small machines, requires advanced techniques in image processing and image generation. Our hybrid raytracer is capable of rendering volumetric and geometric data simultaneously, without loss of accuracy due to data conversion. Compound data sets, consisting of several types of data, are called "hybrid data sets". There is only one rendering pipeline to obtain loss-less and efficient visualization of hybrid data. Algorithms apply to both types of data. Optical material properties are stored in the same data base for both volumetric and geometric objects, and anti-aliasing methods apply to both data types. Stereoscopic display routines have been added to obtain true three-dimensional visualization on various media, and animation features allow the generation of recordable 3-D sequences.

Free Form Volumes
(1994)

This report presents a generalization of tensor-product B-spline surfaces. The new scheme permits knot lines whose endpoints lie in the interior of the domain rectangle of a surface. This allows local refinement of the knot structure for approximation purposes, as well as modeling surfaces with local tangent or curvature discontinuities. The surfaces are represented in terms of B-spline basis functions, ensuring affine invariance, local control, the convex hull property, and evaluation by de Boor's algorithm. A dimension formula for a class of generalized tensor-product spline spaces is developed.
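
The report's generalized scheme is not reproduced here. As background for the B-spline machinery it builds on, the following is a minimal sketch of the standard Cox-de Boor recursion for univariate B-spline basis functions; the knot vector and degree below are arbitrary illustrative values.

```python
def bspline_basis(i, p, knots, t):
    """Cox-de Boor recursion: value of the i-th B-spline basis function of
    degree p over the knot vector `knots` at parameter t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, t)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, t)
    return left + right

# Cubic basis over a clamped knot vector; the functions sum to one on [0, 1).
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
vals = [bspline_basis(i, 3, knots, 0.3) for i in range(len(knots) - 3 - 1)]
print(vals, sum(vals))   # partition of unity: the sum is 1.0
```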

Optimization of Projection Methods for Solving Ill-Posed Problems

In this paper we propose a modification of the projection scheme for solving ill-posed problems. We show that this modification makes it possible to obtain the best possible order of accuracy of Tikhonov regularization using an amount of information which is far less than for the standard projection technique.
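
The proposed modification of the projection scheme is not spelled out in the abstract. For orientation only, the following sketch shows plain Tikhonov regularization, the method whose order of accuracy is referred to, applied to an ill-conditioned toy system; the matrix, noise level and regularization parameter are assumptions chosen for illustration.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Basic Tikhonov regularization: minimize ||Ax - b||^2 + alpha*||x||^2,
    i.e. solve (A^T A + alpha*I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Ill-conditioned toy problem: a Hilbert-type matrix with slightly noisy data.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)
print(np.linalg.norm(tikhonov(A, b, 1e-8) - x_true))   # regularized error stays small
print(np.linalg.norm(np.linalg.solve(A, b) - x_true))  # naive solve is typically far worse
```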

The rapid development of any field of knowledge brings with it unavoidable fragmentation and proliferation of new disciplines. The development of computer science is no exception. Software engineering (SE) and human-computer interaction (HCI) are both relatively new disciplines of computer science. Furthermore, as both names suggest, they each have strong connections with other subjects. SE is concerned with methods and tools for general software development based on engineering principles. This discipline has its roots not only in computer science but also in a number of traditional engineering disciplines. HCI is concerned with methods and tools for the development of human-computer interfaces, assessing the usability of computer systems and with broader issues about how people interact with computers. It is based on theories about how humans process information and interact with computers, other objects and other people in the organizational and social contexts in
which computers are used. HCI draws on knowledge and skills from psychology, anthropology and sociology in addition to computer science. Both disciplines need ways of measuring how well their products and development processes fulfil their intended requirements. Traditionally SE has been concerned with 'how software is constructed' and HCI with 'how people use software'. Given the
different histories of the disciplines and their different objectives, it is not surprising that they take different approaches to measurement. Thus, each has its own distinct 'measurement culture'. In this paper we analyse the differences and the commonalities of the two cultures by examining the measurement approaches used by each. We then argue the need for a common measurement taxonomy and framework, which is derived from our analyses of the two disciplines. Next we demonstrate the usefulness of the taxonomy and framework via specific example studies drawn from our own work and that of others, and show that, in fact, the two disciplines have many important similarities as well as differences, and that there is some evidence to suggest that they are growing closer. Finally, we discuss the role of the taxonomy as a framework to support reuse, planning future studies, guiding practice and facilitating communication between the two disciplines.

Monte Carlo integration is often used for antialiasing in rendering processes.
Due to low sampling rates only expected error estimates can be stated, and the variance can be high. In this article quasi-Monte Carlo methods are presented, achieving a guaranteed upper error bound and a convergence rate essentially as fast as usual Monte Carlo.
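
As a rough illustration of the difference (not the rendering application itself), the sketch below compares plain Monte Carlo with a quasi-Monte Carlo estimate based on the low discrepancy Halton sequence for a smooth two-dimensional test integrand; the integrand and sample count are arbitrary choices.

```python
import numpy as np

def van_der_corput(n, base):
    """First n points of the van der Corput sequence in the given base."""
    pts = np.empty(n)
    for i in range(n):
        x, f, k = 0.0, 1.0 / base, i + 1
        while k > 0:
            x += (k % base) * f
            k //= base
            f /= base
        pts[i] = x
    return pts

def halton_2d(n):
    """2D Halton sequence (bases 2 and 3), a standard low discrepancy sequence."""
    return np.column_stack([van_der_corput(n, 2), van_der_corput(n, 3)])

f = lambda p: np.cos(np.pi * p[:, 0]) ** 2 * p[:, 1]   # exact integral over [0,1]^2 is 1/4
n = 4096
mc = f(np.random.default_rng(1).random((n, 2))).mean()
qmc = f(halton_2d(n)).mean()
print(abs(mc - 0.25), abs(qmc - 0.25))   # the QMC error is typically much smaller
```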

The radiance equation, which describes the global illumination problem in computer graphics, is a high dimensional integral equation. Estimates of the solution are usually computed on the basis of Monte Carlo methods. In this paper we propose and investigate quasi-Monte Carlo methods, which means that we replace (pseudo-) random samples by low discrepancy sequences, yielding deterministic algorithms. We carry out a comparative numerical study between Monte Carlo and quasi-Monte Carlo methods. Our results show that quasi-Monte Carlo converges considerably faster.

We introduce the concept of streamballs for fluid flow visualization. Streamballs are based upon implicit surface generation techniques adopted from the well-known metaballs. Their ability to split or merge automatically in areas of significant divergence or convergence makes them an ideal tool for the visualization of arbitrarily complex flow fields. Using convolution surfaces generated by continuous skeletons for streamball construction makes it possible to visualize even tensor fields.
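
The streamball construction itself is not reproduced here; the sketch below only illustrates the underlying metaball idea: an implicit field obtained by summing kernels centred at skeleton points, whose iso-surfaces merge automatically when skeleton points come close. The kernel shape and parameters are illustrative assumptions.

```python
import numpy as np

def metaball_field(points, centers, radius=0.3):
    """Implicit field value at each query point: a sum of Gaussian blobs centred
    at the skeleton points `centers`; an iso-surface of this field is the blob surface."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / radius**2).sum(axis=1)

# Two nearby skeleton points: the field between them exceeds the iso-value,
# so the two blobs merge into one surface (the effect streamballs exploit).
centers = np.array([[0.0, 0.0, 0.0], [0.4, 0.0, 0.0]])
query = np.array([[0.2, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(metaball_field(query, centers))   # high value between the centres, near zero far away
```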

The problem of interpolating Hermite-type data (i.e. two points with attached tangent vectors) with elastic curves of prescribed tension is known to have multiple solutions. A method is presented that finds all solutions whose length does not exceed one period of the curvature function. The algorithm is based on algebraic relations between discrete curvature information which allow the problem to be transformed into a univariate one. The method operates with curves that by construction partially interpolate the given data, which drastically simplifies the objective function of the problem. A bound on the maximum curvature value is established that provides an interval containing all solutions.

The Basic Reference Model of ODP introduces a number of basic concepts in order to provide a common basis for the development of a coherent set of standards. To achieve this objective, a clear understanding of the basic concepts is one prerequisite. This paper makes an effort at clarifying some of the basic concepts independently of standardized or non-standardized formal description techniques. Among the basic concepts considered here are: agent, action, interaction, interaction point, architecture, behaviour, system, composition, refinement, and abstraction. In a case study, it is then shown how these basic concepts can be represented in a formal specification written in temporal logic.

Hardware / Software Codesign
(1994)

We study the complexity of the local solution of Fredholm integral equations. This means that we want to compute not the full solution, but rather a functional (weighted mean, value at a point) of it. For certain Sobolev classes of multivariate periodic functions we prove matching upper and lower bounds and construct an algorithm of the optimal order, based on Fourier coefficients and a hyperbolic cross approximation.

In this paper the complexity of the local solution of Fredholm integral equations
is studied. For certain Sobolev classes of multivariate periodic functions with dominating mixed derivative we prove matching lower and upper bounds. The lower bound is shown using relations to s-numbers. The upper bound is proved in a constructive way providing an implementable algorithm of optimal order based on Fourier coefficients and a hyperbolic cross approximation.
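
The optimal algorithm is not reproduced here. As a small illustration of the hyperbolic cross structure it relies on, the following sketch enumerates the hyperbolic cross set of Fourier frequencies for a given dimension and level; this is a naive enumeration chosen for clarity rather than efficiency.

```python
from itertools import product

def hyperbolic_cross(d, N):
    """All integer frequency vectors k in Z^d with prod_j max(1, |k_j|) <= N.
    This set grows far more slowly with d than the full grid of size (2N+1)^d."""
    rng = range(-N, N + 1)
    cross = []
    for k in product(rng, repeat=d):
        weight = 1
        for kj in k:
            weight *= max(1, abs(kj))
        if weight <= N:
            cross.append(k)
    return cross

print(len(hyperbolic_cross(2, 8)), len(hyperbolic_cross(3, 8)))
# far fewer indices than the (2*8+1)^d frequencies of the corresponding full grid
```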

Best-Fit Pattern Matching
(1994)

This report shows that dispatching of methods in object-oriented languages is in principle the same as best-fit pattern matching. A general conceptual description of best-fit pattern matching is presented. Many object-oriented features are modelled by means of the general concept. This shows that simple methods, multi-methods, overloading of functions, pattern matching, dynamic and union types, and extendable records can be combined in a single comprehensive concept.
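
To make the claimed correspondence concrete, here is a hypothetical toy dispatcher (not taken from the report) that resolves a call by best-fit matching over the runtime argument types, so that multi-methods and overloading fall out of a single mechanism; the class and function names are invented for the example.

```python
class Multimethod:
    """Toy multimethod: calls the registered implementation whose parameter
    types best fit the runtime argument types (most specific match wins)."""
    def __init__(self):
        self.cases = []          # list of (type_signature, function)

    def register(self, *sig):
        def deco(fn):
            self.cases.append((sig, fn))
            return fn
        return deco

    def __call__(self, *args):
        matches = [(sig, fn) for sig, fn in self.cases
                   if len(sig) == len(args)
                   and all(isinstance(a, t) for a, t in zip(args, sig))]
        if not matches:
            raise TypeError("no applicable method")
        # Best fit: fewest superclass "steps" between argument class and parameter type.
        def cost(sig):
            return sum(type(a).__mro__.index(t) for a, t in zip(args, sig))
        sig, fn = min(matches, key=lambda case: cost(case[0]))
        return fn(*args)

collide = Multimethod()

class Shape: pass
class Circle(Shape): pass

@collide.register(Shape, Shape)
def _(a, b): return "generic collision"

@collide.register(Circle, Circle)
def _(a, b): return "circle-circle collision"

print(collide(Circle(), Circle()))   # the best fit picks the more specific case
print(collide(Circle(), Shape()))    # falls back to the generic case
```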

The main problem in computer graphics is to solve the global illumination problem,
which is given by a Fredholm integral equation of the second kind, called the radiance equation (REQ). In order to achieve realistic images, a very complex kernel
of the integral equation, modelling all physical effects of light, must be considered. Due to this complexity Monte Carlo methods seem to be an appropriate approach to solve the REQ approximately. We show that replacing Monte Carlo by quasi-Monte Carlo in some steps of the algorithm results in a faster convergence.

In this paper, the complexity of full solution of Fredholm integral equations of the second kind with data from the Sobolev class \(W^r_2\) is studied. The exact order of information complexity is derived. The lower bound is proved using a Gelfand number technique. The upper bound is shown by providing a concrete algorithm of optimal order, based on a specific hyperbolic cross approximation of the kernel function. Numerical experiments are included, comparing the optimal algorithm with the standard Galerkin method.

A new variance reduction technique for the Monte Carlo solution of integral equations is introduced. It is based on separation of the main part. A neighboring equation with exactly known solution is constructed with the help of a deterministic Galerkin scheme. The variance of the method is analyzed, and an application to the radiosity equation of computer graphics, together with numerical test results, is given.
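
The Galerkin construction of the neighboring equation is not reproduced here; the sketch below only illustrates the separation-of-the-main-part idea on a simple one-dimensional integral: a main part with known exact value is subtracted, Monte Carlo is applied to the small remainder, and the exact value is added back. The integrand and its Taylor main part are illustrative choices.

```python
import numpy as np

# Separation of the main part: to integrate f over [0, 1], pick a nearby g with
# known integral, estimate only the difference f - g by Monte Carlo, and add the
# exact integral of g back in.
f = lambda x: np.exp(x)                 # exact integral: e - 1
g = lambda x: 1.0 + x + 0.5 * x**2      # truncated Taylor series as "main part"
g_exact = 1.0 + 0.5 + 1.0 / 6.0

x = np.random.default_rng(2).random(2000)
plain = f(x).mean()
separated = g_exact + (f(x) - g(x)).mean()
exact = np.e - 1.0
print(abs(plain - exact), abs(separated - exact))  # the residual has far smaller variance
```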

This paper introduces a new high-level programming language for a novel class of computational devices, namely data-procedural machines. These machines are up to several orders of magnitude more efficient than computers following the von Neumann paradigm, while being just as flexible and universal. Their efficiency and flexibility are achieved by using field-programmable logic as the essential technology platform. The paper briefly summarizes and illustrates the essential new features of this language by means of two example programs.

Experience gathered from applying the software process modeling language MVP-L in software development organizations has shown the need for graphical representations of process models. Project members (i.e., non-MVP-L specialists) review models much more easily by using graphical representations. Although several graphical notations were developed for individual projects in which MVP-L was applied, there was previously no consistent definition of a mapping between textual MVP-L models and graphical representations. This report defines a graphical representation schema for MVP-L descriptions and combines previous results in a unified form. A basic set of building blocks (i.e., graphical symbols and text fragments) is defined, but because we must first gain experience with the new symbols, only rudimentary guidelines are given for composing basic symbols into a graphical representation of a model.

Intellectual control over software development projects requires the existence of an integrated set of explicit models of the products to be developed, the processes used to develop them, the resources needed, and the productivity and quality aspects involved. In recent years the development of languages, methods and tools for modeling software processes, analyzing and enacting them has become a major emphasis of software engineering research. The majority of current process research concentrates on prescriptive modeling of small, completely formalizable processes and their execution entirely on computers. This research direction has produced process modeling languages suitable for machine rather than human consumption. The MVP project, launched at the University of Maryland and continued at Universität Kaiserslautern, emphasizes building descriptive models of large, real-world processes and their use by humans and computers for the purpose of understanding, analyzing, guiding and improving software development projects. The language MVP-L has been developed with these purposes in mind. In this paper, we
motivate the need for MVP-L, introduce the prototype language, and demonstrate its uses. We assume that further improvements to our language will be triggered by lessons learned from applications and experiments.

The CAD/CAM-based design of free-form surfaces is the beginning of a chain of operations which ends with the numerically controlled (NC) production of the designed object. During this process, shape control is an important step towards efficiency. Several surface interrogation methods already exist to analyze the curvature and continuity behaviour of the shape. This paper deals with a new aspect of shape control: the stability of surfaces with respect to infinitesimal bendings. Each infinitesimal bending of a surface determines a so-called instability surface, which is used for the stability investigations. The kinematic meaning of this instability surface is discussed and we present algorithms to calculate it.

Computer processing of free-form surfaces forms the basis of a closed construction process ranging from surface design to NC production. Numerical simulation and visualization allow quality analysis before manufacture. A new aspect in surface analysis is described: the stability of surfaces under infinitesimal bendings. The stability concept is derived from the kinematic meaning of a special vector field which is given by the deformation. Algorithms to calculate this vector field, together with an appropriate visualization method, yield a tool for analyzing surface stability.

The \(L_2\)-discrepancy is a quantitative measure of precision for multivariate quadrature rules. It can be computed explicitly. Previously known algorithms needed \(O(m^2)\) operations, where \(m\) is the number of nodes. In this paper we present algorithms which require \(O(m(\log m)^d)\) operations.
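
The fast \(O(m(\log m)^d)\) algorithms are not reproduced here. For reference, the sketch below evaluates Warnock's classical explicit formula for the \(L_2\) star discrepancy (the origin-anchored variant), which is the \(O(m^2)\)-type baseline such algorithms improve upon; the point set is just a random example.

```python
import numpy as np

def l2_star_discrepancy(x):
    """Warnock's explicit formula for the L2 star discrepancy of a point set
    x of shape (m, d); direct evaluation costs O(d * m^2) operations."""
    m, d = x.shape
    term1 = 3.0 ** (-d)
    term2 = np.prod(1.0 - x**2, axis=1).sum() / (m * 2.0 ** (d - 1))
    term3 = np.prod(1.0 - np.maximum(x[:, None, :], x[None, :, :]), axis=2).sum() / m**2
    return np.sqrt(term1 - term2 + term3)

print(l2_star_discrepancy(np.random.default_rng(3).random((256, 2))))  # random point set
```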

The local solution problem of multivariate Fredholm integral equations is studied. Recent research proved that for several function classes the complexity of this problem is closely related to the Gelfand numbers of some characterizing operators. The generalization of this approach to the situation of arbitrary Banach spaces is the subject of the present paper.
Furthermore, an iterative algorithm is described which - under some additional conditions - realizes the optimal error rate. The way these general theorems work is demonstrated by applying them to integral equations in a Sobolev space of periodic functions with dominating mixed derivative of various order.

Optimal degree reductions, i.e. best approximations of \(n\)-th degree Bézier curves by Bézier curves of degree \(n-1\), with respect to different norms are studied. It is shown that for any \(L_p\)-norm the Euclidean degree reduction, where the norm is applied to the Euclidean distance function of two curves, is identical to componentwise degree reduction. The Bézier points of the degree reductions are found to lie on parallel lines through the Bézier points of any Taylor expansion of degree \(n-1\) of the original curve. This geometric situation is shown to hold also in the case of constrained degree reduction. The Bézier points of the degree reduction are given explicitly in the unconstrained case for \(p=1\) and \(p=2\) and in the constrained case for \(p=2\).
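
The paper's closed-form constructions are not reproduced here. As a sketch of the unconstrained \(p=2\) case, the following computes the componentwise (equivalently Euclidean) \(L_2\)-best degree reduction by solving the normal equations built from the degree elevation matrix and the Gram matrix of the Bernstein basis; this is a standard least-squares formulation, and the example curve is arbitrary.

```python
import numpy as np
from math import comb

def elevation_matrix(n):
    """E maps degree n-1 Bézier control points to the degree n control points
    of the same curve (degree elevation)."""
    E = np.zeros((n + 1, n))
    for i in range(n + 1):
        if i > 0:
            E[i, i - 1] = i / n
        if i < n:
            E[i, i] = 1.0 - i / n
    return E

def bernstein_gram(n):
    """Gram matrix G[i, j] = integral of B_i^n(t) * B_j^n(t) over [0, 1]."""
    return np.array([[comb(n, i) * comb(n, j) / ((2 * n + 1) * comb(2 * n, i + j))
                      for j in range(n + 1)] for i in range(n + 1)])

def degree_reduce_l2(b):
    """Control points of the degree n-1 Bézier curve closest in the L2 norm
    to the degree n curve with control points b (componentwise reduction)."""
    n = len(b) - 1
    E, G = elevation_matrix(n), bernstein_gram(n)
    return np.linalg.solve(E.T @ G @ E, E.T @ G @ np.asarray(b, dtype=float))

# Cubic planar curve reduced to the best-fitting quadratic.
b = np.array([[0.0, 0.0], [1.0, 3.0], [2.0, 3.0], [3.0, 0.0]])
print(degree_reduce_l2(b))
```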

In this paper an analytic hidden surface removal algorithm is presented which uses a combination
of 2D and 3D BSP trees without involving point sampling or scan conversion. Errors like aliasing
which result from sampling do not occur while using this technique. An application of this
algorithm is outlined which computes the energy locally reflected from a surface having an
arbitrary BRDF. A simplification for diffuse reflectors is described, which has been implemented
to compute analytic form factors from diffuse light sources to differential receivers as they are needed for shading and radiosity algorithms.

Software development organizations measure their real-world processes, products, and resources to achieve the goal of improving their practices. Accurate and useful measurement relies on explicit models of the real-world processes, products, and resources. These explicit models assist with planning measurement, interpreting data, and assisting developers with their work. However, little work has been done on the joint use of measurement and process technologies. We hypothesize that it is possible to integrate measurement and process technologies in a way that supports automation of measurement-based feedback. Automated support for measurement-based feedback means that software developers and maintainers are provided with on-line, detailed information about their work. This type of automated support is expected to help software professionals gain intellectual control over their software projects. The dissertation offers three major contributions. First, an integrated measurement and
process modeling framework was constructed. This framework establishes the necessary foundation for integrating measurement and process technologies in a way that will permit automation. Second, a process-centered software engineering environment was developed to support measurement-based feedback. This system provides personnel with information about the tasks expected of them based on an integrated set of measurement and process views. Third, a set of assumptions and requirements about that system were examined in a controlled experiment. The experiment compared the use of different levels of automation to evaluate the acceptance and effectiveness of measurement-based feedback.

The calculation of form factors is an important problem in computing the global illumination in the radiosity setting. Closed-form solutions are often only available for objects without obstruction and are very hard to calculate. Using Monte Carlo integration and ray tracing provides a fast and elegant tool for the estimation of the form factors. In this paper we show that using deterministic low discrepancy sample points is superior to random sampling, resulting in an acceleration of more than half an order of magnitude.

In recent years, Smolyak quadrature rules (also called hyperbolic cross points or sparse grids) have gained interest as a possible competitor to number theoretic quadratures for high dimensional problems. A standard way of comparing the quality of multivariate quadrature formulas
consists in computing their \(L_2\)-discrepancy. Especially for larger dimensions, such computations are a highly complex task. In this paper we develop a fast recursive algorithm for computing the \(L_2\)-discrepancy (and related quality measures) of general Smolyak quadratures. We carry out numerical comparisons between the discrepancies of certain Smolyak rules, Hammersley and Monte Carlo sequences.

A notion of discrepancy is introduced which represents the integration error on spaces of \(r\)-smooth periodic functions. It generalizes the diaphony and constitutes a periodic counterpart to the classical \(L_2\)-discrepancy as well as \(r\)-smooth versions of it introduced recently by Paskov [Pas93]. Based on previous work [FH96], we develop an efficient algorithm for computing periodic discrepancies for quadrature formulas possessing certain tensor product structures, in particular for Smolyak quadrature rules (also called sparse grid methods). Furthermore, fast algorithms for computing periodic discrepancies for lattice rules can easily be derived from well-known properties of lattices. On this basis we carry out numerical comparisons of discrepancies between Smolyak and lattice rules.

This document offers a concise introduction to the Goal Question Metric Paradigm (GQM Paradigm), and surveys research on applying and extending the GQM Paradigm. We describe the GQM Paradigm in terms of its basic principles, techniques for structuring GQM-related documents, and methods for performing tasks of planning and implementing a measurement program based on GQM. We also survey prototype software tools that support applying the GQM Paradigm in various ways. An annotated bibliography lists sources that document experience gained while using the GQM Paradigm and offer in-depth information about the GQM Paradigm.

Quasi-Monte Carlo Radiosity
(1996)

The problem of global illumination in computer graphics is described by a second kind Fredholm integral equation. Due to the complexity of this equation, Monte Carlo methods provide an interesting tool for approximating solutions to this transport equation. For the case of the radiosity equation, we present the deterministic method of quasi-random walks. This method very efficiently uses low discrepancy sequences for integrating the Neumann series and consistently outperforms stochastic techniques. The method of quasi-random walks is also applicable to transport problems in settings other than computer graphics.
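
The quasi-random walk method itself is not reproduced here; the sketch below shows the plain random-walk (collision) estimator for the Neumann series of a small radiosity-like linear system, i.e. the stochastic baseline in which the pseudo-random numbers would be replaced by low discrepancy sequences. The transport matrix, emission vector and walk parameters are illustrative assumptions.

```python
import numpy as np

def walk_estimate(A, b, i, n_walks=20000, q=0.8, seed=4):
    """Estimate component i of the solution of x = b + A x by scoring random
    walks along the Neumann series x = b + A b + A^2 b + ... (collision estimator)."""
    rng = np.random.default_rng(seed)
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, b[i]
        while rng.random() < q:                  # continue the walk with probability q
            nxt = rng.integers(n)                # uniform transition, probability 1/n
            weight *= A[state, nxt] / (q / n)    # importance correction of the weight
            score += weight * b[nxt]
            state = nxt
        total += score
    return total / n_walks

# Tiny "radiosity-like" system: emission b, transport matrix A with spectral radius < 1.
A = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.3],
              [0.2, 0.3, 0.0]])
b = np.array([1.0, 0.0, 0.5])
print(walk_estimate(A, b, 0), np.linalg.solve(np.eye(3) - A, b)[0])  # values roughly agree
```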

Estelle is an internationally standardized formal description technique (FDT) designed for the specification of distributed systems, in particular communication protocols. An Estelle specification describes a system of communicating components (module instances). The specified system is closed in a topological sense, i.e. it has no ability to interact with an environment. Because of this restriction, open systems can only be specified together with and incorporated into an environment. To overcome this restriction, we introduce a compatible extension of Estelle, called "Open Estelle". It allows the specification of (topologically) open systems, i.e. systems that have the ability to communicate with any environment through a well-defined external interface. We define a formal syntax and a formal semantics for Open Estelle, both based on and extending the syntax and semantics of Estelle. The extension is compatible syntactically and semantically, i.e. Estelle is a subset of Open Estelle. In particular, the formal semantics of Open Estelle reduces to the Estelle semantics in the special case of a closed system. Furthermore, we present a tool for the textual integration of open systems into environments specified in Open Estelle, and a compiler for the automatic generation of implementations directly from Open Estelle specifications.

The intuitionistic calculus mj for sequents, in which no other logical symbols than those for implication and universal quantification occur, is introduced and analysed. It allows a simple backward application, called mj-reduction here, for searching for derivation trees. Terms needed in mj-reduction can be found with the unification algorithm. mj-Reduction with unification can be seen as a natural extension of SLD-resolution. mj-Derivability of the sequents considered here coincides with derivability in Johansson's minimal intuitionistic calculus LHM in [6]. Intuitionistic derivability of formulae with negation and classical derivability of formulae with all usual logical symbols can be expressed with mj-derivability and hence be verified by mj-reduction. mj-Derivations can be easily translated into LJ-derivations without
"Schnitt", or into NJ-derivations in a slightly sharpened form of Prawitz' normal form. In the first three sections, the systematic use of mj-reduction for proving in predicate logic is emphasized. Although the fourth section, the last and largest, is exclusively devoted to the mathematical analysis of the calculus mj, the first three sections may be of interest to a wider readership, including readers looking for applications of symbolic logic. Unfortunately, the mathematical analysis of the calculus mj, as the study of Gentzen's calculi, demands a large amount of technical work that obscures the natural unfolding of the argumentation. To alleviate this, definitions and theorems are completely embedded in the text to provide a fluent and balanced mathematical discourse: new concepts are indicated with bold-face, proofs of assertions are outlined, or omitted when it is assumed that the reader can provide them.

The problem of constructing a geometric model of an existing object from a set of boundary points arises in many areas of industry. In this paper we present a new solution to this problem which is an extension of Boissonnat's method [2]. Our approach uses the well-known Delaunay triangulation of the data points as an intermediate step. Starting with this structure, we eliminate tetrahedra until we get an appropriate approximation of the desired shape. The method proposed in this paper is capable of reconstructing objects of arbitrary genus and can cope with different point densities in different regions of the object. The problems which arise during the elimination process, i.e. which tetrahedra can be eliminated, in which order the elimination has to be controlled, and finally, how to stop the elimination procedure at the right time, are discussed in detail. Several examples are given to show the validity of the method.
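
Boissonnat's elimination criterion and the ordering discussed in the paper are not reproduced here; the following sketch only illustrates the general pipeline with an alpha-shape-flavoured stand-in: tetrahedralize the points with SciPy's Delaunay triangulation, drop tetrahedra whose longest edge exceeds a threshold, and keep the boundary triangles of what remains. The threshold and the random point cloud are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def reconstruct(points, max_edge):
    """Delaunay-based shape sketch: tetrahedralize the sample points, drop every
    tetrahedron with an edge longer than `max_edge`, and return the triangles
    that lie on the boundary of the remaining solid."""
    tets = Delaunay(points).simplices               # (ntet, 4) vertex indices
    def longest_edge(t):
        verts = points[t]
        return max(np.linalg.norm(verts[a] - verts[b])
                   for a in range(4) for b in range(a + 1, 4))
    kept = [t for t in tets if longest_edge(t) <= max_edge]
    # A face is on the boundary iff it belongs to exactly one kept tetrahedron.
    faces = {}
    for t in kept:
        for f in [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]:
            key = tuple(sorted(t[i] for i in f))
            faces[key] = faces.get(key, 0) + 1
    return [f for f, count in faces.items() if count == 1]

pts = np.random.default_rng(5).random((300, 3))   # illustrative point cloud
print(len(reconstruct(pts, 0.25)))                # number of boundary triangles
```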