Interner Bericht des Fachbereichs Informatik (Internal Report Series of the Department of Computer Science)
317
Interactive graphics has been limited to simple direct illumination, which commonly results in an artificial appearance. Simulating global illumination effects for a more realistic appearance has been too costly to compute at interactive rates. In this paper we describe a new Monte Carlo-based global illumination algorithm. It achieves performance of up to 10 frames per second while arbitrary changes to the scene may be applied interactively. The performance is obtained through the effective use of a fast, distributed ray-tracing engine as well as a new interleaved sampling technique for parallel Monte Carlo simulation. A new filtering step in combination with correlated sampling avoids the disturbing noise artifacts common to Monte Carlo methods.
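Since the abstract only names the techniques, the following is a minimal, hypothetical Python sketch of interleaved sampling combined with correlated sampling as described: each pixel reuses one of a small number of precomputed sample sets (correlation), assigned in a repeating tile so that neighbouring pixels use different sets (interleaving), and a subsequent filter can average estimates over a tile. All names and parameters are illustrative assumptions, not the authors' implementation.

```python
import random

def make_sample_sets(n_sets, samples_per_set, seed=0):
    """Precompute n_sets fixed sample patterns (correlated sampling:
    every pixel using the same set reuses identical random offsets)."""
    rng = random.Random(seed)
    return [[(rng.random(), rng.random()) for _ in range(samples_per_set)]
            for _ in range(n_sets)]

def samples_for_pixel(x, y, sample_sets, tile=3):
    """Interleaved sampling: pixels are assigned sample sets in a
    repeating tile x tile pattern, so adjacent pixels use different
    sets and the union over a tile covers all sets."""
    set_index = (y % tile) * tile + (x % tile)
    return sample_sets[set_index]

# A 3x3 interleaving needs 9 sample sets; a later filtering step can
# average the estimates over each 3x3 neighbourhood to combine them.
sets9 = make_sample_sets(n_sets=9, samples_per_set=4)
print(samples_for_pixel(5, 7, sets9))
```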
292
Estelle is an internationally standardized formal description technique (FDT) designed for the specification of distributed systems, in particular communication protocols. An Estelle specification describes a system of communicating components (module instances). The specified system is closed in a topological sense, i.e. it has no ability to interact with an environment. Because of this restriction, open systems can only be specified together with and incorporated into an environment. To overcome this restriction, we introduce a compatible extension of Estelle, called "Open Estelle". It allows the specification of (topologically) open systems, i.e. systems that have the ability to communicate with any environment through a well-defined external interface. We define a formal syntax and a formal semantics for Open Estelle, both based on and extending the syntax and semantics of Estelle. The extension is compatible syntactically and semantically, i.e. Estelle is a subset of Open Estelle. In particular, the formal semantics of Open Estelle reduces to the Estelle semantics in the special case of a closed system. Furthermore, we present a tool for the textual integration of open systems into environments specified in Open Estelle, and a compiler for the automatic generation of implementations directly from Open Estelle specifications.
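Open Estelle itself extends Estelle's Pascal-based notation; the following is not Open Estelle syntax but a small Python analogy, under obvious assumptions, of the distinction the abstract draws: an open system leaves some interaction points unbound, and these form the external interface through which an arbitrary environment may interact, while the closed Estelle case is recovered when that interface is empty.

```python
from collections import deque

class Module:
    """Toy analogue of an Estelle module instance: named interaction
    points, each backed by a FIFO queue of incoming interactions."""
    def __init__(self, name, points):
        self.name = name
        self.queues = {p: deque() for p in points}

    def receive(self, point, interaction):
        self.queues[point].append(interaction)

class OpenSystem:
    """Toy analogue of an Open Estelle specification: interaction
    points listed in `external` stay unconnected and together form
    the interface through which any environment may interact."""
    def __init__(self, modules, external):
        self.modules = {m.name: m for m in modules}
        self.external = external  # list of (module, point) pairs

    def offer(self, module, point, interaction):
        if (module, point) not in self.external:
            raise ValueError("not part of the external interface")
        self.modules[module].receive(point, interaction)

# With external=[] the system is closed, the special case in which
# the Open Estelle semantics reduces to plain Estelle.
system = OpenSystem([Module("proto", ["from_env"])],
                    external=[("proto", "from_env")])
system.offer("proto", "from_env", "connect_request")
```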
233
This paper describes some new algorithms for the accurate calculation of surface properties. In the first part an arithmetic on Bézier surfaces is introduced. Formulas are given that determine the Bézier points and weights of the resulting surface from the points and weights of the operand surfaces. An application of the arithmetic operations to surface interrogation methods is described in the second part. It turns out that the quality analysis can be reduced to a few numerically stable operations. Finally, the advantages and disadvantages of this method are discussed.
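The abstract leaves the formulas to the paper itself. For the polynomial (non-rational) univariate case, the standard product rule for coefficients in Bernstein form can be sketched as follows; the tensor-product surface case applies the same rule in each parameter direction, and the rational case would additionally combine the weights. A sketch under these assumptions, not the paper's algorithm:

```python
from math import comb

def bezier_product(a, b):
    """Bernstein coefficients of the product of two polynomials given
    in Bernstein (Bezier) form; degrees m and n yield degree m + n."""
    m, n = len(a) - 1, len(b) - 1
    c = []
    for k in range(m + n + 1):
        s = sum(comb(m, i) * comb(n, k - i) * a[i] * b[k - i]
                for i in range(max(0, k - n), min(m, k) + 1))
        c.append(s / comb(m + n, k))
    return c

# (1 - t) * t in Bernstein form: degree-1 coefficients [1, 0] and [0, 1];
# the product t - t^2 has degree-2 Bernstein coefficients [0, 0.5, 0].
print(bezier_product([1.0, 0.0], [0.0, 1.0]))
```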
20
Partitioned chain grammars
(1979)
This paper introduces a new class of grammars, the partitioned chain grammars, for which efficient parsers can be automatically generated. Besides being efficiently parsable, these grammars possess a number of other properties which make them very attractive for use in parser-generators. For instance, they form a large grammar class and describe all deterministic context-free languages. The main advantage of partitioned chain grammars, however, is that a given language is usually easier to describe by a partitioned chain grammar than by a grammar of some other type commonly used in parser-generators.
293
The intuitionistic calculus mj for sequents, in which no logical symbols other than those for implication and universal quantification occur, is introduced and analysed. It allows a simple backward application, called mj-reduction here, for searching for derivation trees. Terms needed in mj-reduction can be found with the unification algorithm. mj-Reduction with unification can be seen as a natural extension of SLD-resolution. mj-Derivability of the sequents considered here coincides with derivability in Johansson's minimal intuitionistic calculus LHM in [6]. Intuitionistic derivability of formulae with negation and classical derivability of formulae with all usual logical symbols can be expressed with mj-derivability and hence be verified by mj-reduction. mj-Derivations can easily be translated into LJ-derivations without cut, or into NJ-derivations in a slightly sharpened form of Prawitz's normal form. In the first three sections, the systematic use of mj-reduction for proving in predicate logic is emphasized. Although the fourth section, the last and largest, is exclusively devoted to the mathematical analysis of the calculus mj, the first three sections may be of interest to a wider readership, including readers looking for applications of symbolic logic. Unfortunately, the mathematical analysis of the calculus mj, like the study of Gentzen's calculi, demands a large amount of technical work that obscures the natural unfolding of the argumentation. To alleviate this, definitions and theorems are completely embedded in the text to provide a fluent and balanced mathematical discourse: new concepts are indicated in bold face, and proofs of assertions are outlined, or omitted when it is assumed that the reader can provide them.
296
A natural extension of SLD-resolution is introduced as a goal-directed proof procedure for the full first-order implicational fragment of intuitionistic logic. Its intuitionistic semantics fits a procedural interpretation of logic programming. By allowing arbitrarily nested implications, it can be used for implementing modularity in logic programs. With adequate negation axioms it gives an alternative to negation as failure and leads to a proof procedure for full first-order predicate logic.
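The modularity claim can be illustrated with a hedged propositional sketch of embedded implication: proving a goal of the form "clauses imply g" loads the clauses as a local module that is visible only while g is being solved. The encoding below is an assumption for illustration; the paper treats full first-order formulas.

```python
def solve(program, goal):
    """Goal-directed reading of embedded implication: an 'imp' goal
    extends the program with local clauses for the scope of its
    subgoal, giving a simple form of modularity."""
    kind = goal[0]
    if kind == 'atom':
        _, p = goal
        return any(all(solve(program, b) for b in body)
                   for head, body in program if head == p)
    if kind == 'imp':  # ('imp', local_clauses, subgoal)
        _, local, sub = goal
        return solve(program + local, sub)
    raise ValueError(kind)

# 'wrapped' is derivable only inside the scope that loads the module.
base = [('item', [])]
local_mod = [('wrapped', [('atom', 'item')])]
print(solve(base, ('imp', local_mod, ('atom', 'wrapped'))))  # True
print(solve(base, ('atom', 'wrapped')))                      # False
```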
216
The use of non-volatile semiconductor memory within an extended storage hierarchy promises significant performance improvements for transaction processing. Although page-addressable semiconductor memories like extended memory, solid-state disks and disk caches have been commercially available for several years, no detailed investigation of their use for transaction processing has been performed so far. We present a comprehensive simulation study that compares the performance of these storage types and of different usage forms. The following usage forms are considered: allocation of entire log and database files in non-volatile semiconductor memory, using a so-called write buffer to perform disk writes asynchronously, and caching of database pages at intermediate storage levels (in addition to main memory caching). Our simulations are conducted with both synthetically generated workloads and traces from real-life database applications. In particular, simulation results will be presented for the debit-credit workload frequently used in transaction processing benchmarks. As expected, the greatest performance improvements (but at the highest cost) can be achieved by storing log and database files completely in non-volatile semiconductor memory. For update-intensive workloads, a limited amount of non-volatile memory used as a write buffer also proved to be very effective. To reduce the number of disk reads, caching of database pages in addition to main memory caching is best supported by an extended memory buffer. In this respect, disk caches are found to be less effective since they are designed for one-level caching. Different storage costs suggest that it may be cost-effective to use two or even three of the intermediate storage types together. The performance improvements obtainable through the use of non-volatile semiconductor memory are also found to reduce the need for sophisticated DBMS buffer management in order to achieve high transaction processing performance.
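As a toy illustration of the write-buffer usage form, the following Python sketch (with invented, purely illustrative service times) shows why commit latency tracks the non-volatile memory write time rather than the disk write time as long as background destaging keeps pace. It is a sketch of the general idea, not the simulation model used in the study.

```python
from collections import deque

DISK_WRITE_MS = 15.0   # assumed disk service time (illustrative)
NVRAM_WRITE_MS = 0.1   # assumed non-volatile memory service time (illustrative)

class WriteBuffer:
    """A page write is acknowledged as soon as it reaches non-volatile
    memory; the disk write happens asynchronously, off the
    transaction's response-time path."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pending = deque()

    def write(self, page):
        wait = 0.0
        if len(self.pending) >= self.capacity:
            self.destage(1)            # buffer full: must wait for one disk write
            wait = DISK_WRITE_MS
        self.pending.append(page)
        return wait + NVRAM_WRITE_MS   # commit latency seen by the transaction

    def destage(self, n):
        """Flush up to n buffered pages to disk (background activity)."""
        for _ in range(min(n, len(self.pending))):
            self.pending.popleft()

buf = WriteBuffer(capacity=64)
latencies = []
for page in range(200):
    latencies.append(buf.write(page))
    buf.destage(1)                     # emulate background destaging keeping pace
print(sum(latencies) / len(latencies))  # near NVRAM_WRITE_MS, not DISK_WRITE_MS
```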
251
The rapid development of any field of knowledge brings with it unavoidable fragmentation and proliferation of new disciplines. The development of computer science is no exception. Software engineering (SE) and human-computer interaction (HCI) are both relatively new disciplines of computer science. Furthermore, as both names suggest, they each have strong connections with other subjects. SE is concerned with methods and tools for general software development based on engineering principles. This discipline has its roots not only in computer science but also in a number of traditional engineering disciplines. HCI is concerned with methods and tools for the development of human-computer interfaces, with assessing the usability of computer systems, and with broader issues about how people interact with computers. It is based on theories about how humans process information and interact with computers, other objects and other people in the organizational and social contexts in which computers are used. HCI draws on knowledge and skills from psychology, anthropology and sociology in addition to computer science. Both disciplines need ways of measuring how well their products and development processes fulfil their intended requirements. Traditionally, SE has been concerned with 'how software is constructed' and HCI with 'how people use software'. Given the different histories of the disciplines and their different objectives, it is not surprising that they take different approaches to measurement. Thus, each has its own distinct 'measurement culture'. In this paper we analyse the differences and the commonalities of the two cultures by examining the measurement approaches used by each. We then argue the need for a common measurement taxonomy and framework, derived from our analyses of the two disciplines. Next we demonstrate the usefulness of the taxonomy and framework via specific example studies drawn from our own work and that of others, and show that, in fact, the two disciplines have many important similarities as well as differences, and that there is some evidence to suggest that they are growing closer. Finally, we discuss the role of the taxonomy as a framework to support reuse, the planning of future studies, the guiding of practice and the facilitation of communication between the two disciplines.