Kaiserslautern - Fachbereich Informatik
The World Wide Web is a medium through which a manufacturer may allow Internet visitors to customize or compose its products. Due to missing or rapidly changing standards, these applications are often restricted to relatively simple CGI- or JAVA-based scripts. Usually, results like images or movies are stored in a database and are transferred on demand to the web user. Viper (Visualisierung parametrisch editierbarer Raumkomponenten) is a Toolkit [VIP96] written in C++ and JAVA which provides 3D modeling and visualization methods for developing complex web-based applications. The Toolkit has been designed to build a prototype which can be used to construct and visualize prefabricated homes on the Internet. Alternative applications are outlined in this paper. Within Viper, all objects are stored in a scene graph (VSSG), which is the basic data structure of the Toolkit. To illustrate the concept and structure of the Toolkit, the functionality and implementation of the prototype are described.
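The central role of the scene graph can be illustrated with a minimal sketch. The class and node names below are hypothetical and do not reflect the actual VSSG API; the sketch only shows the general idea of composing a parametric model as a tree and traversing it, as a renderer or exporter would.

```python
# Minimal scene-graph sketch (hypothetical names; the real VSSG API differs).
class SceneNode:
    def __init__(self, name, **params):
        self.name = name          # e.g. "wall", "roof"
        self.params = params      # editable parameters, e.g. width=4.0
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def traverse(self, visit, depth=0):
        """Depth-first traversal over the whole subtree."""
        visit(self, depth)
        for c in self.children:
            c.traverse(visit, depth + 1)

# Compose a tiny (made-up) prefabricated-home model:
house = SceneNode("house")
floor = house.add(SceneNode("floor", area=80.0))
floor.add(SceneNode("wall", width=4.0, height=2.5))
floor.add(SceneNode("window", width=1.2))

names = []
house.traverse(lambda n, d: names.append("  " * d + n.name))
print("\n".join(names))
```

Because every object lives in one tree, operations such as visualization, parameter editing, and persistence can all be implemented as traversals of this single data structure.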
Estelle is an internationally standardized formal description technique (FDT) designed for the specification of distributed systems, in particular communication protocols. An Estelle specification describes a system of communicating components (module instances). The specified system is closed in a topological sense, i.e. it has no ability to interact with an environment. Because of this restriction, open systems can only be specified together with, and incorporated into, an environment. To overcome this restriction, we introduce a compatible extension of Estelle, called "Open Estelle". It allows the specification of (topologically) open systems, i.e. systems that have the ability to communicate with any environment through a well-defined external interface. We define a formal syntax and a formal semantics for Open Estelle, both based on and extending the syntax and semantics of Estelle. The extension is syntactically and semantically compatible, i.e. Estelle is a subset of Open Estelle. In particular, the formal semantics of Open Estelle reduces to the Estelle semantics in the special case of a closed system. Furthermore, we present a tool for the textual integration of open systems into environments specified in Open Estelle, and a compiler for the automatic generation of implementations directly from Open Estelle specifications.
Experience gathered from applying the software process modeling language MVP-L in software development organizations has shown the need for graphical representations of process models. Project members (i.e., non-MVP-L specialists) review models much more easily by using graphical representations. Although several graphical notations were developed for individual projects in which MVP-L was applied, there was previously no consistent definition of a mapping between textual MVP-L models and graphical representations. This report defines a graphical representation schema for MVP-L descriptions and combines previous results in a unified form. A basic set of building blocks (i.e., graphical symbols and text fragments) is defined, but because we must first gain experience with the new symbols, only rudimentary guidelines are given for composing basic symbols into a graphical representation of a model.
The main problem in computer graphics is to solve the global illumination problem, which is given by a Fredholm integral equation of the second kind, called the radiance equation (REQ). In order to achieve realistic images, a very complex kernel of the integral equation, modelling all physical effects of light, must be considered. Due to this complexity, Monte Carlo methods seem to be an appropriate approach to solving the REQ approximately. We show that replacing Monte Carlo by quasi-Monte Carlo methods in some steps of the algorithm results in faster convergence.
The rapid development of any field of knowledge brings with it unavoidable fragmentation and proliferation of new disciplines. The development of computer science is no exception. Software engineering (SE) and human-computer interaction (HCI) are both relatively new disciplines of computer science. Furthermore, as both names suggest, they each have strong connections with other subjects. SE is concerned with methods and tools for general software development based on engineering principles. This discipline has its roots not only in computer science but also in a number of traditional engineering disciplines. HCI is concerned with methods and tools for the development of human-computer interfaces, with assessing the usability of computer systems, and with broader issues about how people interact with computers. It is based on theories about how humans process information and interact with computers, other objects and other people in the organizational and social contexts in which computers are used. HCI draws on knowledge and skills from psychology, anthropology and sociology in addition to computer science. Both disciplines need ways of measuring how well their products and development processes fulfil their intended requirements. Traditionally, SE has been concerned with 'how software is constructed' and HCI with 'how people use software'. Given the different histories of the disciplines and their different objectives, it is not surprising that they take different approaches to measurement. Thus, each has its own distinct 'measurement culture'. In this paper we analyse the differences and the commonalities of the two cultures by examining the measurement approaches used by each. We then argue the need for a common measurement taxonomy and framework, which is derived from our analyses of the two disciplines. Next we demonstrate the usefulness of the taxonomy and framework via specific example studies drawn from our own work and that of others, and show that, in fact, the two disciplines have many important similarities as well as differences, and that there is some evidence to suggest that they are growing closer. Finally, we discuss the role of the taxonomy as a framework to support reuse, planning future studies, guiding practice and facilitating communication between the two disciplines.
In this work we propose a set of term-rewriting techniques for modelling object-oriented computation. Based on symbolic variants of explicit substitution calculi, we show how to deal with imperative statements like assignment and sequence in specifications in a pure declarative style. Under our model, computation with classes and objects becomes simply normal form calculation, exactly as is the case in term-rewriting based languages (for instance, functional languages). We believe this kind of unification between functions and objects is important because it provides plausible alternatives for using term-rewriting theory as an engine for supporting formal and mechanical reasoning about object-oriented specifications.
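The slogan "computation is normal form calculation" can be made concrete with a toy rewrite system. The sketch below normalizes Peano arithmetic terms under the rules add(0, y) → y and add(s(x), y) → s(add(x, y)); it is only a minimal illustration of rewriting to normal form, not the explicit-substitution calculus for objects developed in the paper.

```python
# Toy term-rewriting sketch: terms are nested tuples such as ("add", x, y),
# ("s", t), or the constant "0"; computation is normal-form calculation.
def step(t):
    """Apply one rewrite rule somewhere in t, or return None if t is normal."""
    if isinstance(t, tuple) and t[0] == "add":
        x, y = t[1], t[2]
        if x == "0":                         # add(0, y) -> y
            return y
        if isinstance(x, tuple) and x[0] == "s":
            return ("s", ("add", x[1], y))   # add(s(x), y) -> s(add(x, y))
    if isinstance(t, tuple):                 # otherwise, rewrite a subterm
        for i, sub in enumerate(t[1:], 1):
            r = step(sub)
            if r is not None:
                return t[:i] + (r,) + t[i + 1:]
    return None

def normal_form(t):
    """Rewrite until no rule applies: the result is the term's normal form."""
    while (r := step(t)) is not None:
        t = r
    return t

def num(n):
    """Build the Peano numeral s(s(...s(0)...))."""
    t = "0"
    for _ in range(n):
        t = ("s", t)
    return t

print(normal_form(("add", num(2), num(3))))   # the numeral for 5
```

In the paper's setting, imperative constructs such as assignment and sequencing are handled by additional rules in the same spirit, so that running an object-oriented specification is again just reducing a term to its normal form.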
The problem of interpolating Hermite-type data (i.e. two points with attached tangent vectors) with elastic curves of prescribed tension is known to have multiple solutions. A method is presented that finds all solutions whose length does not exceed one period of their curvature function. The algorithm is based on algebraic relations between discrete curvature information, which allow the problem to be transformed into a univariate one. The method operates with curves that by construction partially interpolate the given data. Thereby the objective function of the problem is drastically simplified. A bound on the maximum curvature value is established that provides an interval containing all solutions.
This document introduces the extension of Katja to support position structures and explains the subtleties of their application as well as the design decisions made and problems solved with respect to their implementation. The Katja system was first introduced by Jan Schäfer in the context of his project work and is based on the MAX system developed by Arnd Poetzsch-Heffter.
This report explains basic notions and concepts of Abstract State Machines (ASM) as well as notation for defining ASM models. The objective here is to provide an intuitive understanding of the formalism; for a rigorous definition of the mathematical foundations of ASM, the reader is referred to [2] and [3]. Further references on ASM-related material can be found on the ASM Web Pages [1].
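The core execution model of ASM can be sketched in a few lines: a state assigns values to locations, every rule is evaluated against the current state to collect a set of updates, and all updates then fire simultaneously. The sketch below illustrates only this standard notion of an ASM step; the rule and location names are made up and not taken from the report.

```python
# Minimal ASM-step sketch: rules read the OLD state only; their updates are
# collected into an update set and applied simultaneously.
def asm_step(state, rules):
    updates = {}
    for rule in rules:
        for loc, val in rule(state):         # each rule yields (location, value) updates
            if loc in updates and updates[loc] != val:
                raise ValueError(f"inconsistent update set at {loc}")
            updates[loc] = val
    new_state = dict(state)
    new_state.update(updates)                # fire all updates at once
    return new_state

# Two rules that swap a and b -- correct only because updates fire simultaneously:
rules = [
    lambda s: [("a", s["b"])],
    lambda s: [("b", s["a"])],
]
state = asm_step({"a": 1, "b": 2}, rules)
print(state)  # {'a': 2, 'b': 1}
```

The explicit update set also makes the consistency requirement visible: two rules that assign different values to the same location in one step constitute an inconsistent update set, which the sketch reports as an error.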