## Fachbereich Informatik

### Refine

#### Document Type

- Report (111)

#### Language

- English (111)

#### Keywords

- Quality of service (Dienstgüte) (3)
- Formalization (Formalisierung) (3)
- AG-RESY (1)
- Ad hoc network (Ad-hoc-Netz) (1)
- Compiler (1)
- Coq (1)
- Extraction (1)
- Formal Semantics (1)
- Milling (Fräsen) (1)
- Radio network (Funknetz) (1)

Wireless LANs operating within unlicensed frequency bands require random access schemes such as CSMA/CA, so that wireless networks from different administrative domains (for example wireless community networks) may co-exist without central coordination, even when they happen to operate on the same radio channel. Yet, it is evident that this lack of coordination leads to an inevitable loss in efficiency due to contention on the MAC layer. The interesting question is how much efficiency may be gained by adding coordination to existing, unrelated wireless networks, for example by self-organization. In this paper, we present a methodology based on a mathematical programming formulation to determine the parameters (assignment of stations to access points, signal strengths, and channel assignment of both access points and stations) for a scenario of co-existing CSMA/CA-based wireless networks, such that the contention between these networks is minimized. We demonstrate how it is possible to solve this discrete, non-linear optimization problem exactly for small problems. For larger scenarios, we present a genetic algorithm specifically tuned for finding near-optimal solutions, and compare its results to theoretical lower bounds. Overall, we provide a benchmark on the minimum contention problem for coordination mechanisms in CSMA/CA-based wireless networks.
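
The genetic-algorithm approach described above can be sketched generically. The encoding and fitness below are illustrative placeholders (a toy per-station channel assignment with a made-up contention measure), not the formulation or tuning from the paper:

```python
import random

def genetic_search(num_stations, num_channels, fitness, generations=200,
                   pop_size=40, mutation_rate=0.1, seed=0):
    """Evolve channel assignments (one channel per station), minimizing
    `fitness` via elitist selection, one-point crossover and mutation."""
    rng = random.Random(seed)
    pop = [[rng.randrange(num_channels) for _ in range(num_stations)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                     # lower contention is better
        survivors = pop[:pop_size // 2]           # elitism: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, num_stations)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(num_stations):         # per-gene mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.randrange(num_channels)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

def toy_contention(assignment):
    """Hypothetical stand-in fitness: adjacent stations sharing a channel."""
    return sum(a == b for a, b in zip(assignment, assignment[1:]))
```

On this toy instance the search quickly drives the contention measure to (near) zero; the real objective in the paper couples channel, power, and association variables.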

On the Complexity of the Uncapacitated Single Allocation p-Hub Median Problem with Equal Weights
(2007)

The Super-Peer Selection Problem is an optimization problem in network topology construction. It may be cast as a special case of a Hub Location Problem, more exactly an Uncapacitated Single Allocation p-Hub Median Problem with equal weights. We show that this problem is still NP-hard by reduction from Max Clique.

This report presents a generalization of tensor-product B-spline surfaces. The new scheme permits knots whose endpoints lie in the interior of the domain rectangle of a surface. This allows local refinement of the knot structure for approximation purposes as well as modeling surfaces with local tangent or curvature discontinuities. The surfaces are represented in terms of B-spline basis functions, ensuring affine invariance, local control, the convex hull property, and evaluation by de Boor's algorithm. A dimension formula for a class of generalized tensor-product spline spaces is developed.
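
The abstract notes that the generalized surfaces remain evaluable by de Boor's algorithm. For reference, the standard curve-case de Boor recurrence looks as follows; this is textbook material, not the paper's generalized tensor-product scheme:

```python
def de_boor(k, x, t, c, p):
    """Evaluate a B-spline of degree p at parameter x.
    k: knot span index with t[k] <= x < t[k+1]
    t: knot vector, c: control points (scalars here for simplicity)."""
    # copy the p+1 control points that influence the span
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]
```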

In this article we present a method to generate random objects from a large variety of combinatorial classes according to a given distribution. Given a description of the combinatorial class and a set of sample data our method will provide an algorithm that generates objects of size n in worst-case runtime O(n^2) (O(n log(n)) can be achieved at the cost of a higher average-case runtime), with the generated objects following a distribution that closely matches the distribution of the sample data.
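
As a point of comparison, the classic recursive method for uniform random generation illustrates the counting-based sampling idea on a single fixed class (binary trees); the paper's contribution goes beyond this by handling arbitrary classes and distributions fitted to sample data:

```python
import random

def catalan_counts(n):
    """c[k] = number of binary trees with k internal nodes (Catalan numbers)."""
    c = [0] * (n + 1)
    c[0] = 1
    for k in range(1, n + 1):
        c[k] = sum(c[i] * c[k - 1 - i] for i in range(k))
    return c

def random_tree(n, c, rng):
    """Draw a uniformly random binary tree of size n via the recursive method:
    choose the left-subtree size i with probability c[i]*c[n-1-i]/c[n]."""
    if n == 0:
        return None
    r = rng.randrange(c[n])
    for i in range(n):
        r -= c[i] * c[n - 1 - i]
        if r < 0:
            return (random_tree(i, c, rng), random_tree(n - 1 - i, c, rng))

def tree_size(t):
    """Number of internal nodes of a tree built by random_tree."""
    return 0 if t is None else 1 + tree_size(t[0]) + tree_size(t[1])
```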

We present a methodology to augment system safety step-by-step and illustrate the approach by the definition of reusable solutions for the detection of fail-silent nodes - a watchdog and a heartbeat. These solutions can be added to real-time system designs, to protect against certain types of system failures. We use SDL as a system design language for the development of distributed systems, including real-time systems.

Interactive graphics has been limited to simple direct illumination that commonly results in an artificial appearance. A more realistic appearance by simulating global illumination effects has been too costly to compute at interactive rates. In this paper we describe a new Monte Carlo-based global illumination algorithm. It achieves performance of up to 10 frames per second while arbitrary changes to the scene may be applied interactively. The performance is obtained through the effective use of a fast, distributed ray-tracing engine as well as a new interleaved sampling technique for parallel Monte Carlo simulation. A new filtering step in combination with correlated sampling avoids the disturbing noise artifacts common to Monte Carlo methods.

There is a well-known relationship between alternating automata on finite words and symbolically represented nondeterministic automata on finite words. This relationship is of practical relevance because it makes it possible to combine the advantages of alternating and symbolically represented nondeterministic automata on finite words. However, for infinite words the situation is unclear. Therefore, this work investigates the relationship between alternating omega-automata and symbolically represented nondeterministic omega-automata. In doing so, we identify classes of alternating omega-automata that are as expressive as safety, liveness and deterministic prefix automata, respectively. Moreover, some very simple symbolic nondeterminisation procedures are developed for the classes corresponding to safety and liveness properties.

Estelle is an internationally standardized formal description technique (FDT) designed for the specification of distributed systems, in particular communication protocols. An Estelle specification describes a system of communicating components (module instances). The specified system is closed in a topological sense, i.e. it has no ability to interact with an environment. Because of this restriction, open systems can only be specified together with an incorporated environment. To overcome this restriction, we introduce a compatible extension of Estelle, called "Open Estelle". It allows the specification of (topologically) open systems, i.e. systems that have the ability to communicate with any environment through a well-defined external interface. We define a formal syntax and a formal semantics for Open Estelle, both based on and extending the syntax and semantics of Estelle. The extension is compatible syntactically and semantically, i.e. Estelle is a subset of Open Estelle. In particular, the formal semantics of Open Estelle reduces to the Estelle semantics in the special case of a closed system. Furthermore, we present a tool for the textual integration of open systems into environments specified in Open Estelle, and a compiler for the automatic generation of implementations directly from Open Estelle specifications.

Ownership Domains generalize ownership types. They support programming patterns like iterators that are not possible with ordinary ownership types. However, they are still too restrictive for cases in which an object X wants to access the public domains of an arbitrary number of other objects, which often happens in observer scenarios. To overcome this restriction, we developed so-called loose domains which abstract over several precise domains. That is, similar to the relation between supertypes and subtypes we have a relation between loose and precise domains. In addition, we simplified ownership domains by reducing the number of domains per object to two and hard-wiring the access permissions between domains. We formalized the resulting type system for an OO core language and proved type soundness and a fundamental accessibility property.

This paper describes some new algorithms for the accurate calculation of surface properties. In the first part, an arithmetic on Bézier surfaces is introduced. Formulas are given which determine the Bézier points and weights of the resulting surface from the points and weights of the operand surfaces. An application of the arithmetic operations to surface interrogation methods is described in the second part. It turns out that the quality analysis can be reduced to a few numerically stable operations. Finally, the advantages and disadvantages of this method are discussed.

Partitioned chain grammars
(1979)

This paper introduces a new class of grammars, the partitioned chain grammars, for which efficient parsers can be automatically generated. Besides being efficiently parsable, these grammars possess a number of other properties which make them very attractive for use in parser generators. For instance, they form a large grammar class and describe all deterministic context-free languages. The main advantage of the partitioned chain grammars, however, is that, given a language, it is usually easier to describe it by a partitioned chain grammar than to construct a grammar of some other type commonly used in parser generators for it.

On Abstract Shapes of RNA
(2008)

As any RNA sequence can be folded in many different ways, there are many different possible secondary structures for a given sequence. Most computational prediction methods based on free energy minimization compute a number of suboptimal foldings, and we have to identify the native structures among all these possible secondary structures. For this reason, much effort has been made to develop approaches for identifying good predictions of RNA secondary structure. Using the abstract shapes approach as introduced by Giegerich et al., each class of similar secondary structures is represented by one shape, and the native structures can be found among the top shape representatives. In this article, we derive some interesting results answering enumeration problems for abstract shapes and secondary structures of RNA. We start by computing asymptotic representations for the number of shape representations of length n. Our main goal is to find out how much the search space can be reduced by using the concept of abstract shapes. To reach this goal, we analytically analyze the number of secondary structures and shapes compatible with an RNA sequence of length n, under the assumption that base pairing is allowed between arbitrary pairs of bases, and compare their exponential growths. Additionally, we analyze the number of secondary structures compatible with an RNA sequence of length n under the assumptions that base pairing is allowed only between certain pairs of bases and that the structures meet some appropriate conditions. The exponential growth factors of the resulting asymptotics are compared to the corresponding experimentally obtained value as given by Giegerich et al.
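
The kind of enumeration studied here can be illustrated by the standard recurrence for counting secondary structures when any base may pair with any other; this textbook recursion is a starting point, not the article's shape-counting analysis:

```python
def count_structures(n, m=3):
    """Number of RNA secondary structures on n bases, assuming any base may
    pair with any other, with at least m unpaired bases in every hairpin loop.
    Recurrence: the last base is either unpaired, or paired with base j,
    splitting the sequence into an inner and an outer part."""
    s = [1] * (n + 1)                   # sequences too short to contain a pair
    for length in range(m + 2, n + 1):
        total = s[length - 1]           # last base unpaired
        for j in range(1, length - m):  # last base paired with base j
            total += s[j - 1] * s[length - j - 1]
        s[length] = total
    return s[n]
```

With minimum loop size 0 this recurrence produces the Motzkin numbers; with the biologically common minimum loop size 3, the first nontrivial count appears at n = 5.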

This article focuses on the analytical analysis of the free energy in a realistic model for RNA secondary structures. In fact, the free energy in a stochastic model derived from a database of small and large subunit ribosomal RNA (SSU and LSU rRNA) data is studied. A common thermodynamic model for computing the free energy of a given RNA secondary structure, as well as stochastic context-free grammars and generating functions, are used to derive the desired results. These results include asymptotics for the expected free energy and for the corresponding variance of a random RNA secondary structure. The quality of our model is judged by comparing the derived results to the used database of SSU and LSU rRNA data. At the end of this article, it is discussed how our results could be used to help identify good predictions of RNA secondary structure.

The intuitionistic calculus mj for sequents, in which no logical symbols other than those for implication and universal quantification occur, is introduced and analysed. It allows a simple backward application, called mj-reduction here, for searching for derivation trees. Terms needed in mj-reduction can be found with the unification algorithm. mj-Reduction with unification can be seen as a natural extension of SLD-resolution. mj-Derivability of the sequents considered here coincides with derivability in Johansson's minimal intuitionistic calculus LHM in [6]. Intuitionistic derivability of formulae with negation and classical derivability of formulae with all usual logical symbols can be expressed with mj-derivability and hence be verified by mj-reduction. mj-Derivations can be easily translated into LJ-derivations without cut ("Schnitt"), or into NJ-derivations in a slightly sharpened form of Prawitz' normal form. In the first three sections, the systematic use of mj-reduction for proving in predicate logic is emphasized. Although the fourth section, the last and largest, is exclusively devoted to the mathematical analysis of the calculus mj, the first three sections may be of interest to a wider readership, including readers looking for applications of symbolic logic. Unfortunately, the mathematical analysis of the calculus mj, like the study of Gentzen's calculi, demands a large amount of technical work that obscures the natural unfolding of the argumentation. To alleviate this, definitions and theorems are completely embedded in the text to provide a fluent and balanced mathematical discourse: new concepts are indicated with bold-face, and proofs of assertions are outlined, or omitted when it is assumed that the reader can provide them.

A natural extension of SLD-resolution is introduced as a goal-directed proof procedure for the full first-order implicational fragment of intuitionistic logic. Its intuitionistic semantics fits a procedural interpretation of logic programming. By allowing arbitrarily nested implications, it can be used for implementing modularity in logic programs. With adequate negation axioms, it gives an alternative to negation as failure and leads to a proof procedure for full first-order predicate logic.

The use of non-volatile semiconductor memory within an extended storage hierarchy promises significant performance improvements for transaction processing. Although page-addressable semiconductor memories like extended memory, solid-state disks and disk caches have been commercially available for several years, no detailed investigation of their use for transaction processing has been performed so far. We present a comprehensive simulation study that compares the performance of these storage types and of different usage forms. The following usage forms are considered: allocation of entire log and database files in non-volatile semiconductor memory, using a so-called write buffer to perform disk writes asynchronously, and caching of database pages at intermediate storage levels (in addition to main memory caching). Our simulations are conducted with both synthetically generated workloads and traces from real-life database applications. In particular, simulation results will be presented for the debit-credit workload frequently used in transaction processing benchmarks. As expected, the greatest performance improvements (but at the highest cost) can be achieved by storing log and database files completely in non-volatile semiconductor memory. For update-intensive workloads, a limited amount of non-volatile memory used as a write buffer also proved to be very effective. To reduce the number of disk reads, caching of database pages in addition to main memory is best supported by an extended memory buffer. In this respect, disk caches are found to be less effective, as they are designed for one-level caching. Different storage costs suggest that it may be cost-effective to use two or even three of the intermediate storage types together. The performance improvements obtainable by the use of non-volatile semiconductor memory are also found to reduce the need for sophisticated DBMS buffer management in order to achieve high transaction processing performance.

The rapid development of any field of knowledge brings with it unavoidable fragmentation and proliferation of new disciplines. The development of computer science is no exception. Software engineering (SE) and human-computer interaction (HCI) are both relatively new disciplines of computer science. Furthermore, as both names suggest, they each have strong connections with other subjects. SE is concerned with methods and tools for general software development based on engineering principles. This discipline has its roots not only in computer science but also in a number of traditional engineering disciplines. HCI is concerned with methods and tools for the development of human-computer interfaces, assessing the usability of computer systems and with broader issues about how people interact with computers. It is based on theories about how humans process information and interact with computers, other objects and other people in the organizational and social contexts in
which computers are used. HCI draws on knowledge and skills from psychology, anthropology and sociology in addition to computer science. Both disciplines need ways of measuring how well their products and development processes fulfil their intended requirements. Traditionally SE has been concerned with 'how software is constructed' and HCI with 'how people use software'. Given the
different histories of the disciplines and their different objectives, it is not surprising that they take different approaches to measurement. Thus, each has its own distinct 'measurement culture'. In this paper we analyse the differences and the commonalities of the two cultures by examining the measurement approaches used by each. We then argue the need for a common measurement taxonomy and framework, which is derived from our analyses of the two disciplines. Next we demonstrate the usefulness of the taxonomy and framework via specific example studies drawn from our own work and that of others, and show that, in fact, the two disciplines have many important similarities as well as differences, and that there is some evidence to suggest that they are growing closer. Finally, we discuss the role of the taxonomy as a framework to support reuse, planning future studies, guiding practice and facilitating communication between the two disciplines.

Optimization of Projection Methods for Solving Ill-posed Problems

In this paper we propose a modification of the projection scheme for solving ill-posed problems. We show that this modification makes it possible to obtain the best possible order of accuracy of Tikhonov regularization using an amount of information which is far less than for the standard projection technique.
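
For background, plain Tikhonov regularization solves a least-squares problem with an added penalty on the solution norm. The stdlib-only sketch below shows that baseline via the normal equations (function names are my own), not the paper's modified projection scheme:

```python
def tikhonov(A, b, alpha):
    """Solve min ||Ax - b||^2 + alpha ||x||^2 via (A^T A + alpha I) x = A^T b."""
    n = len(A[0])
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A)))
            + (alpha if i == j else 0.0) for j in range(n)] for i in range(n)]
    atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    return gauss_solve(ata, atb)

def gauss_solve(M, v):
    """Gaussian elimination with partial pivoting on the augmented matrix."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x
```

Increasing alpha shrinks the solution toward zero, trading data fit for stability, which is exactly the regularization effect whose accuracy order the paper studies.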

In this paper we show how Metropolis Light Transport can be extended both in the underlying theoretical framework and the algorithmic implementation to incorporate volumetric scattering.
We present a generalization of the path integral formulation that handles anisotropic scattering in non-homogeneous media. Based on this framework we introduce a new mutation strategy that is specifically designed for participating media. It exploits the locality of light propagation by perturbing certain interaction points within the medium. To efficiently sample inhomogeneous media, a new ray marching method has been developed that avoids aliasing artefacts and is significantly faster than stratified sampling. The resulting global illumination algorithm provides a physically correct simulation of light transport in the presence of participating media that includes effects such as volume caustics and multiple volume scattering. It is not restricted to certain classes of geometry and scattering models and has minimal memory requirements. Furthermore, it is unbiased and robust, in the sense that it produces satisfactory results for a wide range of input scenes and lighting situations within acceptable time bounds. In particular, we found that it is well suited for complex scenes with many light sources.

One of the fundamental problems in computational structural biology is the prediction of RNA secondary structures from a single sequence. To solve this problem, mainly two different approaches have been used over the past decades: the free energy minimization (MFE) approach which is still considered the most popular and successful method and the competing stochastic context-free grammar (SCFG) approach. While the accuracy of the MFE based algorithms is limited by the quality of underlying thermodynamic models, the SCFG method abstracts from free energies and instead tries to learn about the structural behavior of the molecules by training the grammars on known real RNA structures, making it highly dependent on the availability of a rich high quality training set. However, due to the respective problems associated with both methods, new statistics based approaches towards RNA structure prediction have become increasingly appreciated. For instance, over the last years, several statistical sampling methods and clustering techniques have been invented that are based on the computation of partition functions (PFs) and base pair probabilities according to thermodynamic models. A corresponding SCFG based statistical sampling algorithm for RNA secondary structures has been studied just recently. Notably, this probabilistic method is capable of producing accurate (prediction) results, where its worst-case time and space requirements are equal to those of common RNA folding algorithms for single sequences.
The aim of this work is to present a comprehensive study on how enriching the underlying SCFG by additional information on the lengths of generated substructures (i.e. by incorporating length-dependencies into the SCFG based sampling algorithm, which is actually possible without significant losses in performance) affects the reliability of the induced RNA model and the accuracy of sampled secondary structures. As we will see, significant differences with respect to the overall quality of generated sample sets and the resulting predictive accuracy are typically implied. In principle, when considering the more specialized length-dependent SCFG model as basis for statistical sampling, a higher accuracy of predicted foldings can be reached at the price of a lower diversity of generated candidate structures (compared to the more general traditional SCFG variant or sampling based on PFs that rely on free energies).

For most applications, the transport service providers used are predetermined during the development of the application. This makes it difficult to consider the application's communication requirements and to exploit specific features of the network technology. Specialized protocols that are more efficient and offer a qualitatively improved service are typically not supported by most applications because they are not commonly available. In this paper we propose a concept for the realization of protocol-independent transport services. Only a transport service is predetermined during the development of the application, and an appropriate transport service provider is dynamically selected at run time. This makes it possible to exploit specialized protocols where available, while standard protocols can still be used if necessary. The main focus of this paper is how a transport service can provide a new transport service provider transparently to existing applications. A prototype is presented that maps TCP/IP-based applications to an ATM-specific transport service provider which offers a reliable and an unreliable transport service like TCP/IP.

The Analytic Blossom
(2001)

Blossoming is a powerful tool for studying and computing with Bézier and B-spline curves and surfaces - that is, for the investigation and analysis of polynomials and piecewise polynomials in geometric modeling. In this paper, we define a notion of the blossom for Poisson curves. Poisson curves are to analytic functions what Bézier curves are to polynomials - a representation adapted to geometric design. As in the polynomial setting, the blossom provides a simple, powerful, elegant and computationally meaningful way to analyze Poisson curves. Here, we
define the analytic blossom and interpret all the known algorithms for Poisson curves - subdivision, trimming, evaluation of the function and its derivatives, and conversion between the Taylor and the Poisson basis - in terms of this analytic blossom.

This document introduces the extension of Katja to support position structures and explains the subtleties of their application as well as the design decisions made and problems solved with respect to their implementation. The Katja system was first introduced by Jan Schäfer in the context of his project work and is based on the MAX system developed by Arnd Poetzsch-Heffter.

Katja is a tool generating order-sorted recursive data types as well as position types for Java, from specifications using an enhanced ML-like notation. Katja's main features are its conciseness of specifications, the rich interface provided by the generated code, and the immutability of types, which is atypical for Java. After several stages of extending and maintaining the Katja project, it became apparent that many changes had to be made. The original design of Katja was not prepared for the introduction of several backends, the introduction of position sorts, and constant feature enhancements and bug fixes. With this report, Katja reaches release status for the first time.

Web-based authentication is a popular mechanism implemented by Wireless Internet Service Providers (WISPs) because it allows a simple registration and authentication of customers, while avoiding the high resource requirements of the new IEEE 802.11i security standard and the backward compatibility issues of legacy devices. In this work we demonstrate two different and novel attacks against web-based authentication. One attack exploits operational anomalies of low- and middle-priced devices in order to hijack wireless clients, while the other exploits an already known vulnerability within wired-networks, which in dynamic wireless environments turns out to be even harder to detect and protect against.

Software development organizations measure their real-world processes, products, and resources to achieve the goal of improving their practices. Accurate and useful measurement relies on explicit models of the real-world processes, products, and resources. These explicit models assist with planning measurement, interpreting data, and assisting developers with their work. However, little work has been done on the joint use of measurement and process technologies. We hypothesize that it is possible to integrate measurement and process technologies in a way that supports automation of measurement-based feedback. Automated support for measurement-based feedback means that software developers and maintainers are provided with on-line, detailed information about their work. This type of automated support is expected to help software professionals gain intellectual control over their software projects. The dissertation offers three major contributions. First, an integrated measurement and
process modeling framework was constructed. This framework establishes the necessary foundation for integrating measurement and process technologies in a way that will permit automation. Second, a process-centered software engineering environment was developed to support measurement-based feedback. This system provides personnel with information about the tasks expected of them based on an integrated set of measurement and process views. Third, a set of assumptions and requirements about that system were examined in a controlled experiment. The experiment compared the use of different levels of automation to evaluate the acceptance and effectiveness of measurement-based feedback.

In this work we propose a set of term-rewriting techniques for modelling object-oriented computation. Based on symbolic variants of explicit substitution calculi, we show how to deal with imperative statements like assignment and sequence in specifications in a purely declarative style. Under our model, computation with classes and objects becomes simply normal form calculation, exactly as is the case in term-rewriting based languages (for instance, functional languages). We believe this kind of unification between functions and objects is important because it provides plausible alternatives for using term-rewriting theory as an engine for supporting formal and mechanical reasoning about object-oriented specifications.

Visualization of large data sets, especially on small machines, requires advanced techniques in image processing and image generation. Our hybrid raytracer is capable of rendering volumetric and geometric data simultaneously, without loss of accuracy due to data conversion. Compound data sets, consisting of several types of data, are called "hybrid data sets". There is only one rendering pipeline to obtain loss-less and efficient visualization of hybrid data. Algorithms apply to both types of data: optical material properties are stored in the same database for both volumetric and geometric objects, and anti-aliasing methods apply to both data types. Stereoscopic display routines have been added to obtain true three-dimensional visualization on various media, and animation features allow generation of recordable 3-D sequences.

The composition of Bézier curves and tensor product Bézier surfaces, polynomial as well as rational, is applied to represent trim curves of tensor product Bézier surfaces exactly and explicitly. Trimming curves are assumed to be defined as Bézier curves in the surface parameter domain. A Bézier spline approximation of lower polynomial degree is built up as well, which is based on the exact trim curve representation in coordinate space.

Free Form Volumes
(1994)

Trimming of surfaces and volumes, curve and surface modeling via Bézier's idea of distortion, segmentation, reparametrization, and geometric continuity are examples of applications of functional composition. This paper shows how to compose polynomial and rational tensor product Bézier representations. The problem of composing Bézier splines and B-spline representations will also be addressed in this paper.

We propose several algorithms for efficient testing of logical implication in the case of ground objects. Because the problem of testing a set of propositional formulas for (un)satisfiability is \(NP\)-complete, there is strong evidence that there exist examples for which every algorithm solving the (un)satisfiability problem has a runtime that is exponential in the length of the input. The same holds for our algorithms. We will therefore point out classes of logic programs for which our algorithms have a lower runtime. At the end of this paper we will give an outline of an algorithm for theory refinement which is based on the algorithms described above.
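
The reduction underlying such tests is that a theory entails a ground clause exactly when the theory together with the clause's negation is unsatisfiable. A brute-force propositional sketch (deliberately exponential, matching the NP-completeness remark; the clause representation is my own, not the paper's):

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force SAT for clauses given as lists of integer literals
    (positive = variable, negative = negated variable). Exponential in the
    number of variables, as the NP-completeness of the problem suggests any
    general algorithm must be in the worst case."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        model = dict(zip(vars_, bits))
        if all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def implies(theory, clause):
    """theory |= clause  iff  theory plus the negation of clause is unsat."""
    negated = [[-l] for l in clause]   # not(l1 or ... or lk) = not l1 and ...
    return not satisfiable(theory + negated)
```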

We propose a framework for the synthesis of temporal logic programs which are formulated in a simple temporal logic programming language from both positive and negative examples. First we will prove that results from the theory of first order inductive logic programming carry over to the domain of temporal logic. After this we will show how programs formulated in the presented language can be generalized or specialized in order to satisfy the specification induced by the sets of examples.

Image synthesis often requires the Monte Carlo estimation of integrals. Based on a generalized concept of stratification we present an efficient sampling scheme that consistently outperforms previous techniques. This is achieved by assembling sampling patterns that are stratified in the sense of jittered sampling and N-rooks sampling at the same time. The faster convergence and improved anti-aliasing are demonstrated by numerical experiments.
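
A pattern that is stratified in both the jittered and the N-rooks sense can be built with the canonical multi-jittered arrangement; this is a standard construction in the spirit of the abstract, not necessarily the paper's exact assembly scheme:

```python
import random

def multi_jittered(m, n, rng=random):
    """m*n samples in [0,1)^2 that are simultaneously stratified on the
    m-by-n jittered grid and on the (m*n)-cell N-rooks projections
    (canonical multi-jittered arrangement; production code would also
    shuffle the sub-cell indices to decorrelate the pattern)."""
    samples = []
    for i in range(m):
        for j in range(n):
            # cell (i, j), with the sub-cell offset chosen so each of the
            # m*n 1D strata in x and in y receives exactly one sample
            x = (i + (j + rng.random()) / n) / m
            y = (j + (i + rng.random()) / m) / n
            samples.append((x, y))
    return samples
```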

As opposed to Monte Carlo integration, the quasi-Monte Carlo method does not allow for a (consistent) error estimate from the samples used for the integral approximation. In addition, the deterministic error bound of quasi-Monte Carlo integration is not accessible in the setting of computer graphics, since the integrands usually are of unbounded variation. The structure of the high-dimensional functionals to be computed for photorealistic image synthesis implies the application of the randomized quasi-Monte Carlo method. Thus we can exploit low discrepancy sampling and at the same time estimate the variance. The resulting technique is much more efficient than previous bidirectional path tracing algorithms.
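
The randomized quasi-Monte Carlo idea, low-discrepancy points plus random shifts so that independent replications yield a variance estimate, can be sketched in one dimension (a Cranley-Patterson rotation of the base-2 van der Corput set; the paper applies randomization to high-dimensional path integrals):

```python
import math
import random

def radical_inverse_base2(i):
    """Van der Corput radical inverse in base 2 (1D low-discrepancy sequence)."""
    x, f = 0.0, 0.5
    while i:
        if i & 1:
            x += f
        i >>= 1
        f *= 0.5
    return x

def rqmc_estimate(f, n, shifts, rng=random):
    """Average f over a randomly shifted van der Corput point set, repeated
    for several independent shifts; the spread of the per-shift estimates
    yields a standard-error estimate that plain QMC cannot provide."""
    estimates = []
    for _ in range(shifts):
        u = rng.random()
        estimates.append(sum(f((radical_inverse_base2(i) + u) % 1.0)
                             for i in range(n)) / n)
    mean = sum(estimates) / shifts
    var = sum((e - mean) ** 2 for e in estimates) / (shifts * (shifts - 1))
    return mean, math.sqrt(var)   # integral estimate and its standard error
```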

We present an algorithm for determining quadrature rules for computing the direct illumination of predominantly diffuse objects by high dynamic range images. The new method precisely reproduces fine shadow detail, is much more efficient than Monte Carlo integration, and does not require any manual intervention.

Approximating illumination by point light sources, as done in many professional applications, suffers from the problem of the weak singularity: numerical exceptions caused by the division by the squared distance between the point light source and the point to be illuminated must be avoided. Multiple importance sampling overcomes these problems by combining multiple sampling techniques by weights. Such a set of weights is called a heuristic. So far the estimators resulting from a heuristic have only been analyzed for variance. Since the cost of sampling is not at all constant across different sampling techniques, it is possible to find more efficient heuristics, even though they may have higher variance. Based on our new stratification heuristic, we present a robust and unbiased global illumination algorithm. By numerical examples, we show that it is more efficient than previous heuristics. The algorithm is as simple as a path tracer, but elegantly avoids the problem of the weak singularity.
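The classical baseline such heuristics are compared against is the balance heuristic, which weights each technique by its share of the total density. A minimal sketch of multiple importance sampling with two hypothetical techniques (this is the standard balance heuristic, not the paper's stratification heuristic):

```python
import random

def mis_estimate(f, pdf_a, sample_a, pdf_b, sample_b, n, rng=None):
    """Unbiased estimate of the integral of f, combining two sampling
    techniques with the balance heuristic: each sample is weighted by
    w_t = p_t / (p_a + p_b), so the weights sum to one at every point."""
    rng = rng or random.Random(2)
    acc = 0.0
    for _ in range(n):
        xa = sample_a(rng)                      # draw from technique a
        acc += f(xa) / (pdf_a(xa) + pdf_b(xa))  # f * w_a / p_a simplifies
        xb = sample_b(rng)                      # draw from technique b
        acc += f(xb) / (pdf_a(xb) + pdf_b(xb))  # f * w_b / p_b simplifies
    return acc / n
```

Note that with the balance heuristic the weighted contribution f(x) w_t(x) / p_t(x) simplifies to f(x) / (p_a(x) + p_b(x)) for both techniques.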

The photon map provides a powerful tool for approximating the irradiance in global illumination computations independent from geometry. By presenting new importance sampling techniques, we dramatically improve the memory footprint of the photon map, simplify the caustic generation, and allow for a much faster sampling of direct illumination in complicated models as they arise in a production environment.

Interleaved Sampling
(2001)

The sampling of functions is one of the most fundamental tasks in computer graphics, and occurs in a variety of different forms. The known sampling methods can roughly be grouped in two categories. Sampling on regular grids is simple and efficient, and the algorithms are often easy to build into graphics hardware. On the down side, regular sampling is prone to aliasing artifacts that are expensive to overcome. Monte Carlo methods, on the other hand, mask the aliasing artifacts by noise. However, due to the lack of coherence, these methods are more expensive and not well suited for hardware implementations. In this paper, we introduce a novel sampling scheme where samples from several regular grids are combined into a single irregular sampling pattern. The relative positions of the regular grids are themselves determined by Monte Carlo methods. This generalization obtained by interleaving yields significantly improved quality compared to traditional approaches while at the same time preserving much of the advantageous coherency of regular sampling. We demonstrate the quality of the new sampling scheme with a number of applications ranging from supersampling over motion blur simulation to volume rendering. Due to the coherence in the interleaved samples, the method is optimally suited for implementations in graphics hardware.
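The interleaving of randomly shifted regular grids can be sketched as follows; the tile size and the one-sample-per-pixel setting are simplifying assumptions for illustration:

```python
import random

def interleaved_samples(width, height, nx=2, ny=2, rng=random.Random(3)):
    """One sample position per pixel: pixels are grouped into nx * ny
    interleaving classes, and every pixel of a class reuses the same
    random sub-pixel offset. The result is the union of nx * ny shifted
    regular grids -- irregular globally, regular (coherent) per grid."""
    # one Monte Carlo offset per regular grid
    offsets = [(rng.random(), rng.random()) for _ in range(nx * ny)]
    samples = {}
    for y in range(height):
        for x in range(width):
            ox, oy = offsets[(x % nx) + nx * (y % ny)]
            samples[(x, y)] = (x + ox, y + oy)
    return samples
```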

The main problem in computer graphics is to solve the global illumination problem,
which is given by a Fredholm integral equation of the second kind, called the radiance equation (REQ). In order to achieve realistic images, a very complex kernel
of the integral equation, modelling all physical effects of light, must be considered. Due to this complexity Monte Carlo methods seem to be an appropriate approach to solve the REQ approximately. We show that replacing Monte Carlo by quasi-Monte Carlo in some steps of the algorithm results in a faster convergence.
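In the standard notation of the rendering literature, the radiance equation has the Fredholm form (writing \(L\) for radiance, \(L_e\) for emitted radiance, \(f_r\) for the bidirectional reflectance distribution function and \(L_i\) for incident radiance; the exact kernel used in the report may differ):

```latex
L(x,\omega) = L_e(x,\omega)
  + \int_{\Omega} f_r(\omega_i, x, \omega)\, L_i(x,\omega_i)\,
    \cos\theta_i \,\mathrm{d}\omega_i ,
```

or in operator form \(L = L_e + T_{f_r} L\), whose Neumann series \(L = \sum_{k \ge 0} T_{f_r}^k L_e\) is what the (quasi-)Monte Carlo random walks estimate.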

A fundamental variance reduction technique for Monte Carlo integration in the framework of integro-approximation problems is
presented. Using the method of dependent tests a successive hierarchical function approximation algorithm is developed, which
captures discontinuities and exploits smoothness in the target function. The general mathematical scheme and its highly efficient
implementation are illustrated for image generation by ray tracing,
yielding new and much faster image synthesis algorithms.

Instant Radiosity
(1997)

We present a fundamental procedure for instant rendering from the radiance equation. Operating directly on the textured scene description, the very efficient and simple algorithm produces photorealistic images without any kernel or solution discretization of the underlying integral equation. Rendering rates of a few seconds are obtained by exploiting graphics hardware, the deterministic
technique of the quasi-random walk for the solution of the global illumination problem, and the new method of jittered low discrepancy sampling.

Monte Carlo & Beyond
(2002)

Quasi-Monte Carlo Radiosity
(1996)

The problem of global illumination in computer graphics is described by a second kind Fredholm integral equation. Due to the complexity of this equation, Monte Carlo methods provide an interesting tool for approximating
solutions to this transport equation. For the case of the radiosity equation, we present the deterministic method of quasi-random walks. This method very efficiently uses low discrepancy sequences for integrating the Neumann series and consistently outperforms stochastic techniques. The method of quasi-random walks is also applicable to transport problems in settings other than computer graphics.
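The low discrepancy sequences used in such quasi-random walks are typically built from the radical inverse (van der Corput) construction; a minimal sketch:

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the digits of i in the
    given base about the radix point, yielding a point in [0,1). The
    resulting sequence fills the unit interval maximally evenly, e.g.
    base 2 gives 0, 1/2, 1/4, 3/4, 1/8, 5/8, ..."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r
```

Pairing radical inverses in distinct prime bases (the Halton sequence) extends this to the multiple dimensions a random walk consumes.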