
We survey old and new results about optimal algorithms for summation of finite sequences and for integration of functions from Hölder or Sobolev spaces. First we discuss optimal deterministic and randomized algorithms. Then we add a new aspect which has not been covered before at conferences on (quasi-) Monte Carlo methods: quantum computation. We give a short introduction to this setting and present recent results of the authors on optimal quantum algorithms for summation and integration. We discuss comparisons between the three settings. The most interesting case for Monte Carlo and quantum integration is that of moderate smoothness \(k\) and large dimension \(d\), which in fact occurs in a number of important applied problems. In that case the deterministic exponent is negligible, so the \(n^{-1/2}\) Monte Carlo speedup and the \(n^{-1}\) quantum speedup essentially constitute the entire convergence rate.
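As an illustrative sketch (not taken from the paper), plain Monte Carlo integration over \([0,1]^d\) shows the baseline that these comparisons start from: the error decreases like \(n^{-1/2}\) regardless of the dimension.

```python
import random

def mc_integrate(f, d, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]^d;
    the standard error is sigma / sqrt(n), independent of d."""
    return sum(f([rng.random() for _ in range(d)]) for _ in range(n)) / n

rng = random.Random(0)
# Example integrand: f(x) = sum_i x_i**2 has exact integral d/3; here d = 5.
est = mc_integrate(lambda x: sum(xi * xi for xi in x), d=5, n=20000, rng=rng)
```

Since the error shrinks like \(n^{-1/2}\), quadrupling the sample count only halves the error; the quantum algorithms discussed in the survey improve this rate to roughly \(n^{-1}\).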

The Analytic Blossom
(2001)

Blossoming is a powerful tool for studying and computing with Bézier and B-spline curves and surfaces - that is, for the investigation and analysis of polynomials and piecewise polynomials in geometric modeling. In this paper, we define a notion of the blossom for Poisson curves. Poisson curves are to analytic functions what Bézier curves are to polynomials - a representation adapted to geometric design. As in the polynomial setting, the blossom provides a simple, powerful, elegant and computationally meaningful way to analyze Poisson curves. Here, we
define the analytic blossom and interpret all the known algorithms for Poisson curves - subdivision, trimming, evaluation of the function and its derivatives, and conversion between the Taylor and the Poisson basis - in terms of this analytic blossom.
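In the polynomial setting, the blossom can be computed by running de Casteljau's algorithm with a different argument at each level. A minimal sketch of that classical construction (scalar Bézier coefficients; the function name is illustrative, not from the paper):

```python
def blossom(coeffs, args):
    """Blossom of the Bezier polynomial with scalar coefficients `coeffs`:
    run de Casteljau, using a different argument at each level."""
    pts = list(coeffs)
    for u in args:  # level k uses argument u_k
        pts = [(1 - u) * a + u * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# p(t) = t**2 has degree-2 Bezier coefficients [0, 0, 1] on [0, 1];
# its blossom is the symmetric multiaffine function B(u1, u2) = u1 * u2.
value = blossom([0.0, 0.0, 1.0], (0.5, 0.4))  # 0.5 * 0.4 = 0.2
```

Evaluating at equal arguments recovers the polynomial itself, and permuting the arguments leaves the result unchanged, reflecting the symmetry of the blossom.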

Mobile Agenten im Internet
(2001)

Mobile agents have increasingly proven their worth in recent years in the architecture and programming of distributed systems. They are programs that carry an internal state with them while visiting different systems, possibly based on different platforms. On each system they make use of services, either by calling local libraries or by accessing services provided by the system. In doing so, mobile agents must carry with them all the data the program needs as well as the entire code. Although the data are an important (if not the decisive) part of an agent, they are usually not regarded as a valuable, independent component. This is not always advisable: agents could leave a "container" behind at their current location to make it available to other agents (only after access control, of course), or transfer the data to a migration target only once local calls to the system have shown that they are needed there. This thesis is divided into two parts, corresponding to the two "levels" of mobile agents. The first part discusses the aspects necessary for migration and use of resources. The focus is on the required support from the environment; rather than designing a new integrated environment, the necessary building blocks are identified. These can then be provided as part of an environment or as stand-alone components. The second part deals with the problems arising from the interaction of different agents. Key topics here are cost control (who pays, and in what way, for services used), workflow support, and security in an open, distributed system in which there can be no central verification of rights and identities. The thesis closes with an assessment of the problems and peculiarities found on both levels, raising the question of whether agents in their current form are useful at all.

In this work we propose a set of term-rewriting techniques for modelling object-oriented computation. Based on symbolic variants of explicit substitution calculi, we show how to deal with imperative statements like assignment and sequencing in specifications in a pure declarative style. Under our model, computation with classes and objects becomes simply normal-form calculation, exactly as is the case in term-rewriting-based languages (for instance, functional languages). We believe this kind of unification between functions and objects is important because it provides plausible alternatives for using term-rewriting theory as an engine for supporting formal and mechanical reasoning about object-oriented specifications.
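A toy illustration of the underlying idea (not the authors' calculus): computation as rewriting to normal form. Here Peano addition is normalized by innermost rewriting, exactly the "normal form calculation" style the abstract refers to:

```python
def match(pat, term, binding=None):
    """Match pattern against term; variables are strings starting with '?'.
    Returns a binding dict, or None on failure."""
    binding = dict(binding or {})
    if isinstance(pat, str) and pat.startswith('?'):
        if pat in binding and binding[pat] != term:
            return None
        binding[pat] = term
        return binding
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            binding = match(p, t, binding)
            if binding is None:
                return None
        return binding
    return binding if pat == term else None

def subst(term, binding):
    """Replace variables in term by their bound values."""
    if isinstance(term, str) and term.startswith('?'):
        return binding[term]
    if isinstance(term, tuple):
        return tuple(subst(t, binding) for t in term)
    return term

def normalize(term, rules):
    """Innermost rewriting: normalize subterms first, then apply the first
    matching rule and repeat until no rule applies (the normal form)."""
    if isinstance(term, tuple):
        term = tuple(normalize(t, rules) for t in term)
    for lhs, rhs in rules:
        b = match(lhs, term)
        if b is not None:
            return normalize(subst(rhs, b), rules)
    return term

# Peano addition: add(s(x), y) -> s(add(x, y)),  add(z, y) -> y
rules = [(('add', ('s', '?x'), '?y'), ('s', ('add', '?x', '?y'))),
         (('add', 'z', '?y'), '?y')]
two, one = ('s', ('s', 'z')), ('s', 'z')
three = normalize(('add', two, one), rules)
```

The point of the sketch is that "evaluation" is nothing but repeated rule application until a normal form is reached, which is what the abstract proposes for classes and objects as well.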

As opposed to Monte Carlo integration, the quasi-Monte Carlo method does not allow for a (consistent) error estimate from the samples used for the integral approximation. In addition, the deterministic error bound of quasi-Monte Carlo integration is not accessible in the setting of computer graphics, since the integrands are usually of unbounded variation. The structure of the high-dimensional functionals to be computed for photorealistic image synthesis implies the application of the randomized quasi-Monte Carlo method. Thus we can exploit low-discrepancy sampling and at the same time estimate the variance. The resulting technique is much more efficient than previous bidirectional path tracing algorithms.
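A minimal one-dimensional sketch of this idea (illustrative, not the paper's estimator): Cranley–Patterson random shifts of a low-discrepancy point set keep the low-discrepancy structure while the independent replications yield an unbiased estimate together with a variance estimate.

```python
import random

def vdc(i, base=2):
    """Radical inverse of i: the van der Corput low-discrepancy sequence."""
    f, r = 1.0, 0.0
    while i:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def rqmc_estimate(f, n, m, rng):
    """m independent Cranley-Patterson shifts of an n-point van der Corput
    set; returns the combined estimate and the sample variance of the mean."""
    reps = []
    for _ in range(m):
        shift = rng.random()           # one random shift per replication
        reps.append(sum(f((vdc(i) + shift) % 1.0) for i in range(n)) / n)
    mean = sum(reps) / m
    var = sum((e - mean) ** 2 for e in reps) / (m * (m - 1))
    return mean, var

rng = random.Random(0)
mean, var = rqmc_estimate(lambda x: x * x, n=1024, m=8, rng=rng)  # exact: 1/3
```

Each shifted point set is an unbiased estimator on its own, so the spread of the `m` replications gives exactly the consistent error estimate that plain quasi-Monte Carlo lacks.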

The simulation of random fields has many applications in computer graphics, such as ocean wave or turbulent wind field modeling. We present a new and strikingly simple synthesis algorithm for random fields on rank-1 lattices that requires only one Fourier transform, independent of the dimension of the support of the random field. The underlying mathematical principle of discrete Fourier transforms on rank-1 lattices breaks the curse of dimension of the standard tensor-product Fourier transform, i.e. the number of function values does not depend exponentially on the dimension but can be chosen linearly.
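The key identity behind the single Fourier transform can be checked in a few lines (an illustrative 2-D sketch with assumed lattice parameters): along the rank-1 lattice \(x_i = (i\,z \bmod n)/n\), every \(d\)-dimensional Fourier mode \(e^{2\pi\mathrm{i}\,k\cdot x}\) collapses to a 1-D mode in the running index \(i\), which is why one 1-D transform of length \(n\) suffices.

```python
import cmath

n, z = 8, [1, 3]   # lattice size and generator vector (2-D example)
# Rank-1 lattice points x_i = (i * z mod n) / n
pts = [[(i * zj % n) / n for zj in z] for i in range(n)]

# A d-dimensional Fourier mode restricted to the lattice...
k = [2, 5]         # arbitrary integer frequency vector
lhs = [cmath.exp(2j * cmath.pi * sum(kj * xj for kj, xj in zip(k, p)))
       for p in pts]
# ...equals a 1-D mode in the index i, with frequency (k . z) mod n:
kz = sum(kj * zj for kj, zj in zip(k, z)) % n
rhs = [cmath.exp(2j * cmath.pi * i * kz / n) for i in range(n)]
```

Because every multidimensional frequency maps to a single residue `kz`, the number of function values grows with `n` alone rather than with `n**d`.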

Interleaved Sampling
(2001)

The sampling of functions is one of the most fundamental tasks in computer graphics, and occurs in a variety of different forms. The known sampling methods can roughly be grouped in two categories. Sampling on regular grids is simple and efficient, and the algorithms are often easy to build into graphics hardware. On the downside, regular sampling is prone to aliasing artifacts that are expensive to overcome. Monte Carlo methods, on the other hand, mask the aliasing artifacts by noise. However, due to the lack of coherence, these methods are more expensive and not well suited for hardware implementations. In this paper, we introduce a novel sampling scheme where samples from several regular grids are combined into a single irregular sampling pattern. The relative positions of the regular grids are themselves determined by Monte Carlo methods. This generalization obtained by interleaving yields significantly improved quality compared to traditional approaches while at the same time preserving much of the advantageous coherency of regular sampling. We demonstrate the quality of the new sampling scheme with a number of applications ranging from supersampling over motion blur simulation to volume rendering. Due to the coherence in the interleaved samples, the method is optimally suited for implementations in graphics hardware.
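A sketch of the basic construction described above (function and parameter names are assumptions, not from the paper): several copies of a regular grid, each offset by a Monte Carlo chosen translation, are merged into one irregular pattern.

```python
import random

def interleaved_samples(n_grids, res, rng):
    """Merge `n_grids` copies of a res-by-res regular grid on [0, 1)^2,
    each shifted by a randomly chosen offset, into one irregular pattern."""
    samples = []
    for _ in range(n_grids):
        # random offset inside one grid cell keeps all samples in [0, 1)^2
        ox, oy = rng.random() / res, rng.random() / res
        samples += [(i / res + ox, j / res + oy)
                    for i in range(res) for j in range(res)]
    return samples

rng = random.Random(1)
pattern = interleaved_samples(n_grids=3, res=4, rng=rng)  # 3 * 16 = 48 samples
```

Within each grid the samples stay perfectly coherent (suitable for hardware rasterization), while the randomized offsets between grids break up the regular structure responsible for aliasing.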

We study summation of sequences and integration in the quantum model of computation. We develop quantum algorithms for computing the mean of sequences which satisfy a \(p\)-summability condition and for integration of functions from Lebesgue spaces \(L_p([0,1]^d)\) and analyze their convergence rates. We also prove lower bounds which show that the proposed algorithms are, in many cases, optimal within the setting of quantum computing. This extends recent results of Brassard, Høyer, Mosca, and Tapp (2000) on computing the mean for bounded sequences and complements results of Novak (2001) on integration of functions from Hölder classes.

We introduce two novel techniques for speeding up the generation of digital \((t,s)\)-sequences. Based on these results a new algorithm for the construction of Owen's randomly permuted \((t,s)\)-sequences is developed and analyzed. An implementation of the new techniques is available at http://www.cs.caltech.edu/~ilja/libseq/index.html
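One classic speed-up of this kind (shown here for the 1-D base-2 case as an illustration, not the paper's full algorithm) generates successive radical-inverse points in Gray-code order, so each new point costs a single shift and XOR instead of a full digit expansion:

```python
def gray_vdc(n, bits=32):
    """First n points of the base-2 van der Corput sequence in Gray-code
    order: going from point i to point i+1 flips exactly one binary digit
    of the radical inverse, so each point costs O(1) work."""
    pts, x = [], 0
    for i in range(n):
        pts.append(x / 2 ** bits)
        c = ((i + 1) & -(i + 1)).bit_length() - 1  # lowest set bit of i+1
        x ^= 1 << (bits - 1 - c)                   # flip the mirrored digit
    return pts
```

The Gray-code reordering is a permutation within each block of \(2^k\) indices, so the first \(2^k\) points still form exactly the set \(\{j/2^k\}\) and the low-discrepancy properties are preserved.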

The goal of this thesis was to improve the surface quality of highly reinforced polymer composites in order to make these materials applicable to the painted exterior of passenger cars.

For the evaluation of the application sector of automotive exterior components, a catalogue of requirements was drawn up from technical specifications, internal standards, and legal requirements. Components in the horizontal decorative section of the outer skin, like front hood, boot lid, and roof, have to fulfil the highest optical and structural requirements. A survey of the automobile market concerning applications of fibre-reinforced plastics in the exterior of cars showed the state of the art and certain tendencies. So far, only non-reinforced, short-fibre- or random-fibre-reinforced plastics have been able to fulfil the high surface requirements. Up to now, high material prices, the lack of mass production concepts, and insufficient surface quality have prohibited serial applications of CFRP in the outer skin of passenger cars. Therefore, different manufacturing technologies for exterior components in composites were examined and compared in an overview of processes. The process of resin transfer moulding (RTM) was identified to have great potential for serial production because of its achievable surface quality together with the high specific mechanical properties of the composites.

The goal of the current research was to find optimized combinations of materials, processes, and coatings in order to realize a Class-A surface quality for CFRP parts in the RTM process. The main problem with the surface quality is the print-through of the reinforcement, caused by the inhomogeneous distribution of the reinforcing fibres and the chemical and thermal shrinkage of the matrix material during processing. In order to perform a systematic investigation of the composite materials, the process parameters, and surface treatments, an experimental RTM tool with a plate cavity was designed and produced to the surface quality standard of a serial tool.

Within the material optimization, the comparison of five epoxy resins showed that the system 82 was the most promising for further investigations with regard to surface quality and cycle time. Within the comparison of the fibre reinforcements, the woven fabrics displayed a minor surface quality compared to the non-woven and non-crimp fabrics. It was found that multiaxial stitched fabrics with optimized placement technique, texturized multifilament stitching yarns, and trikot-franse stitching pattern currently provide the best combination of surface quality and processability. Even better surface results were achieved with non-crimp fabrics that are fixed by an adhesive to a polyester mesh. However, the difficult processing and infiltration with matrix material still present a hurdle to a possible serial application. As a result of the investigation, one type of randomly oriented cut glass mat with minimal fibre diameter and even fibre distribution was preferred as a core material over the commonly used continuous strand mats. Within the great variety of different surface veils, a few types could be identified that offer an effective reduction of long-term waviness (from LW>20 to LW<20) and short-term waviness (from SW>35 to SW<15). These selected surface veil types are mechanically or binder fixed and made of glass or PAN fibres with an areal weight of 50 to 80 g/m².

Statistical methods for the design of experiments and the analysis of the results were used in the process optimization with the epoxy system 82. After the identification of the main predictors and responses, a D-optimal experimental plan was designed and performed. The method of multiple regression was used to create a process model which describes the observed system behaviour and deviation to a very high degree.

It was discovered that high pressures on the liquid matrix system right after injection contribute to a high surface quality by compensating a great part of the reaction shrinkage. In order to achieve high pressures in the cavity exceeding 100 bar, the processing and tooling equipment was modified beyond conventional RTM process capabilities. Optimal settings for vacuum and temperature difference depend on tool temperature and post-pressure levels. The simultaneous analysis of curing temperature and demoulding time showed that the best surface quality can be achieved if the part is demoulded from the tool as soon as the saturation Tg, depending on the current tool temperature, is reached. Longer curing times neither increase the Tg of the part nor do they improve surface quality. From these results, a first strategy for high surface quality can be derived, with a high tool temperature and a short demoulding time. The second strategy, with a low tool temperature and a long demoulding time, however, is easier and safer to perform in terms of process stability.

In order to compare highly reactive thermoset matrix materials and to measure the volume shrinkage throughout the whole reaction, a novel shrinkage measurement cell, or dilatometer, was designed. This created the new opportunity to determine the processing shrinkage in its chemically and thermally induced proportions, depending on matrix material, curing temperature, and time. Because of the good correlation of the laboratory results with the previous RTM experiments, a high experimental effort for hardware investigations to characterize new epoxy systems can be saved in the future. Matrix system 82 displayed the lowest shrinkage values in combination with a high reactivity. It could also be observed that a great proportion of the reaction shrinkage takes place very quickly after the start of the reaction. Therefore, the post pressure on the matrix system must be applied as early as possible in order to compensate this shrinkage. Curing at lower temperatures always leads to lower chemical and thermal shrinkage. In comparison to the literature, the newly developed method presented in this thesis provides plausible results with high accuracy, and for the first time also for highly reactive thermoset systems.

Surface coatings offer the opportunity to reduce or cover surface structures and defects in order to achieve a high quality of the painted part surface. The exploration of in-process coatings led to thermoplastic films and gel-coats as technologies with a high potential for the improvement of surface quality. In comparison, epoxy surfacing films and in-mould powder coatings require more effort to adapt the materials and application methods to the current RTM process. It was shown that post-process coating with a plastic paint system contributes to an improvement of the surface quality. In this study, different priming coat materials and thicknesses were identified that cover part of the surface texture with an acceptable structure of the coat itself. In addition, two surface finishing methods with manual sanding were found to raise the surface quality of the painted part up to the required standard where necessary.

The results for the different subsystems materials, RTM process, and surface coatings can be combined, with varying emphasis, into the overall system of the painted RTM part, complying with the requirements of the specific outer skin region. Short-term solutions for outer skin parts with vertical surfaces (such as A-, B-, and C-pillars, sills, or rear side wings) were found and proven with sample plates for the first time. In order to achieve the high quality required for horizontal exterior components (such as front hood, roof, and trunk lid) at the current state of development, a higher performance of the subsystems is necessary. But even for this Class-A surface quality, sample parts could be produced for the first time, with high effort, in the RTM process. At the beginning of this investigation, sample plates produced in RTM displayed surface waviness values of LW>35 and strong fibre marking over the whole surface. With the combination of the optimization results, sample plates with LW<5 could be produced. A visual evaluation could not detect any regular, oriented surface texture.

The presented work showed solutions in material-process-coating combinations and development potential to reach the required Class-A surface quality of automobile exterior parts with advanced composites. This provides the necessary foundation for further developments with the aim of a serial application.

Abstract
The main theme of this thesis is graph coloring applications and defining sets in graph theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs, like bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct defining sets are introduced, and a review of some known kinds of defining sets in graph theory is also incorporated. In Chapter 2, the basic definitions and some relevant notation used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with already existing research results, and an algorithm is introduced which enables the determination of a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.
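By definition, a defining set of a proper k-coloring is a partial coloring with exactly one proper completion. For small graphs this condition can be checked by brute force; the following is a naive sketch of that check (not one of the thesis's algorithms), with an illustrative graph:

```python
from itertools import product

def completions(adj, k, partial):
    """Count the proper k-colorings that extend the partial coloring.
    A defining set is a partial coloring with exactly one completion."""
    free = [v for v in sorted(adj) if v not in partial]
    count = 0
    for colors in product(range(k), repeat=len(free)):
        col = dict(partial)
        col.update(zip(free, colors))
        # proper coloring: adjacent vertices get different colors
        if all(col[u] != col[v] for u in adj for v in adj[u]):
            count += 1
    return count

# Path a - b - c with k = 2: fixing a's color forces b and then c,
# so the partial coloring {a: 0} is a defining set.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
```

Enumerating all \(k^{|free|}\) extensions is of course exponential, which is why the thesis restricts attention to special graph classes and dedicated algorithms.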

Congress Report
(2001)

Congress Report 2001.12
(2001)

Congress Report 2001.11
(2001)

Congress Report 2001.10
(2001)