### Refine

#### Year of publication

- 2001 (64)

#### Document Type

- Preprint (33)
- Report (18)
- Doctoral Thesis (5)
- Article (3)
- Diploma Thesis (3)
- Lecture (1)
- Periodical Part (1)

#### Language

- English (64)

#### Keywords

- AG-RESY (9)
- RODEO (7)
- vibration (3)
- genetic algorithms (2)
- heat equation (2)
- semiconductor superlattice (2)
- stationary radiative transfer equation (2)
- trajectory planning (2)
- AdS/CFT (1)
- Amplitude-Phase Method (1)
- Analogy (1)
- Assembly (1)
- Associative Memory Problem (1)
- Bingham viscoplastic model (1)
- CFD (1)
- Capacity (1)
- Coherent State (1)
- Control (1)
- Crash characteristics (1)
- Earth's external gravitational field (1)
- Ecological Economics (1)
- Feed-forward Networks (1)
- Force-Torque (1)
- Generalized LBE (1)
- Husimi (1)
- Hyperbolic Conservation (1)
- IVR (1)
- Incompressible Navier-Stokes equations (1)
- Industrial Applications (1)
- Industrial Ecology (1)
- Inverse Problems (1)
- family of curves (1)
- Lattice Boltzmann (1)
- Learnability (1)
- Learning from Nature (1)
- Least squares approximation (1)
- Levy process (1)
- Manipulation skills (1)
- Melt spinning (1)
- Meshfree method (1)
- Metaphor (1)
- Milne Equation (1)
- Multiplicative Schwarz Algorithm (1)
- Nature as Model (1)
- Navier Stokes equation (1)
- Paradigm (1)
- Parametric Excitation (1)
- Particle scheme (1)
- Perceptron (1)
- Phase Space (1)
- Philosophy (1)
- Philosophy of Nature (1)
- Projection method (1)
- projective surface (1)
- Propagator (1)
- Quantum mechanics (1)
- Ray-Knight Theorem (1)
- Recurrent Networks (1)
- Regularization Wavelets (1)
- Robotics (1)
- SIMERO (1)
- SKALP (1)
- Semiclassics (1)
- singularity (1)
- Splines (1)
- vanishing theorem (1)
- Wannier-Stark states (1)
- Wannier-Stark systems (1)
- absorption spectrum (1)
- asymptotic analysis (1)
- bidirectional search (1)
- boundary-value problems of potential theory (1)
- branching process (1)
- clo (1)
- conformal partial wave analysis (1)
- consecutive ones polytopes (1)
- consecutive ones property (1)
- cooling processes (1)
- crossphase modulation (1)
- deformable object (1)
- deformable objects (1)
- diffusive scaling (1)
- discretization (1)
- equilibrium state (1)
- equisingular families (1)
- facility location (1)
- fiber model (1)
- filling processes (1)
- finite biodiversity (1)
- flexible-link (1)
- flexible-link robot (1)
- free-surface phenomena (1)
- frequency bands (1)
- fundamental systems (1)
- gas dynamics (1)
- geographical information systems (1)
- graph search (1)
- growing sub-quadratically (1)
- human robot cooperation (1)
- hybrid method (1)
- impinging jets (1)
- incompressible Euler equation (1)
- incompressible limit (1)
- industrial robots (1)
- interband tunneling (1)
- interface boundary conditions (1)
- interval graphs (1)
- kinetic equations (1)
- lattice Boltzmann method (1)
- lifetime statistics (1)
- lifetimes (1)
- limit and jump relations (1)
- liquid film (1)
- low Mach number limit (1)
- manipulation (1)
- maximum entropy moment (1)
- metastable states (1)
- models (1)
- monotone consecutive arrangement (1)
- motion planning (1)
- multiple collision frequencies (1)
- multiscale analysis (1)
- normed residuum (1)
- numerical solution (1)
- on-line algorithms (1)
- optical code multiplex (1)
- optical lattices (1)
- path planning (1)
- point-to-point (1)
- polynomial weight functions (1)
- potential operators (1)
- projection method (1)
- projective surfaces (1)
- pyramid scheme (1)
- redundancy (1)
- redundant robots (1)
- regular surface (1)
- regularized models (1)
- residual based error formula (1)
- resonances (1)
- robot (1)
- safe human robot cooperation (1)
- satellite gravity gradiometry (1)
- satellite-to-satellite tracking (1)
- search algorithms (1)
- second order upwind discretization (1)
- sensor fusion (1)
- separation problem (1)
- shape (1)
- singular fluxes (1)
- singularities (1)
- slope limiter (1)
- software development (1)
- stability (1)
- supply chain management (1)
- thermoplastic composites (1)
- uniqueness (1)
- wavelength multiplex (1)
- wavelets (1)


In this thesis a new family of codes for use in optical high bit rate transmission systems with a direct-sequence code division multiple access component was developed and its performance examined. These codes were then used as orthogonal sequences for coding the different wavelength channels in a hybrid OCDMA/WDMA system, and the overall performance was compared to that of a pure WDMA system. The common codes known to date suffer from needing very long sequence lengths to accommodate an adequate number of users: sequence lengths of 1000 or more were necessary to reach acceptable bit error ratios with only about 10 simultaneous users. Such sequence lengths are unacceptable if signals with data rates above 100 MBit/s are to be transmitted, let alone with a larger number of simultaneous users. Starting from the well-known optical orthogonal codes (OOC) and under the assumption of synchronization among the participating transmitters (justified for high bit rate WDM transmission systems), a new code family called "modified optical orthogonal codes" (MOOC) was developed by minimizing the cross-correlation products of each pair of sequences. In this way, the number of simultaneous users could be increased by several orders of magnitude compared to previously known codes. The obtained code sequences were then introduced into numerical simulations of an 80 GBit/s DWDM transmission system with 8 channels, each carrying a 10 GBit/s payload. Usual DWDM systems require enormous effort to minimize the spectral spacing between the wavelength channels. These small spacings, combined with the high bit rates, place very strict demands on system components such as laser diodes, filters and multiplexers.
Continuous channel monitoring and temperature regulation of sensitive components are inevitable, but often cannot prevent degradation of the bit error ratio due to aging effects or external influences such as mechanical stress. The obtained results show that, very differently from the pure WDM system, orthogonally coding adjacent wavelength channels with the proposed MOOC makes the overall system performance largely independent of system parameters such as input powers, channel spacings and link lengths. Nonlinear effects like XPM that introduce interchannel crosstalk are effectively suppressed. Furthermore, one can entirely dispense with the bandpass filters, thus simplifying the receiver structure, which is especially interesting for broadcast networks. A DWDM system upgraded with the OCDMA subsystem shows very robust behavior against a variety of influences.
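The crosstalk behaviour described above hinges on keeping the cross-correlation between any two code sequences small. As a minimal illustration (not the actual MOOC construction described in the thesis), the periodic cross-correlation of two binary on-off code sequences can be evaluated as follows; the two codewords here are made up for the example:

```python
def periodic_crosscorrelation(a, b):
    """Periodic cross-correlation values of two equal-length 0/1 sequences."""
    n = len(a)
    return [sum(a[i] * b[(i + s) % n] for i in range(n)) for s in range(n)]

# Two made-up length-7, weight-3 codewords (not actual MOOC sequences).
c1 = [1, 1, 0, 1, 0, 0, 0]
c2 = [1, 0, 1, 0, 0, 1, 0]

# For a well-designed code pair, the maximum cross-correlation stays small
# compared to the in-phase autocorrelation (the code weight).
max_xc = max(periodic_crosscorrelation(c1, c2))
```

Minimizing this maximum over all pairs and shifts is, in essence, the design criterion the abstract refers to.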

Given a railway network together with information on the population and their use of the railway infrastructure, we consider the effects of introducing new train stops in the existing railway network. One effect concerns the accessibility of the railway infrastructure to the population, measured by how far people live from their nearest train stop. The second effect we study is the change in travel time for railway customers induced by new train stops. Based on these two models, we introduce two combinatorial optimization problems and give NP-hardness results for them. We suggest an algorithmic approach for the model based on travel time and give first experimental results.
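The accessibility effect can be made concrete with a small sketch: given hypothetical one-dimensional positions of settlements and stops along a line, the distance to the nearest stop and the population covered within a given radius are computed as below (illustrative only; the models in the abstract are more general):

```python
def nearest_stop_distances(population, stops):
    """Distance from each population point to its nearest stop (1-D positions)."""
    return [min(abs(p - s) for s in stops) for p in population]

def covered(population, stops, radius):
    """How many people live within `radius` of some stop."""
    return sum(1 for d in nearest_stop_distances(population, stops) if d <= radius)

# Hypothetical settlement positions (km along the line) and existing stops.
people = [1.0, 2.5, 4.0, 7.5, 9.0]
stops = [0.0, 8.0]
```

Adding a candidate stop, e.g. at km 3, raises `covered(people, stops + [3.0], 2.0)` above `covered(people, stops, 2.0)`, which is exactly the kind of improvement the stop location problem trades off against increased travel times.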

Urban design guidelines have been used in Jakarta to control the form of the built environment. This planning instrument has been implemented in several central-city redevelopment projects, particularly in superblock areas, and has gained popularity in new development and conservation areas as well. Despite this popularity, there is no formal literature on the Indonesian urban design guideline that systematically explains its contents, structure and formulation process. This dissertation attempts to explain the substance of the urban design guideline and the way its implementation is controlled. Various streams of urban design theory are presented and evaluated in terms of their suitability for attaining a high urbanistic quality in major Indonesian cities. The form and practical application of this planning instrument are elaborated in a comparative investigation of similar instruments in other countries, namely the USA, Britain and Germany. A case study of a superblock development in Jakarta demonstrates the application of the urban design theories and guideline. Currently, the computer's role in the process of formulating urban design guidelines in Indonesia is merely that of a replacement for manual methods, particularly in worksheet calculation and design presentation. Further computer support for urban planning and design tasks has been researched in developed countries, showing its potential for supporting decision-making, enabling public participation, team collaboration, and the documentation and publication of urban design decisions. It is hoped that computer usage in the Indonesian urban design process can catch up with the global trend towards multimedia, networking (Internet/Intranet) and interactive functions, which is illustrated with examples from developed countries.

We present a system concept allowing humans to work safely in the same environment as a robot manipulator. Several cameras survey the common workspace. A look-up-table-based fusion algorithm is used to back-project directly from the image spaces of the cameras to the manipulator's configuration space; both the camera calibration and the robot geometry are implicitly encoded in the look-up tables. For experiments, a conventional 6-axis industrial manipulator is used, and the workspace is surveyed by four grayscale cameras. Due to the limits of present robot controllers, the computationally expensive parts of the system are executed on a server PC that communicates with the robot controller via Ethernet.

Point-to-Point Trajectory Planning of Flexible Redundant Robot Manipulators Using Genetic Algorithms
(2001)

The paper focuses on the problem of point-to-point trajectory planning for flexible redundant robot manipulators (FRMs) in joint space. Compared with irredundant flexible manipulators, an FRM offers additional possibilities during point-to-point trajectory planning due to its kinematic redundancy. A trajectory planning method to minimize vibration and/or execution time of a point-to-point motion, based on genetic algorithms (GAs), is presented for FRMs. Kinematic redundancy is integrated into the presented method as a planning variable. Quadrinomial and quintic polynomials are used to describe the segments that connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as an optimization problem with constraints. A planar FRM with three flexible links is used in simulation. Case studies show that the method is applicable.
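A common way to realize such smooth joint-space segments, in the spirit of the quintic polynomials mentioned above, is a fifth-order blend with zero velocity and acceleration at both ends. The sketch below is illustrative only (it is not the paper's exact parametrization, and it shows a single joint coordinate):

```python
def quintic_point_to_point(q0, qf, T):
    """Quintic joint profile from q0 to qf over duration T with zero
    velocity and zero acceleration at both endpoints."""
    def q(t):
        tau = t / T
        # Smooth 0 -> 1 blend: s(0)=0, s(1)=1, s'(0)=s'(1)=s''(0)=s''(1)=0.
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
        return q0 + (qf - q0) * s
    return q

q = quintic_point_to_point(0.0, 1.0, 2.0)
```

In a GA-based planner, the free intermediate points (and, for a redundant arm, the redundancy parameters) would be the genes, and a vibration or time objective would score each candidate profile.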

Matrices with the consecutive ones property and interval graphs are important notions in applied mathematics, and the first part gives a theoretical picture of them. We present the earliest work on interval graphs and matrices with the consecutive ones property, pointing out the close relation between the two. We pay particular attention to Tucker's structure theorem on matrices with the consecutive ones property, an essential step that requires deep consideration. Later on we concentrate on recent work characterizing matrices with the consecutive ones property, and matrices related to them, in terms of interval digraphs, the latest and most interesting outlook on our topic. Within this framework we introduce a classification of matrices with the consecutive ones property and matrices related to them. We describe applications of matrices with the consecutive ones property and interval graphs in different fields, giving a general view of the applications and their close relation to the phenomena we study, and occasionally mentioning algorithms used in certain fields. In the third part we give a polyhedral approach to matrices with the consecutive ones property. We present the weighted consecutive ones problem and its relation to Tucker's matrices. The constraints of the weighted consecutive ones problem are strengthened by introducing stronger inequalities, based on the latest theorems on polyhedral aspects of the consecutive ones property. Finally we implement the separation algorithm of Oswald and Reinelt for matrices with the consecutive ones property. We give a complete proof of those theorems we consider important within our framework, prove theorems partially when a closer look is worthwhile, and omit the proof when there is only an intersection with the phenomena we study.
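For very small matrices, the consecutive ones property (for columns) can be checked by brute force over all column orders. The sketch below is purely illustrative (real algorithms, such as the PQ-tree method of Booth and Lueker, decide the property in linear time); the second example matrix is a classic 3x3 Tucker obstruction:

```python
from itertools import permutations

def ones_consecutive(row):
    """True if the ones in `row` occupy consecutive positions."""
    ones = [i for i, v in enumerate(row) if v == 1]
    return not ones or ones[-1] - ones[0] == len(ones) - 1

def has_c1p(matrix):
    """Brute-force consecutive ones property test (for columns): does some
    column order make the ones in every row consecutive? Exponential time --
    for illustration only."""
    ncols = len(matrix[0])
    return any(
        all(ones_consecutive([row[j] for j in perm]) for row in matrix)
        for perm in permutations(range(ncols))
    )

yes = [[1, 1, 0],
       [0, 1, 1]]           # already consecutive in every row
no = [[1, 1, 0],
      [0, 1, 1],
      [1, 0, 1]]            # Tucker-type obstruction: no column order works
```

The `no` matrix fails because every column order puts two columns at the ends, and some row has its two ones exactly in those end columns.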

This paper deals with the handling of deformable linear objects (DLOs), such as hoses, wires or leaf springs. It investigates the a priori knowledge about the 6-dimensional force/torque signal for a changing contact situation between a DLO and a rigid polyhedral obstacle. The result is a complete list, containing for each contact change the most significant combination of force/torque signal components together with a description of the expected signal curve. This knowledge enables the reliable detection of changes in the DLO contact situation and with it the implementation of sensor-based manipulation skills for all possible contact changes.

This article presents contributions in the field of path planning for industrial robots with 6 degrees of freedom. This work presents the results of our research in the last 4 years at the Institute for Process Control and Robotics at the University of Karlsruhe. The path planning approach we present works in an implicit and discretized C-space. Collisions are detected in the Cartesian workspace by a hierarchical distance computation. The method is based on the A* search algorithm and needs no essential off-line computation. A new optimal discretization method leads to smaller search spaces, thus speeding up the planning. For a further acceleration, the search was parallelized. With a static load distribution good speedups can be achieved. By extending the algorithm to a bidirectional search, the planner is able to automatically select the easier search direction. The new dynamic switching of start and goal leads finally to the multi-goal path planning, which is able to compute a collision-free path between a set of goal poses (e.g., spot welding points) while minimizing the total path length.
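The A*-based search described above can be illustrated on a toy discretized C-space: a 4-connected grid with obstacle cells. This is a drastic simplification of the 6-DOF planner (no hierarchical distance computation, no parallelization), meant only to show the search skeleton:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid (grid[r][c] == 1 is an obstacle cell).
    Returns the number of steps of a shortest collision-free path, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible for 4-connected moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start)]
    best = {start: 0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            r, c = nxt
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0 \
                    and g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None

# A wall (row 1, columns 0-1) forces a detour around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
```

A bidirectional variant, as in the article, would run a second search from the goal and stop when the frontiers meet, automatically favouring the easier direction.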

The vibration induced in a deformable object upon automatic handling by robot manipulators can often be bothersome. This paper presents a force/torque sensor-based method for handling deformable linear objects (DLOs) in a manner suited to eliminating acute vibration. An adjustment-motion that can be attached to the end of an arbitrary end-effector trajectory is employed to eliminate vibration of deformable objects. Unlike model-based methods, the presented sensor-based method does not employ any information from previous motions. The adjustment-motion is generated automatically by analyzing data from a force/torque sensor mounted on the robot wrist. A template matching technique is used to find the matching point between the vibrational signal of the DLO and a template. Experiments are conducted to test the new method under various conditions; the results demonstrate the effectiveness of the sensor-based adjustment-motion.
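The template matching step can be sketched as a minimal sum-of-squared-differences search over all alignments of a template against the measured signal (illustrative only; the paper's signal processing is more involved, and the trace below is made up):

```python
def best_match(signal, template):
    """Alignment index minimizing the sum of squared differences."""
    m = len(template)
    costs = [
        sum((signal[i + j] - template[j]) ** 2 for j in range(m))
        for i in range(len(signal) - m + 1)
    ]
    return min(range(len(costs)), key=costs.__getitem__)

# Hypothetical wrist-sensor trace with an oscillation burst starting at index 3.
trace = [0, 0, 0, 1, -1, 1, -1, 0, 0]
template = [1, -1, 1, -1]
```

Once the matching point is found, the adjustment-motion can be timed relative to the detected phase of the vibration.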

The task of handling non-rigid one-dimensional objects by a robot manipulation system is investigated. In particular, approaches for calculating motions with specific behavior in point contacts between the object and the environment are considered. For single point contacts, motions based on generalized rotations solving the direct and inverse manipulation problem are investigated; the latter problem is additionally tackled by simple rotation and translation motions. For double and multiple point contacts, motions based on splines are suggested. In experiments with steel springs, the predicted and measured effects of each approach are compared.

Manipulating Deformable Linear Objects: Attachable Adjustment-Motions for Vibration Reduction
(2001)

This paper addresses the problem of handling deformable linear objects (DLOs) in a suitable way to avoid acute vibration. Different types of adjustment-motions that eliminate vibration of deformable objects and can be attached to the end of an arbitrary end-effector trajectory are presented. For describing the dynamics of deformable linear objects, the finite element method is used to derive the dynamic differential equations. Genetic algorithm is used to find the optimal adjustment motion for each simulation example. Experiments are conducted to verify the presented manipulating method.

Manipulating Deformable Linear Objects: Model-Based Adjustment-Motion for Vibration Reduction
(2001)

This paper addresses the problem of handling deformable linear objects (DLOs) in a suitable way to avoid acute vibration. An adjustment-motion that eliminates vibration of DLOs and can be attached to the end of any arbitrary end-effector's trajectory is presented, based on the concept of open-loop control. The presented adjustment-motion is a kind of agile end-effector motion with limited scope. To describe the dynamics of deformable linear objects, the finite element method is used to derive the dynamic differential equations. Genetic algorithm is used to find the optimal adjustment-motion for each simulation example. In contrast to previous approaches, the presented method can be treated as one of the manipulation skills and can be applied to different cases without major changes to the method.

The paper focuses on the problem of trajectory planning of flexible redundant robot manipulators (FRM) in joint space. Compared to irredundant flexible manipulators, FRMs present additional possibilities in trajectory planning due to their kinematics redundancy. A trajectory planning method to minimize vibration of FRMs is presented based on Genetic Algorithms (GAs). Kinematics redundancy is integrated into the presented method as a planning variable. Quadrinomial and quintic polynomials are used to describe the segments which connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as a problem of optimization with constraints. A planar FRM with three flexible links is used in simulation. A case study shows that the method is applicable.

Integral equations on the half line are commonly approximated by the finite-section approximation, in which the infinite upper limit is replaced by a positive number called the finite-section parameter. In this paper we consider the finite-section approximation for first-kind integral equations, which are typically ill-posed and call for regularization. For some classes of such equations, corresponding to inverse problems from optics and astronomy, we indicate finite-section parameters that allow standard regularization techniques to be applied. Two discretization schemes for the finite-section equations are also proposed and their efficiency is studied.
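As an illustration of the "standard regularization techniques" mentioned above: once the finite-section equation is discretized into a linear system Ax = b, Tikhonov regularization replaces it by the well-conditioned normal equations (AᵀA + αI)x = Aᵀb. The sketch below uses a small dense solver and is not the discretization scheme of the paper:

```python
def tikhonov_solve(A, b, alpha):
    """Solve the regularized normal equations (A^T A + alpha I) x = A^T b
    for a small dense system, via Gaussian elimination with pivoting."""
    n = len(A[0])
    # Build M = A^T A + alpha I and rhs = A^T b.
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) + (alpha if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Toy example: with A = I, regularization simply damps the solution.
x = tikhonov_solve([[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0], alpha=1.0)
```

Choosing the parameter α (and, per the paper, the finite-section parameter) is the delicate part; α → 0 recovers the unstable least-squares solution.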

By means of the limit and jump relations of classical potential theory the framework of a wavelet approach on a regular surface is established. The properties of a multiresolution analysis are verified, and a tree algorithm for fast computation is developed based on numerical integration. As applications of the wavelet approach some numerical examples are presented, including the zoom-in property as well as the detection of high frequency perturbations. At the end we discuss a fast multiscale representation of the solution of (exterior) Dirichlet's or Neumann's boundary-value problem corresponding to regular surfaces.

Logic
(2001)

Abstract: The behavior of the divergent part of the bulk AdS/CFT effective action is considered with respect to the special finite diffeomorphism transformations acting on the boundary as a Weyl transformation of the boundary metric. The resulting 1-cocycle of the Weyl group is in full agreement with the 1-cocycle of the Weyl group obtained from the cohomological consideration of the effective action of the corresponding CFT.

Abstract: Operator product expansions are applied to dilaton-axion four-point functions. In the expansions of the bilocal fields ΦΦ, CC and ΦC, the conformal fields which are symmetric traceless tensors of rank l and have dimensions δ = 2+l or 8+l+η(l), with η(l) = O(N^-2), are identified. The unidentified fields have dimension δ = λ+l+η(l) with λ >= 10. The anomalous dimensions η(l) are calculated at order O(N^-2) for both 2^(-1/2)(-ΦΦ + CC) and 2^(-1/2)(-ΦC + CΦ) and are found to be the same, proving U(1)_Y symmetry. The relevant coupling constants are given at order O(1).

Abstract: In the context of the AdS/CFT correspondence, the correlator of two Wilson loops is examined at both zero and finite temperature. On the basis of an entirely analytical approach, we have found for Nambu-Goto strings the functional relation dS_c^(reg)/dL = 2πk between the Euclidean action S_c and the loop separation L, with integration constant k, which corresponds to the analogous formula for point particles. The physical implications of this relation are explored, in particular for the Gross-Ooguri phase transition at finite temperature.

Abstract: The basic concepts of selective multiscale reconstruction of functions on the sphere from error-affected data are outlined for scalar functions. The selective reconstruction mechanism is based on the premise that a multiscale approximation can be well represented in terms of only a relatively small number of expansion coefficients at the various resolution levels. A new pyramid scheme is presented to efficiently remove the noise at different scales using a priori statistical information.
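The premise that only a few coefficients matter can be caricatured by simple magnitude thresholding of an expansion-coefficient vector (illustrative only; the paper's pyramid scheme works per resolution level and uses a priori statistical information rather than a fixed keep-fraction):

```python
def threshold_coefficients(coeffs, keep_fraction):
    """Keep only the largest-magnitude expansion coefficients; zero the rest."""
    k = max(1, int(len(coeffs) * keep_fraction))
    cutoff = sorted((abs(c) for c in coeffs), reverse=True)[k - 1]
    return [c if abs(c) >= cutoff else 0 for c in coeffs]

# Small coefficients are treated as noise and removed.
kept = threshold_coefficients([5, -3, 0.1, 0.2], keep_fraction=0.5)
```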

Abstract: Evacuation problems can be modeled as flow problems in dynamic networks. A dynamic network is defined by a directed graph G = (N,A) with sources, sinks and non-negative integral travel times and capacities for every arc (i,j) ∈ A. The earliest arrival flow problem is to send a maximum amount of dynamic flow reaching the sink not only for the given time horizon T, but also for any time T' < T. This problem mimics the evacuation of public buildings where the occupancy may not be known. For buildings where the number of occupants is known and concentrated in one source, the quickest flow model is used to find the minimum egress time. We propose in this paper a solution procedure for evacuation problems with a single source, where the occupancy number is either known or unknown. The possibility that the flow capacity may change due to increasing smoke density or fire obstructions can be mirrored in our model. The solution procedure iteratively looks for the shortest conditional augmenting path (SCAP) from source to sink and computes the time intervals in which flow reaches the sink via this path.
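The repeated search for augmenting paths underlying the SCAP procedure can be illustrated, in the static (non-dynamic) setting, by the classical Edmonds-Karp algorithm, which also augments along shortest paths found by BFS. This sketch ignores travel times and time horizons; the 4-node network is hypothetical:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along a shortest (fewest-arc) path,
    found by BFS in the residual network. `capacity` is a dict of dicts of
    residual arc capacities; it is mutated in place."""
    flow = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximum
        # Recover the path, find its bottleneck, update residual capacities.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(capacity[u][v] for u, v in path)
        for u, v in path:
            capacity[u][v] -= aug
            capacity[v][u] = capacity[v].get(u, 0) + aug
        flow += aug

# Hypothetical network: source 's', sink 't', arc capacities as shown.
network = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2, 'b': 1}, 'b': {'t': 3}, 't': {}}
total = max_flow(network, 's', 't')
```

In the dynamic setting of the paper, each augmentation would additionally carry a time interval during which flow can actually reach the sink along the chosen path.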

A harmonic oscillator subject to a parametric pulse is examined. The aim of the paper is to present a new theory for analysing transitions due to parametric pulses. The new theoretical notions which are introduced relate the pulse parameters in a direct way with the transition matrix elements. The harmonic oscillator transitions are expressed in terms of asymptotic properties of a companion oscillator, the Milne (amplitude) oscillator. A traditional phase-amplitude decomposition of the harmonic-oscillator solutions results in the so-called Milne's equation for the amplitude, and the phase is determined by an exact relation to the amplitude. This approach is extended in the present analysis with new relevant concepts and parameters for pulse dynamics of classical and quantal systems. The amplitude oscillator has a particularly nice numerical behavior. In the case of strong pulses it does not possess any of the fast oscillations induced by the pulse on the original harmonic oscillator. Furthermore, the new dynamical parameters introduced in this approach relate closely to relevant characteristics of the pulse. The relevance to quantum mechanical problems such as reflection and transmission from a localized well and mechanical problems of controlling vibrations is illustrated.
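Numerically, the amplitude oscillator's benign behavior can be observed by integrating Milne's equation w'' + ω²(t)·w = w⁻³ directly. The sketch below (standard RK4, not the paper's method) checks that for constant ω the amplitude sits at its equilibrium value ω^(-1/2) without any fast oscillation:

```python
def milne_rhs(t, w, v, omega):
    """Milne's amplitude equation w'' + omega(t)^2 w = w^-3 as a 1st-order system."""
    return v, 1.0 / w**3 - omega(t) ** 2 * w

def integrate(omega, w0, v0, t_end, dt=1e-3):
    """Classical RK4 integration of the amplitude oscillator."""
    t, w, v = 0.0, w0, v0
    while t < t_end - 1e-12:
        k1 = milne_rhs(t, w, v, omega)
        k2 = milne_rhs(t + dt / 2, w + dt / 2 * k1[0], v + dt / 2 * k1[1], omega)
        k3 = milne_rhs(t + dt / 2, w + dt / 2 * k2[0], v + dt / 2 * k2[1], omega)
        k4 = milne_rhs(t + dt, w + dt * k3[0], v + dt * k3[1], omega)
        w += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return w

# Constant omega = 2: the equilibrium amplitude is 2**-0.5... squared, i.e.
# w = omega**-0.5 = 0.5 for omega**2 = 4. Starting there, w stays put.
w_end = integrate(lambda t: 2.0, w0=2.0 ** -0.5, v0=0.0, t_end=1.0)
```

A parametric pulse would be modelled by making `omega` time-dependent; the claim in the abstract is that even then `w` varies slowly compared to the driven harmonic oscillator itself.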

Wannier-Stark states for semiconductor superlattices in strong static fields, where the interband Landau-Zener tunneling cannot be neglected, are rigorously calculated. The lifetime of these metastable states was found to show multiscale oscillations as a function of the static field, which is explained by an interaction with above-barrier resonances. An equation, expressing the absorption spectrum of semiconductor superlattices in terms of the resonance Wannier-Stark states is obtained and used to calculate the absorption spectrum in the region of high static fields.

The anchored hyperplane location problem is to locate a hyperplane passing through some given points P ⊆ IR^n and minimizing either the sum of weighted distances (median problem) or the maximum weighted distance (center problem) to some other points Q ⊆ IR^n. If the distances are measured by a norm, it will be shown that in the median case there exists an optimal hyperplane that passes through at least n - k affinely independent points of Q, where k is the maximum number of affinely independent points of P. In the center case, there exists an optimal hyperplane which is at maximum distance to at least n - k + 1 affinely independent points of Q. Furthermore, if the norm is a smooth norm, all optimal hyperplanes satisfy these criteria. These new results generalize known results about unrestricted hyperplane location problems.

The purpose of satellite-to-satellite tracking (SST) and/or satellite gravity gradiometry (SGG) is to determine the gravitational field on and outside the Earth's surface from given gradients of the gravitational potential and/or the gravitational field at satellite altitude. In this paper both satellite techniques are analysed and characterized from a mathematical point of view. Uniqueness results are formulated. A justification is given for approximating the external gravitational field by a finite linear combination of certain gradient fields (for example, gradient fields of single-poles or multi-poles) consistent with a given set of SGG and/or SST data. A strategy for modelling the gravitational field from satellite data within a multiscale concept is described, and illustrations based on the EGM96 model are given.

Abstract: We describe quantum-field-theoretical (QFT) techniques for mapping quantum problems onto c-number stochastic problems. This approach yields results identical to those of phase-space techniques [C.W. Gardiner, Quantum Noise (1991)] when the latter result in a Fokker-Planck equation for a corresponding pseudo-probability distribution. If phase-space techniques do not result in a Fokker-Planck equation and hence fail to produce a stochastic representation, the QFT techniques nevertheless yield stochastic difference equations in discretised time.

In this work, we discuss the resonance states of a quantum particle in a periodic potential plus a static force. Originally this problem was formulated for a crystal electron subject to a static electric field and is nowadays known as the Wannier-Stark problem. We describe a novel approach to the Wannier-Stark problem developed in recent years. This approach allows one to compute the complex energy spectrum of a Wannier-Stark system as the poles of a rigorously constructed scattering matrix and, in this sense, solves the Wannier-Stark problem without any approximation. The suggested method is very efficient from the numerical point of view and has proven to be a powerful analytic tool for Wannier-Stark resonances appearing in different physical systems such as optical or semiconductor superlattices.

Annual Report 2000
(2001)

In the Finite-Volume-Particle Method (FVPM), the weak formulation of a hyperbolic conservation law is discretized by restricting it to a discrete set of test functions. In contrast to the usual Finite-Volume approach, the test functions are not taken as characteristic functions of the control volumes in a spatial grid, but are chosen from a partition of unity with smooth and overlapping partition functions (the particles), which can even move along prescribed velocity fields. The information exchange between particles is based on standard numerical flux functions. Geometrical information, similar to the surface area of the cell faces in the Finite-Volume Method and the corresponding normal directions are given as integral quantities of the partition functions. After a brief derivation of the Finite-Volume-Particle Method, this work focuses on the role of the geometric coefficients in the scheme.
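A one-dimensional caricature of such a smooth, overlapping partition of unity can be built from hat functions with Shepard normalization (illustrative only; FVPM partition functions are more general and may move along prescribed velocity fields):

```python
def hat(x, center, h):
    """Piecewise-linear hat function with support [center - h, center + h]."""
    return max(0.0, 1.0 - abs(x - center) / h)

def shepard_partition(x, centers, h):
    """Normalize overlapping hats so they sum to one at x: a partition of unity."""
    w = [hat(x, c, h) for c in centers]
    total = sum(w)
    return [wi / total for wi in w]

# With h > spacing, neighbouring "particles" overlap, yet the normalized
# weights always sum to exactly one at any point covered by some hat.
weights = shepard_partition(0.75, [0.0, 1.0, 2.0], h=1.5)
```

The geometric coefficients discussed in the text play the role that overlap integrals of such partition functions play here, analogous to face areas and normals in a classical Finite-Volume grid.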

The objective of this paper is to bridge the gap between location theory and practice. To meet this objective, focus is given to the development of software capable of addressing the different needs of a wide group of users. There is a very active community on location theory encompassing many research fields such as operations research, computer science, mathematics, engineering, geography, economics and marketing. As a result, people working on facility location problems have very diverse backgrounds and also different needs regarding the software to solve these problems. For those interested in non-commercial applications (e.g. students and researchers), the library of location algorithms (LoLA) can be of considerable assistance. LoLA contains a collection of efficient algorithms for solving planar, network and discrete facility location problems. In this paper, a detailed description of the functionality of LoLA is presented. In the fields of geography and marketing, for instance, solving facility location problems requires large amounts of demographic data. Hence, members of these groups (e.g. urban planners and sales managers) often work with geographical information tools. To address the specific needs of these users, LoLA was linked to a geographical information system (GIS), and the details of the combined functionality are described in the paper. Finally, there is a wide group of practitioners who need to solve large problems and require special-purpose software with a good data interface. Many such users can be found, for example, in the area of supply chain management (SCM). Logistics activities involved in strategic SCM include, among others, facility location planning. In this paper, the development of a commercial location software tool is also described. The tool is embedded in the Advanced Planner and Optimizer SCM software developed by SAP AG, Walldorf, Germany. The paper ends with some conclusions and an outlook on future activities.

This paper details models and algorithms which can be applied to evacuation problems. While it concentrates on building evacuation, many of the results are applicable also to regional evacuation. All models consider time as the main parameter: the travel time between components of the building is part of the input, and the overall evacuation time is the output. The paper distinguishes between macroscopic and microscopic evacuation models, both of which are able to capture the evacuees' movement over time. Macroscopic models are mainly used to produce good lower bounds for the evacuation time and do not consider any individual behavior during the emergency situation. These bounds can be used to analyze existing buildings or help in the design phase of planning a building. Macroscopic approaches based on dynamic network flow models (minimum cost dynamic flow, maximum dynamic flow, universal maximum flow, quickest path and quickest flow) are described. A special feature of the presented approach is that the travel times of evacuees are not restricted to be constant, but may be density dependent. Using multicriteria optimization, priority regions and blockage due to fire or smoke may be considered. It is shown how the modelling can be done treating time either as a discrete or as a continuous parameter. Microscopic models are able to capture the individual evacuees' characteristics and the interactions among evacuees which influence their movement. Due to the correspondingly huge amount of data, one uses simulation approaches. Some probabilistic laws for individual evacuees' movement are presented. Moreover, ideas for modelling the evacuees' movement using cellular automata (CA), and the resulting software, are presented. In this paper we focus on macroscopic models and only summarize some of the results of the microscopic approach. While most of the results are applicable to general evacuation situations, we concentrate on building evacuation.

To simulate the influence of process parameters on the melt spinning process, a fiber model is used and coupled with CFD calculations of the quench air flow. In the fiber model, the energy, momentum and mass balances are solved for the polymer mass flow. The quench air is computed with the Lattice Boltzmann method. Simulations and experiments for different process parameters and hole configurations are compared and show good agreement.

In this paper, mathematical models for liquid films generated by impinging jets are discussed. Attention is focused on the interaction of the liquid film with an obstacle. S. G. Taylor [Proc. R. Soc. London Ser. A 253, 313 (1959)] found that the liquid film generated by impinging jets is very sensitive to the properties of the wire which was used as an obstacle. The aim of this presentation is to propose a modification of Taylor's model which allows simulating the film shape in cases when the angle between the jets is different from 180°. Numerical results obtained with the discussed models give two different shapes of the liquid film, similar to those in Taylor's experiments. These two shapes depend on the regime: either droplets are produced close to the obstacle or not. The difference between the two regimes becomes larger as the angle between the jets decreases. The existence of these two regimes can be essential for some applications of impinging jets in which the generated liquid film may come into contact with obstacles.

Free Surface Lattice-Boltzmann Method To Model The Filling Of Expanding Cavities By Bingham Fluids
(2001)

The filling process of viscoplastic metal alloys and plastics in expanding cavities is modelled using the lattice Boltzmann method in two and three dimensions. These models combine the regularized Bingham model for viscoplastic flow with a free-interface algorithm. The latter is based on a modified immiscible lattice Boltzmann model in which one species is the fluid and the other is considered as vacuum. The boundary conditions at the curved liquid-vacuum interface are met without any geometrical front reconstruction from a first-order Chapman-Enskog expansion. The numerical results obtained with these models are found to be in good agreement with available theoretical and numerical analyses.
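
The abstract does not spell out which regularization of the Bingham model is meant; a common choice (Papanastasiou's) replaces the non-smooth constitutive law by an effective viscosity that stays bounded as the shear rate vanishes:

```latex
\mu_{\mathrm{eff}}(\dot\gamma) \;=\; \mu_p \;+\; \tau_0\,\frac{1 - e^{-m\dot\gamma}}{\dot\gamma},
```

where \(\mu_p\) is the plastic viscosity, \(\tau_0\) the yield stress, and \(m\) the regularization parameter; the ideal Bingham behaviour is recovered in the limit \(m \to \infty\).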

A Lagrangian particle scheme is applied to the projection method for the incompressible Navier-Stokes equations. The approximation of spatial derivatives is obtained by the weighted least squares method. The pressure Poisson equation is solved by a local iterative procedure with the help of the least squares method. Numerical tests are performed for two-dimensional cases. The Couette flow, Poiseuille flow, decaying shear flow and driven cavity flow are presented. The numerical solutions are obtained for stationary as well as unsteady cases and are compared with the analytical solutions for channel flows. Finally, the driven cavity in a unit square is considered, and the stationary solution obtained from this scheme is compared with that from the finite element method.
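
The least squares derivative approximation mentioned above can be sketched as follows: around a point \((x_0, y_0)\), a local linear polynomial \(u \approx a_0 + a_1 (x - x_0) + a_2 (y - y_0)\) is fitted to the neighbouring particle values via the normal equations, and \(a_1, a_2\) approximate the spatial derivatives. The sketch below is unweighted for brevity (the paper uses weights); the particle positions and test function are illustrative:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def ls_gradient(x0, y0, neighbours, values):
    """Fit u ~ a0 + a1*(x-x0) + a2*(y-y0) by least squares;
    return the derivative approximations (a1, a2)."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for (x, y), u in zip(neighbours, values):
        phi = [1.0, x - x0, y - y0]
        for i in range(3):
            for j in range(3):
                A[i][j] += phi[i] * phi[j]   # normal equations: sum phi phi^T
            b[i] += phi[i] * u
    a = solve3(A, b)
    return a[1], a[2]

# linear test function u(x, y) = 1 + 2x + 3y: exact gradient is (2, 3)
pts = [(0.1, 0.0), (-0.1, 0.05), (0.0, 0.1), (0.05, -0.1), (0.1, 0.1)]
vals = [1 + 2 * x + 3 * y for x, y in pts]
ux, uy = ls_gradient(0.0, 0.0, pts, vals)
```

Because the fit is exact for linear data, the recovered derivatives reproduce the gradient up to rounding; in the actual particle scheme the same fit is repeated around every particle with distance-dependent weights.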

Industrial Ecology's Hidden Philosophy of Nature. Fundamental Underpinning to Use Nature as Model
(2001)

In its scientific sense, industrial ecology represents an emerging transdisciplinary field studying industrial systems and their fundamental linkage with natural ecosystems. In short, industrial ecology is called the "science of sustainability". At the heart of industrial ecology lies a refreshingly different perspective of understanding nature as model, in contrast to other scientific disciplines and concepts that understand nature e.g. as a "sack of resources", a "biophysical limit", "something outside", a "surrounding", or just the "environment" as opposed to industrial systems. The keynote of industrial ecology's specific perspective of understanding nature is to balance the development of industrial systems with the constraints of natural ecosystems, analogous to an "industrial symbiosis". The goal is to contribute to laying a fundamental underpinning for industrial ecology in its scientific sense, in this case especially for its use of nature as model. To this end, a battery of philosophical arguments is provided against the fallacies and the facile or hastily proclaimed criticisms likely to be raised by sceptics, hard-liners and mainstream scientists, who often overlook industrial ecology's stimulating role towards sustainability.

The study of families of curves with prescribed singularities has a long tradition. Its foundations were laid by Plücker, Severi, Segre, and Zariski at the beginning of the 20th century. Leading to interesting results with applications in singularity theory and in the topology of complex algebraic curves and surfaces, it has attracted the continuous attention of algebraic geometers ever since. Throughout this thesis we examine the varieties V(D,S1,...,Sr) of irreducible reduced curves in a fixed linear system |D| on a smooth projective surface S over the complex numbers having precisely r singular points of types S1,...,Sr. We are mainly interested in the following three questions: 1) Is V(D,S1,...,Sr) non-empty? 2) Is V(D,S1,...,Sr) T-smooth, that is, smooth of the expected dimension? 3) Is V(D,S1,...,Sr) irreducible? We would like to answer the questions in such a way that we present numerical conditions, depending on invariants of the divisor D and of the singularity types S1,...,Sr, which ensure a positive answer. The main conditions which we derive are of the type inv(S1)+...+inv(Sr) < aD^2+bD.K+c, where inv is some invariant of singularity types, a, b and c are some constants, and K is some fixed divisor. The case that S is the projective plane has been very well studied by many authors, and on other surfaces some results for curves with nodes and cusps have been derived in the past. We, however, consider arbitrary singularity types, and the results which we derive apply to large classes of surfaces, including surfaces in projective three-space, K3-surfaces, products of curves and geometrically ruled surfaces.

Satellite-to-satellite tracking (SST) and satellite gravity gradiometry (SGG) are two measurement principles in modern satellite geodesy which yield knowledge of the first and second order radial derivatives, respectively, of the earth's gravitational potential at satellite altitude. A numerical method to compute the gravitational potential on the earth's surface from those observations should be capable of processing huge amounts of observational data. Moreover, it should yield a reconstruction of the gravitational potential at different levels of detail, and it should be possible to reconstruct the gravitational potential from only locally given data. SST and SGG are modeled as ill-posed linear pseudodifferential operator equations with an injective but non-surjective compact operator, which operates between Sobolev spaces of harmonic functions and the spaces consisting of their first and second order radial derivatives, respectively. An immediate discretization of the operator equation is obtained by replacing the signal on its right-hand side either by an interpolating or a smoothing spline which approximates the observational data. Here the noise level and the spatial distribution of the data determine whether spline interpolation or spline smoothing is appropriate. The large full linear equation system with positive definite matrix which occurs in the spline interpolation and spline smoothing problem, respectively, is efficiently solved with the help of the Schwarz alternating algorithm, a domain decomposition method which allows splitting the large linear equation system into several smaller ones, which are then solved alternately in an iterative procedure. Strongly space-localizing regularization scaling functions and wavelets are used to obtain a multiscale reconstruction of the gravitational potential on the earth's surface.
In a numerical experiment the advocated method is successfully applied to reconstruct the earth's gravitational potential from simulated 'exact' and 'error-affected' SGG data on a spherical orbit, using Tikhonov regularization. The applicability of the numerical method is, however, not restricted to data given on a closed orbit; it can also cope with realistic satellite data.
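
In symbols, Tikhonov regularization replaces the unbounded inverse of the compact operator \(A\) in the ill-posed equation \(Ax = y\) by a family of bounded approximations (standard form; \(\alpha > 0\) is the regularization parameter):

```latex
x_\alpha \;=\; \left(A^{*}A + \alpha I\right)^{-1} A^{*} y ,
```

where a larger \(\alpha\) damps the amplification of data noise at the price of a smoother, less detailed reconstruction.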

In this article, we investigate the maximum entropy moment closure in gas dynamics. We show that the usual choice of polynomial weight functions may lead to hyperbolic systems with an unpleasant state space: equilibrium states are boundary points with possibly singular fluxes. In order to avoid singularities, the need arises to find weight functions which grow sub-quadratically at infinity. Unfortunately, this requirement conflicts with the Galilean invariance of the moment systems, because we can show that rotationally and translationally invariant, finite-dimensional function spaces necessarily consist of polynomials.
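
In the standard formulation of the maximum entropy closure, the distribution function is reconstructed from the prescribed moments \(\rho\) via an exponential ansatz (with weight functions \(m(v)\) and Lagrange multipliers \(\alpha\)):

```latex
f_{\mathrm{ME}}(v) \;=\; \exp\bigl(\alpha \cdot m(v)\bigr),
\qquad
\int m(v)\, f_{\mathrm{ME}}(v)\, dv \;=\; \rho .
```

With weights growing quadratically or faster, the integrability of \(\exp(\alpha \cdot m)\) restricts the admissible multipliers \(\alpha\), which is the source of the boundary equilibria and singular fluxes discussed above.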

A natural extension of point facility location problems are those problems in which facilities are extensive, i.e. those that cannot be represented by isolated points but rather by dimensional structures such as straight lines, line segments, polygonal curves or circles. In this paper a review of the existing work on the location of extensive facilities in continuous spaces is given. Gaps in the knowledge are identified and suggestions for further research are made.

We present a complete derivation of the semiclassical limit of the coherent state propagator in one dimension, starting from path integrals in phase space. We show that the arbitrariness in the path integral representation, which follows from the overcompleteness of the coherent states, results in many different semiclassical limits. We explicitly derive two possible semiclassical formulae for the propagator, we suggest a third one, and we discuss their relationships. We also derive an initial value representation for the semiclassical propagator, based on an initial Gaussian wavepacket. It turns out to be related to, but different from, Heller's thawed Gaussian approximation. It is very different from the Herman-Kluk formula, which is not a correct semiclassical limit. We point out errors in two derivations of the latter. Finally we show how the semiclassical coherent state propagators lead to WKB-type quantization rules and to approximations for the Husimi distributions of stationary states.

Abstract
The main theme of this thesis is Graph Coloring Applications and Defining Sets in Graph Theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs such as bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct these defining sets are introduced, and a review of some known kinds of defining sets in graph theory is incorporated. In Chapter 2, the basic definitions and some relevant notation used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with already existing research results, and an algorithm is introduced which determines a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for determining cliques using their defining sets is presented. Several examples are included.

We introduce two novel techniques for speeding up the generation of digital \((t,s)\)-sequences. Based on these results a new algorithm for the construction of Owen's randomly permuted \((t,s)\)-sequences is developed and analyzed. An implementation of the new techniques is available at http://www.cs.caltech.edu/~ilja/libseq/index.html
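
Digital sequences are built from radical inverses of the point index; the base-2 van der Corput sequence, the simplest one-dimensional ingredient, mirrors the digits of the index about the radix point. A minimal sketch of this ingredient (Owen's random digit permutations are a substantial extension not shown here):

```python
def van_der_corput(n, base=2):
    """Radical inverse: mirror the base-b digits of n about the radix point."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

# first few points of the base-2 van der Corput sequence
points = [van_der_corput(i) for i in range(8)]
```

Each new point falls into the largest gap left by its predecessors, which is what makes the sequence low-discrepancy; the speedups of the paper concern generating such digit operations efficiently.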

We survey old and new results about optimal algorithms for the summation of finite sequences and for the integration of functions from Hölder or Sobolev spaces. First we discuss optimal deterministic and randomized algorithms. Then we add a new aspect which has not been covered before at conferences about (quasi-)Monte Carlo methods: quantum computation. We give a short introduction to this setting and present recent results of the authors on optimal quantum algorithms for summation and integration. We discuss comparisons between the three settings. The most interesting case for Monte Carlo and quantum integration is that of moderate smoothness \(k\) and large dimension \(d\), which, in fact, occurs in a number of important applied problems. In that case the deterministic exponent is negligible, so the \(n^{-1/2}\) Monte Carlo and the \(n^{-1}\) quantum speedup essentially constitute the entire convergence rate.

In this work we propose a set of term-rewriting techniques for modelling object-oriented computation. Based on symbolic variants of explicit substitution calculi, we show how to deal with imperative statements like assignment and sequencing in specifications in a purely declarative style. Under our model, computation with classes and objects becomes simply normal form calculation, exactly as is the case in term-rewriting based languages (for instance, the functional languages). We believe this kind of unification between functions and objects is important because it provides plausible alternatives for using term-rewriting theory as an engine for supporting formal and mechanical reasoning about object-oriented specifications.

The Analytic Blossom
(2001)

Blossoming is a powerful tool for studying and computing with Bézier and B-spline curves and surfaces - that is, for the investigation and analysis of polynomials and piecewise polynomials in geometric modeling. In this paper, we define a notion of the blossom for Poisson curves. Poisson curves are to analytic functions what Bézier curves are to polynomials - a representation adapted to geometric design. As in the polynomial setting, the blossom provides a simple, powerful, elegant and computationally meaningful way to analyze Poisson curves. Here, we
define the analytic blossom and interpret all the known algorithms for Poisson curves - subdivision, trimming, evaluation of the function and its derivatives, and conversion between the Taylor and the Poisson basis - in terms of this analytic blossom.
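
In the classical Bézier setting that the analytic blossom generalizes, the blossom is computed by running one de Casteljau level per argument; the diagonal \(b(t,\dots,t)\) recovers the curve point, and the result is symmetric in its arguments. A sketch for polynomial (not Poisson) curves with scalar control values:

```python
def blossom(control, args):
    """Evaluate the blossom of a Bezier curve: one de Casteljau
    level per argument; len(args) must equal the degree."""
    pts = list(control)
    for u in args:
        # one affine-combination level with parameter u
        pts = [(1 - u) * a + u * b for a, b in zip(pts, pts[1:])]
    return pts[0]

ctrl = [0.0, 1.0, 0.0]                 # quadratic Bezier control values
diag = blossom(ctrl, [0.5, 0.5])       # diagonal = curve point B(0.5)
mixed = blossom(ctrl, [0.2, 0.8])      # symmetric: equals blossom(0.8, 0.2)
```

The same three properties (symmetry, multi-affinity, diagonal) are what the paper's analytic blossom provides for Poisson curves.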

We study summation of sequences and integration in the quantum model of computation. We develop quantum algorithms for computing the mean of sequences which satisfy a \(p\)-summability condition and for integration of functions from Lebesgue spaces \(L_p([0,1]^d)\) and analyze their convergence rates. We also prove lower bounds which show that the proposed algorithms are, in many cases, optimal within the setting of quantum computing. This extends recent results of Brassard, Høyer, Mosca, and Tapp (2000) on computing the mean for bounded sequences and complements results of Novak (2001) on integration of functions from Hölder classes.

Interleaved Sampling
(2001)

The sampling of functions is one of the most fundamental tasks in computer graphics, and it occurs in a variety of different forms. The known sampling methods can roughly be grouped into two categories. Sampling on regular grids is simple and efficient, and the algorithms are often easy to build into graphics hardware. On the downside, regular sampling is prone to aliasing artifacts that are expensive to overcome. Monte Carlo methods, on the other hand, mask the aliasing artifacts by noise. However, due to the lack of coherence, these methods are more expensive and not well suited for hardware implementations. In this paper, we introduce a novel sampling scheme where samples from several regular grids are combined into a single irregular sampling pattern. The relative positions of the regular grids are themselves determined by Monte Carlo methods. This generalization obtained by interleaving yields significantly improved quality compared to traditional approaches, while at the same time preserving much of the advantageous coherence of regular sampling. We demonstrate the quality of the new sampling scheme with a number of applications ranging from supersampling over motion blur simulation to volume rendering. Due to the coherence in the interleaved samples, the method is optimally suited for implementation in graphics hardware.
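
The interleaving idea can be sketched in one dimension: several coarse regular grids, each displaced by a Monte Carlo offset, are merged into one irregular pattern that still consists of regularly spaced subsets. The grid count, resolution, and seed below are illustrative assumptions:

```python
import random

def interleaved_samples(num_grids, cells_per_grid, rng):
    """Merge num_grids regular grids on [0, 1); each grid keeps its
    regular spacing but receives a random offset within one cell."""
    samples = []
    for _ in range(num_grids):
        offset = rng.random() / cells_per_grid   # Monte Carlo displacement
        samples.extend((offset + i / cells_per_grid) % 1.0
                       for i in range(cells_per_grid))
    return sorted(samples)

rng = random.Random(42)
pattern = interleaved_samples(num_grids=3, cells_per_grid=8, rng=rng)
```

Because each constituent subset stays a regular grid, per-grid evaluation remains as coherent as ordinary regular sampling, while the union is irregular enough to trade aliasing for noise.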

The simulation of random fields has many applications in computer graphics, such as ocean wave or turbulent wind field modeling. We present a new and strikingly simple synthesis algorithm for random fields on rank-1 lattices that requires only one Fourier transform, independent of the dimension of the support of the random field. The underlying mathematical principle of discrete Fourier transforms on rank-1 lattices breaks the curse of dimension of the standard tensor product Fourier transform, i.e. the number of function values does not depend exponentially on the dimension, but can be chosen linearly.
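
The basic mechanism of synthesizing a random field by a single Fourier transform can be illustrated in one dimension: draw random phases for a prescribed amplitude spectrum, enforce Hermitian symmetry so the field is real, and apply an inverse DFT. This is standard 1D spectral synthesis with a naive \(O(n^2)\) transform, not the rank-1 lattice method of the paper; the spectrum and seed are illustrative:

```python
import cmath
import random

def synthesize_field(n, amplitude, rng):
    """Spectral synthesis of a real random field with n samples:
    random phases, Hermitian-symmetric spectrum, naive inverse DFT."""
    spec = [0j] * n
    spec[0] = complex(amplitude(0), 0.0)          # mean (DC) coefficient
    for k in range(1, n // 2 + 1):
        if 2 * k == n:                            # Nyquist bin must stay real
            spec[k] = complex(amplitude(k), 0.0)
        else:
            phase = rng.uniform(0.0, 2.0 * cmath.pi)
            spec[k] = amplitude(k) * cmath.exp(1j * phase)
            spec[n - k] = spec[k].conjugate()     # Hermitian symmetry
    vals = [sum(spec[k] * cmath.exp(2j * cmath.pi * k * x / n)
                for k in range(n)) / n
            for x in range(n)]
    return [v.real for v in vals]

field = synthesize_field(8, lambda k: 1.0 / (1 + k), random.Random(0))
```

The rank-1 lattice construction of the paper replaces the tensor-product transform by a single such 1D transform along the lattice, which is what breaks the curse of dimension.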

As opposed to Monte Carlo integration, the quasi-Monte Carlo method does not allow for a (consistent) error estimate from the samples used for the integral approximation. In addition, the deterministic error bound of quasi-Monte Carlo integration is not accessible in the setting of computer graphics, since the integrands usually are of unbounded variation. The structure of the high-dimensional functionals to be computed for photorealistic image synthesis implies the application of the randomized quasi-Monte Carlo method. Thus we can exploit low discrepancy sampling and at the same time estimate the variance. The resulting technique is much more efficient than previous bidirectional path tracing algorithms.
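
The randomized quasi-Monte Carlo estimator described above can be sketched with a Cranley-Patterson rotation: the same low-discrepancy point set is shifted by independent random offsets modulo 1, each shifted replicate yields an unbiased estimate, and the spread across replicates estimates the variance. The integrand, point count, and seed are illustrative assumptions:

```python
import random

def van_der_corput(n, base=2):
    """Base-b radical inverse (low-discrepancy 1D points)."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def rqmc_estimate(f, n_points, n_replicates, rng):
    """Randomized QMC: random shifts (mod 1) of one low-discrepancy
    set give unbiased replicates; return mean and sample variance."""
    qmc = [van_der_corput(i) for i in range(n_points)]
    estimates = []
    for _ in range(n_replicates):
        shift = rng.random()                      # Cranley-Patterson rotation
        estimates.append(sum(f((x + shift) % 1.0) for x in qmc) / n_points)
    mean = sum(estimates) / n_replicates
    var = sum((e - mean) ** 2 for e in estimates) / (n_replicates - 1)
    return mean, var

mean, var = rqmc_estimate(lambda x: x * x, 256, 8, random.Random(1))
# the true integral of x^2 over [0, 1] is 1/3
```

The replicate variance is the consistent error estimate that plain quasi-Monte Carlo lacks, while each replicate keeps the low-discrepancy convergence of the underlying point set.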

In the present work, the behaviour of thermoplastic composites is examined by means of experimental and numerical investigations. The goal of these investigations is the identification and quantification of the failure behaviour and the energy absorption mechanisms of layered, quasi-isotropic thermoplastic fibre-reinforced polymer composites, and the translation of the insights gained into the properties and behaviour of a material model for predicting the crash behaviour of these materials in transient analyses.
Representatives of the investigated classes are undrawn and moderately drawn circular knitted fabrics and glass-fibre-reinforced thermoplastics (GMT). The investigations on circular-knitted glass-fibre (GF) reinforced polyethylene terephthalate (PET) were part of a research project for characterizing both the processability and the mechanical behaviour. Experiments on GMT and chopped-fibre GMT were also carried out for comparison with the knitted fabric and serve as confirmation of the behaviour observed for the knit.
Particular attention is paid to the influence of the specimen geometry on the results, because the crash characteristics depend substantially on the geometry of the tested specimen. For this purpose, a round hat profile was defined to investigate this influence. This particular geometry offers advantages especially with regard to energy absorption capacity and the manufacturability of thermoplastic composites (TPCs). Impact and perforation tests were performed to investigate damage propagation and to characterize the toughness of the materials examined.
Layered TPCs fail mainly in a laminate bending mode with combined intra- and interlaminar shear (transverse shear between plies, partly with transverse shear fractures within individual plies). By coupling the actual failure modes with crash parameters such as the mean crash stress, indications of the relation between material parameters and absolute energy absorption could be obtained.
Numerical investigations were carried out with an explicit finite element program for the simulation of three-dimensional large deformations. With respect to the cross-sectional layup, the model consists of a mesoscopic representation which distinguishes between matrix interlayers and mesoscopic composite plies. The model geometry represents a simplified longitudinal cross-section through the specimen. Effects of friction between the impactor and the material as well as between individual plies were taken into account. The locally prevailing strain rate as well as the energy and stress-strain distributions over the mesoscopic phases could also be observed. This model clearly shows the various effects arising from the heterogeneous character of the laminate and also provides hints towards explanations of these effects.
Based on the results of the above investigations, a phenomenological model incorporating a priori information on the inherent material behaviour is proposed. Since the crash behaviour is dominated by the heterogeneous character of the material, the phases are treated separately in the model. A simple method for determining the mesoscopic properties is discussed.
To describe the behaviour of the thermoplastic matrix system during crushing, a strain-rate- and temperature-dependent plasticity law would suffice. For the description of the behaviour of the composite plies, a coupled plasticity and damage formulation is proposed. Such a model can describe both the plastic contribution of the matrix system and the softening caused by fibre-matrix interface failure and fibre fractures. The proposed model distinguishes between load cases of axial crushing and failure without crushing. This distinction enables explicit modelling of the material, taking into account the specific material state and the geometry for the exceptional load case that leads to progressive failure.