Virtual Robot Programming for Deformable Linear Objects: System concept and Prototype Implementation
(2002)

In this paper we present a method and system for robot programming using virtual reality techniques. The proposed method allows intuitive teaching of a manipulation task with haptic feedback in a graphical simulation system. Based on earlier work, our system allows even an operator who lacks specialized knowledge of robotics to automatically generate a robust sensor-based robot program that is ready to execute on different robots, merely by demonstrating the task in virtual reality.

Utilization of Correlation Matrices in Adaptive Array Processors for Time-Slotted CDMA Uplinks
(2002)

It is well known that the performance of mobile radio systems can be significantly enhanced by the application of adaptive antennas, which consist of multi-element antenna arrays plus signal processing circuitry. In the thesis the utilization of such antennas as receive antennas in the uplink of mobile radio air interfaces of the type TD-CDMA is studied. In particular, the incorporation of covariance matrices of the received interference signals into the signal processing algorithms is investigated, with a view to improving the system performance compared to state-of-the-art adaptive antenna technology. These covariance matrices implicitly contain information on the directions of incidence of the interference signals, and this information may be exploited to reduce the effective interference power when processing the signals received by the array elements. As a basis for the investigations, first directional models of the mobile radio channels and of the interference impinging on the receiver are developed, which can be implemented on the computer at low cost. These channel models cover both outdoor and indoor environments. They are partly based on measured channel impulse responses and, therefore, allow a description of the mobile radio channels which comes sufficiently close to reality. Concerning the interference models, two cases are considered: in one case, the interference signals arriving from different directions are correlated, and in the other case these signals are uncorrelated. After a visualization of the potential of adaptive receive antennas, data detection and channel estimation schemes for the TD-CDMA uplink are presented, which rely on such antennas and take interference covariance matrices into account. Of special interest is the detection scheme MSJD (Multi Step Joint Detection), which is a novel iterative approach to multi-user detection.
Concerning channel estimation, the incorporation of the knowledge of the interference covariance matrix and of the correlation matrix of the channel impulse responses is enabled by an MMSE (Minimum Mean Square Error) based channel estimator. The presented signal processing concepts using covariance matrices for channel estimation and data detection are merged in order to form entire receiver structures. Important tasks to be fulfilled in such receivers are the estimation of the interference covariance matrices and the reconstruction of the received desired signals. These reconstructions are required when applying MSJD in data detection. The considered receiver structures are implemented on the computer in order to enable system simulations. The obtained simulation results show that the developed schemes are very promising in cases where the impinging interference is highly directional, whereas in cases with the interference directions more homogeneously distributed over the azimuth the consideration of the interference covariance matrices is of only limited benefit. The thesis can serve as a basis for practical system implementations.
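The MMSE channel estimator mentioned above combines a channel correlation matrix with an interference covariance matrix. As an illustrative sketch only (not the thesis's TD-CDMA implementation; the system matrix, dimensions, and values below are made-up toy data), the standard linear MMSE estimate h_hat = R_h A^H (A R_h A^H + R_nn)^{-1} y reads:

```python
import numpy as np

def mmse_estimate(A, y, R_h, R_nn):
    """MMSE estimate of the channel h from y = A h + n, given the channel
    correlation matrix R_h and interference-plus-noise covariance R_nn:
        h_hat = R_h A^H (A R_h A^H + R_nn)^{-1} y
    """
    G = A @ R_h @ A.conj().T + R_nn
    return R_h @ A.conj().T @ np.linalg.solve(G, y)

# Toy system: 4 observations of a 3-tap channel (hypothetical values).
A = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [1.0, 1.0, 1.0]])
h_true = np.array([1.0, -0.5, 0.25])
R_h = np.eye(3)            # channel prior correlation
R_nn = 1e-6 * np.eye(4)    # nearly negligible interference
y = A @ h_true             # noise-free received signal for the sketch

h_hat = mmse_estimate(A, y, R_h, R_nn)
```

With a strongly directional interference field, R_nn would be far from a scaled identity, and the estimator then suppresses exactly the interfered directions.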

Two approaches for determining the Euler-Poincaré characteristic of a set observed on lattice points are considered in the context of image analysis: the integral-geometric and the polyhedral approach. Information about the set is assumed to be available on lattice points only. In order to retain properties of the Euler number and to provide a good approximation of the true Euler number of the original set in Euclidean space, the appropriate choice of adjacency in the lattice for the set and its background is crucial. Adjacencies are defined using tessellations of the whole space into polyhedra. In R^3, two new 14-adjacencies are introduced in addition to the well-known 6- and 26-adjacencies. For the Euler number of a set and its complement, a consistency relation holds. Each of the pairs of adjacencies (14.1, 14.1), (14.2, 14.2), (6, 26), and (26, 6) is shown to be a pair of complementary adjacencies with respect to this relation; that is, the approximations of the Euler numbers are consistent if the set and its background (complement) are equipped with such a pair of adjacencies. Furthermore, sufficient conditions for the correctness of the approximations of the Euler number are given. The analysis of selected microstructures and a simulation study illustrate how the estimated Euler number depends on the chosen adjacency; they also show that there is no uniquely best pair of adjacencies with respect to the estimation of the Euler number of a set in Euclidean space.
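In two dimensions, the dependence of the estimated Euler number on the chosen adjacency can be illustrated with Gray's classical bit-quad formula for 4- versus 8-adjacency. This is only a 2D sketch of the general idea, not the 3D polyhedral method of the paper:

```python
def euler_number(img, connectivity=4):
    """Euler number of a binary image via 2x2 bit-quad counts
    (Gray's formula). `connectivity` is the foreground adjacency."""
    # Pad with background so every foreground pixel becomes interior.
    h, w = len(img), len(img[0])
    p = [[0] * (w + 2)] + [[0] + row + [0] for row in img] + [[0] * (w + 2)]
    q1 = q3 = qd = 0
    for r in range(h + 1):
        for c in range(w + 1):
            quad = (p[r][c], p[r][c + 1], p[r + 1][c], p[r + 1][c + 1])
            s = sum(quad)
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif quad in ((1, 0, 0, 1), (0, 1, 1, 0)):
                qd += 1   # diagonal pair inside the 2x2 window
    if connectivity == 4:
        return (q1 - q3 + 2 * qd) // 4
    return (q1 - q3 - 2 * qd) // 4   # 8-adjacency

# Two diagonally touching pixels: two 4-components, one 8-component.
diag = [[1, 0], [0, 1]]
```

Two diagonally touching pixels already show the effect: the 4-adjacency count sees two components, the 8-adjacency count one, mirroring the complementarity issue discussed above.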

In this paper we consider the location of stops along the edges of an already existing public transportation network. This can be the introduction of bus stops along some given bus routes, or of railway stations along the tracks in a railway network. The positive effect of new stops is the better access of potential customers to their closest station, while the increase in travel time caused by the additional stopping activities of the trains is a negative effect. The goal is to cover all given demand points with a minimal amount of additional traveling time, where covering may be defined with respect to an arbitrary norm (or even a gauge). Unfortunately, this problem is NP-hard, even if only the Euclidean distance is used. In this paper, we give a reduction to a finite candidate set leading to a discrete set covering problem. Moreover, we identify network structures in which the coefficient matrix of the resulting set covering problem is totally unimodular, and use this result to derive efficient solution approaches. Various extensions of the problem are also discussed.
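A minimal sketch of the resulting discrete set covering stage, using the classical greedy heuristic on hypothetical candidate stops (the paper itself derives exact approaches for the totally unimodular cases, where the LP relaxation already yields integral solutions):

```python
def greedy_set_cover(demand_points, candidates):
    """Greedy heuristic for set covering: `candidates` maps each
    candidate stop to the set of demand points it covers."""
    uncovered = set(demand_points)
    chosen = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered points.
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("some demand points cannot be covered")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Hypothetical stops s1..s4 along two routes, demand points 1..5.
cover = {"s1": {1, 2}, "s2": {2, 3, 4}, "s3": {4, 5}, "s4": {1, 5}}
stops = greedy_set_cover([1, 2, 3, 4, 5], cover)
```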

Many rendering problems can only be solved using Monte Carlo integration. The noise and variance inherent in this statistical method can be reduced efficiently by stratification. So far, only uncorrelated stratification methods have been used, which in addition depend on the dimension of the integration domain. Based on rank-1 lattices, we present a new stratification technique that removes this dependence on dimension, is much more efficient due to correlation, is trivial to implement, and is robust to use. The superiority of the new scheme is demonstrated for standard rendering algorithms.
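A rank-1 lattice is cheap to generate: the k-th point is the fractional part of k·z/n for a fixed generator vector z. The sketch below, using a standard Fibonacci generator as an assumed example (not the paper's rendering integrands), estimates a simple 2D integral with such a point set:

```python
def rank1_lattice(n, z):
    """Points x_k = frac(k * z / n), k = 0..n-1, of a rank-1 lattice
    with generator vector z in dimension len(z)."""
    return [tuple((k * zj / n) % 1.0 for zj in z) for k in range(n)]

# Fibonacci lattice in 2D: n = F_m points, generator z = (1, F_{m-1}).
pts = rank1_lattice(610, (1, 377))

# Estimate the integral of f(x, y) = x * y over [0,1]^2 (exact value 1/4).
estimate = sum(x * y for x, y in pts) / len(pts)
```

The same construction works in any dimension by lengthening z, which is the dimension independence the abstract refers to.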

Spline functions that approximate (geostrophic) wind field or ocean circulation data are developed in a weighted Sobolev space setting on the (unit) sphere. Two problems are discussed in more detail: the modelling of the (geostrophic) wind field from (i) discrete scalar air pressure data and (ii) discrete vectorial velocity data. Domain decomposition methods based on the Schwarz alternating algorithm for positive definite symmetric matrices are described for solving the large linear systems occurring in vectorial spline interpolation or smoothing of geostrophic flow.

A spectral theory for stationary random closed sets is developed and provided with a sound mathematical basis. The definition and proof of existence of the Bartlett spectrum of a stationary random closed set, as well as the proof of a Wiener-Khintchine theorem for the power spectrum, are used to two ends: first, well-known second-order characteristics like the covariance can be estimated faster than usual via frequency space; second, the Bartlett spectrum and the power spectrum can themselves be used as second-order characteristics in frequency space. Examples show that in some cases information about the random closed set is easier to obtain from these characteristics in frequency space than from their real-world counterparts.
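The speed-up of second-order estimation via frequency space rests on the Wiener-Khintchine relation. As a one-dimensional time-series analogue (a hedged sketch, not the random-closed-set estimator of the paper), the circular autocovariance is the inverse FFT of the power spectrum, which costs O(n log n) instead of O(n^2):

```python
import numpy as np

def circular_autocovariance(x):
    """Estimate the circular autocovariance of a stationary sequence
    via the power spectrum (Wiener-Khintchine):
    autocovariance = IFFT(|FFT(x - mean)|^2) / n."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    power = np.abs(np.fft.fft(x)) ** 2
    return np.fft.ifft(power).real / len(x)

# The FFT route agrees with the direct O(n^2) computation.
rng = np.random.default_rng(1)
x = rng.standard_normal(128)
cov_fft = circular_autocovariance(x)
```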

Solid particle erosion is usually undesirable, as it leads to the development of cracks and holes, material removal, and other degradation mechanisms that ultimately reduce the durability of the structure exposed to erosion. The main aim of this study was to characterise the erosion behaviour of polymers and polymer composites, to understand the nature and mechanisms of the material removal, and to suggest modifications and protective strategies for effectively reducing the material removal due to erosion.

For polymers, the effects of morphology and of mechanical, thermomechanical, and fracture-mechanical properties were discussed. It was established that there is no general rule for high resistance to erosive wear: because different erosive wear mechanisms can take place, wear resistance can be achieved by more than one type of material. Difficulties in optimising materials for wear reduction arise from the fact that a material can show different behaviour depending on the impact angle and the experimental conditions. Effects of polymer modification through mixing or blending with elastomers and inclusion of nanoparticles were also discussed. Toughness modification of epoxy resin with hygrothermally decomposed polyester-urethane can be favourable for the erosion resistance. This type of modification also changes the crosslinking characteristics of the modified EP, and it was established that the crosslink density, along with the fracture energy, are decisive parameters for the erosion response. Melt blending of thermoplastic polymers with functionalised rubbers, on the other hand, can also have a positive influence, whereas the inclusion of nanoparticles deteriorates the erosion resistance at low oblique impact angles (30°).

The effects of fibre length, orientation, fibre/matrix adhesion, stacking sequence, and the number, position, and existence of interleaves were studied in polymer composites. Linear and inverse rules of mixture were applied in order to predict the erosion rate of a composite system as a function of the erosion rates of its constituents and their relative content. The best results were generally delivered by the inverse rule of mixtures approach.

A semi-empirical model, proposed to describe the property degradation and damage growth characteristics and to predict residual properties after single impact, was applied to the case of solid particle erosion. Theoretical predictions and experimental results were in very good agreement.
Solid particle erosion occurs when solid particles impinge on surfaces and is usually characterised by material removal that, besides the particle velocity and the impact angle, depends strongly on the material itself. In recent years the use of polymers and composites in place of traditional materials has increased considerably. Polymers and polymer composites exhibit a relatively high erosion rate (ER), which considerably limits the potential use of these materials under erosive conditions.

Investigations of the erosion behaviour of selected polymers and polymer composites have shown that these systems follow different wear mechanisms, which are very complex and are not governed by a single material property. Based on the ER, the erosion behaviour can be roughly divided into two categories: brittle and ductile. Brittle erosion behaviour shows a maximum ER at 90°, whereas the maximum for ductile behaviour lies at 30°. Whether a material exhibits the one or the other behaviour depends not only on its properties but also on the respective test parameters.

The aim of this research was to characterise the fundamental behaviour of polymers and composites under erosion, to identify the various wear mechanisms, and to determine the decisive material properties and parameters, in order to enable or improve the use of these materials under erosive conditions. The essential factors influencing erosion were determined experimentally on a representative selection of polymers, elastomers, modified polymers, and fibre-reinforced composites.

Thermoplastic polymers and thermoplastic and crosslinked elastomers
Attempts to correlate the erosion resistance of selected polymers (polyethylenes and polyurethanes) with various material properties showed that there is no clear dependence either on individual parameters or on combinations of properties. Determining the material properties under the same experimental conditions as in the erosion tests might lead to a better correlation between ER and material parameters.

Modified epoxy resin
Using a modified epoxy resin (EP) with varying crosslink density as an example, a correlation was found between erosion resistance and fracture energy, and between erosion resistance and crosslink density. The modification was carried out with various amounts of a hygrothermally decomposed polyurethane (HD-PUR). The relation between ER and crosslinking parameters is consistent with the theory of rubber elasticity.

Modification efficiency in thermosets, thermoplastics, and elastomers
Furthermore, the influence of modifications of polymers and elastomers was investigated. The above-mentioned system (i.e. EP/HD-PUR) also allows the influence of toughness modification of the epoxy resin (EP) on the erosion behaviour to be studied. It was shown that for HD-PUR contents of more than 20 wt.% this modification has a positive influence on the erosion resistance. By varying the HD-PUR content, material properties between those of a conventional thermoset and those of a less elastic rubber can be produced for this EP. The modified EP resin is therefore a very good model material for studying the influence of the experimental conditions and for investigating whether different erodents lead to the same erosion mechanisms. The transition from thermoset-like to ductile behaviour was investigated using four erodents. The experiments showed that such a transition occurs when very fine, angular particles (corundum) serve as erodents. Particle size and shape are of decisive importance for the respective wear mechanisms.

The efficiency of novel thermoplastic elastomers with a co-continuous phase structure, consisting of thermoplastic polyester and rubber (functionalised NBR and EPDM rubber), was investigated with respect to erosion resistance. Large amounts of functionalised rubber (more than 20 wt.%) are beneficial for the erosion resistance. It was further investigated whether the outstanding erosion resistance of polyurethane (PUR) can be increased even further by adding nanosilicates. The result was that the nanoparticles have a negative effect, especially at a low impact angle (30°). The weak adhesion between matrix and particles facilitates the initiation and growth of cracks, which leads to faster material removal from the surface.

Fibre-reinforced composites
Fibre-reinforced composites (FRC) with thermoplastic and thermoset matrices were also investigated with respect to their erosive wear behaviour. Of particular interest was the influence of fibre length and orientation. Short-fibre-reinforced systems have a better erosion resistance than unidirectional (UD) systems. The role of fibre orientation can only be considered in connection with other parameters, such as matrix toughness, fibre content, or fibre-matrix adhesion. For GF/PP composites, the systems eroded parallel to the drawing direction show the lowest resistance, whereas for a GF/EP system the maximum ER occurs in the perpendicular direction. Improving the interfacial shear strength has a lasting influence on the erosive wear rate; when the interfacial adhesion is sufficient, the erosion direction plays an insignificant role for the ER. It was further shown that the presence of tough interleaves leads to a marked improvement of the erosion resistance of CF/EP composites.

A further task was to determine the role of the fibre volume fraction. Linear, inverse, and modified rules of mixture were applied, and it was found that the inverse rules of mixture describe the ER as a function of the fibre volume fraction better.

In applications of fibre-reinforced composites, not only the ER but also the residual properties must be known. A semi-empirical model for predicting the impact energy threshold (Uo) for the onset of strength reduction and the residual tensile strength after impact loading was applied to erosive wear. Experimental results and theoretical predictions agreed very well, not only for thermoset CF/EP composites but also for composites with a thermoplastic matrix (GF/PP).
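The linear and inverse rules of mixture used above for the erosion rate can be sketched as follows; the numerical values are purely illustrative, not measured data:

```python
def erosion_rate_inverse_rom(er_fibre, er_matrix, v_fibre):
    """Inverse rule of mixtures for the erosion rate of a composite:
    1/ER_c = V_f/ER_f + (1 - V_f)/ER_m."""
    return 1.0 / (v_fibre / er_fibre + (1.0 - v_fibre) / er_matrix)

def erosion_rate_linear_rom(er_fibre, er_matrix, v_fibre):
    """Linear rule of mixtures: ER_c = V_f*ER_f + (1 - V_f)*ER_m."""
    return v_fibre * er_fibre + (1.0 - v_fibre) * er_matrix

# Hypothetical erosion rates (arbitrary units) at 50% fibre volume fraction:
er_inv = erosion_rate_inverse_rom(er_fibre=8.0, er_matrix=2.0, v_fibre=0.5)
er_lin = erosion_rate_linear_rom(er_fibre=8.0, er_matrix=2.0, v_fibre=0.5)
```

Since the harmonic mean never exceeds the arithmetic mean, the inverse rule always predicts an ER at or below the linear rule for the same constituents.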

This paper analyzes the problem of sensor-based collision detection for an industrial robotic manipulator. A method to perform collision tests based on images taken from several stationary cameras in the work cell is presented. The collision test works entirely on the images and does not construct a representation of the Cartesian space. It is shown how to perform a collision test for all possible robot configurations using only a single set of images taken simultaneously.

We study high dimensional integration in the quantum model of computation. We develop quantum algorithms for integration of functions from Sobolev classes \(W^r_p [0,1]^d\) and analyze their convergence rates. We also prove lower bounds which show that the proposed algorithms are, in many cases, optimal within the setting of quantum computing. This extends recent results of Novak on integration of functions from Hölder classes.

We consider the problem of locating a line with respect to some existing facilities in 3-dimensional space, such that the sum of weighted distances between the line and the facilities is minimized. Measuring distance using the l_p norm is discussed, along with the special cases of Euclidean and rectangular norms. Heuristic solution procedures for finding a local minimum are outlined.

One crucial assumption of continuous-time financial mathematics is that the portfolio can be rebalanced continuously and that there are no transaction costs. In reality, neither assumption holds: continuous rebalancing is impossible, and each transaction causes costs which have to be subtracted from the wealth. Therefore, we focus on trading strategies which are based on discrete rebalancing - at random or equidistant times - and which take transaction costs into account. These strategies are considered for various utility functions and are compared with the optimal strategies of continuous trading.
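A minimal simulation sketch of such a discretely rebalanced strategy with proportional transaction costs; all parameter values are hypothetical, and the strategy is simply a constant-fraction rule rather than the optimal strategies derived in the paper:

```python
import random

def simulate(rebalance_every, cost_rate, n_steps=252, pi=0.5,
             mu=0.08, sigma=0.2, r=0.02, dt=1.0 / 252, seed=42):
    """Simulate discrete rebalancing to a constant stock fraction pi,
    charging a proportional cost on each trade (illustrative sketch)."""
    rng = random.Random(seed)
    stock, bond = pi, 1.0 - pi          # initial wealth 1, split pi : 1-pi
    for t in range(1, n_steps + 1):
        z = rng.gauss(0.0, 1.0)
        stock *= 1.0 + mu * dt + sigma * dt ** 0.5 * z   # risky asset
        bond *= 1.0 + r * dt                             # riskless account
        if t % rebalance_every == 0:
            wealth = stock + bond
            trade = abs(pi * wealth - stock)   # turnover in currency units
            wealth -= cost_rate * trade        # proportional transaction cost
            stock, bond = pi * wealth, (1.0 - pi) * wealth
    return stock + bond

frictionless = simulate(rebalance_every=21, cost_rate=0.0)
with_costs = simulate(rebalance_every=21, cost_rate=0.005)
```

With identical random draws, the cost-charged run can never end above the frictionless one, which is the wealth drag the comparison in the paper quantifies.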

To a network N(q) with determinant D(s;q) depending on a parameter vector q ∈ R^r, a network N^(q) is assigned via identification of some of its vertices. The paper deals with procedures for finding N^(q) such that its determinant D^(s;q) admits a factorization into the determinants of appropriate subnetworks, and with the estimation of the deviation of the zeros of D^ from the zeros of D. To solve the estimation problem, state space methods are applied.

We consider some portfolio optimisation problems where the investor has a desire for an a priori specified consumption stream and/or follows a deterministic pay-in scheme, while also trying to maximize expected utility from final wealth. We derive explicit closed-form solutions for continuous and discrete monetary streams. The mathematical method used is classical stochastic control theory.

If an investor borrows money, he generally has to pay higher interest rates than he would have received had he put his funds in a savings account. The classical model of continuous-time portfolio optimisation ignores this effect. Since there is obviously a connection between the default probability and the percentage of total wealth that the investor holds as debt, we study portfolio optimisation with a control-dependent interest rate. Assuming a logarithmic and a power utility function, respectively, we prove explicit formulae for the optimal control.
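For reference, the classical constant-interest-rate benchmark against which such results are compared is Merton's optimal fraction; the sketch below states only this textbook case, not the paper's control-dependent-rate formulae:

```python
def merton_fraction(mu, r, sigma, gamma=None):
    """Optimal constant stock fraction in the classical Merton problem
    with constant interest rate r:
        log utility:            pi* = (mu - r) / sigma^2
        power utility x^g / g:  pi* = (mu - r) / ((1 - g) * sigma^2)
    gamma=None selects log utility."""
    if gamma is None:
        return (mu - r) / sigma ** 2
    return (mu - r) / ((1.0 - gamma) * sigma ** 2)

pi_log = merton_fraction(mu=0.08, r=0.02, sigma=0.2)             # log utility
pi_pow = merton_fraction(mu=0.08, r=0.02, sigma=0.2, gamma=0.5)  # power utility
```

A fraction above 1 means the investor borrows, which is precisely the regime where a higher, control-dependent borrowing rate changes the optimal policy.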

The immiscible lattice BGK method for solving the two-phase incompressible Navier-Stokes equations is analysed in great detail. Equivalent moment analysis and local differential geometry are applied to examine how interface motion is determined and how surface tension effects can be included such that consistency with the two-phase incompressible Navier-Stokes equations can be expected. The results obtained from the theoretical analysis are verified by numerical experiments. Since the intrinsic interface tracking scheme of immiscible lattice BGK is found to produce unsatisfactory results in two-dimensional simulations, several approaches to improving it are discussed, but all of them turn out to yield no substantial improvement. Furthermore, the intrinsic interface tracking scheme of immiscible lattice BGK is found to be closely connected to the well-known conservative volume tracking method. This result suggests coupling the conservative volume tracking method for determining interface motion with the Navier-Stokes solver of immiscible lattice BGK. Applied to simple flow fields, this coupled method yields much better results than plain immiscible lattice BGK.

In this work a (Ti,Al,Si)N system was investigated. The main point of the investigation was to study the possibility of obtaining nanocomposite coating structures by deposition of multilayer films of TiN and AlSiN, in order to understand the relation between the mechanical properties (hardness, Young's modulus) and the microstructure (nanocrystalline with individual phases). Particular attention was given to the effects of annealing at 600 °C on microstructural changes in the coatings. Surface hardness, elastic modulus, and the diffusion and composition of the multilayers served as test tools for the comparison between the different coated samples with and without annealing at 600 °C. To achieve this objective, a rectangular aluminium vacuum chamber with three unbalanced sputtering magnetrons for the deposition of thin film coatings from different materials was constructed. The chamber consists mainly of two chambers: the pre-vacuum chamber to load the workpiece, and the main vacuum chamber where the sputtering deposition of the thin film coatings takes place. The workpiece is moved on a carriage travelling on a rail between the two chambers to the position of the magnetrons by stepper motors. The chambers are separated by a self-constructed rectangular gate controlled manually from outside the chamber. The chamber was sealed for vacuum use using glue and screws. Therefore, different types of glue were tested, not only for the ability to form a uniform thin layer in the gap between the aluminium plates to seal the chamber, but also for low outgassing rates suitable for vacuum use. An epoxy was able to fulfil these tasks. The evacuation characteristics of the constructed chamber were improved by minimizing the outgassing rate of the inner surface.
For this purpose, the throughput outgassing rate test method was used to compare the short-term (one hour) outgassing rates of samples of the two selected aluminium materials (A2017 and A5353). Different machining methods and treatments for the inner surface of the vacuum chamber were tested. Machining the surface of material A (A2017) with ethanol as coolant fluid reduced its outgassing rate by a factor of 6 compared with a non-machined sample surface of the same material. The reduction of the porous oxide layer on top of the aluminium surface by pickling with HNO3 acid, and its protection by producing a passive, non-porous oxide layer in an anodizing process, protects the surface for a longer time and minimizes the outgassing rates even under a humid atmosphere. The residual gas analyzer (RGA) test shows that more than 85% of the gases inside the test chamber were water vapour (H2O) and the rest were N2, H2, and CO, so a liquid nitrogen water vapour trap can enhance the pump-down process of the chamber. As a result, it was possible to construct a chamber that can be pumped down using a turbomolecular pump (450 L/s) to the range of 1×10^-6 mbar within one hour of evacuation, where the chamber volume is 160 litres and the inner surface area is 1.6 m². This is a good base pressure for the sputtering deposition of hard thin film coatings. Multilayer thin film coatings were deposited to demonstrate that nanostructured thin films within the (Ti,Al,Si)N system could be prepared by reactive magnetron sputtering of multiple thin film layers of TiN and AlSiN. The SNMS spectrometry of the test samples shows that complete diffusion between the different deposited thin film coating layers takes place in each sample, even at low substrate deposition temperature.
The high magnetic flux of the unbalanced magnetrons and the high sputtering power produced a high ion-to-atom flux, which gives high mobility to the deposited atoms. The interaction between the high mobility of the deposited atoms and the ion-to-atom flux was sufficient to enhance the diffusion between the different deposited thin layers. The XRD patterns for this system show that the structure of the formed mixture consists of two phases: one phase identified as bulk TiN and another, unknown amorphous phase, which can be SiNx, AlN, or a combination of Ti-Al-Si-N. As a result, we were able to deposit nanocomposite coatings by the deposition of multilayers of TiN and AlSiN thin film coatings using the constructed vacuum chamber.

In this work we present and estimate an explanatory model with a predefined system of explanatory equations, a so-called lag-dependent model. We present a locally optimal lag estimator based on blocked neural networks, together with consistency theorems. We define change points in the context of the lag-dependent model and present a powerful algorithm for change point detection in high-dimensional, highly dynamic systems. We also present a special kind of bootstrap for approximating the distribution of statistics of interest in dependent processes.

In the present paper a kinetic model for vehicular traffic leading to multivalued fundamental diagrams is developed and investigated in detail. For this model phase transitions can appear depending on the local density and velocity of the flow. A derivation of associated macroscopic traffic equations from the kinetic equation is given. Moreover, numerical experiments show the appearance of stop and go waves for highway traffic with a bottleneck.

Different aspects of geomagnetic field modelling from satellite data are examined in the framework of modern multiscale approximation. The thesis is mostly concerned with wavelet techniques, i.e. multiscale methods based on certain classes of kernel functions which are able to realize a multiscale analysis of the function (data) space under consideration. It is thus possible to break up complicated functions like the geomagnetic field, electric current densities or geopotentials into different pieces and study these pieces separately. Based on a general approach to scalar and vectorial multiscale methods, topics include multiscale denoising, crustal field approximation and downward continuation, wavelet parametrizations of the magnetic field in Mie representation, as well as multiscale methods for the analysis of time-dependent spherical vector fields. For each subject the necessary theoretical framework is established and numerical applications examine and illustrate the practical aspects.

A geoscientifically relevant wavelet approach is established for the classical (inner) displacement problem corresponding to a regular surface (such as sphere, ellipsoid, actual earth's surface). Basic tools are the limit and jump relations of (linear) elastostatics. Scaling functions and wavelets are formulated within the framework of the vectorial Cauchy-Navier equation. Based on appropriate numerical integration rules a pyramid scheme is developed providing fast wavelet transform (FWT). Finally multiscale deformation analysis is investigated numerically for the case of a spherical boundary.

This survey paper deals with multiresolution analysis of geodetically relevant data and its numerical realization for functions harmonic outside a (Bjerhammar) sphere inside the Earth. Harmonic wavelets are introduced within a suitable framework of a Sobolev-like Hilbert space. Scaling functions and wavelets are defined by means of convolutions. A pyramid scheme provides efficient implementation and economical computation. Essential tools are the multiplicative Schwarz alternating algorithm (providing domain decomposition procedures) and fast multipole techniques (accelerating iterative solvers of linear systems).

We present a unified approach to several boundary conditions for lattice Boltzmann models. Its general framework is a generalization of previously introduced schemes such as the bounce-back rule and linear or quadratic interpolations. The objectives are twofold: first, to give theoretical tools to study the existing boundary conditions and their corresponding accuracy; second, to design formally third-order accurate boundary conditions for general flows. Using these boundary conditions, Couette and Poiseuille flows are exact solutions of the lattice Boltzmann models for a Reynolds number Re = 0 (Stokes limit). Numerical comparisons are given for Stokes flows in periodic arrays of spheres and cylinders, a linear periodic array of cylinders between moving plates, and for Navier-Stokes flows in periodic arrays of cylinders for Re < 200. These results show a significant improvement of the overall accuracy when using the linear interpolations instead of the bounce-back reflection (up to an order of magnitude on the hydrodynamic fields). Further improvement is achieved with the new multi-reflection boundary conditions, reaching a level of accuracy close to the quasi-analytical reference solutions, even for rather modest grid resolutions and few points in the narrowest channels. More importantly, the pressure and velocity fields in the vicinity of the obstacles are much smoother with multi-reflection than with the other boundary conditions. Finally, the good stability of these schemes is highlighted by some simulations of moving obstacles: a cylinder between flat walls and a sphere in a cylinder.
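The bounce-back rule that serves as the starting point of this generalization is easy to state: at a solid node, every population is reflected into its opposite lattice direction. A minimal D2Q9 sketch (illustrative only, detached from any full solver):

```python
# D2Q9 velocity set; opposite[i] is the index of the reversed direction of i.
velocities = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)]
opposite = [velocities.index((-cx, -cy)) for cx, cy in velocities]

def bounce_back(f_node):
    """Full-way bounce-back at a solid node: each population is
    reflected into the opposite direction."""
    return [f_node[opposite[i]] for i in range(9)]

f = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
f_reflected = bounce_back(f)
```

Applying the rule twice recovers the original populations, reflecting that bounce-back is an involution; the interpolation-based and multi-reflection conditions of the paper replace this simple swap with higher-order combinations of neighbouring populations.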

In this paper we study linear ill-posed problems Ax = y in a Hilbert space setting where, instead of exact data y, noisy data y^delta are given satisfying ||y - y^delta|| <= delta with known noise level delta. Regularized approximations are obtained by a general regularization scheme where the regularization parameter is chosen from Morozov's discrepancy principle. Assuming the unknown solution belongs to some general source set M, we prove that the regularized approximation provides order-optimal error bounds on the set M. Our results cover the special case of finitely smoothing operators A and extend recent results for infinitely smoothing operators.
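Morozov's discrepancy principle picks the regularization parameter alpha so that the residual of the regularized solution matches the noise level delta. A minimal sketch for Tikhonov regularization of a diagonal toy operator, exploiting that the residual norm is increasing in alpha (all data below are made up for illustration):

```python
def tikhonov_diag(a, y_delta, alpha):
    """Tikhonov regularization for a diagonal operator A = diag(a):
    x_alpha[i] = a[i] * y[i] / (a[i]^2 + alpha)."""
    return [ai * yi / (ai * ai + alpha) for ai, yi in zip(a, y_delta)]

def residual_norm(a, y_delta, alpha):
    x = tikhonov_diag(a, y_delta, alpha)
    return sum((ai * xi - yi) ** 2
               for ai, xi, yi in zip(a, x, y_delta)) ** 0.5

def discrepancy_alpha(a, y_delta, delta, lo=1e-12, hi=1e12):
    """Bisect (on a log scale) for ||A x_alpha - y_delta|| = delta."""
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if residual_norm(a, y_delta, mid) < delta:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

# Toy operator with rapidly decaying singular values (ill-posed flavour).
a = [1.0 / (i + 1) ** 2 for i in range(20)]
x_true = [1.0] * 20
y = [ai * xi for ai, xi in zip(a, x_true)]
delta = 0.01
y_delta = [yi + delta / len(y) ** 0.5 for yi in y]   # perturbation of norm delta
alpha = discrepancy_alpha(a, y_delta, delta)
```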

Monte Carlo & Beyond
(2002)

Based on the framework of continuum mechanics, two different concepts for formulating phenomenological anisotropic inelasticity are developed in a thermodynamically consistent manner. On the one hand, special emphasis is placed on the incorporation of structural tensors; on the other hand, fictitious configurations are introduced. Substantial parts of this work deal with the numerical treatment of the presented theory within the finite element method.

Matrix Compression Methods for the Numerical Solution of Radiative Transfer in Scattering Media
(2002)

Radiative transfer in scattering media is usually described by the radiative transfer equation, an integro-differential equation which describes the propagation of the radiative intensity along a ray. The high dimensionality of the equation leads to a very large number of unknowns when the equation is discretized; this is the major difficulty in its numerical solution. In the case of isotropic scattering and diffuse boundaries, the radiative transfer equation can be reformulated as a system of integral equations of the second kind in which the position is the only independent variable. By employing the so-called momentum equation, we derive an integral equation which is also valid in the case of linearly anisotropic scattering. This equation is very similar to the equation for the isotropic case: no additional unknowns are introduced, and the integral operators involved have very similar mapping properties. The discretization of an integral operator leads to a full matrix. Therefore, due to the large dimension of the matrix in practical applications, it is not feasible to assemble and store the entire matrix. The so-called matrix compression methods circumvent the assembly of the matrix: the matrix-vector multiplications needed by iterative solvers are performed only approximately, thus reducing the computational complexity tremendously. The kernels of the integral equations describing radiative transfer are very similar to the kernels of the integral equations occurring in the boundary element method. Therefore, with only slight modifications, the matrix compression methods developed for the latter are readily applicable to the former. As opposed to the boundary element method, the integral kernels for radiative transfer in absorbing and scattering media involve an exponential decay term. We examine how this decay influences the efficiency of the matrix compression methods.
Further, a comparison with the discrete ordinates method shows that discretizing the integral equation may lead to reductions in CPU time and to improved accuracy, especially in the case of small absorption and scattering coefficients or if local sources are present.
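The compression idea can be illustrated with a toy example (not the thesis's actual method): an off-diagonal block of a discretized kernel with exponential decay, coupling two well-separated panels, has rapidly decaying singular values, so an accurate matrix-vector product needs only a low-rank factor instead of the full block. A sketch, assuming a regularized 1-D model kernel exp(-kappa*|x-y|)/(|x-y|+eps) and a truncated SVD as a simple stand-in for ACA- or wavelet-type compression:

```python
import numpy as np

def kernel(x, y, kappa=1.0, eps=1e-3):
    """Toy radiative-transfer-like kernel with exponential attenuation."""
    d = np.abs(x[:, None] - y[None, :])
    return np.exp(-kappa * d) / (d + eps)

m = 200
src = np.linspace(0.0, 1.0, m)        # source panel
tgt = np.linspace(2.0, 3.0, m)        # well-separated target panel
B = kernel(tgt, src)                  # admissible off-diagonal block

# Truncated SVD: keep only the singular values above 1e-10 (relative)
U, s, Vt = np.linalg.svd(B, full_matrices=False)
rank = int(np.sum(s > 1e-10 * s[0]))
Uk, sk, Vk = U[:, :rank], s[:rank], Vt[:rank]

v = np.random.default_rng(1).standard_normal(m)
exact = B @ v                          # O(m^2) dense product
compressed = Uk @ (sk * (Vk @ v))      # O(rank * m) product
rel_err = np.linalg.norm(compressed - exact) / np.linalg.norm(exact)
```

The separated block compresses to a small fraction of its nominal rank while the matrix-vector product stays accurate, which is exactly why iterative solvers can avoid assembling the full matrix.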

It is difficult for robots to handle a vibrating deformable object. Even for human beings it is a high-risk operation to, for example, insert a vibrating linear object into a small hole. However, fast manipulation using a robot arm is not just a dream; it can be achieved if some important features of the vibration are detected online. In this paper, we present an approach for fast manipulation using a force/torque sensor mounted on the robot's wrist. A template matching method is employed to recognize the vibrational phase of the deformable object. Thus, fast manipulation can be performed with a high success rate even under strong vibration. Experiments on inserting a deformable object into a hole were conducted to test the presented method. The results demonstrate that the presented sensor-based online fast manipulation is feasible.
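The phase-recognition step can be sketched with a generic normalized cross-correlation template matcher. The details of the paper's matcher and its sensor processing are not given here, so everything below (sampling rate, frequency, template choice) is illustrative: slide a stored snippet of the force signal over the incoming samples and pick the offset with the highest correlation.

```python
import math

def ncc(signal, template, offset):
    """Normalized cross-correlation of the template with the window of the
    signal starting at the given offset."""
    win = signal[offset: offset + len(template)]
    ms = sum(win) / len(win)
    mt = sum(template) / len(template)
    num = sum((a - ms) * (b - mt) for a, b in zip(win, template))
    den = math.sqrt(sum((a - ms) ** 2 for a in win)) * \
          math.sqrt(sum((b - mt) ** 2 for b in template))
    return num / den if den else 0.0

def match_phase(signal, template):
    """Offset at which the stored template best matches the incoming signal."""
    return max(range(len(signal) - len(template) + 1),
               key=lambda k: ncc(signal, template, k))

# Synthetic 5 Hz force oscillation sampled at 1 kHz (illustrative values)
fs, freq = 1000, 5.0
trace = [math.sin(2 * math.pi * freq * t / fs) for t in range(1000)]
template = trace[50:130]        # stored snippet representing a known phase
k = match_phase(trace[:400], template)
```

The matched offset identifies the current phase of the oscillation modulo one period, which is the information the manipulation step needs for its timing.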

Manipulating Deformable Linear Objects: Manipulation Skill for Active Damping of Oscillations
(2002)

While handling deformable linear objects (DLOs), such as hoses, wires or leaf springs, with an industrial robot at high speed, unintended and undesired oscillations may occur that delay further operations. This paper analyzes these oscillations based on a simple model with one degree of freedom (DOF) and presents a method for active open-loop damping. Different ways to interpret an oscillating DLO as a system with 1 DOF lead to translational and rotational adjustment motions. Both were implemented as manipulation skills in a separate program that can be executed immediately after any robot motion. We show how these manipulation skills can generate the needed adjustment motions automatically based on the readings of a wrist-mounted force/torque sensor. Experiments demonstrate the effectiveness under various conditions.
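The paper derives its adjustment motions from force/torque readings; a closely related classical idea for an undamped 1-DOF model (offered here only as background, not as the paper's method) is open-loop input shaping: splitting a move into two half-steps separated by half the oscillation period, so the vibration excited by the second step cancels the vibration left by the first. A sketch using the exact solution of x'' = -w^2 (x - u) for a piecewise-constant command u:

```python
import math

def evolve(x, v, u, w, t):
    """Exact evolution of x'' = -w^2 (x - u) over time t with constant command u."""
    c, s = math.cos(w * t), math.sin(w * t)
    return u + (x - u) * c + (v / w) * s, -w * (x - u) * s + v * c

def residual_amplitude(x, v, u, w):
    """Amplitude of the remaining oscillation about the final command u."""
    return math.hypot(x - u, v / w)

w = 2 * math.pi * 2.0            # a hypothetical 2 Hz natural frequency
half_period = math.pi / w

# Single full step: the DLO keeps oscillating with full amplitude
x, v = evolve(0.0, 0.0, 1.0, w, 5 * half_period)
single = residual_amplitude(x, v, 1.0, w)

# Two half-steps separated by half a period: the residual vibration cancels
x, v = evolve(0.0, 0.0, 0.5, w, half_period)
x, v = evolve(x, v, 1.0, w, 4 * half_period)
shaped = residual_amplitude(x, v, 1.0, w)
```

The shaped move reaches the same target with essentially zero residual amplitude, which is the effect an adjustment motion executed at the right phase achieves.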

Dealing with problems from locational planning can enrich mathematical education in schools. In this report we describe planar location problems which can be used in mathematics lessons. The problems (the production of a semiconductor plate, the design of a fire brigade building, and the warehouse problem) are taken from real-world settings. They are worked out in detail so that they can be used directly in school lessons.
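The fire brigade problem is a classical planar 1-median (Weber) problem: place the station so that the sum of straight-line distances to the demand points is minimal. Weiszfeld's iteration solves it and is simple enough for a classroom sketch; the demand points below are made up:

```python
import math

def weiszfeld(points, iters=500):
    """1-median (Weber point): the location minimizing the sum of Euclidean
    distances to the given demand points, found by Weiszfeld's iteration."""
    x = sum(p[0] for p in points) / len(points)   # start at the centroid
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d < 1e-12:              # iterate landed on a demand point
                return px, py
            num_x += px / d
            num_y += py / d
            den += 1.0 / d
        x, y = num_x / den, num_y / den
    return x, y

def total_distance(p, points):
    return sum(math.hypot(p[0] - px, p[1] - py) for px, py in points)

square = [(0, 0), (4, 0), (0, 4), (4, 4)]      # symmetric case: optimum (2, 2)
cx, cy = weiszfeld(square)

pts2 = [(0, 0), (1, 0), (0, 1), (5, 5)]        # asymmetric demand points
sol2 = weiszfeld(pts2)
centroid2 = (sum(p[0] for p in pts2) / 4, sum(p[1] for p in pts2) / 4)
```

Note that the optimum generally differs from the centroid: the centroid minimizes the sum of squared distances, while the Weber point minimizes the sum of distances, a distinction worth emphasizing in class.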

Lattice Boltzmann Model for Free-Surface Flow and Its Application to the Filling Process in Casting
(2002)

A generalized lattice Boltzmann model to simulate free-surface flow is constructed in both two and three dimensions. The proposed model satisfies the interfacial boundary conditions accurately. A distinctive feature of the model is that the collision process is carried out only at the points occupied partially or fully by the fluid. To maintain a sharp interfacial front, the method includes an anti-diffusion algorithm. The unknown distribution functions at the interfacial region are constructed according to a first-order Chapman-Enskog analysis, and the interfacial boundary conditions are satisfied exactly by the coefficients of the Chapman-Enskog expansion. The distribution functions are naturally expressed in local interfacial coordinates. The macroscopic quantities at the interface are extracted from the least-squares solutions of a locally linearized system obtained from the known distribution functions. The proposed method does not require any geometric front construction and is robust for any interfacial topology. Simulation results of realistic filling processes are presented: a rectangular cavity in two dimensions, and the Hammer box, Campbell box, Sheffield box and Motorblock in three dimensions. To enhance the stability at high Reynolds numbers, various upwind-type schemes are developed. Free-slip and no-slip boundary conditions are also discussed.

Strict order relations are defined as asymmetric and transitive binary relations. For classes of so-called levelled strict orders it is analyzed under which conditions the endomorphism monoids of two relations coincide; in particular, the case of direct sums of strict antichains is studied. Further, it is shown that these orders differ in their sets of binary order-preserving functions.

These lecture notes give a completely self-contained introduction to the control theory of linear time-invariant systems. No prior knowledge is required apart from linear algebra and some basic familiarity with ordinary differential equations. Thus, the course is suited for students of mathematics in their second or third year, and for theoretically inclined engineering students. Because of its appealing simplicity and elegance, the behavioral approach has been adopted to a large extent. A short list of recommended textbooks on the subject has been added as a suggestion for further reading.

Interactive graphics has so far been limited to simple direct illumination, which commonly results in an artificial appearance. Achieving a more realistic appearance by simulating global illumination effects has been too costly to compute at interactive rates. In this paper we describe a new Monte Carlo-based global illumination algorithm. It achieves a performance of up to 10 frames per second while arbitrary changes to the scene may be applied interactively. The performance is obtained through the effective use of a fast, distributed ray-tracing engine as well as a new interleaved sampling technique for parallel Monte Carlo simulation. A new filtering step in combination with correlated sampling avoids the disturbing noise artifacts common to Monte Carlo methods.

Scheduling and location models are often used to tackle problems in production, logistics, and supply chain management. Instead of treating these models independently of each other, as is usually done in the literature, we consider in this paper an integrated model in which the locations of the machines define release times for the jobs. Polynomial-time algorithms are presented for single-machine problems in which the scheduling part can be solved by the earliest-release-time rule.
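The earliest-release-time rule itself is easy to state: sequence the jobs in nondecreasing order of their release times and start each one as soon as both the machine and the job are available. A sketch with hypothetical data (in the paper's integrated model, the release times would themselves be derived from the chosen machine locations):

```python
def ert_schedule(jobs):
    """jobs: list of (release_time, processing_time) pairs.
    Earliest-release-time rule on a single machine: process jobs in order of
    release, starting each as soon as machine and job are both ready.
    Returns (processing order, start times, makespan)."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    t, starts = 0.0, {}
    for j in order:
        r, p = jobs[j]
        t = max(t, r)       # wait for the job's release if the machine is idle
        starts[j] = t
        t += p
    return order, starts, t

jobs = [(3, 2), (0, 4), (5, 1)]     # (release time, processing time)
order, starts, cmax = ert_schedule(jobs)
```

Here job 1 (released at 0) runs first, job 0 starts at time 4 right after it, and job 2 starts at time 6, giving a makespan of 7.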

The development of recombinant DNA techniques opened a new era for protein production both in scientific research and in industrial application. However, the purification of recombinant proteins is very often quite difficult and inefficient. Therefore, we tried to employ novel techniques for the expression and purification of three pharmacologically interesting proteins: the plant toxin gelonin; a fusion protein of gelonin and the extracellular domain of the alpha-subunit of the acetylcholine receptor (gelonin-AchR); and human neurotrophin 3 (hNT3). Recombinant gelonin, the acetylcholine receptor alpha-subunit and their fusion product, gelonin-AchR, were constructed and expressed. The gelonin gene, a 753 bp polynucleotide, was chemically synthesized by Ya-Wei Shi et al. and was kindly provided to us. The gene was first inserted into the vector pUC118, yielding pUC-gel. It was subsequently transferred into pET28a, and pET-gel was expressed in E. coli. The product, gelonin, was soluble and was purified in two steps, showing a homogeneous band corresponding to 28 kD on SDS-PAGE. The expression of the extracellular domain of the alpha-subunit of AchR always led to insoluble aggregates, and even upon coexpression with the chaperonin GroESL, only very small and hardly reproducible amounts of soluble material were formed. Therefore, recombinant AchR-gelonin was cloned and expressed in the same host. The corresponding fusion protein, gelonin-AchR, again formed aggregates and had to be solubilized in 6 M Gu-HCl for further purification and refolding. The final product, however, was recognized by several monoclonal antibodies directed against the extracellular domain of the alpha-subunit of AchR as well as by a polyclonal serum against gelonin. Expression and purification of recombinant hNT3 was achieved by the use of a protein self-splicing system. Based on the reported hNT3 DNA sequence, a 380 bp fragment corresponding to a 14 kD protein was amplified from genomic DNA of human whole blood by PCR.
The DNA fragment was cloned into the pTXB1 vector, which contains a DNA fragment of intein and a chitin-binding domain (CBD). A further construct, pJLA-hNT3, is temperature-inducible. Both constructs expressed the target protein, hNT3-intein-CBD, in E. coli upon induction with IPTG or temperature, however as aggregates. After denaturation and renaturation, the soluble fusion protein was slowly loaded onto an affinity column of chitin beads. A 14 kD hNT3 could be isolated after cleavage with DTT either at 4 °C or 25 °C for 48 h. Based on nerve fiber outgrowth of the dorsal root ganglia of chicken embryos, both hNT3-intein-CBD and hNT3 itself exhibit almost the same biological activity.

In the present work, we investigated how to correct the questionable normality, linearity and quadratic-approximation assumptions underlying existing Value-at-Risk methodologies. In order to also take into account the skewness, the heavy-tailedness and the stochastic volatility of the market values of financial instruments, the constant volatility hypothesis widely used by existing Value-at-Risk approaches has been investigated and corrected, and the tails of the financial return distributions have been handled via Generalized Pareto or Extreme Value distributions. Artificial neural networks have been combined with extreme value theory in order to build consistent and nonparametric Value-at-Risk measures without the need to make any of the questionable assumptions specified above. For that, either autoregressive models (AR-GARCH) have been used, or the direct characterization of conditional quantiles due to Bassett and Koenker [1978] and Smith [1987]. In order to build consistent and nonparametric Value-at-Risk estimates, we have proved some new results extending White's artificial neural network denseness results to unbounded random variables, and we provide a generalisation of the Bernstein inequality, which is needed to establish the consistency of our new Value-at-Risk estimates. For an accurate estimation of the quantile of the unexpected returns, Generalized Pareto and Extreme Value distributions have been used. The new artificial neural network denseness results enable us to build consistent, asymptotically normal and nonparametric estimates of conditional means and stochastic volatilities. The denseness results use the Sobolev metric space L^m(mu) for some m >= 1 and some probability measure mu, and they hold for a certain subclass of square integrable functions.
The Fourier transform, the new extension of the Bernstein inequality for unbounded random variables from stationary alpha-mixing processes, combined with the new generalization of a result of White and Wooldridge [1990], have been the main tools to establish the extension of White's neural network denseness results. To illustrate the accuracy of the new denseness results, we demonstrate the applicability of the new Value-at-Risk approaches by means of three examples with real financial data, mainly from the banking sector, traded on the Frankfurt Stock Exchange.
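One standard extreme-value building block used in such approaches is the peaks-over-threshold tail quantile estimator: estimate the tail index from the k largest losses (here with the Hill estimator, which assumes a heavy Pareto-type tail) and plug it into the tail quantile formula. A self-contained sketch on simulated Pareto losses; this is illustrative only, since the thesis combines such tail estimates with neural-network conditional models:

```python
import math
import random

def hill_var(losses, k, p):
    """Peaks-over-threshold VaR sketch: Hill estimator for the tail index
    from the k largest losses, then the tail quantile formula
    VaR_p = u * ((n/k) * (1 - p)) ** (-xi_hat)."""
    xs = sorted(losses, reverse=True)
    u = xs[k]                                             # threshold: (k+1)-th largest loss
    xi = sum(math.log(xs[i] / u) for i in range(k)) / k   # Hill tail index estimate
    n = len(losses)
    return u * ((n / k) * (1 - p)) ** (-xi), xi

random.seed(0)
# Pareto(alpha = 3) losses: true tail index 1/3, true 99% quantile 0.01**(-1/3)
losses = [random.random() ** (-1 / 3.0) for _ in range(20000)]
var99, xi_hat = hill_var(losses, k=500, p=0.99)
```

Because the estimator extrapolates from the fitted tail rather than from the empirical quantile alone, it remains usable at confidence levels where only a handful of observations fall beyond the quantile.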

Lung cancer, mainly caused by tobacco smoke, is the leading cause of cancer mortality. Large efforts in prevention and cessation have reduced smoking rates in the U.S. and other countries. Nevertheless, since 1990, rates have remained constant, and it is believed that most of those currently smoking (~25%) are addicted to nicotine and therefore unable to stop smoking. An alternative strategy to reduce lung cancer mortality is the development of chemopreventive mixtures used to reduce cancer risk. Before entering clinical trials, it is crucial to know the efficacy, toxicity and molecular mechanism by which the active compounds prevent carcinogenesis. 4-(Methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), N-nitrosonornicotine (NNN) and benzo[a]pyrene (B[a]P) are among the most carcinogenic compounds in tobacco smoke. All have been widely used as model carcinogens, and their tumorigenic activities are well established. It is believed that the formation of DNA adducts is a crucial step in carcinogenesis. NNK and NNN form 4-hydroxy-1-(3-pyridyl)-1-butanone (HPB)-releasing and methylating adducts, while B[a]P forms B[a]P-tetraol-releasing adducts. Different isothiocyanates (ITCs) are able to prevent NNK-, NNN- or B[a]P-induced tumor formation, but relatively little is known about the mechanism of these preventive effects. In this thesis, the influence of different ITCs on adduct formation from NNK plus B[a]P and from NNN was evaluated. Using an A/J mouse lung tumor model, it was first shown that the formation of HPB-releasing, O6-mG and B[a]P-tetraol-releasing adducts was not affected when NNK and B[a]P were given by gavage, individually or in combination. Using the same model, the effects of different mixtures of PEITC and BITC, given by gavage or in the diet, on DNA adduct formation were evaluated. Dietary treatment with phenethyl isothiocyanate (PEITC) or PEITC plus benzyl isothiocyanate (BITC) reduced levels of HPB-releasing adducts by 40-50%.
This is consistent with the previously shown 40% inhibition of tumor multiplicity for the same treatment. In the gavage treatments with ITCs, PEITC appeared to reduce HPB-releasing DNA adducts, while BITC counteracted these effects. Levels of O6-mG were minimally affected by any of the treatments. Levels of B[a]P-tetraol-releasing adducts were reduced by gavaged PEITC and BITC 120 h after the last carcinogen treatment, while dietary treatment had no effect. We then extended our investigation to F-344 rats, using a similar ITC treatment protocol as in the mouse model. NNK was given in the drinking water and B[a]P in the diet. Dietary PEITC reduced the formation of HPB-releasing globin and DNA adducts in lung but not in liver, while levels of B[a]P-tetraol-releasing adducts were unaffected. Additionally, the effects of dietary PEITC, 3-phenylpropyl isothiocyanate, and their N-acetylcysteine conjugates on adducts from NNN in drinking water were evaluated in rat esophageal DNA and globin. Using a protocol known to inhibit NNN-induced esophageal tumorigenesis, the levels of HPB-releasing adducts were unaffected by the ITC treatment. The observations that dietary PEITC inhibited the formation of HPB-releasing DNA adducts only in mice, where the control levels were above 1 fmol/µg DNA, and that adduct levels in rat lung were reduced to levels seen in liver, led to the conclusion that in mice and rats there are at least two activation pathways of NNK. One is PEITC-sensitive and responsible for the high adduct levels in lung and presumably also for the higher carcinogenicity of NNK in lung. The other is PEITC-insensitive and responsible for the remaining adduct levels and tumorigenicity. In conclusion, our results demonstrated that the preventive mechanism by which ITCs inhibit carcinogenesis is only in part due to inhibition of DNA adduct formation and that other mechanisms are involved.
There is a large body of evidence indicating that induction of apoptosis may be a mechanism by which ITCs prevent tumor formation, but further studies are required.

Image synthesis often requires the Monte Carlo estimation of integrals. Based on a generalized concept of stratification we present an efficient sampling scheme that consistently outperforms previous techniques. This is achieved by assembling sampling patterns that are stratified in the sense of jittered sampling and N-rooks sampling at the same time. The faster convergence and improved anti-aliasing are demonstrated by numerical experiments.
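One classical way to combine the two stratification properties mentioned here is multi-jittered sampling in the sense of Chiu, Shirley and Wang: N = n^2 points that fall one per cell of the coarse n x n grid (jittered) and one per row and column of the fine N x N grid (N-rooks). Whether this matches the paper's exact construction is not claimed; the sketch below shows the canonical construction with coordinate shuffles that preserve both properties:

```python
import random

def multi_jitter(n, rng):
    """n*n points, jittered on the coarse n x n grid and N-rooks on the
    fine (n*n) x (n*n) grid (canonical arrangement plus shuffles)."""
    pts = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            pts[i][j] = [(i + (j + rng.random()) / n) / n,
                         (j + (i + rng.random()) / n) / n]
    # shuffle x-coordinates among the samples of each coarse column i:
    # keeps every x in column i and permutes the occupied fine columns
    for i in range(n):
        perm = list(range(n))
        rng.shuffle(perm)
        for j in range(n):
            pts[i][j][0], pts[i][perm[j]][0] = pts[i][perm[j]][0], pts[i][j][0]
    # shuffle y-coordinates among the samples of each coarse row j
    for j in range(n):
        perm = list(range(n))
        rng.shuffle(perm)
        for i in range(n):
            pts[i][j][1], pts[perm[i]][j][1] = pts[perm[i]][j][1], pts[i][j][1]
    return [p for col in pts for p in col]

rng = random.Random(42)
n = 4
samples = multi_jitter(n, rng)
N = n * n
coarse = {(int(x * n), int(y * n)) for x, y in samples}   # coarse cells hit
cols = {int(x * N) for x, y in samples}                   # fine columns hit
rows = {int(y * N) for x, y in samples}                   # fine rows hit
```

Every coarse cell and every fine row and column receives exactly one sample, which is precisely the double stratification the abstract describes.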

We present an algorithm for determining quadrature rules for computing the direct illumination of predominantly diffuse objects by high dynamic range images. The new method precisely reproduces fine shadow detail, is much more efficient than Monte Carlo integration, and does not require any manual intervention.
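A common baseline that such quadrature rules are measured against is Monte Carlo importance sampling of the environment map: pick pixels with probability proportional to their luminance via a discrete CDF. A small sketch (the 1 x 5 "image" and its values are of course made up):

```python
import bisect
import random

def build_cdf(luminance):
    """Running sums of pixel luminance: a discrete, unnormalized CDF."""
    total, cdf = 0.0, []
    for lum in luminance:
        total += lum
        cdf.append(total)
    return cdf, total

def sample_pixel(cdf, total, rng):
    """Pixel index drawn with probability proportional to its luminance;
    bisect_right skips zero-luminance pixels correctly."""
    return bisect.bisect_right(cdf, rng.random() * total)

rng = random.Random(7)
lum = [0.0, 1.0, 3.0, 0.0, 4.0]     # toy 1 x 5 'environment map'
cdf, total = build_cdf(lum)
counts = [0] * len(lum)
for _ in range(80000):
    counts[sample_pixel(cdf, total, rng)] += 1
```

Dark pixels are never chosen and bright ones are chosen in proportion to their luminance; the paper's deterministic quadrature rules replace this stochastic selection with a fixed, noise-free set of directions.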

This thesis deals with the investigation of the dynamics of optically excited (hot) electrons in thin and ultra-thin layers. The main interest concerns the time behaviour of the dissipation of energy and momentum of the excited electrons; the relevant relaxation times lie in the femtosecond regime. Two-photon photoemission is known to be an adequate tool for analysing such dynamical processes in real time. This work expands the knowledge in the fields of electron relaxation in ultra-thin silver layers on different substrates, as well as in adsorbate states in the bandgap of a semiconductor, and it contributes to the understanding of spin transport through an interface between a metal and a semiconductor. The primary goal was to test the theoretical prediction that the electron relaxation behaviour changes when the crystal is reduced in at least one direction, i.e. when its shape is altered from a 3d bulk to a 2d (ultra-thin) layer; below a certain layer thickness, the electron gas becomes two-dimensional. This behaviour could indeed be demonstrated in this work. In an approximately 3 nm thin silver layer on graphite, the hot electrons show a jump to longer relaxation times over the whole accessible energy range. It is the first time that the temporal evolution of the relaxation of excited electrons could be observed during the transition from a 3d to a 2d system. In order to reduce or even eliminate the influence of the substrate, the system of silver on the semiconductor GaAs, which has a bandgap of 1.5 eV at the Gamma point, was investigated. The observations of the relaxation behaviour of hot electrons in different ultra-thin silver layers on this semiconductor showed that at metal-insulator junctions, plasmons in the silver and at the interface, as well as cascading electrons from higher-lying energies, have a large influence on the dissipation of momentum and energy.
This comes mainly from the band bending of the semiconductor and from the electrons which are excited in GaAs. The limitation of the silver layer on GaAs in one direction led to the expected formation of quantum well states (QWS) in the bandgap. These adsorbate states have quantised energy and momentum values, which are directly connected to the layer thickness and the standing electron wave therein. With the experiments of this work, published values could not only be completed and confirmed, but the time evolution of such a QWS could also be determined. It turned out that this QWS can only be filled by electrons which move from the lower edge of the conduction band of the semiconductor into the silver and undergo cascading steps there. By means of the system silver on GaAs, and the known fact that excitation of electrons in GaAs with circularly polarised light of 1.5 eV produces spin-polarised electrons in the conduction band, it became possible to contribute to the topical question of spin injection. The main target of spin injection is the transfer of spin-polarised electrons out of a ferromagnet into a semiconductor, in order to develop spin-dependent switches and memories. It could be demonstrated here that spin-polarised electrons from GaAs can move through the interface into silver and be photoemitted from there, with their spin still detectable. As a third system, ultra-thin silver layers were deposited on the insulator MgO, which has a bandgap of 7.8 eV. In this system, too, a change in the relaxation time could be recognized when reducing the silver layer from thick to ultra-thin. Additionally, an extremely long relaxation time emerged at a layer thickness of 0.6–1.2 nm.
This time is an order of magnitude longer than for thick films, a consequence of two factors: first, the reduction of the phase space due to the confinement of the electron gas in the z-direction, and second, the slower thermalisation of the electron gas due to fewer accessible scattering partners.

Contributions to the application of adaptive antennas and CDMA code pooling in the TD-CDMA downlink
(2002)

TD (Time Division)-CDMA is one of the partial standards adopted by 3GPP (3rd Generation Partnership Project) for 3rd Generation (3G) mobile radio systems. An important issue when designing 3G mobile radio systems is the efficient use of the available frequency spectrum, that is, the achievement of a spectrum efficiency as high as possible. It is well known that the spectrum efficiency can be enhanced by utilizing multi-element antennas instead of single-element antennas at the base station (BS). Concerning the uplink of TD-CDMA, the benefits achievable by multi-element BS antennas have been quantitatively studied to a satisfactory extent. However, corresponding studies for the downlink are still missing; this thesis aims to contribute to filling this gap. For near-to-reality directional mobile radio scenarios, the TD-CDMA downlink utilizing multi-element antennas at the BS is investigated both on the system level and on the link level. The system level investigations show how the carrier-to-interference ratio can be improved by applying such antennas. As the result of the link level investigations, which rely on the detection scheme Joint Detection (JD), the improvement of the bit error rate achieved by utilizing multi-element antennas at the BS can be quantified. Concerning the link level of TD-CDMA, a number of improvements are proposed which allow considerable performance enhancement of the TD-CDMA downlink in connection with multi-element BS antennas.
These improvements include:

- the concept of partial joint detection (PJD), in which at each mobile station (MS) only a subset of the arriving CDMA signals, including those of interest to this MS, is jointly detected,
- a blind channel estimation algorithm,
- CDMA code pooling, that is, assigning more than one CDMA code to certain connections in order to offer these users higher data rates,
- maximizing the Shannon transmission capacity by an interleaving concept termed CDMA code interleaving and by advantageously selecting the assignment of CDMA codes to mobile radio channels,
- specific power control schemes, which tackle the problem of different transmission qualities of the CDMA codes.

As a comprehensive illustration of the advantages achievable by multi-element BS antennas in the TD-CDMA downlink, quantitative results concerning the spectrum efficiency for different numbers of antenna elements at the BS conclude the thesis.
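Joint detection of this kind can be illustrated by a zero-forcing block linear equalizer: stack the convolutions of the users' spreading codes with their channel impulse responses into a system matrix A, so the received chip sequence is e = A d + n, and solve a least-squares problem for all data symbols of all users at once. The sketch below uses made-up dimensions and random codes/channels and omits noise, receive antennas and the PJD subset selection:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy TD-CDMA-like model: K users, N symbols each, spreading factor Q,
# channel length W (all values hypothetical).
K, N, Q, W = 2, 4, 8, 3
codes = rng.choice([-1.0, 1.0], size=(K, Q))          # spreading codes
h = rng.standard_normal((K, W)) * [1.0, 0.5, 0.25]    # channel impulse responses

# Combined signatures b_k = code_k convolved with channel_k, length Q+W-1
b = np.array([np.convolve(codes[k], h[k]) for k in range(K)])

# System matrix A: the column for symbol n of user k places b_k at chip offset n*Q
L = (N - 1) * Q + Q + W - 1
A = np.zeros((L, K * N))
for k in range(K):
    for n in range(N):
        A[n * Q: n * Q + Q + W - 1, k * N + n] = b[k]

d = rng.choice([-1.0, 1.0], size=K * N)   # transmitted data symbols
e = A @ d                                  # noiseless received chip sequence

# Zero-forcing block linear equalizer (joint detection of all symbols at once)
d_hat = np.linalg.solve(A.T @ A, A.T @ e)
```

In the noiseless case the equalizer recovers the symbols exactly; with noise, the same normal-equations structure is where interference covariance information can be brought in.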

Microsystem technology has been a fast evolving field over the last few years. Its ability to handle volumes in the sub-microliter range makes it very interesting for potential applications in fields such as biology, medicine and pharmaceutical research. However, the use of micro-fabricated devices for the analysis of liquid biological samples still has to prove its applicability for many particular demands of basic research. This is particularly true for samples consisting of complex protein mixtures. The presented study therefore aimed at evaluating whether a commonly used glass-coating technique from the field of micro-fluidic technology can be used to fabricate an analysis system for molecular biology. It was ultimately motivated by the demand for a technique that allows the analysis of biological samples at the single-cell level. Gene expression at the transcription level is initiated and regulated by DNA-binding proteins. To fully understand these regulatory processes, it is necessary to monitor the interaction of specific transcription factors with other elements - proteins as well as DNA sites - in living cells. One well-established method to perform such analysis is the Chromatin Immunoprecipitation (ChIP) assay. To map protein-DNA interactions, living cells are treated with formaldehyde in vivo to cross-link DNA-binding proteins to their resident sites. The chromatin is then broken into small fragments, and specific antibodies against the protein of interest are used to immunopurify the chromatin fragments to which those factors are bound. After purification, the associated DNA can be detected and analyzed using the Polymerase Chain Reaction (PCR). Current ChIP technology is limited in that it needs a relatively large number of cells, while there is increasing interest in monitoring DNA-protein interactions in very few, if not single, cells. Most notably this is the case in research on early organism development (embryogenesis).
To investigate whether microsystem technology can be used to analyze DNA-protein complexes from samples containing chromatin from only a few cells, a new setup for fluid transport in glass capillaries of 75 µm inner diameter has been developed, forming an array of micro-columns for parallel affinity chromatography. The inner capillary walls were antibody-coated using a silane-based protocol. The remaining surface was made chemically inert by saturating free binding sites with suitable biomolecules. Variations of this protocol were tested. Furthermore, the sensitivity of the PCR method for detecting immunoprecipitated protein-DNA complexes was improved, resulting in the reliable detection of about 100 DNA fragments from chromatin. The aim of the study was to successively decrease the amount of analyzed chromatin in order to investigate the lower limits of this technology with regard to sensitivity and specificity of detection. The Drosophila GAGA transcription factor was used as an established model system. The protein has already been analyzed in several large-scale ChIP experiments, and antibodies of excellent specificity are available. The results of the study revealed that this approach is not easily applicable to "real-world" biological samples with regard to volume reduction and specificity. In particular, material that non-specifically adsorbed to capillary surfaces outweighed the specific antibody-antigen interaction for which the system was designed. It became clear that complex biological structures, such as chromatin-protein compositions, are not as easily accessible by techniques based on chemically modified glass surfaces as pre-purified samples. In the case of the investigated system, it became evident that there is a need for further research beyond the scope of this work. It is necessary to develop novel coatings and materials to prevent non-specific adsorption.
In addition to improving existing techniques, fundamentally new concepts, such as microstructures in biocompatible polymers or liquid transport on hydrophobic stripes on planar substrates to minimize surface contact, may also help to advance the miniaturization of biological experiments.

In the last decade, injection molding of long-fiber reinforced thermoplastics
(LFT) has been established as a low-cost, high volume technique for manufacturing
parts with complex shape without any post-treatment [1–3]. Applications
are mainly found in the automotive industry with a volume annually
growing by 10% to 15% [4].
While first applications were based on polyamide (PA6 and PA6.6), the market
share of glass fiber reinforced polypropylene (PP) is growing due to cost savings
and ease of processing. With the use of polypropylene, different processing
techniques such as gas-assisted injection molding [5] or injection compression
molding [6] have emerged in addition to injection molding [7, 8].
In order to overcome or justify higher materials costs when compared to short
fiber reinforced thermoplastics, the manufacturing techniques for LFT pellets
with fiber length greater than 10mm have evolved starting from pultrusion by
improving impregnation and throughput [9] or by direct addition of fiber strands
in the mold [10–12].
The benefit of long glass fiber reinforcement either in PP or PA is mainly due
to the enhanced resistance to fiber pull-out resulting in an increase in impact
properties and strength [13–19], even at low temperature levels [20]. Creep
and fatigue resistance are also substantially improved [21, 22].
The performance of fiber reinforced thermoplastics manufactured by injection
molding strongly depends on the flow-induced microstructure which is
driven by materials composition, processing conditions and part geometry.
The anisotropic microstructure is characterized by fiber fraction and dispersion,
fiber length and fiber orientation.
Facing the complexity of this processing technique, simulation becomes a precious
tool already in the concept phase for parts manufactured by injection
molding. Process simulation supports decisions with respect to choice of concepts
and materials. The part design is determined in terms of mold filling
including location of gates, vents and weld lines. Tool design requires the
determination of melt feeding, logistics and mold heating. Subsequently, performance
including prediction of shrinkage and warpage as well as structural
analysis is evaluated [23].
While simulation based on a two-dimensional representation of three-dimensional
part geometry has been used extensively during the last two decades, the
complexity of the parts as well as the trend towards solid modelling in CAD
and CAE demands the step towards three-dimensional process simulation.

The scope of this work is the prediction of flow-induced microstructure during
injection molding of long glass fiber reinforced polypropylene using three-dimensional
process simulation. Modelling of the injection molding process in
three dimensions is supported experimentally by rheological characterization
in both shear and extensional flow and by two- and three-dimensional evaluation
of microstructure.
In Chapter 2 the fundamentals of rheometry and rheology are presented with
respect to long fiber reinforced thermoplastics. The influence of process and
material parameters on microstructure is described, and approaches for modelling
the state of the microstructure and its dynamics are discussed.
Chapter 3 introduces a rheometric technique that allows rheological characterization
of polymer melts under processing conditions as encountered during
manufacturing. Using this rheometer, both the shear and the extensional viscosity of
long glass fiber reinforced polypropylene are measured with respect to material
composition, processing conditions and cavity geometry.
Chapter 4 contains the evaluation of the microstructure of long glass fiber reinforced
polypropylene in terms of two-dimensional fiber orientation and its dependence
on material parameters and processing conditions. For the evaluation
of the three-dimensional microstructure, a technique based on X-ray tomography
is introduced.
In Chapter 5, the modelling of microstructural dynamics is addressed. One-way
coupling of the interactions between fluid and fibers is described macroscopically.
The flow behavior of fibers in the vicinity of the cavity walls is evaluated experimentally.
From these observations, a model for the treatment of fiber-wall interaction
in numerical simulation is proposed.
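A widely used macroscopic, one-way coupled description of fiber orientation dynamics is the Folgar-Tucker evolution equation for the second-order orientation tensor. The thesis does not disclose its actual model equations here, so the following is only an illustrative sketch: an explicit-Euler integration of the Folgar-Tucker equation with a quadratic closure in simple shear, with made-up parameter values (CI, lam, dt).

```python
import numpy as np

def folgar_tucker_step(A, L, CI, dt, lam=1.0):
    """One explicit-Euler step of the Folgar-Tucker equation for the
    second-order fiber orientation tensor A under velocity gradient L.
    The fourth-order tensor contraction A4:D is approximated by the
    quadratic closure A*(A:D)."""
    D = 0.5 * (L + L.T)                  # rate-of-deformation tensor
    W = 0.5 * (L - L.T)                  # vorticity tensor
    gdot = np.sqrt(2.0 * np.sum(D * D))  # scalar shear rate
    A4D = A * np.sum(A * D)              # quadratic closure
    dA = (W @ A - A @ W
          + lam * (D @ A + A @ D - 2.0 * A4D)
          + 2.0 * CI * gdot * (np.eye(3) / 3.0 - A))
    return A + dt * dA

# simple shear flow vx = gdot * y, starting from isotropic orientation
L = np.zeros((3, 3)); L[0, 1] = 1.0
A = np.eye(3) / 3.0
for _ in range(2000):
    A = folgar_tucker_step(A, L, CI=0.01, dt=0.01)
print(np.round(np.diag(A), 3))  # fibers align towards the flow (x) direction
```

The update conserves the trace of A, and the interaction coefficient CI controls how strongly rotary diffusion drives the orientation back towards isotropy.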
Chapter 6 presents the application of three-dimensional simulation of the injection
molding process. Mold filling simulation is performed using a commercial
code, while the prediction of 3D fiber orientation is based on a proprietary module.
The rheological and thermal properties derived in Chapter 3 are tested by
simulating the experiments and comparing the predicted pressure and temperature
profiles with the recorded results. The performance of the fiber orientation
prediction is verified using analytical solutions of test examples from the literature.
The capability of three-dimensional simulation is demonstrated by simulating
mold filling and predicting fiber orientation for an automotive
part.

In this paper we consider short-term storage systems. We analyze presorting strategies to improve the efficiency of these storage systems. The presorting task is called the Batch PreSorting Problem (BPSP). The BPSP is a variation of an assignment problem, i.e., it has an assignment problem kernel and some additional constraints. We present different types of these presorting problems, introduce mathematical programming formulations, and prove NP-completeness for one type of the BPSP. Experiments are carried out to compare the different model formulations and to investigate the behavior of these models.
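The assignment problem kernel underlying the BPSP can be illustrated with a tiny brute-force solver. The cost matrix below is a hypothetical stand-in (cost of placing item i in slot j), not data from the paper, and the actual BPSP adds further constraints on top of this kernel.

```python
from itertools import permutations

def solve_assignment(cost):
    """Brute-force the classic assignment problem: map each item i to
    exactly one slot p[i] so that the total cost is minimized."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for p in permutations(range(n)):
        total = sum(cost[i][p[i]] for i in range(n))
        if total < best:
            best, best_perm = total, p
    return best, best_perm

# hypothetical 3-item example: cost[i][j] = cost of storing item i in slot j
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(solve_assignment(cost))  # → (5, (1, 0, 2))
```

Enumerating all n! permutations is only viable for toy instances; the pure assignment problem is solvable in polynomial time (e.g. by the Hungarian method), and it is the additional batch constraints that make one BPSP variant NP-complete.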

Annual Report 2001
(2002)

While closed-form solutions for vanilla options in the presence of stochastic volatility have existed for nearly a decade, practitioners still depend on numerical methods, in particular the Finite Difference and Monte Carlo methods, in the case of double barrier options. It was only recently that Lipton proposed (semi-)analytical solutions for this special class of path-dependent options. Although he presents two different approaches to derive these solutions, in both cases he restricts himself to a less general model, namely one in which the correlation and the interest rate differential are assumed to be zero. Naturally the question arises whether these methods are still applicable to the general stochastic volatility model without these restrictions. In this paper we show that such a generalization fails for both methods. We explain why this is the case and discuss the consequences of our results.
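As a rough illustration of the Monte Carlo fallback mentioned above (the paper itself pursues analytical solutions), the following sketch prices a double knock-out call under Heston-type stochastic volatility with a full-truncation Euler scheme. All parameter values are made up for the example, and barrier crossings are only checked at the discrete time steps.

```python
import math, random

def heston_double_barrier_call(S0, K, lower, upper, T, r,
                               v0, kappa, theta, xi, rho,
                               n_paths=5000, n_steps=200, seed=42):
    """Monte Carlo price of a double knock-out call under Heston dynamics:
      dS = r*S dt + sqrt(v)*S dW1,  dv = kappa*(theta - v) dt + xi*sqrt(v) dW2,
      corr(dW1, dW2) = rho.  Full-truncation Euler discretization."""
    rng = random.Random(seed)
    dt = T / n_steps
    payoff_sum = 0.0
    for _ in range(n_paths):
        S, v, alive = S0, v0, True
        for _ in range(n_steps):
            z1 = rng.gauss(0.0, 1.0)
            z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
            vp = max(v, 0.0)  # full truncation keeps the variance usable
            S *= math.exp((r - 0.5 * vp) * dt + math.sqrt(vp * dt) * z1)
            v += kappa * (theta - vp) * dt + xi * math.sqrt(vp * dt) * z2
            if S <= lower or S >= upper:  # knocked out at a barrier
                alive = False
                break
        if alive:
            payoff_sum += max(S - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

price = heston_double_barrier_call(S0=100, K=100, lower=80, upper=120,
                                   T=0.5, r=0.02, v0=0.04, kappa=1.5,
                                   theta=0.04, xi=0.3, rho=-0.5)
print(round(price, 4))  # a small positive value, well below the vanilla price
```

The nested path loop makes plain Monte Carlo slow and its discrete barrier monitoring biased, which is precisely why (semi-)analytical solutions of the kind Lipton proposed are attractive when they exist.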