### Refine

#### Year of publication

#### Document Type

- Doctoral Thesis (608)

#### Language

- English (608)

#### Keywords

- Visualisierung (13)
- finite element method (8)
- Finite-Elemente-Methode (7)
- Algebraische Geometrie (6)
- Numerische Strömungssimulation (6)
- Visualization (6)
- Computergraphik (5)
- Finanzmathematik (5)
- Mobilfunk (5)
- Optimization (5)

#### Faculty / Organisational entity

- Fachbereich Mathematik (215)
- Fachbereich Informatik (130)
- Fachbereich Maschinenbau und Verfahrenstechnik (95)
- Fachbereich Chemie (58)
- Fachbereich Elektrotechnik und Informationstechnik (45)
- Fachbereich Biologie (27)
- Fachbereich Sozialwissenschaften (15)
- Fachbereich Wirtschaftswissenschaften (8)
- Fachbereich ARUBI (5)
- Fachbereich Physik (5)
- Fraunhofer (ITWM) (4)
- Fachbereich Raum- und Umweltplanung (3)
- Universitätsbibliothek (1)

Termination of Rewriting
(1994)

More and more, term rewriting systems are applied in computer science as well as in mathematics. They are based on directed equations which may be used as non-deterministic functional programs. Termination is a key property for computing with term rewriting systems. In this thesis, we deal with different classes of so-called simplification orderings which are able to prove the termination of term rewriting systems. Above all, we focus on the problem of applying these termination methods to examples occurring in practice. We introduce a formalism that allows clear representations of orderings. The power of classical simplification orderings, namely recursive path orderings, path and decomposition orderings, Knuth-Bendix orderings and polynomial orderings, is improved. Further, we restrict these orderings such that they are compatible with underlying AC-theories, by extending well-known methods as well as by developing new techniques. For automatically generating all these orderings, heuristic-based algorithms are given. A comparison of these orderings with respect to their power and their time complexity concludes the theoretical part of this thesis. Finally, a detailed statistical evaluation of examples as well as a brief introduction to the design of a software tool integrating the specified approaches is given.
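The abstract only names the classical orderings; as a rough illustration, the following sketch (not taken from the thesis; the term encoding and precedence are invented for the example) implements a lexicographic path ordering, one member of the recursive path ordering family mentioned above, and uses it to orient the distributivity rule:

```python
# Minimal sketch of a lexicographic path ordering (LPO), one of the
# recursive path orderings mentioned in the abstract. Terms are nested
# tuples ('f', arg1, ...); variables are plain strings.

def occurs(x, t):
    """True if variable x occurs in term t."""
    return t == x if isinstance(t, str) else any(occurs(x, a) for a in t[1:])

def lpo_gt(s, t, prec):
    """True if s > t in the LPO induced by the precedence dict prec."""
    if isinstance(s, str):                       # a variable is never greater
        return False
    if isinstance(t, str):                       # s > x iff x occurs in s
        return occurs(t, s)
    f, ss, g, ts = s[0], s[1:], t[0], t[1:]
    # (1) some argument of s already dominates t
    if any(si == t or lpo_gt(si, t, prec) for si in ss):
        return True
    # (2) f has higher precedence and s dominates every argument of t
    if prec[f] > prec[g]:
        return all(lpo_gt(s, ti, prec) for ti in ts)
    # (3) equal head symbols: compare arguments lexicographically
    if f == g:
        for si, ti in zip(ss, ts):
            if si == ti:
                continue
            return lpo_gt(si, ti, prec) and all(lpo_gt(s, tj, prec) for tj in ts)
    return False

# Orient distributivity x*(y+z) -> x*y + x*z with precedence * > +:
prec = {'*': 2, '+': 1}
lhs = ('*', 'x', ('+', 'y', 'z'))
rhs = ('+', ('*', 'x', 'y'), ('*', 'x', 'z'))
print(lpo_gt(lhs, rhs, prec))   # True, so the rule is terminating under LPO
```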

In this dissertation, the concept of Gröbner bases is generalized to finitely generated monoid and group rings. Reduction methods are used both to represent the monoid and group elements and to describe the right-ideal congruence in the corresponding monoid and group rings. Since monoids in general, and groups in particular, no longer admit admissible orderings, fundamental problems arise when defining a suitable reduction relation: on the one hand, it is difficult to guarantee the termination of a reduction relation; on the other hand, reduction steps are no longer compatible with multiplication, so reductions no longer necessarily describe a right-ideal congruence. This thesis presents various ways of defining reduction relations and examines them with respect to the problems described. The concept of saturation, i.e., extending a set of polynomials in such a way that the right-ideal congruence it generates can be captured by reduction, is used to characterize Gröbner bases with respect to the various reductions by means of s-polynomials. Using these concepts, it has been possible, for special classes of monoids, e.g. finite, commutative or free ones, and for various classes of groups, e.g. finite, free, plain, context-free or nilpotent ones, to exploit structural properties in order to define special reduction relations and to develop terminating algorithms for computing Gröbner bases with respect to these reduction relations.
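For orientation, the classical commutative situation that the thesis generalizes fits in a few lines: s-polynomials cancel leading terms, and Buchberger's algorithm completes a generating set to a Gröbner basis. A minimal sketch in Q[x, y] using sympy (the two polynomials are arbitrary examples; the monoid and group ring setting of the thesis is not covered by this code):

```python
# Classical commutative Groebner bases, the starting point generalized by
# the thesis to monoid and group rings. Example polynomials are arbitrary.
from sympy import symbols, groebner, lcm, LM, LT, expand

x, y = symbols('x y')
f1 = x**2*y - 1
f2 = x*y**2 - x

# The s-polynomial cancels the leading terms of f1 and f2 (lex order).
m = lcm(LM(f1, x, y, order='lex'), LM(f2, x, y, order='lex'))
spoly = expand(m / LT(f1, x, y, order='lex') * f1
             - m / LT(f2, x, y, order='lex') * f2)
print(spoly)                                  # x**2 - y

# Buchberger's algorithm completes {f1, f2} to a Groebner basis.
print(groebner([f1, f2], x, y, order='lex'))
```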

Structure and Construction of Instanton Bundles on P3

At present, the standardization of third generation (3G) mobile radio systems is the subject of worldwide research activities. These systems will cope with the market demand for high data rate services and the system requirement for flexibility concerning the offered services and the transmission qualities. However, there will be deficiencies with respect to high capacity if 3G mobile radio systems exclusively use single antennas. A very promising technique for increasing the capacity of 3G mobile radio systems is the application of adaptive antennas. In this thesis, the benefits of using adaptive antennas are investigated for 3G mobile radio systems based on Time Division CDMA (TD-CDMA), which forms part of the European 3G mobile radio air interface standard adopted by ETSI, and is intensively studied within the standardization activities towards a worldwide 3G air interface standard directed by the 3GPP (3rd Generation Partnership Project). One of the most important issues related to adaptive antennas is the analysis of the benefits of using adaptive antennas compared to single antennas. In this thesis, these benefits are explained theoretically and illustrated by computer simulation results for both data detection, which is performed according to the joint detection principle, and channel estimation, which is applied according to the Steiner estimator, in the TD-CDMA uplink. The theoretical explanations are based on well-known solved mathematical problems. The simulation results illustrating the benefits of adaptive antennas are produced by employing a novel simulation concept, which offers a considerable reduction of the simulation time and complexity, as well as increased flexibility concerning the use of different system parameters, compared to the existing simulation concepts for TD-CDMA. Furthermore, three novel techniques are presented which can be used in systems with adaptive antennas for additionally improving the system performance compared to single antennas. These techniques concern the problems of code-channel mismatch, of user separation in the spatial domain, and of intercell interference, which, as shown in the thesis, play a critical role in the performance of TD-CDMA with adaptive antennas. Finally, a novel approach for illustrating the performance differences between the uplink and downlink of TD-CDMA based mobile radio systems in a straightforward manner is presented. Since a cellular mobile radio system with adaptive antennas is considered, the ultimate goal is the investigation of the overall system efficiency rather than the efficiency of a single link. In this thesis, the efficiency of TD-CDMA is evaluated through its spectrum efficiency and capacity, which are two closely related performance measures for cellular mobile radio systems. Compared to the use of single antennas, the use of adaptive antennas allows impressive improvements of both spectrum efficiency and capacity. Depending on the mobile radio channel model and the user velocity, improvement factors range from 6 to 10.7 for the spectrum efficiency, and from 6.7 to 12.6 for the spectrum capacity of TD-CDMA. Thus, adaptive antennas constitute a promising technique for capacity increase in future mobile communications systems.

Urban Design Guidelines have been used in Jakarta for controlling the form of the built environment. This planning instrument has been implemented in several central city redevelopment projects, particularly in superblock areas. The instrument has gained popularity and has been implemented in new development and conservation areas as well. Despite its popularity, there is no formal literature on the Indonesian Urban Design Guideline that systematically explains its contents, structure and formulation process. This dissertation attempts to explain the substance of the urban design guideline and the way to control its implementation. Various streams of urban design theories are presented and evaluated in terms of their suitability for attaining a high urbanistic quality in major Indonesian cities. The form and the practical application of this planning instrument are elaborated in a comparative investigation of similar instruments in other countries, namely the USA, Britain and Germany. A case study of a superblock development in Jakarta demonstrates the application of the urban design theories and guidelines. Currently, the role of the computer in the process of formulating urban design guidelines in Indonesia is merely that of a replacement for manual methods, particularly in the areas of worksheet calculation and design presentation. Further computer support for urban planning and design tasks has been researched in developed countries, which shows its potential in supporting the decision-making process, enabling public participation, team collaboration, documentation and publication of urban design decisions, and so on. It is hoped that computer usage in the Indonesian urban design process can catch up with the global trend of multimedia, networking (Internet/Intranet) and interactive functions, which is presented with examples from developed countries.

In this thesis, a new family of codes for use in optical high bit rate transmission systems with a direct-sequence code division multiple access scheme component was developed and its performance examined. These codes were then used as orthogonal sequences for the coding of the different wavelength channels in a hybrid OCDMA/WDMA system. The overall performance was finally compared to a pure WDMA system. The common codes known to date have the problem of needing very long sequence lengths in order to accommodate an adequate number of users. Thus, code sequence lengths of 1000 or more were necessary to reach the required bit error ratios with only about 10 simultaneous users. Such sequence lengths are unacceptable if signals with data rates higher than 100 MBit/s are to be transmitted, not to speak of the number of simultaneous users. Starting from the well-known optical orthogonal codes (OOC) and under the assumption of synchronization among the participating transmitters (justified for high bit rate WDM transmission systems), a new code family called "modified optical orthogonal codes" (MOOC) was developed by minimizing the cross-correlation products of each two sequences. By this, the number of simultaneous users could be increased by several orders of magnitude compared to the codes known so far. The obtained code sequences were then introduced into numerical simulations of an 80 GBit/s DWDM transmission system with 8 channels, each carrying a 10 GBit/s payload. Usual DWDM systems are characterized by enormous efforts to minimize the spectral spacing between the various wavelength channels. These small spacings, in combination with the high bit rates, lead to very strict demands on system components like laser diodes, filters, multiplexers etc. Continuous channel monitoring and temperature regulation of sensitive components are inevitable, but often cannot prevent drops in the bit error ratio due to aging effects or outer influences like mechanical stress. The obtained results show that, very different from the pure WDM system, by orthogonally coding adjacent wavelength channels with the proposed MOOC, the overall system performance becomes widely independent of system parameters like input powers, channel spacings and link lengths. Nonlinear effects like XPM that introduce interchannel crosstalk are effectively suppressed. Furthermore, one can entirely dispense with the bandpass filters, thus simplifying the receiver structure, which is especially interesting for broadcast networks. A DWDM system upgraded with the OCDMA subsystem shows a very robust behavior against a variety of influences.
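The MOOC construction itself is not reproduced in the abstract; as a small illustration of the quantity it minimizes, the following sketch (with toy sequences, not the codes of the thesis) computes the periodic auto- and cross-correlations of a pair of 0/1 code sequences:

```python
# Periodic auto-/cross-correlation of 0/1 code sequences, the quantity the
# MOOC construction minimizes. Toy OOC pair, not the codes of the thesis.
import numpy as np

def periodic_xcorr(a, b):
    """Correlation of a with all cyclic shifts of b (equal lengths)."""
    a, b = np.asarray(a), np.asarray(b)
    return np.array([np.dot(a, np.roll(b, k)) for k in range(len(a))])

# Two sequences of length 13 and weight 3 forming an optical orthogonal code.
c1 = np.zeros(13, dtype=int); c1[[0, 1, 4]] = 1
c2 = np.zeros(13, dtype=int); c2[[0, 2, 7]] = 1

print(periodic_xcorr(c1, c1))   # peak 3 in phase, sidelobes <= 1
print(periodic_xcorr(c1, c2))   # cross-correlation <= 1 at every shift
```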

The study of families of curves with prescribed singularities has a long tradition. Its foundations were laid by Plücker, Severi, Segre, and Zariski at the beginning of the 20th century. Leading to interesting results with applications in singularity theory and in the topology of complex algebraic curves and surfaces, it has attracted the continued attention of algebraic geometers ever since. Throughout this thesis we examine the varieties V(D,S1,...,Sr) of irreducible reduced curves in a fixed linear system |D| on a smooth projective surface S over the complex numbers having precisely r singular points of types S1,...,Sr. We are mainly interested in the following three questions: 1) Is V(D,S1,...,Sr) non-empty? 2) Is V(D,S1,...,Sr) T-smooth, that is, smooth of the expected dimension? 3) Is V(D,S1,...,Sr) irreducible? We would like to answer the questions in such a way that we present numerical conditions, depending on invariants of the divisor D and of the singularity types S1,...,Sr, which ensure a positive answer. The main conditions which we derive are of the type inv(S1)+...+inv(Sr) < a·D^2 + b·D.K + c, where inv is some invariant of singularity types, a, b and c are some constants, and K is some fixed divisor. The case where S is the projective plane has been very well studied by many authors, and on other surfaces some results for curves with nodes and cusps have been derived in the past. We, however, consider arbitrary singularity types, and the results which we derive apply to large classes of surfaces, including surfaces in projective three-space, K3-surfaces, products of curves and geometrically ruled surfaces.

Abstract
The main theme of this thesis is Graph Coloring Applications and Defining Sets in Graph Theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs such as bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct the defining sets are introduced. A review of some known kinds of defining sets in graph theory is also incorporated. In Chapter 2, the basic definitions and some relevant notations used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with existing research results, and an algorithm is introduced which makes it possible to determine a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.
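To make the central notion concrete: a set of precolored vertices is a defining set of a k-coloring if it extends to exactly one proper k-coloring of the whole graph. A brute-force check along these lines (a toy illustration; the algorithms of the thesis are not reproduced here) could look as follows:

```python
# Brute-force test whether a partial coloring is a defining set, i.e. has
# exactly one extension to a proper k-coloring. Exponential; toy use only.
from itertools import product

def proper(coloring, edges):
    return all(coloring[u] != coloring[v] for u, v in edges)

def is_defining_set(n, edges, k, partial):
    free = [v for v in range(n) if v not in partial]
    extensions = 0
    for colors in product(range(k), repeat=len(free)):
        coloring = {**partial, **dict(zip(free, colors))}
        if proper(coloring, edges):
            extensions += 1
            if extensions > 1:
                return False          # more than one extension
    return extensions == 1

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(is_defining_set(4, K4, 4, {0: 0, 1: 1, 2: 2}))  # True: vertex 3 forced
print(is_defining_set(4, K4, 4, {0: 0, 1: 1}))        # False: 2 extensions
```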

In the present work, the behavior of thermoplastic composites is examined by means of experimental and numerical investigations. The goal of these investigations is to identify and quantify the failure behavior and the energy absorption mechanisms of layered, quasi-isotropic thermoplastic fiber-reinforced composites, and to translate the insights gained into the properties and behavior of a material model for predicting the crash behavior of these materials in transient analyses. Representatives of the investigated classes are undrawn and medium-drawn circular knits and glass-mat-reinforced thermoplastics (GMT). The investigations on circular-knit glass-fiber (GF) reinforced polyethylene terephthalate (PET) were part of a research project characterizing both processability and mechanical behavior. Experiments on GMT and chopped-fiber GMT were also carried out for comparison with the knitted material and serve to confirm the behavior observed for the knit. Particular attention is paid to the influence of the specimen geometry on the results, because the crash characteristics depend substantially on the geometry of the tested specimen. For this purpose, a round hat profile was defined to investigate this influence. This particular geometry offers advantages especially with respect to the energy absorption capacity and manufacturability of thermoplastic composites (TPCs). Impact and perforation tests were performed to investigate damage propagation and to characterize the toughness of the investigated materials. Layered TPCs fail mainly in a laminate bending mode with combined intra- and interlaminar shear (transverse shear between plies, partly with transverse shear fractures in individual plies). By relating the actual failure modes to crash characteristics such as the mean crash stress, indications of the relation between material parameters and absolute energy absorption could be obtained. Numerical investigations were carried out with an explicit finite element program for the simulation of three-dimensional large deformations. With respect to the cross-sectional layup, the model consists of a mesoscopic representation that distinguishes between matrix interlayers and mesoscopic composite plies. The model geometry represents a simplified longitudinal cross-section through the specimen. Friction between the impactor and the material as well as between individual plies was taken into account. The locally prevailing strain rate and the energy and stress-strain distributions across the mesoscopic phases could also be observed. This model clearly shows the various effects arising from the heterogeneous character of the laminate and also provides hints towards explanations of these effects. Based on the results of the above investigations, a phenomenological model incorporating a-priori information on the inherent material behavior is proposed. Since the crash behavior is dominated by the heterogeneous character of the material, the phases are treated separately in the model. A simple method for determining the mesoscopic properties is discussed. To describe the behavior of the thermoplastic matrix system during crushing, a strain-rate- and temperature-dependent plasticity law would suffice. For the description of the behavior of the composite plies, a coupled plasticity and damage formulation is proposed. Such a model can describe both the plastic contribution of the matrix system and the softening caused by fiber-matrix interface failure and fiber fractures. The proposed model distinguishes between load cases of axial crushing and failure without crushing. This distinction allows an explicit modelling of the material, taking into account the specific material state and the geometry, for the extraordinary load case leading to progressive failure.

The development of recombinant DNA techniques opened a new era for protein production both in scientific research and industrial application. However, the purification of recombinant proteins is very often quite difficult and inefficient. Therefore, we tried to employ novel techniques for the expression and purification of three pharmacologically interesting proteins: the plant toxin gelonin; a fusion protein of gelonin and the extracellular domain of the α-subunit of the acetylcholine receptor (gelonin-AchR); and human neurotrophin 3 (hNT3). Recombinant gelonin, the acetylcholine receptor α-subunit and their fusion product, gelonin-AchR, were constructed and expressed. The gelonin gene, a 753 bp polynucleotide, was chemically synthesized by Ya-Wei Shi et al. and was kindly provided to us. The gene was first inserted into the vector pUC118, yielding pUC-gel. It was subsequently transferred into pET28a, and pET-gel was expressed in E. coli. The product, gelonin, was soluble and was purified in two steps, showing a homogeneous band corresponding to 28 kD on SDS-PAGE. The expression of the extracellular domain of the α-subunit of AchR always led to insoluble aggregates, and even upon coexpression with the chaperonin GroESL, only very small and hardly reproducible amounts of soluble material were formed. Therefore, recombinant AchR-gelonin was cloned and expressed in the same host. The corresponding fusion protein, gelonin-AchR, again formed aggregates and had to be solubilized in 6 M Gu-HCl for further purification and refolding. The final product, however, was recognized by several monoclonal antibodies directed against the extracellular domain of the α-subunit of AchR as well as by a polyclonal serum against gelonin. Expression and purification of recombinant hNT3 was achieved by the use of a protein self-splicing system. Based on the reported hNT3 DNA sequence, a 380 bp fragment corresponding to a 14 kD protein was amplified from genomic DNA of human whole blood by PCR. The DNA fragment was cloned into the pTXB1 vector, which contains a DNA fragment of an intein and a chitin binding domain (CBD). A further construct, pJLA-hNT3, is temperature-inducible. Both constructs expressed the target protein, hNT3-intein-CBD, in E. coli upon induction with IPTG or temperature, however as aggregates. After denaturation and renaturation, the soluble fusion protein was slowly loaded on an affinity column of chitin beads. A 14 kD hNT3 could be isolated after cleavage with DTT either at 4 °C or 25 °C for 48 h. Based on nerve fiber outgrowth from the dorsal root ganglia of chicken embryos, both hNT3-intein-CBD and hNT3 itself exhibit almost the same biological activity.

Contributions to the application of adaptive antennas and CDMA code pooling in the TD CDMA downlink
(2002)

TD (Time Division)-CDMA is one of the partial standards adopted by 3GPP (3rd Generation Partnership Project) for 3rd Generation (3G) mobile radio systems. An important issue when designing 3G mobile radio systems is the efficient use of the available frequency spectrum, that is, the achievement of a spectrum efficiency as high as possible. It is well known that the spectrum efficiency can be enhanced by utilizing multi-element antennas instead of single-element antennas at the base station (BS). Concerning the uplink of TD-CDMA, the benefits achievable by multi-element BS antennas have been quantitatively studied to a satisfactory extent. However, corresponding studies for the downlink are still missing. This thesis has the goal to make contributions to fill this lack of information. For near-to-reality directional mobile radio scenarios, the TD-CDMA downlink utilizing multi-element antennas at the BS is investigated both on the system level and on the link level. The system level investigations show how the carrier-to-interference ratio can be improved by applying such antennas. As the result of the link level investigations, which rely on the detection scheme Joint Detection (JD), the improvement of the bit error rate by utilizing multi-element antennas at the BS can be quantified. Concerning the link level of TD-CDMA, a number of improvements are proposed which allow considerable performance enhancement of the TD-CDMA downlink in connection with multi-element BS antennas. These improvements include

- the concept of partial joint detection (PJD), in which at each mobile station (MS) only a subset of the arriving CDMA signals, including those being of interest to this MS, are jointly detected,
- a blind channel estimation algorithm,
- CDMA code pooling, that is, assigning more than one CDMA code to certain connections in order to offer these users higher data rates,
- maximizing the Shannon transmission capacity by an interleaving concept termed CDMA code interleaving and by advantageously selecting the assignment of CDMA codes to mobile radio channels,
- specific power control schemes, which tackle the problem of different transmission qualities of the CDMA codes.

As a comprehensive illustration of the advantages achievable by multi-element BS antennas in the TD-CDMA downlink, quantitative results concerning the spectrum efficiency for different numbers of antenna elements at the BS conclude the thesis.
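For reference, Joint Detection in TD-CDMA is commonly realized as a block linear equalizer; in its standard zero-forcing form (a textbook formula, not necessarily the exact estimator used in the thesis) the data estimate reads

```latex
% Zero-forcing block linear equalizer for Joint Detection (standard form);
% PJD restricts the system matrix A to the subset of detected CDMA codes.
\hat{\boldsymbol d} =
  \left(\boldsymbol A^{\mathrm H} \boldsymbol R_n^{-1} \boldsymbol A\right)^{-1}
  \boldsymbol A^{\mathrm H} \boldsymbol R_n^{-1} \boldsymbol e
```

where A is the system matrix built from the spreading codes and channel impulse responses, e is the received signal vector and R_n is the noise covariance matrix.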

This thesis deals with the investigation of the dynamics of optically excited (hot) electrons in thin and ultra-thin layers. The main interest concerns the time behaviour of the dissipation of energy and momentum of the excited electrons. The relevant relaxation times lie in the femtosecond regime. Two-photon photoemission is known to be an adequate tool for analysing such dynamical processes in real time. This work expands the knowledge in the fields of electron relaxation in ultra-thin silver layers on different substrates, as well as in adsorbate states in the bandgap of a semiconductor. It contributes to the comprehension of spin transport through an interface between a metal and a semiconductor. The primary goal was to test the predicted theory by reducing the observed crystal in at least one direction. One expects a change of the electron relaxation behaviour when altering the crystal's shape from a 3d bulk to a 2d (ultra-thin) layer, because below a certain layer thickness the electron gas becomes two-dimensional. This behaviour could be confirmed in this work. In an approximately 3 nm thin silver layer on graphite, the hot electrons show a jump to longer relaxation times over the whole accessible energy range. It is the first time that the temporal evolution of the relaxation of excited electrons could be observed during the transition from a 3d to a 2d system. In order to reduce or even eliminate the influence of the substrate, the system of silver on the semiconductor GaAs, which has a bandgap of 1.5 eV at the Gamma point, was investigated. The observations of the relaxation behaviour of hot electrons in different ultra-thin silver layers on this semiconductor showed that at metal-insulator junctions, plasmons in the silver and at the interface, as well as cascading electrons from higher-lying energies, have a huge influence on the dissipation of momentum and energy. This stems mainly from the band bending of the semiconductor and from the electrons which are excited in GaAs. The limitation of the silver layer on GaAs in one direction led to the expected formation of quantum well states (QWS) in the bandgap. These adsorbate states have quantised energy and momentum values, which are directly connected to the layer thickness and the standing electron wave therein. With the experiments of this work, published values could not only be completed and confirmed, but the time evolution of such a QWS could also be determined. It turned out that this QWS can only be filled by electrons which move from the lower edge of the conduction band of the semiconductor into the silver and undergo cascading steps there. By means of the system silver on GaAs, and the known fact that an excitation of electrons in GaAs with circularly polarised light of energy 1.5 eV produces spin-polarised electrons in the conduction band, it became possible to contribute to the topical subject of spin injection. The main target of spin injection is the transfer of spin-polarised electrons out of a ferromagnet into a semiconductor, in order to develop spin-dependent switches and memories. It could be demonstrated here that spin-polarised electrons from GaAs can move through the interface into silver, could be photoemitted from there, and their spin was still detectable. As a third system, ultra-thin silver layers were deposited on the insulator MgO, which has a bandgap of 7.8 eV.
Also in this system, a change in the relaxation time could be recognized when reducing the silver layer from thick to ultra-thin. Additionally, an extremely large relaxation time was found at layer thicknesses of 0.6-1.2 nm. This time is an order of magnitude longer than for thick films, which is a consequence of two factors: first, the reduction of the phase space due to the confinement of the electron gas in the z-direction, and second, the slower thermalisation of the electron gas due to fewer accessible scattering partners.

Matrix Compression Methods for the Numerical Solution of Radiative Transfer in Scattering Media
(2002)

Radiative transfer in scattering media is usually described by the radiative transfer equation, an integro-differential equation which describes the propagation of the radiative intensity along a ray. The high dimensionality of the equation leads to a very large number of unknowns when discretizing the equation. This is the major difficulty in its numerical solution. In case of isotropic scattering and diffuse boundaries, the radiative transfer equation can be reformulated into a system of integral equations of the second kind, where the position is the only independent variable. By employing the so-called momentum equation, we derive an integral equation which is also valid in case of linear anisotropic scattering. This equation is very similar to the equation for the isotropic case: no additional unknowns are introduced, and the integral operators involved have very similar mapping properties. The discretization of an integral operator leads to a full matrix. Therefore, due to the large dimension of the matrix in practical applications, it is not feasible to assemble and store the entire matrix. The so-called matrix compression methods circumvent the assembly of the matrix. Instead, the matrix-vector multiplications needed by iterative solvers are performed only approximately, thus reducing the computational complexity tremendously. The kernels of the integral equation describing the radiative transfer are very similar to the kernels of the integral equations occurring in the boundary element method. Therefore, with only slight modifications, the matrix compression methods developed for the latter are readily applicable to the former. As opposed to the boundary element method, the integral kernels for radiative transfer in absorbing and scattering media involve an exponential decay term. We examine how this decay influences the efficiency of the matrix compression methods. Further, a comparison with the discrete ordinates method shows that discretizing the integral equation may lead to reductions in CPU time and to an improved accuracy, especially in case of small absorption and scattering coefficients or if local sources are present.
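The compression idea can be illustrated independently of the specific methods treated in the thesis: a smooth (admissible) kernel block is replaced by a low-rank factorization, so the matrix-vector product needed by the iterative solver becomes cheap. A sketch using a truncated SVD on a toy kernel with exponential decay (all parameters invented for the example):

```python
# Matrix compression in miniature: a well-separated kernel block admits a
# low-rank factorization, so y = K v costs O(r(m+n)) instead of O(mn).
import numpy as np

x = np.linspace(0.0, 1.0, 400)                  # source points
y = np.linspace(5.0, 6.0, 400)                  # well-separated targets
d = np.abs(y[:, None] - x[None, :])
K = np.exp(-d) / d                              # decaying kernel block

U, s, Vt = np.linalg.svd(K)
r = int(np.searchsorted(-s, -1e-8 * s[0]))      # rank for 1e-8 accuracy
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r]

v = np.random.default_rng(0).standard_normal(x.size)
exact = K @ v
approx = Ur @ (sr * (Vr @ v))                   # two thin matvecs
print(r, np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```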

In this work we present and estimate an explanatory model with a predefined system of explanatory equations, a so-called lag dependent model. We present a locally optimal lag estimator based on blocked neural networks, together with theorems about its consistency. We define change points in the context of the lag dependent model and present a powerful algorithm for change point detection in high-dimensional, highly dynamical systems. Finally, we present a special kind of bootstrap for approximating the distribution of statistics of interest in dependent processes.
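The abstract leaves the bootstrap variant unspecified; the textbook device for dependent data is a moving-block bootstrap, sketched here on an AR(1) toy series (all parameters are illustrative):

```python
# Moving-block bootstrap: resampling whole blocks preserves the short-range
# dependence that an i.i.d. bootstrap would destroy. Illustrative sketch.
import numpy as np

def block_bootstrap(x, block_len, n_boot, stat, seed=None):
    rng = np.random.default_rng(seed)
    n = len(x)
    starts = np.arange(n - block_len + 1)
    n_blocks = -(-n // block_len)                     # ceil division
    out = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.choice(starts, size=n_blocks)
        sample = np.concatenate([x[i:i + block_len] for i in idx])[:n]
        out[b] = stat(sample)
    return out

rng = np.random.default_rng(1)                        # AR(1) toy series
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()

dist = block_bootstrap(x, block_len=25, n_boot=2000, stat=np.mean, seed=2)
print(dist.std())    # bootstrap standard error of the mean under dependence
```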

Lung cancer, mainly caused by tobacco smoke, is the leading cause of cancer mortality. Large efforts in prevention and cessation have reduced smoking rates in the U.S. and other countries. Nevertheless, since 1990, rates have remained constant, and it is believed that most of those currently smoking (~25%) are addicted to nicotine and therefore unable to stop smoking. An alternative strategy to reduce lung cancer mortality is the development of chemopreventive mixtures used to reduce cancer risk. Before entering clinical trials, it is crucial to know the efficacy, toxicity and the molecular mechanism by which the active compounds prevent carcinogenesis. 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), N-nitrosonornicotine (NNN) and benzo[a]pyrene (B[a]P) are among the most carcinogenic compounds in tobacco smoke. All have been widely used as model carcinogens and their tumorigenic activities are well established. It is believed that formation of DNA adducts is a crucial step in carcinogenesis. NNK and NNN form 4-hydroxy-1-(3-pyridyl)-1-butanone-releasing and methylating adducts, while B[a]P forms B[a]P-tetraol-releasing adducts. Different isothiocyanates (ITCs) are able to prevent NNK-, NNN- or B[a]P-induced tumor formation, but relatively little is known about the mechanism of these preventive effects. In this thesis, the influence of different ITCs on adduct formation from NNK plus B[a]P and from NNN was evaluated. Using an A/J mouse lung tumor model, it was first shown that the formation of HPB-releasing, O6-mG and B[a]P-tetraol-releasing adducts was not affected when NNK and B[a]P were given individually or in combination by gavage. Using the same model, the effects of different mixtures of PEITC and BITC, given by gavage or in the diet, on DNA adduct formation were evaluated. Dietary treatment with phenethyl isothiocyanate (PEITC) or PEITC plus benzyl isothiocyanate (BITC) reduced levels of HPB-releasing adducts by 40-50%. This is consistent with a previously shown 40% inhibition of tumor multiplicity for the same treatment. In the gavage treatments with ITCs, it seemed that PEITC reduced HPB-releasing DNA adducts, while BITC counteracted these effects. Levels of O6-mG were minimally affected by any of the treatments. Levels of B[a]P-tetraol-releasing adducts were reduced by gavaged PEITC and BITC 120 h after the last carcinogen treatment, while dietary treatment had no effects. We then extended our investigation to F-344 rats by using a similar ITC treatment protocol as in the mouse model. NNK was given in the drinking water and B[a]P in the diet. Dietary PEITC reduced the formation of HPB-releasing globin and DNA adducts in lung but not in liver, while levels of B[a]P-tetraol-releasing adducts were unaffected. Additionally, the effects of PEITC, 3-phenylpropyl isothiocyanate, and their N-acetylcysteine conjugates in the diet on adducts from NNN in drinking water were evaluated in rat esophageal DNA and globin. Using a protocol known to inhibit NNN-induced esophageal tumorigenesis, the levels of HPB-releasing adducts were unaffected by the ITC treatment. The observations that dietary PEITC inhibited the formation of HPB-releasing DNA adducts only in mice, where the control levels were above 1 fmol/µg DNA, and that adduct levels in rat lung were reduced to levels seen in liver, lead to the conclusion that in mice and rats there are at least two activation pathways of NNK.
One is PEITC-sensitive and responsible for the high adduct levels in lung and presumably also for the higher carcinogenicity of NNK in lung. The other is PEITC-insensitive and responsible for the remaining adduct levels and tumorigenicity. In conclusion, our results demonstrated that the preventive mechanism by which ITCs inhibit carcinogenesis is only in part due to inhibition of DNA adduct formation and that other mechanisms are involved. There is a large body of evidence indicating that induction of apoptosis may be a mechanism by which ITCs prevent tumor formation, but further studies are required.

In this work, the (Ti,Al,Si)N system was investigated. The main point was to study the possibility of obtaining nanocomposite coating structures by the deposition of multilayer films of TiN and AlSiN, in order to understand the relation between the mechanical properties (hardness, Young's modulus) and the microstructure (nanocrystalline with individual phases). Special attention was given to the effects of annealing at 600 °C on microstructural changes in the coatings. The surface hardness, the elastic modulus, and the diffusion and composition of the multilayers were the test tools for the comparison between the different coated samples with and without annealing at 600 °C. To achieve this objective, a rectangular aluminum vacuum chamber with three unbalanced sputtering magnetrons for the deposition of thin film coatings from different materials was constructed. The chamber consists mainly of two chambers: the pre-vacuum chamber to load the workpiece, and the main vacuum chamber where the sputter deposition of the thin film coatings takes place. The workpiece is moved on a carriage travelling on a rail between the two chambers to the position of the magnetrons by step motors. The chambers are separated by a self-constructed rectangular gate controlled manually from outside the chamber. The chamber was sealed for vacuum use using glue and screws. Therefore, different types of glue were tested, not only for their ability to develop a uniform thin layer in the gap between the aluminum plates to seal the chamber, but also for low outgassing rates suitable for vacuum use. An epoxy was able to fulfill these tasks. The evacuation characteristics of the constructed chamber were improved by minimizing the outgassing rate of the inner surface. For this purpose, the throughput outgassing rate test method was used to compare the short-term (one hour) outgassing rates of samples of the two selected aluminum materials (A2017 and A5353). Different machining methods and treatments for the inner surface of the vacuum chamber were tested. Machining the surface of material A (A2017) with ethanol as coolant fluid reduced its outgassing rate by a factor of 6 compared with a non-machined sample surface of the same material. The reduction of the porous oxide layer on top of the aluminum surface by pickling with HNO3 acid, and its protection by producing another passive, non-porous oxide layer using an anodizing process, protects the surface for a longer time and minimizes the outgassing rates even under a humid atmosphere. The residual gas analyzer (RGA) test shows that more than 85% of the gases inside the test chamber were water vapour (H2O), the rest being N2, H2 and CO, so a liquid-nitrogen water vapor trap can enhance the pump-down process. As a result, it was possible to construct a chamber that can be pumped down using a turbomolecular pump (450 L/s) to the range of 1x10^-6 mbar within one hour, where the chamber volume is 160 liters and the inner surface area is 1.6 m^2. This is a good base pressure for the sputter deposition of hard thin film coatings. Multilayer thin film coatings were deposited to demonstrate that nanostructured thin films within the (Ti,Al,Si)N system can be prepared by reactive magnetron sputtering of multiple thin film layers of TiN and AlSiN.
SNMS spectrometry of the test samples shows that a complete diffusion between the different deposited thin film coating layers takes place in each sample, even at low substrate deposition temperature. The high magnetic flux of the unbalanced magnetrons and the high sputtering power produced a high ion-to-atom flux, which gives high mobility to the deposited atoms. The interactions between the high mobility of the deposited atoms and the ion-to-atom flux were sufficient to enhance the diffusion between the different deposited thin layers. The XRD patterns for this system show that the structure of the formed mixture consists of two phases: one phase identified as bulk TiN, and another, unknown amorphous phase, which can be SiNx or AlN or a combination of Ti-Al-Si-N. As a result, we were able to deposit nanocomposite coatings by the deposition of multilayers of TiN and AlSiN thin film coatings using the constructed vacuum chamber.

Microsystem technology has been a fast evolving field over the last few years. Its ability to handle volumes in the sub-microliter range makes it very interesting for potential application in fields such as biology, medicine and pharmaceutical research. However, the use of micro-fabricated devices for the analysis of liquid biological samples still has to prove its applicability for many particular demands of basic research. This is particularly true for samples consisting of complex protein mixtures. The presented study therefore aimed at evaluating whether a commonly used glass-coating technique from the field of micro-fluidic technology can be used to fabricate an analysis system for molecular biology. It was ultimately motivated by the demand to develop a technique that allows the analysis of biological samples at the single-cell level. Gene expression at the transcription level is initiated and regulated by DNA-binding proteins. To fully understand these regulatory processes, it is necessary to monitor the interaction of specific transcription factors with other elements, proteins as well as DNA sites, in living cells. One well-established method to perform such analysis is the Chromatin Immunoprecipitation (CHIP) assay. To map protein-DNA interactions, living cells are treated with formaldehyde in vivo to cross-link DNA-binding proteins to their resident sites. The chromatin is then broken into small fragments, and specific antibodies against the protein of interest are used to immunopurify the chromatin fragments to which those factors are bound. After purification, the associated DNA can be detected and analyzed using Polymerase Chain Reaction (PCR). Current CHIP technology is limited as it needs a relatively large number of cells, while there is increasing interest in monitoring DNA-protein interactions in very few, if not single, cells. Most notably this is the case in research on early organism development (embryogenesis). To investigate whether microsystem technology can be used to analyze DNA-protein complexes from samples containing chromatin from only a few cells, a new setup for fluid transport in glass capillaries of 75 µm inner diameter has been developed, forming an array of micro-columns for parallel affinity chromatography. The inner capillary walls were antibody-coated using a silane-based protocol. The remaining surface was made chemically inert by saturating free binding sites with suitable biomolecules. Variations of this protocol have been tested. Furthermore, the sensitivity of the PCR method to detect immunoprecipitated protein-DNA complexes was improved, resulting in the reliable detection of about 100 DNA fragments from chromatin. The aim of the study was to successively decrease the amount of analyzed chromatin in order to investigate the lower limits of this technology with regard to sensitivity and specificity of detection. The Drosophila GAGA transcription factor was used as an established model system. The protein has already been analyzed in several large-scale CHIP experiments, and antibodies of excellent specificity are available. The results of the study revealed that this approach is not easily applicable to "real-world" biological samples with regard to volume reduction and specificity. In particular, material that non-specifically adsorbed to the capillary surfaces outweighed the specific antibody-antigen interaction the system was designed for.
It became clear that complex biological structures, such as chromatin-protein compositions, are not as easily accessible to techniques based on chemically modified glass surfaces as pre-purified samples are. For the investigated system, it became evident that there is a need for more research beyond the scope of this work: it is necessary to develop novel coatings and materials that prevent non-specific adsorption. In addition to improving existing techniques, fundamentally new concepts, such as microstructures in biocompatible polymers or liquid transport on hydrophobic stripes on planar substrates to minimize surface contact, may also help to advance the miniaturization of biological experiments.

Different aspects of geomagnetic field modelling from satellite data are examined in the framework of modern multiscale approximation. The thesis is mostly concerned with wavelet techniques, i.e. multiscale methods based on certain classes of kernel functions which are able to realize a multiscale analysis of the function (data) space under consideration. It is thus possible to break up complicated functions like the geomagnetic field, electric current densities or geopotentials into different pieces and study these pieces separately. Based on a general approach to scalar and vectorial multiscale methods, topics include multiscale denoising, crustal field approximation and downward continuation, wavelet parametrizations of the magnetic field in Mie representation, as well as multiscale methods for the analysis of time-dependent spherical vector fields. For each subject the necessary theoretical framework is established, and numerical applications examine and illustrate the practical aspects.

Utilization of Correlation Matrices in Adaptive Array Processors for Time-Slotted CDMA Uplinks
(2002)

It is well known that the performance of mobile radio systems can be significantly enhanced by the application of adaptive antennas, which consist of multi-element antenna arrays plus signal processing circuitry. In this thesis, the utilization of such antennas as receive antennas in the uplink of mobile radio air interfaces of the type TD-CDMA is studied. Especially, the incorporation of covariance matrices of the received interference signals into the signal processing algorithms is investigated, with a view to improving the system performance as compared to state-of-the-art adaptive antenna technology. These covariance matrices implicitly contain information on the directions of incidence of the interference signals, and this information may be exploited to reduce the effective interference power when processing the signals received by the array elements. As a basis for the investigations, first directional models of the mobile radio channels and of the interference impinging at the receiver are developed, which can be implemented on the computer at low cost. These channel models cover both outdoor and indoor environments. They are partly based on measured channel impulse responses and, therefore, allow a description of the mobile radio channels which comes sufficiently close to reality. Concerning the interference models, two cases are considered. In one case, the interference signals arriving from different directions are correlated, and in the other case these signals are uncorrelated. After a visualization of the potential of adaptive receive antennas, data detection and channel estimation schemes for the TD-CDMA uplink are presented, which rely on such antennas under consideration of interference covariance matrices. Of special interest is the detection scheme MSJD (Multi Step Joint Detection), which is a novel iterative approach to multi-user detection. Concerning channel estimation, the incorporation of the knowledge of the interference covariance matrix and of the correlation matrix of the channel impulse responses is enabled by an MMSE (Minimum Mean Square Error) based channel estimator. The presented signal processing concepts using covariance matrices for channel estimation and data detection are merged to form entire receiver structures. Important tasks to be fulfilled in such receivers are the estimation of the interference covariance matrices and the reconstruction of the received desired signals. These reconstructions are required when applying MSJD in data detection. The considered receiver structures are implemented on the computer in order to enable system simulations. The obtained simulation results show that the developed schemes are very promising in cases where the impinging interference is highly directional, whereas in cases with the interference directions more homogeneously distributed over the azimuth, the consideration of the interference covariance matrices is of only limited benefit. The thesis can serve as a basis for practical system implementations.
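For reference, an MMSE channel estimator incorporating both matrices mentioned above has the standard form (a textbook expression; the exact estimator of the thesis may differ in detail)

```latex
% MMSE channel estimate from the received midamble signal e, with G the
% known midamble (signal) matrix, R_h the correlation matrix of the channel
% impulse responses and R_n the interference-plus-noise covariance matrix.
\hat{\boldsymbol h} =
  \boldsymbol R_{\boldsymbol h} \boldsymbol G^{\mathrm H}
  \left(\boldsymbol G \boldsymbol R_{\boldsymbol h} \boldsymbol G^{\mathrm H}
        + \boldsymbol R_{\boldsymbol n}\right)^{-1} \boldsymbol e
```

so that both the directional structure of the interference (via R_n) and the channel statistics (via R_h) reduce the estimation error.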

One crucial assumption of continuous financial mathematics is that the portfolio can be rebalanced continuously and that there are no transaction costs. In reality, this of course does not work: on the one hand, continuous rebalancing is impossible; on the other hand, each transaction causes costs which have to be subtracted from the wealth. Therefore, we focus on trading strategies which are based on discrete rebalancing, at random or equidistant times, and which take transaction costs into account. These strategies are considered for various utility functions and are compared with the optimal strategies of continuous trading.
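A minimal simulation sketch of such a strategy, rebalancing at equidistant dates to a constant stock fraction and paying proportional costs (all parameter values are invented for illustration; the utility-optimal strategies of the thesis are not implemented here):

```python
# One path of a discretely rebalanced constant-mix strategy with
# proportional transaction costs. Parameters are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, r = 0.08, 0.2, 0.03        # stock drift/volatility, interest rate
pi, cost = 0.5, 0.005                 # target stock fraction, cost rate
n, dt = 52, 1.0 / 52                  # weekly rebalancing over one year

stock, bond = pi, 1.0 - pi            # initial wealth 1, already mixed
for _ in range(n):
    z = rng.standard_normal()
    stock *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    bond *= np.exp(r * dt)
    wealth = stock + bond
    trade = pi * wealth - stock       # stock amount to buy (sell if < 0)
    stock += trade
    bond = wealth - stock - cost * abs(trade)   # fee paid from bond account
print(stock + bond)                   # terminal wealth of this path
```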

The immiscible lattice BGK method for solving the two-phase incompressible Navier-Stokes equations is analysed in great detail. Equivalent moment analysis and local differential geometry are applied to examine how interface motion is determined and how surface tension effects can be included such that consistency with the two-phase incompressible Navier-Stokes equations can be expected. The results obtained from the theoretical analysis are verified by numerical experiments. Since the intrinsic interface tracking scheme of immiscible lattice BGK is found to produce unsatisfactory results in two-dimensional simulations, several approaches to improving it are discussed, but all of them turn out to yield no substantial improvement. Furthermore, the intrinsic interface tracking scheme of immiscible lattice BGK is found to be closely connected to the well-known conservative volume tracking method. This result suggests coupling the conservative volume tracking method for determining interface motion with the Navier-Stokes solver of immiscible lattice BGK. Applied to simple flow fields, this coupled method yields much better results than plain immiscible lattice BGK.
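The starting point of such an analysis is the standard lattice BGK evolution with a single relaxation time tau,

```latex
% Lattice BGK update: streaming along lattice velocity e_i plus relaxation
% towards the local equilibrium distribution f_i^{eq}.
f_i(\boldsymbol{x} + \boldsymbol{e}_i \Delta t,\, t + \Delta t)
  = f_i(\boldsymbol{x}, t)
  - \frac{1}{\tau}\left( f_i(\boldsymbol{x}, t) - f_i^{\mathrm{eq}}(\boldsymbol{x}, t) \right)
```

which the immiscible variant augments with additional color (phase) dynamics to represent the two fluids.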

The dissertation is concerned with the numerical solution of Fokker-Planck equations in high dimensions arising in the study of the dynamics of polymeric liquids. Traditional methods based on tensor product structure are not applicable in high dimensions, for the number of nodes required to yield a fixed accuracy increases exponentially with the dimension, a phenomenon often referred to as the curse of dimension. Particle methods, or finite point set methods, are known to break the curse of dimension. The Monte Carlo method (MCM) applied to such problems is 1/sqrt(N) accurate, where N is the cardinality of the point set considered, independent of the dimension. Deterministic versions of the Monte Carlo method, called quasi-Monte Carlo methods (QMC), are quite effective in integration problems, and accuracy of the order of 1/N can be achieved, up to a logarithmic factor. However, such a replacement cannot be carried over directly to particle simulations due to the correlation among the quasi-random points. The method proposed by Lecot (C. Lecot and F. E. Khettabi, Quasi-Monte Carlo simulation of diffusion, Journal of Complexity, 15 (1999), pp. 342-359) is the only known QMC approach, but it not only leads to large particle numbers but also the proven order of convergence is 1/N^(2s) in dimension s. We modify the method presented there in such a way that the new method works with reasonable particle numbers even in high dimensions and has a better order of convergence. Though the provable order of convergence is 1/sqrt(N), the results show less variance, and thus the proposed method still slightly outperforms standard MCM.
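The MC-versus-QMC accuracy gap mentioned above is easy to reproduce in the plain integration setting where the near-1/N rate holds (the particle-method setting of the thesis is more delicate). A sketch using a scrambled Sobol sequence from scipy, with a toy integrand whose exact integral is 1:

```python
# Plain MC vs. quasi-MC on a smooth integral over [0,1]^5. The scrambled
# Sobol points typically give a markedly smaller error at equal N.
import numpy as np
from scipy.stats import qmc

d, m = 5, 12                                       # dimension, N = 2**m
f = lambda u: np.prod(1 + (u - 0.5) / 3, axis=1)   # exact integral: 1

u_mc = np.random.default_rng(0).random((2**m, d))
u_qmc = qmc.Sobol(d, scramble=True, seed=0).random_base2(m)

print(abs(f(u_mc).mean() - 1.0))    # Monte Carlo error
print(abs(f(u_qmc).mean() - 1.0))   # quasi-Monte Carlo error
```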

Based on the framework of continuum mechanics two different concepts to formulate phenomenological anisotropic inelasticity are developed in a thermodynamically consistent manner. On the one hand, special emphasis is placed on the incorporation of structural tensors while on the other hand, fictitious configurations are introduced. Substantial parts of this work deal with the numerical treatment of the presented theory within the finite element method.

In the present work, we investigated how to correct the questionable normality, linearity and quadratic assumptions underlying existing Value-at-Risk methodologies. In order to also take into account the skewness, the heavy-tailedness and the stochastic nature of the volatility of the market values of financial instruments, the constant volatility hypothesis widely used by existing Value-at-Risk approaches has also been investigated and corrected, and the tails of the financial return distributions have been handled via Generalized Pareto or Extreme Value distributions. Artificial Neural Networks have been combined with Extreme Value Theory in order to build consistent and nonparametric Value-at-Risk measures without the need to make any of the questionable assumptions specified above. For that, either autoregressive models (AR-GARCH) have been used or the direct characterization of conditional quantiles due to Bassett and Koenker [1978] and Smith [1987]. In order to build consistent and nonparametric Value-at-Risk estimates, we have proved some new results extending White's Artificial Neural Network denseness results to unbounded random variables, and we provide a generalization of the Bernstein inequality, which is needed to establish the consistency of our new Value-at-Risk estimates. For an accurate estimation of the quantile of the unexpected returns, Generalized Pareto and Extreme Value distributions have been used. The new Artificial Neural Network denseness results enable us to build consistent, asymptotically normal and nonparametric estimates of conditional means and stochastic volatilities. The denseness results use the Sobolev metric space L^m(mu) for some m >= 1 and some probability measure mu, and hold for a certain subclass of square integrable functions. The Fourier transform and the new extension of the Bernstein inequality for unbounded random variables from stationary alpha-mixing processes, combined with the new generalization of a result of White and Wooldridge [1990], have been the main tools to establish the extension of White's neural network denseness results. To illustrate the goodness and level of accuracy of the new denseness results, we demonstrate the applicability of the new Value-at-Risk approaches by means of three examples with real financial data, mainly from the banking sector, traded on the Frankfurt Stock Exchange.
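One ingredient, the Generalized Pareto tail fit, can be sketched in isolation: in the peaks-over-threshold approach, the Value-at-Risk at level p follows from the fitted tail parameters via the standard quantile formula. A toy example with simulated heavy-tailed losses (the thesis combines such tail estimates with neural-network-based conditional quantiles):

```python
# Peaks-over-threshold VaR: fit a Generalized Pareto Distribution to the
# losses exceeding a high threshold, then invert the tail estimator.
import numpy as np
from scipy.stats import genpareto, t as student_t

rng = np.random.default_rng(0)
losses = student_t.rvs(df=4, size=5000, random_state=rng)   # toy heavy tails

u = np.quantile(losses, 0.95)                # high threshold
exc = losses[losses > u] - u                 # exceedances over u
xi, _, beta = genpareto.fit(exc, floc=0)     # GPD shape and scale

p = 0.99                                     # VaR confidence level
n, n_u = len(losses), len(exc)
var_p = u + beta / xi * (((n / n_u) * (1 - p)) ** (-xi) - 1)
print(var_p, np.quantile(losses, p))         # GPD VaR vs. empirical quantile
```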

This thesis builds a bridge between singularity theory and computer algebra. To an isolated hypersurface singularity one can associate a regular meromorphic connection, the Gauß-Manin connection, containing a lattice, the Brieskorn lattice. The leading terms of the Brieskorn lattice with respect to the weight and V-filtration of the Gauß-Manin connection define the spectral pairs. They correspond to the Hodge numbers of the mixed Hodge structure on the cohomology of the Milnor fibre and belong to the finest known invariants of isolated hypersurface singularities. The differential structure of the Brieskorn lattice can be described by two complex endomorphisms A0 and A1 containing even more information than the spectral pairs. In this thesis, an algorithmic approach to the Brieskorn lattice in the Gauß-Manin connection is presented. It leads to algorithms to compute the complex monodromy, the spectral pairs, and the differential structure of the Brieskorn lattice. These algorithms are implemented in the computer algebra system Singular.

In the last decade, injection molding of long-fiber reinforced thermoplastics (LFT) has been established as a low-cost, high-volume technique for manufacturing parts with complex shape without any post-treatment [1–3]. Applications are mainly found in the automotive industry, with a volume growing annually by 10% to 15% [4]. While first applications were based on polyamide (PA6 and PA6.6), the market share of glass fiber reinforced polypropylene (PP) is growing due to cost savings and ease of processing. With the use of polypropylene, different processing techniques such as gas-assisted injection molding [5] or injection compression molding [6] have emerged in addition to injection molding [7, 8]. In order to overcome or justify the higher material costs compared to short-fiber reinforced thermoplastics, the manufacturing techniques for LFT pellets with fiber lengths greater than 10 mm have evolved, starting from pultrusion, by improving impregnation and throughput [9] or by direct addition of fiber strands in the mold [10–12]. The benefit of long glass fiber reinforcement, either in PP or PA, is mainly due to the enhanced resistance to fiber pull-out, resulting in an increase in impact properties and strength [13–19], even at low temperature levels [20]. Creep and fatigue resistance are also substantially improved [21, 22].

The performance of fiber reinforced thermoplastics manufactured by injection molding strongly depends on the flow-induced microstructure, which is driven by material composition, processing conditions and part geometry. The anisotropic microstructure is characterized by fiber fraction and dispersion, fiber length and fiber orientation. Facing the complexity of this processing technique, simulation becomes a valuable tool as early as the concept phase for parts manufactured by injection molding. Process simulation supports decisions with respect to the choice of concepts and materials. The part design is determined in terms of mold filling, including the location of gates, vents and weld lines. Tool design requires the determination of melt feeding, logistics and mold heating. Subsequently, performance, including the prediction of shrinkage and warpage as well as structural analysis, is evaluated [23]. While simulation based on a two-dimensional representation of the three-dimensional part geometry has been used extensively during the last two decades, the complexity of the parts as well as the trend towards solid modelling in CAD and CAE demands the step towards three-dimensional process simulation.

The scope of this work is the prediction of the flow-induced microstructure during injection molding of long glass fiber reinforced polypropylene using three-dimensional process simulation. Modelling of the injection molding process in three dimensions is supported experimentally by rheological characterization in both shear and extensional flow and by two- and three-dimensional evaluation of the microstructure.

In chapter 2 the fundamentals of rheometry and rheology are presented with respect to long fiber reinforced thermoplastics. The influence of parameters on the microstructure is described, and approaches for modelling the state of the microstructure and its dynamics are discussed (a widely used reference model is recalled at the end of this overview).

Chapter 3 introduces a rheometric technique allowing for the rheological characterization of polymer melts at processing conditions as encountered during manufacturing. Using this rheometer, both shear and extensional viscosity of long glass fiber reinforced polypropylene are measured with respect to the composition of the materials, the processing conditions and the geometry of the cavity.

Chapter 4 contains the evaluation of the microstructure of long glass fiber reinforced polypropylene in terms of two-dimensional fiber orientation and its dependence on material parameters and processing conditions. For the evaluation of the three-dimensional microstructure, a technique based on x-ray tomography is introduced.

In chapter 5, the modelling of microstructural dynamics is addressed. One-way coupling of the interactions between fluid and fibers is described macroscopically. The flow behavior of fibers in the vicinity of cavity walls is evaluated experimentally. From these observations, a model for the treatment of fiber-wall interaction with respect to numerical simulation is proposed.

Chapter 6 presents the application of three-dimensional simulation of the injection molding process. Mold filling simulation is performed using a commercial code, while the prediction of 3D fiber orientation is based on a proprietary module. The rheological and thermal properties derived in chapter 3 are tested by simulation of the experiments and comparison of the predicted pressure and temperature profiles with the recorded results. The performance of the fiber orientation prediction is verified using analytical solutions of test examples from the literature. The capability of three-dimensional simulation is demonstrated based on the simulation of mold filling and the prediction of fiber orientation for an automotive part.
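As background for the orientation modelling referred to in chapters 2 and 5, the evolution of the second-order fiber orientation tensor \(A\) is commonly described by a Folgar-Tucker-type equation. The form below is the standard textbook version, quoted for orientation only; sign conventions for the vorticity tensor and the closure chosen for the fourth-order tensor \(\mathbb{A}\) vary between implementations:

\[ \frac{\mathrm{D}A}{\mathrm{D}t} \;=\; W\cdot A - A\cdot W \;+\; \xi\,\big(D\cdot A + A\cdot D - 2\,\mathbb{A}:D\big) \;+\; 2\,C_I\,\dot{\gamma}\,\big(I - 3A\big), \]

where \(D\) and \(W\) are the rate-of-deformation and vorticity tensors, \(\xi\) is a fiber shape factor, \(C_I\) the fiber interaction coefficient and \(\dot{\gamma}\) the scalar shear rate.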

Solid particle erosion is usually undesirable, as it leads to the development of cracks and holes, material removal and other degradation mechanisms that, as a final consequence, reduce the durability of the structure exposed to erosion. The main aim of this study was to characterise the erosion behaviour of polymers and polymer composites, to understand the nature and the mechanisms of the material removal, and to suggest modifications and protective strategies for the effective reduction of the material removal due to erosion.

In polymers, the effects of morphology and of mechanical, thermomechanical and fracture-mechanical properties were discussed. It was established that there is no general rule for high resistance to erosive wear. Because of the different erosive wear mechanisms that can take place, wear resistance can be achieved by more than one type of material. Difficulties with materials optimisation for wear reduction arise from the fact that a material can show different behaviour depending on the impact angle and the experimental conditions. Effects of polymer modification through mixing or blending with elastomers and the inclusion of nanoparticles were also discussed. Toughness modification of epoxy resin with hygrothermally decomposed polyester-urethane can be favourable for the erosion resistance. This type of modification also changes the crosslinking characteristics of the modified EP, and it was established that the crosslink density, along with the fracture energy, is a decisive parameter for the erosion response. Melt blending of thermoplastic polymers with functionalised rubbers, on the other hand, can also have a positive influence, whereas the inclusion of nanoparticles deteriorates the erosion resistance at low oblique impact angles (30°).

The effects of fibre length, orientation, fibre/matrix adhesion, stacking sequence, and the number, position and existence of interleaves were studied in polymer composites. Linear and inverse rules of mixture were applied in order to predict the erosion rate of a composite system as a function of the erosion rates of its constituents and their relative content. The best results were generally delivered by the inverse rule of mixture approach.
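In symbols, with \(ER_i\) the erosion rate and \(V_i\) the volume fraction of constituent \(i\), the two rules of mixture referred to above take the form

\[ ER_c = \sum_i V_i\, ER_i \quad\text{(linear)}, \qquad \frac{1}{ER_c} = \sum_i \frac{V_i}{ER_i} \quad\text{(inverse)}. \]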
A semi-empirical model, proposed to describe the property degradation and damage growth characteristics and to predict residual properties after a single impact, was applied to the case of solid particle erosion. Theoretical predictions and experimental results were in very good agreement.
Erosive wear by solid particle impact (erosion) occurs when solid particles impinge on surfaces and usually manifests itself in material removal which, besides the particle velocity and the impact angle, depends strongly on the respective material. In recent years, the use of polymers and composite materials in place of traditional materials has increased strongly. Polymers and polymer composites exhibit a relatively high erosion rate (ER), which considerably restricts the potential application of these materials under erosive environmental conditions.

Investigations of the erosion behaviour of selected polymers and polymer composites have shown that these systems follow different wear mechanisms, which are very complex and are not governed by a single material property. Based on the ER, the erosion behaviour can be divided roughly into two categories: brittle and ductile erosion behaviour. Brittle erosion behaviour shows a maximum ER at 90°, whereas for ductile behaviour the maximum lies at 30°. Whether a material exhibits one or the other erosion behaviour depends not only on its properties but also on the respective test parameters.

The aim of this research work was to characterize the fundamental behaviour of polymers and composite materials under erosion, to identify the different wear mechanisms and to determine the decisive material properties and characteristic values, in order to enable or improve applications of these materials under erosive conditions. The essential factors influencing erosion were determined experimentally on an exemplary selection of polymers, elastomers, modified polymers and fibre composites.

Thermoplastic polymers and thermoplastic and crosslinked elastomers
The attempts to correlate the erosion resistance of selected polymers (polyethylenes and polyurethanes) with various material properties showed that there is no clear dependence, either on individual characteristic values or on combinations of properties. Possibly, determining the material properties under the same experimental conditions as in the erosion tests would lead to a better correlation between ER and material parameters.

Modified epoxy resin
Using the example of a modified epoxy resin (EP) with different crosslink densities, a correlation was found between erosion resistance and fracture energy as well as between erosion resistance and crosslink density. The modification was carried out with different fractions of a hygrothermally decomposed polyurethane (HD-PUR). The relationship between ER and the crosslinking parameters is consistent with the theory of rubber elasticity.

Modification efficiency in thermosets, thermoplastics and elastomers
Furthermore, the influence of modifications of polymers and elastomers was investigated. With the above-mentioned system (i.e. EP/HD-PUR), the influence of the toughness modification of the epoxy resin (EP) on the erosion behaviour can also be studied. It was shown that, for HD-PUR fractions of more than 20 wt.%, this modification has a positive influence on the erosion resistance. By varying the HD-PUR fraction, material properties can be produced for this EP that lie between the properties of a typical thermoset and those of a less elastic rubber. The modified EP resin therefore represents a very good model material for studying the influence of the experimental conditions and for investigating whether different erodents lead to the same erosion mechanisms. The transition from thermoset-like to ductile behaviour was investigated using four erodents. The experiments showed that such a transition occurs when very fine, angular particles (corundum) serve as erodents. Particle size and shape are of decisive importance for the respective wear mechanisms.

The efficiency of novel thermoplastic elastomers with a co-continuous phase structure, consisting of thermoplastic polyester and rubber (functionalized NBR and EPDM rubber), was investigated with respect to erosion resistance. Large fractions of functionalized rubber (more than 20 wt.%) are advantageous for the erosion resistance. Furthermore, it was investigated whether the outstanding erosion resistance of polyurethane (PUR) can be increased even further by the addition of nanosilicates. The result was that the nanoparticles have a negative effect, above all at a low impact angle (30°). The weak adhesion between matrix and particles facilitates the initiation and growth of cracks, which leads to faster material removal from the material surface.

Fibre composites
Furthermore, fibre composites (FRP) with thermoplastic and thermoset matrices were investigated with regard to their erosive wear behaviour. It was of great interest to investigate the influence of fibre length and orientation. Short-fibre reinforced systems have a better erosion resistance than unidirectional (UD) systems. The role of the fibre orientation can only be considered in combination with other parameters such as matrix toughness, fibre content or fibre-matrix adhesion. For GF/PP composites, the systems eroded parallel to the drawing direction show the lowest resistance, whereas for a GF/EP system the maximum ER occurs in the perpendicular direction. An improvement of the interfacial shear strength has a lasting influence on the erosive wear rate; if the interfacial adhesion is sufficient, the erosion direction plays an insignificant role for the ER. Furthermore, it was shown that the presence of tough interleaves leads to a marked improvement of the erosion resistance of CF/EP composites.

A further task was to determine the role of the fibre volume fraction. Linear, inverse and modified rules of mixture were applied, and it was found that the inverse rules of mixture describe the ER as a function of the fibre volume fraction better.

In the field of application of fibre composites, not only knowledge of the ER but also knowledge of the residual properties is required. A semi-empirical model for the prediction of the impact energy threshold (Uo) for the onset of the strength reduction and of the residual tensile strength after an impact load was applied in the investigation of erosive wear. Experimental results and theoretical predictions agreed very well, not only for thermoset CF/EP composites but also for composites with a thermoplastic matrix (GF/PP).

Clusters bridge the gap between single atoms or molecules and the condensed phase, and it is the challenge of cluster science to obtain a deeper understanding of the molecular foundation of the observed cluster-specific properties/reactivities and their dependence on size. The electronic structure of hydrated magnesium monocations [Mg,nH2O]+, n<20, exhibits a strong cluster size dependence. With an increasing number of H2O ligands, the SOMO evolves from a quasi-valence state (n=3-5), in which the singly occupied molecular orbital (SOMO) is not yet detached from the metal atom and has distinct sp-hybrid character, to a contact ion pair state. For larger clusters (n=17,19) these ion pair states are best described as solvent-separated ion pair states, which are formed by a hydrated dication and a hydrated electron. With growing cluster size the SOMO moves away from the magnesium ion to the cluster surface, where it is localized through mutual attractive interactions between the electron density and dangling H-atoms of H2O ligands forming "molecular tweezers" HO-H (e-) H-OH. In the case of the hydrated aluminum monocations [Al,nH2O]+, n=20, different isomers of the formal stoichiometry [Al,20H2O]+ were investigated using gradient-corrected DFT (BLYP), and three different basic structures for [Al,20H2O]+ were identified: (a) [AlI(H2O)20]+ with a threefold coordinated AlI; (b) [HAlIII(OH)(H2O)19]+ with a fourfold coordinated AlIII; (c) [HAlIII(OH)(H2O)19]+ with a fivefold coordinated AlIII. In the ground state [AlI(H2O)20]+ (a), which contains aluminum in oxidation state +1, the 3s2 valence electrons remain located at the aluminum monocation. In contrast to the open shell magnesium monocations, no electron transfer into the hydration shell is observed for closed shell AlI. However, clusters of type (a) are high energy isomers (ΔE ≈ +190 kJ mol-1), and the activation barrier for the reaction into cluster type (b) or (c) is only approximately 14 kJ mol-1. The ab initio calculations performed reveal that, unlike in [Mg,nH2O]+, n=7-17, for which H atom elimination is found to be the result of an intracluster redox reaction, in [Al,nH2O]+, n=20, H2 is formed in an intracluster acid-base reaction. In [Mg,nH2O]+, n>17, the magnesium dication was found to coexist with a hydrated electron in larger cluster sizes. This proves that intermolecular electron delocalization - previously almost exclusively studied in (H2O)n- and (NH3)n- clusters - can also be an important issue for water clusters doped with an open shell metal cation or a metal anion. Structures and stabilities of hydrated magnesium water cluster anions with the formal stoichiometry [Mg,nH2O]-, n=1-11, were investigated by application of various correlated ab initio methods (MP2, CCSD, CCSD(T)). Metal cations are highly relevant in numerous biological processes, and as most biological processes take place in aqueous solution, hydrated metal ions will be involved. However, in biological systems solvent molecules (i.e. water) compete with different solvated chelate ligands for coordination sites at the metal ion, and the solvent and chelate ligands are in mutual interaction with each other and the metal ion. These interactions were investigated for the hydration of ZnII/carnosine complexes by application of FT-ICR-MS, gas-phase H/D exchange experiments and supporting ab initio calculations.
In the last chapter of this work, the Free Electron Laser IR Multi Photon Dissociation (FEL-IR-MPD) spectra of mass-selected cationic niobium acetonitrile complexes with the formal stoichiometry [Nb,nCH3CN]+, n=4-5, in the spectral range 780 – 2500 cm-1 are reported. In the case of n=4, the recorded vibrational bands are close to those of the free CH3CN molecule, and the experimental spectra do not contain any evident indication of a potential reaction beyond complex formation. By comparison with B3LYP calculated IR absorption spectra, the recorded spectra are assigned to high spin (quintet, S=2), planar [NbI(NCCH3)4]+. In [Nb,nCH3CN]+, n=5, new vibrational bands shifted away from those of the acetonitrile monomer are observed between 1300 – 1550 cm-1. These bands are evidence of a chemical modification due to an intramolecular reaction. Screening on the basis of B3LYP calculated IR absorption spectra allows for an assignment of the recorded spectra to the metallacyclic species [NbIII(NCCH3)3(N=C(CH3)C(CH3)=N)]+ (triplet, S=1), which has formed in an internal reductive nitrile coupling reaction from [NbI(NCCH3)5]+. Calculated reaction coordinates explain the experimentally observed differences in reactivity between ground state [NbI(NCCH3)4]+ and [NbI(NCCH3)5]+. The reductive nitrile coupling reaction is exothermic and accessible (Ea=49 kJ mol-1) only in [NbI(NCCH3)5]+, whereas in [NbI(NCCH3)4]+ the reaction is found to be endothermic and retarded by significantly higher activation barriers (Ea>116 kJ mol-1).

We present new algorithms and provide an overall framework for the interaction of the classically separate steps of logic synthesis and physical layout in the design of VLSI circuits. Due to the continuous move to ever smaller fabrication processes and the resulting domination of interconnect delays, the traditional separation of logical and physical design results in increasingly inaccurate cost functions and aggravates the design closure problem. Consequently, the interaction of the physical and logical domains has become one of the greatest challenges in the design of VLSI circuits. To address this challenge, we propose different solutions for the control and datapath logic of a design, and show how to combine them to reach design closure.

The central theme of this thesis is the development of enhanced methods and algorithms for appraising market and credit risks and their application within the context of standard and more advanced market models. Generally, methods and algorithms for analysing the market risk of complex portfolios involve detailed knowledge of option sensitivities, the so-called "Greeks". Based on an analysis of symmetries in financial market models, relations between option sensitivities are obtained which can be used for the efficient valuation of the Greeks. The relations are mainly derived within the Black-Scholes model; however, some relations are also valid for more general models, for instance the Heston model. Portfolios are usually influenced by many underlyings, so it is necessary to characterise the dependencies of these basic instruments. It is common to describe such dependencies by correlation matrices. However, estimates of correlation matrices in practice are disturbed by statistical noise and usually suffer from rank deficiency due to missing data. A fast algorithm is presented which performs a generalized Cholesky decomposition of a perturbed correlation matrix. In contrast to the standard Cholesky algorithm, an advantage of the generalized method is that it works for positive semidefinite, rank-deficient matrices as well. Moreover, it gives an approximate decomposition when the input matrix is indefinite. A comparison with known algorithms with similar features is performed, and it turns out that the new algorithm can be recommended in situations where computation time is the critical issue. The determination of a profit and loss distribution by Fourier inversion of its characteristic function is a powerful tool, but it can break down when the characteristic function is not integrable. In this thesis, methods for Fourier inversion of non-integrable characteristic functions are studied. In this respect, two theorems are obtained which are based on a suitable approximation of the unknown distribution by one with known density and characteristic function. Furthermore, it is shown that straightforward fast Fourier inversion works when the corresponding density lives on a bounded interval. The above techniques are of crucial importance for determining the profit and loss (P&L) distribution of large portfolios efficiently. The so-called Delta-Gamma normal approach has become the industry standard for the estimation of market risk. It is shown that the performance of the Delta-Gamma normal approach can be improved substantially by application of the developed methods. The same optimization procedure also applies to the Delta-Gamma Student model. A standard tool for computing the P&L distribution of a loan portfolio is the CreditRisk+ model. Basically, the CreditRisk+ distribution is a discrete distribution which can be computed from its probability generating function. For this, a numerically stable method is presented and, as an alternative, a new algorithm based on Fourier inversion is proposed. Finally, an extension of the CreditRisk+ model to market risk is developed, whose distribution can likewise be obtained efficiently by the presented Fourier inversion methods.
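As an illustration of the kind of generalized Cholesky factorization described above (a sketch of a standard pivoted Cholesky method, not the thesis's own algorithm), the following code factorizes a symmetric positive semidefinite, possibly rank-deficient matrix; the function name and tolerance are illustrative.

    import numpy as np

    def pivoted_cholesky(a, tol=1e-12):
        # Pivoted Cholesky factorization of a symmetric positive
        # semidefinite matrix: returns a permutation piv and a
        # lower-triangular factor L with A[ix(piv, piv)] ~= L @ L.T.
        a = np.array(a, dtype=float)          # work on a copy
        n = a.shape[0]
        piv = np.arange(n)
        L = np.zeros((n, n))
        for k in range(n):
            j = k + int(np.argmax(np.diag(a)[k:]))  # largest remaining diagonal
            if a[j, j] <= tol:                # trailing block numerically zero:
                return piv, L[:, :k]          # rank-deficient case, rank k
            a[[k, j], :] = a[[j, k], :]       # symmetric pivot swap
            a[:, [k, j]] = a[:, [j, k]]
            L[[k, j], :k] = L[[j, k], :k]
            piv[[k, j]] = piv[[j, k]]
            L[k, k] = np.sqrt(a[k, k])
            L[k + 1:, k] = a[k + 1:, k] / L[k, k]
            # Schur complement update of the trailing block
            a[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])
        return piv, L

    # usage: a rank-2 "correlation-like" matrix built from two factors
    rng = np.random.default_rng(1)
    B = rng.normal(size=(4, 2))
    C = B @ B.T
    piv, L = pivoted_cholesky(C)
    print(np.allclose(C[np.ix_(piv, piv)], L @ L.T))  # True, L has 2 columns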

The thesis deals with subgradient optimization methods, which serve to solve nonsmooth optimization problems. We are particularly concerned with solving large-scale integer programming problems using the methodology of Lagrangian relaxation and dualization. The goal is to employ subgradient optimization techniques to solve large-scale optimization problems originating from radiation therapy planning. In the thesis, different kinds of zigzagging phenomena which hamper the speed of subgradient procedures are investigated and identified. Moreover, we establish a new procedure which can completely eliminate the zigzagging phenomena of subgradient methods. Procedures used to construct both primal and dual solutions within the subgradient schemes are also described. We apply the subgradient optimization methods to the problem of minimizing the total treatment time of radiation therapy. The problem is NP-hard, and thus far no method exists for solving it to optimality. We present a new, efficient and fast algorithm which combines exact and heuristic procedures to solve the problem.
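For orientation, the following is a minimal sketch of a projected subgradient method with a simple deflection of the search direction, one common device for damping the zigzagging described above. It is not the thesis's anti-zigzagging procedure; the step size rule, deflection weight and toy objective are illustrative.

    import numpy as np

    def deflected_subgradient(f_and_subgrad, x0, steps=200, a0=1.0, beta=0.5):
        # f_and_subgrad(x) returns (f(x), g) with g a subgradient at x.
        # Minimizes f over the nonnegative orthant by projected steps
        # along a deflected direction d_k = g_k + beta * d_{k-1}.
        x = np.array(x0, dtype=float)
        d = np.zeros_like(x)
        best_x, best_f = x.copy(), np.inf
        for k in range(steps):
            f, g = f_and_subgrad(x)
            if f < best_f:                    # keep the best iterate,
                best_f, best_x = f, x.copy()  # as f need not decrease
            d = g + beta * d                  # deflected direction
            x = np.maximum(x - (a0 / (k + 1)) * d, 0.0)  # step + projection
        return best_x, best_f

    # usage: minimize the nonsmooth f(x) = |x1 - 1| + |x2 + 0.5| over x >= 0
    f = lambda x: (abs(x[0] - 1) + abs(x[1] + 0.5),
                   np.array([np.sign(x[0] - 1), np.sign(x[1] + 0.5)]))
    print(deflected_subgradient(f, [5.0, 5.0]))  # approaches x = (1, 0)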

The thesis is concerned with the modelling of ionospheric current systems and induced magnetic fields in a multiscale framework. Scaling functions and wavelets are used to realize a multiscale analysis of the function spaces under consideration and to establish a multiscale regularization procedure for the inversion of the considered operator equation. First of all, a general multiscale concept for vectorial operator equations between two separable Hilbert spaces is developed in terms of vector kernel functions. The equivalence to the canonical tensorial ansatz is proven, and the theory is transferred to the case of multiscale regularization of vectorial inverse problems. As a first application, a special multiresolution analysis of the space of square-integrable vector fields on the sphere, e.g. the Earth's magnetic field measured on a spherical satellite orbit, is presented. By this, a multiscale separation of spherical vector-valued functions with respect to their sources can be established. The vector field is split into a part induced by sources inside the sphere, a part which is due to sources outside the sphere, and a part which is generated by sources on the sphere, i.e. currents crossing the sphere. The multiscale technique is tested on a magnetic field data set of the satellite CHAMP, and it is shown that crustal field determination can be improved by previously applying our method. In order to reconstruct ionospheric current systems from magnetic field data, an inversion of the Biot-Savart law in terms of multiscale regularization is defined. The corresponding operator is formulated and its singular values are calculated. Based on the knowledge of the singular system, a regularization technique in terms of certain product kernels and corresponding convolutions can be formed. The method is tested on different simulations and on real magnetic field data of the satellite CHAMP and the proposed satellite mission SWARM.
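For reference, the operator equation to be inverted here stems from the Biot-Savart law, which relates the current density \(\mathbf{j}\) to the induced magnetic field,

\[ \mathbf{B}(x) \;=\; \frac{\mu_0}{4\pi} \int \mathbf{j}(y) \times \frac{x-y}{|x-y|^3}\; \mathrm{d}y . \]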

We construct and study two surface measures on the space C([0,T],M) of paths in a compact Riemannian manifold M embedded into the Euclidean space R^n. The first one is induced by conditioning the usual Wiener measure on C([0,T],R^n) to the event that the Brownian particle does not leave the tubular epsilon-neighborhood of M up to time T, and passing to the limit. The second one is defined as the limit of the laws of reflected Brownian motions with reflection on the boundaries of the tubular epsilon-neighborhoods of M. We prove that both surface measures exist and compare them with the Wiener measure W_M on C([0,T],M). We show that the first one is equivalent to W_M and compute the corresponding density explicitly in terms of the scalar curvature and the mean curvature vector of M. Further, we show that the second surface measure coincides with W_M. Finally, we study the limit behavior of both surface measures as T tends to infinity.

In this thesis the combinatorial framework of toric geometry is extended to equivariant sheaves over toric varieties. The central questions are how to extract combinatorial information from the description so developed, and whether equivariant sheaves can, like toric varieties, be considered as purely combinatorial objects. The thesis consists of three main parts. In the first part, by systematically extending the framework of toric geometry, a formalism is developed for describing equivariant sheaves by certain configurations of vector spaces. In the second part, homological properties of a certain class of equivariant sheaves are investigated, namely that of reflexive equivariant sheaves. Several kinds of resolutions for these sheaves are constructed which depend only on the configuration of their associated vector spaces. Thus a partially positive answer to the question of combinatorial representability is given. As a particular result, a new way of computing minimal resolutions for Z^n-graded modules over polynomial rings is obtained. In the third part a complete classification of the simplest nontrivial sheaves, equivariant vector bundles of rank two over smooth toric surfaces, is given. A combinatorial characterization is given, and parameter spaces (moduli spaces) are constructed which depend only on this characterization. In the appendices an outlook on equivariant sheaves and the relation of Chern classes to their combinatorial classification is given, focusing particularly on the case of the projective plane. A classification of equivariant vector bundles of rank three over the projective plane is given.

Semiparametric estimation of conditional quantiles for time series, with applications in finance
(2003)

The estimation of conditional quantiles has become an increasingly important issue in insurance and financial risk management. The stylized facts of financial time series data have rendered direct applications of extreme value theory methodologies in the estimation of extreme conditional quantiles inappropriate. On the other hand, quantile regression based procedures work well in nonextreme parts of a given data set but break down at extreme probability levels. In order to solve this problem, we combine nonparametric regression for time series and extreme value theory approaches in the estimation of extreme conditional quantiles for financial time series. To do so, a class of time series models is introduced that is similar to nonparametric AR-(G)ARCH models but does not depend on distributional and moment assumptions. We discuss estimation procedures for the nonextreme levels using these models and consider the estimates obtained by inverting conditional distribution estimators and by direct estimation using the Koenker-Bassett (1978) version for kernels. Under some regularity conditions, the asymptotic normality and uniform convergence, with rates, of the conditional quantile estimator for strong mixing time series are established. We study the estimation of the scale function in the introduced models using similar procedures and show that under some regularity conditions the scale estimate is weakly consistent and asymptotically normal. The application of the introduced models to the estimation of extreme conditional quantiles is achieved by augmenting them with methods from extreme value theory. It is shown that the overall extreme conditional quantile estimator is consistent. A Monte Carlo study is carried out to illustrate the good performance of the estimates, and real data are used to demonstrate the estimation of Value-at-Risk and conditional expected shortfall in financial risk management and to discuss their multiperiod predictions.
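As a toy illustration of kernel-based conditional quantile estimation in the Koenker-Bassett spirit, the following local-constant sketch computes the kernel-weighted \(\tau\)-quantile of the responses (which minimizes the kernel-weighted check loss). The Gaussian kernel, bandwidth and function name are illustrative; the thesis's estimators differ in detail.

    import numpy as np

    def kernel_conditional_quantile(x_obs, y_obs, x0, tau=0.95, h=0.5):
        # Gaussian kernel weights in the covariate, then the weighted
        # tau-quantile of the responses via the weighted empirical CDF.
        w = np.exp(-0.5 * ((x_obs - x0) / h) ** 2)
        order = np.argsort(y_obs)
        y, w = y_obs[order], w[order]
        cw = np.cumsum(w) / np.sum(w)
        idx = min(int(np.searchsorted(cw, tau)), len(y) - 1)
        return y[idx]

    # usage: y = x + heteroscedastic noise; 95% conditional quantile at x0 = 1
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 2, 2000)
    y = x + (0.5 + 0.5 * x) * rng.standard_normal(2000)
    print(kernel_conditional_quantile(x, y, x0=1.0))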

As the sustained trend towards integrating more and more functionality into systems on a chip can be observed in all fields, the economic realization of such systems is a challenge for the chip making industry. This is, however, barely possible today, as the ability to design and verify such complex systems has not kept up with the rapid technological development. Owing to this productivity gap, a design methodology mainly using pre-designed and pre-verified blocks is mandatory. The availability of such blocks, meeting the highest possible quality standards, is decisive for its success. Cost-effectively, this can only be achieved by formal verification on the block level, namely by checking properties ranging over finite intervals of time. As this verification approach is based on constructing and solving Boolean equivalence problems, it allows for the use of backtrack search procedures, such as SAT. Recent improvements of the latter are responsible for its high capacity. Still, the verification of some classes of hardware designs, featuring regular substructures or complex arithmetic data paths, is difficult and often intractable. For regular designs, this is mainly due to the individual treatment of symmetrical parts of the search space by the backtrack search procedures used. One approach to tackle these deficiencies is to exploit the regular structure for problem reduction on the register transfer level (RTL). This work describes a new approach for property checking on the RTL, preserving the problem-inherent structure for subsequent reduction. The reduction is based on eliminating symmetrical parts from bitvector functions, and hence, from the search space. Several approaches for symmetry reduction in search problems, based on the invariance of a function under permutation of variables, have been proposed previously. Unfortunately, our investigations did not reveal this kind of symmetry in relevant cases. Instead, we propose a reduction based on symmetrical values, as we encounter them much more frequently in our industrial examples. Let \(f\) be a Boolean function. The values \(0\) and \(1\) are symmetrical values for a variable \(x\) in \(f\) iff there is a variable permutation \(\pi\) of the variables of \(f\), fixing \(x\), such that \(f|_{x=0} = \pi(f|_{x=1})\). Then the question whether \(f=1\) holds is independent of this variable, and it can be removed. By iterative application of this approach to all variables of \(f\), either they are all removed, leaving \(f=1\) or \(f=0\) trivially, or there is a variable \(x'\) with no such \(\pi\). The latter leads to the conclusion that \(f=1\) does not hold, as we have found a counter-example with either \(x'=0\) or \(x'=1\). Extending this basic idea to vectors of variables allows it to be lifted to the RTL. There, self-similarities in the function representation, resulting from the regular structure preserved, can be exploited, and as a consequence, symmetrical bitvector values can be found syntactically. In particular, bitvector term-rewriting techniques, isomorphism procedures for specially manipulated term graphs, and combinations thereof are proposed. This approach dramatically reduces the computational effort needed for functional verification on the block level and, in particular, for the important problem class of regular designs. It allows the verification of industrial designs previously intractable.
The main contributions of this work are in providing a framework for dealing with bitvector functions algebraically, a concise description of bounded model checking on the register transfer level, as well as new reduction techniques and new approaches for finding and exploiting symmetrical values in bitvector functions.
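To make the definition of symmetrical values concrete, the following brute-force sketch checks it for a Boolean function given as a truth table. It enumerates permutations and is only feasible for tiny examples; the thesis instead detects such symmetries syntactically on the RTL.

    import itertools

    def cofactor(f, var, val):
        # Truth table of f with variable `var` fixed to val, over the
        # remaining variables (f maps bit tuples to 0/1).
        return {tuple(b for i, b in enumerate(bits) if i != var): f[bits]
                for bits in f if bits[var] == val}

    def has_symmetrical_values(f, var, n):
        # 0 and 1 are symmetrical values for `var` iff some permutation
        # pi of the remaining variables maps f|var=1 onto f|var=0.
        f0, f1 = cofactor(f, var, 0), cofactor(f, var, 1)
        m = n - 1
        for pi in itertools.permutations(range(m)):
            if all(f0[bits] == f1[tuple(bits[pi[i]] for i in range(m))]
                   for bits in f0):
                return True
        return False

    # f(x, a, b) = (x and a) or (not x and b): swapping a and b maps the
    # cofactor f|x=1 = a onto f|x=0 = b, so whether f = 1 holds is
    # independent of x and x can be removed.
    n = 3
    f = {bits: (bits[0] & bits[1]) | ((1 - bits[0]) & bits[2])
         for bits in itertools.product((0, 1), repeat=n)}
    print(has_symmetrical_values(f, 0, n))  # True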

Extensions of Shallow Water Equations

The subject of this thesis by Michael Hilden is the simulation of floods in urban areas. In case of strong rain events, water can flow out of the overloaded sewer system onto the street and damage the connected houses. The dependable simulation of water flow out of a manhole ("manhole") and over a curb ("curb") is crucial for the assessment of flood risks. The incompressible 3D Navier-Stokes equations (3D-NSE) describe the free surface flow of water accurately, but require expensive computations. Therefore, the less CPU-intensive (by a factor of roughly 1/100) Shallow Water Equations (SWE) are usually applied in hydrology. They can be derived from the 3D-NSE under the assumption of a hydrostatic pressure distribution via depth integration and are applied successfully, in particular, to simulations of river flow processes. The SWE computations of the flow problems "manhole" and "curb", however, differ from the 3D-NSE results. Thus, the SWE need to be extended appropriately to give reliable forecasts for flood risks in urban areas at reduced computational effort. These extensions are developed based on physical considerations not covered by the classical SWE. In one extension, a vortex layer on the ground is separated from the main flow, representing its new bottom. In a further extension, the hydrostatic pressure distribution is corrected by additional terms due to approximations of the vertical velocities and their interaction with the flow. These extensions raise the quality of the SWE results for these flow problems up to the quality level of the NSE results at a moderate increase in CPU effort.
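For reference, the classical SWE in one space dimension, i.e. the depth-integrated system that the above extensions modify, read

\[ \partial_t h + \partial_x (hu) = 0, \qquad \partial_t (hu) + \partial_x\Big(hu^2 + \tfrac{1}{2}\,g\,h^2\Big) = -\,g\,h\,\partial_x b, \]

with water depth \(h\), depth-averaged velocity \(u\), gravitational acceleration \(g\) and bottom topography \(b\); the hydrostatic pressure assumption enters through the term \(\tfrac{1}{2} g h^2\), which is precisely where the pressure corrections mentioned above act.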

The thesis discusses discrete-time dynamic flows over a finite time horizon T. These flows take time, called travel time, to pass an arc of the network. Travel times, as well as other network attributes such as costs, arc and node capacities, and the supply at the source node, can be constant or time-dependent. We review results on discrete-time dynamic network flow problems (DTDNFP) with constant attributes and develop new algorithms to solve several DTDNFPs with time-dependent attributes. Several dynamic network flow problems are discussed: the maximum dynamic flow, earliest arrival flow, and quickest flow problems. We generalize the hybrid capacity scaling and shortest augmenting path algorithm of the static network flow problem to take into account the time dependency of the network attributes. The result is used to solve the maximum dynamic flow problem with time-dependent travel times and capacities. We also develop a new algorithm to solve earliest arrival flow problems under the same assumptions on the network attributes. The possibility to wait (or park) at a node before departing on an outgoing arc is also taken into account. We prove that the complexity of the new algorithm is reduced when infinite waiting is allowed, and we report a computational analysis of this algorithm. The results are then used to solve quickest flow problems. Additionally, we discuss time-dependent bicriteria shortest path problems. Here we generalize the classical shortest path problem in two ways: we consider two, in general contradicting, objective functions and introduce a time dependency of the cost which is caused by a travel time on each arc. These problems have several interesting practical applications but have not attracted much attention in the literature. We develop two new algorithms, one of which requires weaker assumptions than previous research on the subject. Numerical tests show the superiority of the new algorithms. We then apply dynamic network flow models and their associated solution algorithms to determine lower bounds on the evacuation time, evacuation routes, and maximum capacities of inhabited areas with respect to safety requirements. As a macroscopic approach, our dynamic network flow models are mainly used to produce good lower bounds for the evacuation time and do not consider any individual behavior during the emergency situation. These bounds can be used to analyze existing buildings or help in the design phase of planning a building.
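To illustrate the underlying reduction, the following sketch computes a maximum dynamic flow with time-dependent travel times and capacities via the textbook time-expanded network, with unlimited waiting at nodes. It assumes the networkx package; this is the generic construction, not the thesis's specialized algorithms.

    import networkx as nx

    def max_dynamic_flow(arcs, source, sink, T):
        # arcs[(u, v)] maps a departure time t to (travel_time, capacity).
        g = nx.DiGraph()
        nodes = {u for (u, v) in arcs} | {v for (u, v) in arcs}
        for u in nodes:                       # holdover arcs model waiting
            for t in range(T):
                g.add_edge((u, t), (u, t + 1), capacity=float("inf"))
        for (u, v), profile in arcs.items():  # time-dependent movement arcs
            for t, (tau, cap) in profile.items():
                if t + tau <= T:
                    g.add_edge((u, t), (v, t + tau), capacity=cap)
        for t in range(T + 1):                # collect all flow arriving by T
            g.add_edge((sink, t), "super_sink", capacity=float("inf"))
        value, _ = nx.maximum_flow(g, (source, 0), "super_sink")
        return value

    # usage: travel time on the direct arc rises from 1 to 2 at time 2
    arcs = {
        ("s", "m"): {t: (1, 2) for t in range(5)},
        ("m", "d"): {t: (1, 2) for t in range(5)},
        ("s", "d"): {t: (1 if t < 2 else 2, 1) for t in range(5)},
    }
    print(max_dynamic_flow(arcs, "s", "d", T=4))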

The two main problems of continuous-time financial mathematics are option pricing and portfolio optimization. In this thesis, various new aspects of these major topics will be discussed. In all our considerations we assume the standard diffusion-type setting for security prices which is today well known under the term "Black-Scholes model". This setting and the basic results of option pricing and portfolio optimization are surveyed in the first chapter. The next three chapters deal with generalizations of the standard portfolio problem, also known as "Merton's problem". Here, we always use the stochastic control approach as introduced in the seminal papers by Merton (1969, 1971, 1990). One such problem is the very realistic setting of an investor who is faced with fixed monetary streams. More precisely, in addition to maximizing the utility from final wealth via choosing an investment strategy, the investor also has to fulfill certain consumption needs. The opposite situation, an additional income stream, can also be taken into account in our portfolio optimization problem. We consider various examples and solve them on the one hand via classical stochastic control methods and on the other hand by our new separation theorem. This, together with some numerical examples, forms Chapter 2. Chapter 3 is mainly concerned with the portfolio problem when the investor faces different lending and borrowing rates. We give explicit solutions (where possible) and numerical methods to calculate the optimal strategy in the cases of log utility and HARA utility for three different modelling approaches of the dependence of the borrowing rate on the fraction of wealth financed by credit. The further generalization of the standard Merton problem in Chapter 4 consists in considering simultaneously the possibilities of continuous and discrete consumption. In our general approach, the different consumption times can be assigned different weights, which generalizes the usual way of making them comparable via discounting. Chapter 5 deals with the special case of pricing basket options. Here, the main problem is not path dependence but the multi-dimensionality, which makes it impossible to give useful analytical representations of the option price. We review the literature and compare six different numerical methods in a systematic way. Thereby we also look at the influence of various parameters such as strike, correlation, forwards or volatilities on the performance of the different numerical methods. The problem of pricing Asian options on average spot with average strike is the topic of Chapter 6. Here we apply the bivariate normal distribution to obtain an approximate option price. This method proves to be very reliable and efficient for the valuation of different variants of Asian options on average spot with average strike.
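For reference, in the Black-Scholes setting with stock drift \(b\), interest rate \(r\) and volatility \(\sigma\), the standard Merton problem with CRRA utility \(U(x)=x^{\gamma}/\gamma\), \(\gamma<1\), has the well-known constant optimal risky fraction

\[ \pi^* \;=\; \frac{b-r}{(1-\gamma)\,\sigma^2}, \]

with the log-utility case \(\pi^* = (b-r)/\sigma^2\) recovered in the limit \(\gamma \to 0\); the generalizations of Chapters 2 to 4 modify this baseline.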

The goal of this thesis is a physically motivated and thermodynamically consistent formulation of higher gradient inelastic material behavior. Thereby, the influence of the material microstructure is incorporated. Next to theoretical aspects, the thesis is complemented with the algorithmic treatment and numerical implementation of the derived model. Hereby, two major inelastic effects will be addressed: on the one hand elasto-plastic processes and on the other hand damage mechanisms, which will both be modeled within a continuum mechanics framework.

The focus of this work has been to develop two families of wavelet solvers for the inner displacement boundary-value problem of elastostatics. Our methods are particularly suitable for the deformation analysis corresponding to geoscientifically relevant (regular) boundaries like the sphere, an ellipsoid or the actual Earth's surface. The first method, a spatial approach to wavelets on a regular (boundary) surface, is established for the classical (inner) displacement problem. Starting from the limit and jump relations of elastostatics, we formulate scaling functions and wavelets within the framework of the Cauchy-Navier equation. Based on numerical integration rules, a tree algorithm is constructed for fast wavelet computation. This method can be viewed as a first attempt at "short-wavelength modelling", i.e. high resolution of the fine structure of displacement fields. The second technique aims at a suitable wavelet approximation associated to Green's integral representation for the displacement boundary-value problem of elastostatics. The starting points are tensor product kernels defined on Cauchy-Navier vector fields. We arrive at scaling functions and a spectral approach to wavelets for the boundary-value problems of elastostatics associated to spherical boundaries. Again, a tree algorithm which uses a numerical integration rule for bandlimited functions is established to reduce the computational effort. For the numerical realization of both methods, multiscale deformation analysis is investigated for the geoscientifically relevant case of a spherical boundary using test examples. Finally, the applicability of our wavelet concepts is shown by considering the deformation analysis of a particular region of the Earth, viz. Nevada, using surface displacements provided by satellite observations. This represents a first step towards practical applications.
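For reference, the Cauchy-Navier equation underlying both wavelet constructions is the homogeneous displacement equation of linear elastostatics,

\[ \mu\,\Delta \mathbf{u} + (\lambda + \mu)\,\nabla(\nabla \cdot \mathbf{u}) = \mathbf{0}, \]

for the displacement field \(\mathbf{u}\) with Lamé constants \(\lambda\) and \(\mu\).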

The present thesis deals with coupled steady-state laminar flows of isothermal incompressible viscous Newtonian fluids in plain and in porous media. The flow in the pure fluid region is usually described by the (Navier-)Stokes system of equations. The most popular models for the flow in porous media are those suggested by Darcy and by Brinkman. Interface conditions proposed in the mathematical literature for coupling the Darcy and Navier-Stokes equations are briefly reviewed in the thesis. The coupling of the Navier-Stokes and Brinkman equations in the literature is based on the so-called continuous stress tensor interface conditions. One of the main tasks of this thesis is to investigate another type of interface conditions, namely the recently suggested stress tensor jump interface conditions. The mathematical models based on these interface conditions have not been carefully investigated from the mathematical point of view, and their validity has been a subject of discussion. The considerations within this thesis are a step toward a better understanding of these interface conditions. Several aspects of the numerical simulation of such coupled flows are considered:
- the choice of proper interface conditions between the plain and porous media;
- analysis of the well-posedness of the arising systems of partial differential equations;
- development of a numerical algorithm for the stress tensor jump interface conditions, coupling the Navier-Stokes equations in the pure liquid medium with the Navier-Stokes-Brinkman equations in the porous medium;
- validation of the macroscale mathematical models on the basis of a comparison with the results from a direct numerical simulation of representative model problems, allowing for grid resolution of the pore-level geometry;
- development of software and numerical simulation of 3-D industrial flows, namely of oil flows through car filters.
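For orientation, one frequently cited form of such a stress jump condition, due to Ochoa-Tapia and Whitaker and quoted here in its one-dimensional shear form (not necessarily the exact condition analyzed in this thesis), reads

\[ \frac{\mu}{\varepsilon}\,\frac{\partial \langle u \rangle}{\partial n}\bigg|_{\mathrm{porous}} \;-\; \mu\,\frac{\partial u}{\partial n}\bigg|_{\mathrm{fluid}} \;=\; \frac{\beta\,\mu}{\sqrt{K}}\, u_{\mathrm{if}}, \]

with porosity \(\varepsilon\), permeability \(K\), interface velocity \(u_{\mathrm{if}}\) and an adjustable jump coefficient \(\beta\).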

The question of how to model dependence structures between financial assets has been revolutionized over the last decade, when the copula concept was introduced into financial research. Even though the concept of splitting the marginal behavior and the dependence structure (described by a copula) of multidimensional distributions already goes back to Sklar (1959) and Hoeffding (1940), little empirical effort had been made to explore the potential of this approach. The aim of this thesis is to work out the possibilities of copulas for modelling, estimation and validation purposes. To this end we extend the class of Archimedean copulas via a transformation rule to new classes and come up with an explicit suggestion covering the Frank and Gumbel families. We introduce a copula-based mapping rule leading to joint independence, and as results of this mapping we present an easy method for multidimensional chi²-testing and a new estimate for high-dimensional parametric distribution functions. Different ways of estimating the tail dependence coefficient, describing the asymptotic probability of joint extremes, are compared and improved. The limitations of elliptical distributions are worked out and a generalized form of them, preserving their applicability, is developed. We state a method to split a (generalized) elliptical distribution into its radial and angular parts. This leads to a positive definite robust estimate of the dispersion matrix (here only given as a theoretical outlook). The impact of our findings is demonstrated by modelling and testing the return distributions of stock and currency portfolios as well as of oil-related commodity and LME metal baskets. In addition we show the crash stability of real estate based firms and the existence of nonlinear dependence within the yield curve.
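As a small illustration of nonparametric tail dependence estimation, the following sketch implements one standard rank-based estimator of the upper tail dependence coefficient (the thesis compares and improves several such estimators; the function name and the choice of k are illustrative).

    import numpy as np

    def upper_tail_dependence(x, y, k):
        # Estimates lambda_U = lim_{t -> 1} P(V > t | U > t) as the
        # fraction of the k largest x-observations whose y-partner is
        # also among the k largest y-observations.
        n = len(x)
        rx = np.argsort(np.argsort(x))   # ranks 0 .. n-1
        ry = np.argsort(np.argsort(y))
        return np.sum((rx >= n - k) & (ry >= n - k)) / k

    # usage: joint extremes produced by a common heavy-tailed factor
    rng = np.random.default_rng(0)
    z = rng.standard_t(df=3, size=5000)
    x = z + 0.5 * rng.standard_normal(5000)
    y = z + 0.5 * rng.standard_normal(5000)
    print(upper_tail_dependence(x, y, k=100))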

The hypoxia inducible factor-1 (HIF-1), a heterodimer composed of HIF-1alpha and HIF-1beta, is activated in response to low oxygen tension and serves as the master regulator for cells to adapt to hypoxia. HIF-1 is usually considered to be regulated via degradation of its alpha-subunit. Recent findings, however, point to the existence of alternative mechanisms of HIF-1 regulation which appear to be important for down-regulating HIF-1 under prolonged and severe oxygen depletion. The aims of my Ph.D. thesis, therefore, were to further elucidate the mechanisms involved in such down-regulation of HIF-1. The first part of the thesis addresses the impact of the severity and duration of oxygen depletion on HIF-1alpha protein accumulation and HIF-1 transcriptional activity. A special focus was put on the influence of the transcription factor p53 on HIF-1. I found that p53 only accumulates under prolonged anoxia (but not hypoxia), thus limiting its influence on HIF-1 to severe hypoxic conditions. At low expression levels, p53 inhibits HIF-1 transactivity. I attributed this effect to a competition between p53 and HIF-1alpha for binding to the transcriptional co-factor p300, since p300 overexpression reverses this inhibition. This assumption is corroborated by competitive binding of IVTT-generated p53 and HIF-1alpha to the CH1 domain of p300 in vitro. High p53 expression, on the other hand, affects HIF-1alpha protein negatively, i.e., p53 provokes pVHL-independent degradation of HIF-1alpha. Therefore, I conclude that low p53 expression attenuates HIF-1 transactivation by competing for p300, while high p53 expression negatively affects HIF-1alpha protein, thereby eliminating HIF-1 transactivity. Thus, once p53 becomes activated under prolonged anoxia, it contributes to terminating HIF-1 responses. In the second part of my study, I intended to further characterize the effects induced by prolonged periods of low oxygen, i.e., hypoxia, as compared to anoxia, with respect to alterations in HIF-1alpha mRNA. Prolonged anoxia, but not hypoxia, showed pronounced effects on HIF-1alpha mRNA. Long-term anoxia induced destabilization of HIF-1alpha mRNA, which manifests itself in a dramatic reduction of its half-life. The mechanistic background points to natural anti-sense HIF-1alpha mRNA, which is induced in a HIF-1-dependent manner, and to additional factors, which most likely influence HIF-1alpha mRNA indirectly via anti-sense HIF-1alpha mRNA mediated trans-effects. In summary, the data provide new information concerning the impact of p53 on HIF-1, which might be of importance for the decision between pro- and anti-apoptotic mechanisms depending upon the severity and duration of hypoxia. Furthermore, the results of this project give further insights into a novel mechanism of HIF-1 regulation, namely mRNA down-regulation under prolonged anoxic incubation. These mechanisms appear to be activated only in response to prolonged anoxia, but not to hypoxia. These considerations regarding HIF-1 regulation should be taken into account when prolonged incubations under hypoxic or anoxic conditions are analyzed at the level of HIF-1 stability regulation.

Nowadays one of the major objectives in the geosciences is the determination of the gravitational field of our planet, the Earth. Precise knowledge of this quantity is not just interesting in its own right, but is indeed a key point for a vast number of applications. The important question is how to obtain a good model for the gravitational field on a global scale. The only applicable solution, both in costs and data coverage, is the usage of satellite data. We concentrate on the highly precise measurements which will be obtained by GOCE (Gravity Field and Steady-State Ocean Circulation Explorer, launch expected 2006). This satellite has a gradiometer onboard which returns the second derivatives of the gravitational potential. Mathematically, we have to deal with several obstacles. The first one is that the noise in the different components of these second derivatives differs over several orders of magnitude, i.e. a straightforward solution of this outer boundary value problem will not work properly. Furthermore, we are not interested in the data at satellite height, but want to know the field at the Earth's surface; thus we need a regularization (downward continuation) of the data. These two problems are tackled in the thesis and are now described briefly. Split operators: We have to solve an outer boundary value problem at the height of the satellite track. Classically, one can handle first-order side conditions which are not tangential to the surface, and second derivatives pointing in the radial direction, employing integral and pseudo-differential equation methods. We present a different approach: we classify all first- and purely second-order operators with the property that a harmonic function stays harmonic under their application. This task is done by using modern algebraic methods for solving systems of partial differential equations symbolically. Now we can look at the problem with oblique side conditions as if we had ordinary, i.e. non-derived, side conditions. The only additional work which has to be done is an inversion of the differential operator, i.e. integration. In particular, we are capable of dealing with derivatives which are tangential to the boundary. Auto-regularization: The second obstacle is finding a proper regularization procedure. This is complicated by the fact that we are facing stochastic rather than deterministic noise. The main question is how to find an optimal regularization parameter, which is impossible without additional knowledge. However, we could show that with a very limited amount of additional information, which is also obtainable in practice, we can regularize in an asymptotically optimal way. In particular, we showed that the knowledge of two input data sets allows an order-optimal regularization procedure even under the hard conditions of Gaussian white noise and an exponentially ill-posed problem. A last but rather simple task is combining data from different derivatives, which can be done by a weighted least squares approach using the information we obtained from the regularization procedure. A practical application to the downward continuation problem for simulated gravitational data is shown.

In traditional portfolio optimization under the threat of a crash, the investment horizon or time to maturity is neglected. In developing the so-called crash hedging strategies (portfolio strategies which make an investor indifferent to the occurrence of an uncertain (downward) jump of the price of the risky asset), the time to maturity turns out to be essential. The crash hedging strategies are derived as solutions of nonlinear differential equations which themselves are consequences of an equilibrium strategy. Hereby, the situation of changing market coefficients after a possible crash is considered for the case of logarithmic utility as well as for the case of general utility functions. A benefit-cost analysis of the crash hedging strategy is carried out, as well as a comparison of the crash hedging strategy with the optimal portfolio strategies of traditional crash models. Moreover, it is shown that the crash hedging strategies optimize the worst-case bound for the expected utility from final wealth subject to some restrictions. Another application is to model crash hedging strategies in situations where both the number and the height of the crashes are uncertain but bounded. Taking the additional information of the probability of a possible crash into account leads to the development of the q-quantile crash hedging strategy.
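In formula form, the crash hedging strategies are tied to a worst-case problem of the schematic type

\[ \sup_{\pi(\cdot)}\ \inf_{(t,\,k)}\ \mathbb{E}\Big[U\big(X^{\pi,t,k}(T)\big)\Big], \]

where \(X^{\pi,t,k}(T)\) denotes the final wealth under strategy \(\pi\) if a crash of height \(k \le k^*\) occurs at time \(t \in [0,T]\); this is a compact restatement of the worst-case bound mentioned above, not a complete problem formulation.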

In the filling process of a car tank, the formation of foam plays an unwanted role, as it may prevent the tank from being completely filled or at least delay the filling. It is therefore of interest to optimize the geometry of the tank using numerical simulation in such a way that the influence of the foam is minimized. In this dissertation, we analyze the behaviour of the foam mathematically on the mesoscopic scale, that is, for single lamellae. The most important goals are, on the one hand, to gain a deeper understanding of the interaction of the relevant physical effects, and on the other hand, to obtain a model for the simulation of the decay of a lamella which can be integrated into a global foam model. In the first part of this work, we give a short introduction to the physical properties of foam and find that the Marangoni effect is the main cause of its stability. We then develop a mathematical model for the simulation of the dynamical behaviour of a lamella based on an asymptotic analysis exploiting the special geometry of the lamella. The result is a system of nonlinear partial differential equations (PDEs) of third order in two spatial dimensions and one time dimension. In the second part, we analyze this system mathematically and prove an existence and uniqueness result for a simplified case. For some special parameter domains the system can be simplified further, and in some cases explicit solutions can be derived. In the last part of the dissertation, we solve the system using a finite element approach and discuss the results in detail.

Inappropriate speed is the most common cause of road traffic accidents worldwide. Thus, a necessity for speed management exists. The so-called SUNflower states Sweden, the United Kingdom and the Netherlands, each investing considerable effort in traffic safety policy, have had great success in reducing mean road speeds and speed variances through speed management. However, the effect is still insufficient to achieve real traffic safety. Thus, there is a discussion about making use of technical in-vehicle devices. One of these technologies, called Intelligent Speed Adaptation (ISA), reduces vehicle speeds. This is done either by warning the driver that he is speeding, by activating the accelerator pedal with a counterforce, or by reducing the gasoline supply to the motor. These three ways of reducing the speed are called versions 1-3. The EC project for research on speed adaptation policies on European roads (PROSPER) deals with strategic proposals for the implementation of the different ISA versions. This thesis includes selected results of PROSPER. Two empirical surveys were carried out in order to give an overview of the basic conditions (e.g. social, economic and technical aspects) for an ISA implementation in Germany. On the one hand, a stakeholder analysis and questionnaire using the Delphi method was carried out in two rounds; on the other hand, a questionnaire with speed offenders was likewise carried out in two rounds. In addition, the author created an expert pool consisting of 23 experts representing the most important fields of science and practice in which ISA is involved. The author conducted telephone or personal interviews with most of the experts; 12 experts also produced a detailed publication on their professional point of view towards ISA. The two surveys and the professional comments on ISA led to four possible implementation scenarios for ISA in Germany. However, due to strong political opposition against ISA, it is also conceivable that ISA is not implemented, or that the implementation process starts after 2015 (i.e. outside the period of time considered). The scenarios are as follows:
A) Implementation of version 1 by market forces with governmental subsidies.
B) Implementation of version 2 by market forces, supported by traffic safety institutions and image-making processes.
C) Implementation of a modified version 3 by law for speed offenders instead of cancellation of the driving licence.
D) Implementation of various versions in Germany because of a broad implementation of ISA in the SUNflower states.
X) Non-implementation of ISA, leading to the necessity of alternative speed management measures.
The author prefers scenario B because, ceteris paribus, it seems to be the most likely way to implement the technology. As soon as ISA reaches technical maturity, the implementation process has to be accomplished in three steps: 1) marketing and image making, 2) market introduction, 3) market penetration. This implementation process for ISA by market forces could result in at least 15% of all vehicles being equipped with ISA before the year 2015.

In the present work, various aspects of the mixed continuum-atomistic modelling of materials are studied, most of which are related to the problems arising due to the development of microstructures during the transition from an elastic to a plastic description within the framework of continuum-atomistics. By virtue of the so-called Cauchy-Born hypothesis, which is an essential part of continuum-atomistics, a localization criterion has been derived in terms of the loss of infinitesimal rank-one convexity of the strain energy density. According to this criterion, a numerical yield condition has been computed for two different interatomic energy functions. Therewith, the range of validity of the Cauchy-Born rule has been defined, since the strain energy density remains quasiconvex only within the computed yield surface. To provide a possibility to continue the simulation of the material response after the loss of quasiconvexity, a relaxation procedure proposed by Tadmor et al., necessarily leading to the development of microstructures, has been used. Thereby, various notions of convexity have been reviewed in detail. As an alternative to the above-mentioned criterion, a stability criterion has been applied to detect the critical deformation. For the study of the postcritical region, the path-change procedure proposed by Wagner and Wriggers has been adapted to the continuum-atomistic setting and modified. To capture the deformation inhomogeneity arising due to the relaxation, the Cauchy-Born hypothesis has been extended by the assumption that it represents only the first term in the Taylor series expansion of the deformation map. The introduction of the second, quadratic term results in a higher-order materials theory. Based on a simple computational example, the relevance of this theory in the postcritical region has been shown. For all simulations, including the finite element examples, the development tool MATLAB 6.5 has been used.
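In formulas, the localization criterion used here is the classical loss of infinitesimal rank-one convexity of the strain energy density \(W(F)\), signalled by singularity of the acoustic tensor,

\[ \det Q(\mathbf{N}) = 0, \qquad Q_{ik}(\mathbf{N}) = N_J\,\mathbb{A}_{iJkL}\,N_L, \qquad \mathbb{A} = \frac{\partial^2 W}{\partial F\,\partial F}, \]

for some unit normal \(\mathbf{N}\), where \(F\) is the deformation gradient and \(\mathbb{A}\) the tangent moduli; this is the standard statement of the criterion, recalled for reference.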