Termination of Rewriting
(1994)
More and more, term rewriting systems are applied in computer science as well as in mathematics. They are based on directed equations which may be used as non-deterministic functional programs. Termination is a key property for computing with term rewriting systems. In this thesis, we deal with different classes of so-called simplification orderings which are able to prove the termination of term rewriting systems. Above all, we focus on the problem of applying these termination methods to examples occurring in practice. We introduce a formalism that allows clear representations of orderings. The power of classical simplification orderings, namely recursive path orderings, path and decomposition orderings, Knuth-Bendix orderings and polynomial orderings, is improved. Further, we restrict these orderings such that they are compatible with underlying AC-theories by extending well-known methods as well as by developing new techniques. For automatically generating all these orderings, heuristic-based algorithms are given. A comparison of these orderings with respect to their power and their time complexities concludes the theoretical part of this thesis. Finally, not only a detailed statistical evaluation of examples but also a brief introduction into the design of a software tool representing the integration of the specified approaches is given.
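The idea behind polynomial orderings, one of the simplification-ordering classes mentioned above, can be illustrated with a small sketch. The rewrite system and the interpretation below are hypothetical illustrations, and checking finitely many sample points is only a heuristic spot-check, not a termination proof:

```python
# Terms as nested tuples: ('d', ('s', 'x')) stands for d(s(x)).
# Example rules for doubling: d(0) -> 0, d(s(x)) -> s(s(d(x))).
RULES = [(('d', '0'), '0'),
         (('d', ('s', 'x')), ('s', ('s', ('d', 'x'))))]

# Hypothetical polynomial interpretation over the naturals:
# [0] = 1, [s](n) = n + 1, [d](n) = 3 * n.
def interpret(term, x):
    if term == 'x':
        return x
    if term == '0':
        return 1
    op, arg = term
    val = interpret(arg, x)
    return val + 1 if op == 's' else 3 * val

# A rule set terminates if every rule strictly decreases the interpretation
# for all variable values; here we merely spot-check sample points.
def decreasing(rules, samples=range(50)):
    return all(interpret(lhs, x) > interpret(rhs, x)
               for lhs, rhs in rules for x in samples)
```

For the rule d(s(x)) -> s(s(d(x))), the left-hand side interprets to 3x + 3 and the right-hand side to 3x + 2, so the decrease holds for every x, which is the pattern a full proof would establish symbolically.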
In this dissertation, the concept of Gröbner bases is generalized to finitely generated monoid and group rings. Reduction methods are used both to represent the monoid and group elements and to describe the right-ideal congruence in the corresponding monoid and group rings. Since monoids in general, and groups in particular, no longer admit admissible orderings, fundamental problems arise when defining a suitable reduction relation: on the one hand, it is difficult to guarantee the termination of a reduction relation; on the other hand, reduction steps are no longer compatible with multiplication, so reductions no longer necessarily describe a right-ideal congruence. This thesis presents various ways of defining reduction relations and examines them with respect to the problems described. The concept of saturation, i.e. extending a set of polynomials so that the right-ideal congruence it generates can be captured by reduction, is used to characterize Gröbner bases with respect to the various reductions in terms of s-polynomials. Using these concepts, it has been possible, for special classes of monoids, e.g. finite, commutative or free ones, and for various classes of groups, e.g. finite, free, plain, context-free or nilpotent ones, to exploit structural properties in order to define special reduction relations and to develop terminating algorithms for computing Gröbner bases with respect to these reduction relations.
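As background for the generalization described above, the classical commutative case can be sketched: Buchberger's procedure completes a generating set by reducing s-polynomials until all of them reduce to zero. The toy implementation below works in Q[x, y] with the lexicographic order (not in the monoid rings treated by the thesis) and uses top-reduction only:

```python
from fractions import Fraction
from itertools import combinations

# Polynomials in Q[x, y] as dicts {(deg_x, deg_y): coefficient},
# monomials compared by the lexicographic order with x > y.

def lead(p):
    """Leading monomial and coefficient."""
    m = max(p)
    return m, p[m]

def sub(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, Fraction(0)) - c
        if r[m] == 0:
            del r[m]
    return r

def mul_term(p, m, c):
    """Multiply p by the term c * x^m[0] * y^m[1]."""
    return {(a + m[0], b + m[1]): cc * c for (a, b), cc in p.items()}

def spoly(f, g):
    (mf, cf), (mg, cg) = lead(f), lead(g)
    lcm = (max(mf[0], mg[0]), max(mf[1], mg[1]))
    return sub(mul_term(f, (lcm[0] - mf[0], lcm[1] - mf[1]), Fraction(1) / cf),
               mul_term(g, (lcm[0] - mg[0], lcm[1] - mg[1]), Fraction(1) / cg))

def top_reduce(p, basis):
    """Repeatedly cancel the leading term of p against the basis."""
    p = dict(p)
    while p:
        mp, cp = lead(p)
        for g in basis:
            mg, cg = lead(g)
            if mp[0] >= mg[0] and mp[1] >= mg[1]:
                p = sub(p, mul_term(g, (mp[0] - mg[0], mp[1] - mg[1]), cp / cg))
                break
        else:
            return p        # leading term irreducible: nonzero normal form
    return p

def buchberger(gens):
    basis = [dict(g) for g in gens]
    pairs = list(combinations(range(len(basis)), 2))
    while pairs:
        i, j = pairs.pop()
        r = top_reduce(spoly(basis[i], basis[j]), basis)
        if r:                                   # nonzero remainder: enlarge basis
            pairs += [(k, len(basis)) for k in range(len(basis))]
            basis.append(r)
    return basis

f1 = {(1, 1): Fraction(1), (0, 0): Fraction(-1)}   # x*y - 1
f2 = {(2, 0): Fraction(1), (0, 1): Fraction(1)}    # x^2 + y
G = buchberger([f1, f2])
```

Once G is a Gröbner basis, ideal membership is decided by reduction to zero; for the example, y^3 + 1 lies in the ideal while 1 does not.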
Structure and Construction of Instanton Bundles on P3
At present the standardization of third generation (3G) mobile radio systems is the subject of worldwide research activities. These systems will cope with the market demand for high data rate services and the system requirement for flexibility concerning the offered services and the transmission qualities. However, there will be deficiencies with respect to high capacity if 3G mobile radio systems exclusively use single antennas. A very promising technique for increasing the capacity of 3G mobile radio systems is the application of adaptive antennas. In this thesis, the benefits of using adaptive antennas are investigated for 3G mobile radio systems based on Time Division CDMA (TD-CDMA), which forms part of the European 3G mobile radio air interface standard adopted by ETSI, and is intensively studied within the standardization activities towards a worldwide 3G air interface standard directed by the 3GPP (3rd Generation Partnership Project). One of the most important issues related to adaptive antennas is the analysis of their benefits compared to single antennas. In this thesis, these benefits are explained theoretically and illustrated by computer simulation results for both data detection, which is performed according to the joint detection principle, and channel estimation, which is applied according to the Steiner estimator, in the TD-CDMA uplink. The theoretical explanations are based on well-known solved mathematical problems. The simulation results illustrating the benefits of adaptive antennas are produced by employing a novel simulation concept, which offers a considerable reduction of the simulation time and complexity, as well as increased flexibility concerning the use of different system parameters, compared to the existing simulation concepts for TD-CDMA.
Furthermore, three novel techniques are presented which can be used in systems with adaptive antennas for additionally improving the system performance compared to single antennas. These techniques address the problems of code-channel mismatch, of user separation in the spatial domain, and of intercell interference, which, as shown in the thesis, play a critical role in the performance of TD-CDMA with adaptive antennas. Finally, a novel approach for illustrating the performance differences between the uplink and downlink of TD-CDMA based mobile radio systems in a straightforward manner is presented. Since a cellular mobile radio system with adaptive antennas is considered, the ultimate goal is the investigation of the overall system efficiency rather than the efficiency of a single link. In this thesis, the efficiency of TD-CDMA is evaluated through its spectrum efficiency and capacity, which are two closely related performance measures for cellular mobile radio systems. Compared to the use of single antennas, the use of adaptive antennas allows impressive improvements of both spectrum efficiency and capacity. Depending on the mobile radio channel model and the user velocity, improvement factors range from 6 to 10.7 for the spectrum efficiency, and from 6.7 to 12.6 for the spectrum capacity of TD-CDMA. Thus, adaptive antennas constitute a promising technique for increasing the capacity of future mobile communications systems.
In this thesis a new family of codes for use in optical high bit rate transmission systems with a direct-sequence code division multiple access scheme component was developed and its performance examined. These codes were then used as orthogonal sequences for the coding of the different wavelength channels in a hybrid OCDMA/WDMA system. The overall performance was finally compared to a pure WDMA system. The codes known to date suffer from the problem of needing very long sequences in order to accommodate an adequate number of users. Thus, code sequence lengths of 1000 or more were necessary to reach acceptable bit error ratios with only about 10 simultaneous users. Such sequence lengths are unacceptable if signals with data rates higher than 100 Mbit/s are to be transmitted, not to speak of the number of simultaneous users. Starting from the well-known optical orthogonal codes (OOC) and under the assumption of synchronization among the participating transmitters, which is justified for high bit rate WDM transmission systems, a new code family called "modified optical orthogonal codes" (MOOC) was developed by minimizing the cross-correlation products of each pair of sequences. In this way, the number of simultaneous users could be increased by several orders of magnitude compared to the codes known so far. The obtained code sequences were then introduced into numerical simulations of an 80 Gbit/s DWDM transmission system with 8 channels, each carrying a 10 Gbit/s payload. Usual DWDM systems are characterized by enormous efforts to minimize the spectral spacing between the various wavelength channels. These small spacings in combination with the high bit rates lead to very strict demands on system components like laser diodes, filters, multiplexers etc.
Continuous channel monitoring and temperature regulation of sensitive components are inevitable, but often cannot prevent degradation of the bit error ratio due to aging effects or outer influences like mechanical stress. The obtained results show that, very differently from the pure WDM system, by orthogonally coding adjacent wavelength channels with the proposed MOOC, the overall system performance becomes widely independent of system parameters like input powers, channel spacings and link lengths. Nonlinear effects like XPM that introduce interchannel crosstalk are effectively suppressed. Furthermore, one can entirely dispense with the bandpass filters, thus simplifying the receiver structure, which is especially interesting for broadcast networks. A DWDM system upgraded with the OCDMA subsystem shows very robust behavior against a variety of influences.
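The design constraint behind optical orthogonal codes, namely keeping periodic auto- and cross-correlations low, can be illustrated on a classical (13, 3, 1)-OOC. The two codewords below are a standard textbook example, not members of the MOOC family developed in the thesis:

```python
def periodic_corr(a, b, shift):
    """Periodic correlation of two 0/1 sequences at a given cyclic shift."""
    n = len(a)
    return sum(a[i] & b[(i + shift) % n] for i in range(n))

def to_seq(positions, n):
    """Binary sequence of length n with ones at the given positions."""
    return [1 if i in positions else 0 for i in range(n)]

n = 13
a = to_seq({0, 1, 4}, n)   # codeword with pulses at positions 0, 1, 4
b = to_seq({0, 2, 7}, n)   # codeword with pulses at positions 0, 2, 7

# Off-peak autocorrelation and full cross-correlation of the pair.
auto_a = max(periodic_corr(a, a, s) for s in range(1, n))
cross = max(periodic_corr(a, b, s) for s in range(n))
```

Both the off-peak autocorrelation and the cross-correlation stay at 1, which is what limits multiple-access interference when many such codewords are active simultaneously.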
Abstract
The main theme of this thesis is Graph Coloring Applications and Defining Sets in Graph Theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs such as bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct defining sets are introduced, and a review of some known kinds of defining sets in graph theory is incorporated. In Chapter 2 the basic definitions and some relevant notation used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with already existing research results, and an algorithm is introduced which enables one to determine a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.
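The notion of a defining set for a coloring can be made concrete with a brute-force check: a partial coloring is a defining set precisely when it extends to exactly one proper coloring. The sketch below counts extensions by backtracking and is an illustration, not the algorithm developed in the thesis:

```python
def count_extensions(adj, k, partial):
    """Count proper k-colorings extending the given partial coloring."""
    colors = dict(partial)
    free = [v for v in range(len(adj)) if v not in partial]

    def backtrack(i):
        if i == len(free):
            return 1
        v, total = free[i], 0
        for c in range(k):
            # c is usable on v iff no already-colored neighbour has it
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                total += backtrack(i + 1)
                del colors[v]
        return total

    return backtrack(0)

def is_defining_set(adj, k, partial):
    return count_extensions(adj, k, partial) == 1

# Star K_{1,3}: vertex 0 is the centre, vertices 1-3 are the leaves.
star = [[1, 2, 3], [0], [0], [0]]
```

For the star with two colors, fixing the centre's color forces every leaf, so the single-vertex partial coloring {0: 0} is a defining set, whereas the empty partial coloring admits two proper 2-colorings.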
Urban design guidelines have been used in Jakarta for controlling the form of the built environment. This planning instrument has been implemented in several central city redevelopment projects, particularly in superblock areas. The instrument has gained popularity and has been implemented in new development and conservation areas as well. Despite its popularity, there is no formal literature on the Indonesian urban design guideline that systematically explains its contents, structure and formulation process. This dissertation attempts to explain the substance of the urban design guideline and the way to control its implementation. Various streams of urban design theories are presented and evaluated in terms of their suitability for attaining a high urbanistic quality in major Indonesian cities. The form and the practical application of this planning instrument are elaborated in a comparative investigation of similar instruments in other countries, namely the USA, Britain and Germany. A case study of a superblock development in Jakarta demonstrates the application of the urban design theories and guidelines. Currently, the role of the computer in the process of formulating urban design guidelines in Indonesia is merely as a replacement for manual methods, particularly in the areas of worksheet calculation and design presentation. Further computer support for urban planning and design tasks has been researched in developed countries, which shows its potential in supporting the decision-making process and enabling public participation, team collaboration, and the documentation and publication of urban design decisions. It is hoped that computer usage in the Indonesian urban design process can catch up with the global trend of multimedia, networking (Internet/Intranet) and interactive functions, which is presented with examples from developed countries.
The study of families of curves with prescribed singularities has a long tradition. Its foundations were laid by Plücker, Severi, Segre, and Zariski at the beginning of the 20th century. Leading to interesting results with applications in singularity theory and in the topology of complex algebraic curves and surfaces, it has continuously attracted algebraic geometers ever since. Throughout this thesis we examine the varieties V(D,S1,...,Sr) of irreducible reduced curves in a fixed linear system |D| on a smooth projective surface S over the complex numbers having precisely r singular points of types S1,...,Sr. We are mainly interested in the following three questions: 1) Is V(D,S1,...,Sr) non-empty? 2) Is V(D,S1,...,Sr) T-smooth, that is, smooth of the expected dimension? 3) Is V(D,S1,...,Sr) irreducible? We would like to answer the questions in such a way that we present numerical conditions, depending on invariants of the divisor D and of the singularity types S1,...,Sr, which ensure a positive answer. The main conditions which we derive will be of the type inv(S1)+...+inv(Sr) < aD^2+bD.K+c, where inv is some invariant of singularity types, a, b and c are some constants, and K is some fixed divisor. The case that S is the projective plane has been very well studied by many authors, and on other surfaces some results for curves with nodes and cusps have been derived in the past. We, however, consider arbitrary singularity types, and the results which we derive apply to large classes of surfaces, including surfaces in projective three-space, K3-surfaces, products of curves and geometrically ruled surfaces.
In the present work, the behavior of thermoplastic composites is examined by means of experimental and numerical investigations. The goal of these investigations is the identification and quantification of the failure behavior and the energy absorption mechanisms of layered, quasi-isotropic thermoplastic fiber-reinforced composites, and the transfer of the insights gained into the properties and behavior of a material model for predicting the crash behavior of these materials in transient analyses.
Representatives of the investigated classes are undrawn and medium-drawn circular knitted fabrics and glass-mat-reinforced thermoplastics (GMT). The investigations on circular-knitted glass-fiber (GF) reinforced polyethylene terephthalate (PET) were part of a research project on the characterization of both the processability and the mechanical behavior. Experiments on GMT and chopped-fiber GMT were also carried out for comparison with the knitted material and serve to confirm the behavior observed for the knits.
Particular attention is paid to the influence of the specimen geometry on the results, because the crash characteristics depend substantially on the geometry of the tested specimen. For this purpose, a round hat profile was defined to investigate this influence. This particular geometry offers advantages especially with respect to the energy absorption capability and the manufacturability of thermoplastic composites (TPCs). Impact and perforation tests were carried out to investigate damage propagation and to characterize the toughness of the materials under investigation.
Layered TPCs fail mainly in a laminate bending mode with combined intra- and interlaminar shear (transverse shear between plies, partly with transverse shear fractures within individual plies). By coupling the actual failure modes with crash characteristics such as the mean crash stress, indications of the relation between material parameters and absolute energy absorption could be obtained.
Numerical investigations were carried out with an explicit finite element program for the simulation of three-dimensional large deformations. With respect to the cross-sectional build-up, the model consists of a mesoscopic representation which distinguishes between matrix interlayers and mesoscopic composite plies. The model geometry represents a simplified longitudinal cross-section through the specimen. Influences of friction between impactor and material as well as between individual plies were taken into account. The locally prevailing strain rate, energy and stress-strain distribution across the mesoscopic phases could also be observed. This model clearly shows the various effects arising from the heterogeneous character of the laminate and also provides hints towards explanations of these effects.
Based on the results of the above investigations, a phenomenological model incorporating a-priori information on the inherent material behavior is proposed. Since the crash behavior is dominated by the heterogeneous character of the material, the phases are considered separately in the model. A simple method for determining the mesoscopic properties is discussed.
To describe the behavior of the thermoplastic matrix system during crushing, a strain-rate- and temperature-dependent plasticity law would suffice. For the description of the behavior of the composite plies, a coupled plasticity and damage formulation is proposed. Such a model can describe both the plastic contribution of the matrix system and the softening caused by fiber-matrix interface failure and fiber fractures. The proposed model distinguishes between load cases of axial crushing and failure without crushing. This subdivision enables an explicit modeling of the material, taking into account the specific material state and the geometry for the extraordinary load case leading to progressive failure.
The development of recombinant DNA techniques opened a new era for protein production both in scientific research and industrial application. However, the purification of recombinant proteins is very often quite difficult and inefficient. Therefore, we tried to employ novel techniques for the expression and purification of three pharmacologically interesting proteins: the plant toxin gelonin; a fusion protein of gelonin and the extracellular domain of the α-subunit of the acetylcholine receptor (gelonin-AchR); and human neurotrophin 3 (hNT3). Recombinant gelonin, the acetylcholine receptor α-subunit and their fusion product, gelonin-AchR, were constructed and expressed. The gelonin gene, a 753 bp polynucleotide, was chemically synthesized by Ya-Wei Shi et al. and was kindly provided to us. The gene was first inserted into the vector pUC118, yielding pUC-gel. It was subsequently transferred into pET28a, and pET-gel was expressed in E. coli. The product, gelonin, was soluble and was purified in two steps, showing a homogeneous band corresponding to 28 kD on SDS-PAGE. The expression of the extracellular domain of the α-subunit of AchR always led to insoluble aggregates, and even upon coexpression with the chaperonin GroESL, only very small and hardly reproducible amounts of soluble material were formed. Therefore, recombinant AchR-gelonin was cloned and expressed in the same host. The corresponding fusion protein, gelonin-AchR, again formed aggregates and had to be solubilized in 6 M Gu-HCl for further purification and refolding. The final product, however, was recognized by several monoclonal antibodies directed against the extracellular domain of the α-subunit of AchR as well as by a polyclonal serum against gelonin. Expression and purification of recombinant hNT3 was achieved by the use of a protein self-splicing system. Based on the reported hNT3 DNA sequence, a 380 bp fragment corresponding to a 14 kD protein was amplified from genomic DNA of human whole blood by PCR.
The DNA fragment was cloned into the pTXB1 vector, which contains a DNA fragment of the intein and chitin binding domain (CBD). A further construct, pJLA-hNT3, is temperature-inducible. Both constructs expressed the target protein, hNT3-intein-CBD, in E. coli upon induction with IPTG or temperature, however as aggregates. After denaturation and renaturation, the soluble fusion protein was slowly loaded onto an affinity column of chitin beads. A 14 kD hNT3 could be isolated after cleavage with DTT either at 4 °C or 25 °C for 48 h. Based on nerve fiber outgrowth of the dorsal root ganglia of chicken embryos, both hNT3-intein-CBD and hNT3 itself exhibit almost the same biological activity.
Contributions to the application of adaptive antennas and CDMA code pooling in the TD CDMA downlink
(2002)
TD (Time Division)-CDMA is one of the partial standards adopted by 3GPP (3rd Generation Partnership Project) for 3rd Generation (3G) mobile radio systems. An important issue when designing 3G mobile radio systems is the efficient use of the available frequency spectrum, that is, the achievement of a spectrum efficiency as high as possible. It is well known that the spectrum efficiency can be enhanced by utilizing multi-element antennas instead of single-element antennas at the base station (BS). Concerning the uplink of TD-CDMA, the benefits achievable by multi-element BS antennas have been quantitatively studied to a satisfactory extent. However, corresponding studies for the downlink are still missing. This thesis has the goal of making contributions to fill this lack of information. For near-to-reality directional mobile radio scenarios, the TD-CDMA downlink utilizing multi-element antennas at the BS is investigated both on the system level and on the link level. The system level investigations show how the carrier-to-interference ratio can be improved by applying such antennas. As a result of the link level investigations, which rely on the detection scheme Joint Detection (JD), the improvement of the bit error rate achieved by utilizing multi-element antennas at the BS can be quantified. Concerning the link level of TD-CDMA, a number of improvements are proposed which allow considerable performance enhancement of the TD-CDMA downlink in connection with multi-element BS antennas.
These improvements include
• the concept of partial joint detection (PJD), in which at each mobile station (MS) only a subset of the arriving CDMA signals, including those being of interest to this MS, are jointly detected,
• a blind channel estimation algorithm,
• CDMA code pooling, that is, assigning more than one CDMA code to certain connections in order to offer these users higher data rates,
• maximizing the Shannon transmission capacity by an interleaving concept termed CDMA code interleaving and by advantageously selecting the assignment of CDMA codes to mobile radio channels,
• specific power control schemes, which tackle the problem of different transmission qualities of the CDMA codes.
As a comprehensive illustration of the advantages achievable by multi-element BS antennas in the TD-CDMA downlink, quantitative results concerning the spectrum efficiency for different numbers of antenna elements at the BS conclude the thesis.
The dissertation is concerned with the numerical solution of Fokker-Planck equations in high dimensions arising in the study of the dynamics of polymeric liquids. Traditional methods based on tensor product structure are not applicable in high dimensions, for the number of nodes required to yield a fixed accuracy increases exponentially with the dimension; a phenomenon often referred to as the curse of dimension. Particle methods or finite point set methods are known to break the curse of dimension. The Monte Carlo method (MCM) applied to such problems is 1/sqrt(N) accurate, where N is the cardinality of the point set considered, independent of the dimension. Deterministic versions of the Monte Carlo method, called quasi-Monte Carlo methods (QMC), are quite effective in integration problems, and an accuracy of the order of 1/N can be achieved, up to a logarithmic factor. However, such a replacement cannot be carried over to particle simulations due to the correlation among the quasi-random points. The method proposed by Lecot (C. Lecot and F. E. Khettabi, Quasi-Monte Carlo simulation of diffusion, Journal of Complexity, 15 (1999), pp. 342-359) is the only known QMC approach, but it not only leads to large particle numbers, the proven order of convergence is also 1/N^(2s) in dimension s. We modify the method presented there in such a way that the new method works with reasonable particle numbers even in high dimensions and has a better order of convergence. Though the provable order of convergence is 1/sqrt(N), the results show less variance and thus the proposed method still slightly outperforms the standard MCM.
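The accuracy gap between MCM and QMC mentioned above can already be seen in a toy integration problem. The sketch below uses a base-2 van der Corput sequence for plain quadrature on [0, 1]; it illustrates only the integration setting, not the particle scheme developed in the thesis:

```python
import random

def van_der_corput(i, base=2):
    """Radical inverse of i in the given base: the i-th quasi-random point."""
    x, denom = 0.0, 1.0
    while i:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

def estimate(f, points):
    """Sample-mean estimate of the integral of f over [0, 1]."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x          # exact integral over [0, 1] is 1/3
N = 1024
rng = random.Random(0)
mc = estimate(f, [rng.random() for _ in range(N)])
qmc = estimate(f, [van_der_corput(i) for i in range(N)])
```

With N a power of two, the first N van der Corput points are exactly the equispaced grid in scrambled order, so the quasi-random estimate lands within roughly 1/N of the true value, while the random estimate fluctuates at the 1/sqrt(N) scale.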
Matrix Compression Methods for the Numerical Solution of Radiative Transfer in Scattering Media
(2002)
Radiative transfer in scattering media is usually described by the radiative transfer equation, an integro-differential equation which describes the propagation of the radiative intensity along a ray. The high dimensionality of the equation leads to a very large number of unknowns when discretizing the equation. This is the major difficulty in its numerical solution. In the case of isotropic scattering and diffuse boundaries, the radiative transfer equation can be reformulated into a system of integral equations of the second kind, where the position is the only independent variable. By employing the so-called momentum equation, we derive an integral equation which is also valid in the case of linear anisotropic scattering. This equation is very similar to the equation for the isotropic case: no additional unknowns are introduced and the integral operators involved have very similar mapping properties. The discretization of an integral operator leads to a full matrix. Therefore, due to the large dimension of the matrix in practical applications, it is not feasible to assemble and store the entire matrix. The so-called matrix compression methods circumvent the assembly of the matrix. Instead, the matrix-vector multiplications needed by iterative solvers are performed only approximately, thus reducing the computational complexity tremendously. The kernels of the integral equation describing the radiative transfer are very similar to the kernels of the integral equations occurring in the boundary element method. Therefore, with only slight modifications, the matrix compression methods developed for the latter are readily applicable to the former. As opposed to the boundary element method, the integral kernels for radiative transfer in absorbing and scattering media involve an exponential decay term. We examine how this decay influences the efficiency of the matrix compression methods.
Further, a comparison with the discrete ordinate method shows that discretizing the integral equation may lead to reductions in CPU time and to an improved accuracy, especially in the case of small absorption and scattering coefficients or if local sources are present.
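The idea of performing the matrix-vector product only approximately can be sketched with a degenerate-kernel (low-rank) approximation for two well-separated point clusters. The exponentially decaying kernel below is a hypothetical stand-in for the radiative-transfer kernels, and the interpolation-based compression is one generic scheme among the methods the thesis discusses:

```python
import math

def kernel(x, y):
    """Exponentially decaying kernel (illustrative stand-in)."""
    return math.exp(-(y - x)) / (y - x)

def cheb_nodes(p, a, b):
    """p Chebyshev nodes on the interval [a, b]."""
    return [(a + b) / 2 + (b - a) / 2 * math.cos((2 * k + 1) * math.pi / (2 * p))
            for k in range(p)]

def lagrange(nodes, k, x):
    """k-th Lagrange basis polynomial of the nodes, evaluated at x."""
    v = 1.0
    for q, t in enumerate(nodes):
        if q != k:
            v *= (x - t) / (nodes[k] - t)
    return v

targets = [i / 31 for i in range(32)]        # cluster on [0, 1]
sources = [4 + j / 31 for j in range(32)]    # well-separated cluster on [4, 5]
density = [1.0] * 32

# Exact matrix-vector product: O(m * n) kernel evaluations.
exact = [sum(kernel(x, y) * d for y, d in zip(sources, density)) for x in targets]

# Compressed product: interpolate the target dependence at p Chebyshev nodes,
# so only O((m + n) * p) kernel evaluations are needed.
p = 6
nodes = cheb_nodes(p, 0.0, 1.0)
moments = [sum(kernel(t, y) * d for y, d in zip(sources, density)) for t in nodes]
approx = [sum(lagrange(nodes, k, x) * moments[k] for k in range(p)) for x in targets]
```

Because the kernel is smooth over well-separated clusters, a rank of p = 6 already reproduces the full product to a small relative error, which is the mechanism matrix compression methods exploit block by block.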
Different aspects of geomagnetic field modelling from satellite data are examined in the framework of modern multiscale approximation. The thesis is mostly concerned with wavelet techniques, i.e. multiscale methods based on certain classes of kernel functions which are able to realize a multiscale analysis of the function (data) space under consideration. It is thus possible to break up complicated functions like the geomagnetic field, electric current densities or geopotentials into different pieces and study these pieces separately. Based on a general approach to scalar and vectorial multiscale methods, topics include multiscale denoising, crustal field approximation and downward continuation, wavelet parametrizations of the magnetic field in Mie representation as well as multiscale methods for the analysis of time-dependent spherical vector fields. For each subject the necessary theoretical framework is established and numerical applications examine and illustrate the practical aspects.
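The multiscale idea, splitting a function into a coarse part plus detail at successive scales, can be illustrated in one dimension with the Haar wavelet. This is only a toy analogue of the spherical kernel-based wavelets used in the thesis:

```python
def haar_step(s):
    """One analysis step: pairwise averages (coarse) and differences (detail)."""
    avg = [(s[2 * i] + s[2 * i + 1]) / 2 for i in range(len(s) // 2)]
    det = [(s[2 * i] - s[2 * i + 1]) / 2 for i in range(len(s) // 2)]
    return avg, det

def haar_decompose(s):
    """Split a length-2^k signal into one coarse value and per-scale details."""
    details = []
    while len(s) > 1:
        s, d = haar_step(s)
        details.append(d)
    return s[0], details

def haar_reconstruct(coarse, details):
    """Invert the decomposition exactly."""
    s = [coarse]
    for d in reversed(details):
        s = [v for a, b in zip(s, d) for v in (a + b, a - b)]
    return s

signal = [1.0, 3.0, 2.0, 2.0, 5.0, 7.0, 0.0, 4.0]
coarse, details = haar_decompose(signal)
```

The coarse value is the overall mean, each detail list captures one scale, and summing everything back reconstructs the signal exactly; studying the pieces separately is precisely the "break up and analyze" step described above.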
Utilization of Correlation Matrices in Adaptive Array Processors for Time-Slotted CDMA Uplinks
(2002)
It is well known that the performance of mobile radio systems can be significantly enhanced by the application of adaptive antennas which consist of multi-element antenna arrays plus signal processing circuitry. In the thesis the utilization of such antennas as receive antennas in the uplink of mobile radio air interfaces of the type TD-CDMA is studied. Especially, the incorporation of covariance matrices of the received interference signals into the signal processing algorithms is investigated with a view to improve the system performance as compared to state of the art adaptive antenna technology. These covariance matrices implicitly contain information on the directions of incidence of the interference signals, and this information may be exploited to reduce the effective interference power when processing the signals received by the array elements. As a basis for the investigations, first directional models of the mobile radio channels and of the interference impinging at the receiver are developed, which can be implemented on the computer at low cost. These channel models cover both outdoor and indoor environments. They are partly based on measured channel impulse responses and, therefore, allow a description of the mobile radio channels which comes sufficiently close to reality. Concerning the interference models, two cases are considered. In the one case, the interference signals arriving from different directions are correlated, and in the other case these signals are uncorrelated. After a visualization of the potential of adaptive receive antennas, data detection and channel estimation schemes for the TD-CDMA uplink are presented, which rely on such antennas under the consideration of interference covariance matrices. Of special interest is the detection scheme MSJD (Multi Step Joint Detection), which is a novel iterative approach to multi-user detection. 
Concerning channel estimation, the incorporation of the knowledge of the interference covariance matrix and of the correlation matrix of the channel impulse responses is enabled by an MMSE (Minimum Mean Square Error) based channel estimator. The presented signal processing concepts using covariance matrices for channel estimation and data detection are merged in order to form entire receiver structures. Important tasks to be fulfilled in such receivers are the estimation of the interference covariance matrices and the reconstruction of the received desired signals. These reconstructions are required when applying MSJD in data detection. The considered receiver structures are implemented on the computer in order to enable system simulations. The obtained simulation results show that the developed schemes are very promising in cases, where the impinging interference is highly directional, whereas in cases with the interference directions being more homogeneously distributed over the azimuth the consideration of the interference covariance matrices is of only limited benefit. The thesis can serve as a basis for practical system implementations.
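The gain from MMSE estimation over plain least squares can be illustrated in the simplest scalar setting y = h + n with known variances. This is a sketch of the principle only, not the matrix-valued estimator with interference covariance matrices used in the thesis:

```python
import random

random.seed(1)
var_h, var_n, N = 1.0, 0.5, 20_000
w = var_h / (var_h + var_n)        # MMSE (Wiener) weight for y = h + n

mse_ls, mse_mmse = 0.0, 0.0
for _ in range(N):
    h = random.gauss(0.0, var_h ** 0.5)      # channel coefficient
    y = h + random.gauss(0.0, var_n ** 0.5)  # noisy observation
    mse_ls += (y - h) ** 2        # least-squares estimate is y itself
    mse_mmse += (w * y - h) ** 2  # shrunk MMSE estimate
mse_ls /= N
mse_mmse /= N
```

Theory predicts mse_ls = var_n = 0.5 and mse_mmse = var_h * var_n / (var_h + var_n) = 1/3: shrinking towards the prior mean trades a little bias for a net reduction in mean square error, which is the same principle that incorporating covariance knowledge exploits in the matrix case.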
Based on the framework of continuum mechanics, two different concepts to formulate phenomenological anisotropic inelasticity are developed in a thermodynamically consistent manner. On the one hand, special emphasis is placed on the incorporation of structural tensors; on the other hand, fictitious configurations are introduced. Substantial parts of this work deal with the numerical treatment of the presented theory within the finite element method.
This thesis builds a bridge between singularity theory and computer algebra. To an isolated hypersurface singularity one can associate a regular meromorphic connection, the Gauß-Manin connection, containing a lattice, the Brieskorn lattice. The leading terms of the Brieskorn lattice with respect to the weight and V-filtration of the Gauß-Manin connection define the spectral pairs. They correspond to the Hodge numbers of the mixed Hodge structure on the cohomology of the Milnor fibre and belong to the finest known invariants of isolated hypersurface singularities. The differential structure of the Brieskorn lattice can be described by two complex endomorphisms A0 and A1 containing even more information than the spectral pairs. In this thesis, an algorithmic approach to the Brieskorn lattice in the Gauß-Manin connection is presented. It leads to algorithms to compute the complex monodromy, the spectral pairs, and the differential structure of the Brieskorn lattice. These algorithms are implemented in the computer algebra system Singular.
In the present work, we investigated how to correct the questionable normality, linearity and quadratic-approximation assumptions underlying existing Value-at-Risk methodologies. In order to also take into account the skewness, the heavy-tailedness and the stochastic nature of the volatility of the market values of financial instruments, the constant-volatility hypothesis widely used by existing Value-at-Risk approaches was also investigated and corrected, and the tails of the financial return distributions were handled via Generalized Pareto or Extreme Value Distributions. Artificial Neural Networks were combined with Extreme Value Theory in order to build consistent and nonparametric Value-at-Risk measures without the need for any of the questionable assumptions specified above. For that, either autoregressive models (AR-GARCH) were used or the direct characterization of conditional quantiles due to Bassett and Koenker [1978] and Smith [1987]. In order to build consistent and nonparametric Value-at-Risk estimates, we proved some new results extending White's Artificial Neural Network denseness results to unbounded random variables and provided a generalisation of the Bernstein inequality, which is needed to establish the consistency of our new Value-at-Risk estimates. For an accurate estimation of the quantile of the unexpected returns, Generalized Pareto and Extreme Value Distributions were used. The new Artificial Neural Network denseness results make it possible to build consistent, asymptotically normal and nonparametric estimates of conditional means and stochastic volatilities. The denseness results use the Sobolev metric space L^m(μ) for some m >= 1 and some probability measure μ, and hold for a certain subclass of square-integrable functions.
The Fourier transform and the new extension of the Bernstein inequality for unbounded random variables from stationary alpha-mixing processes, combined with the new generalization of a result of White and Wooldridge [1990], were the main tools to establish the extension of White's neural network denseness results. To illustrate the goodness and level of accuracy of the new denseness results, we demonstrated the applicability of the new Value-at-Risk approaches by means of three examples with real financial data, mainly from the banking sector, traded on the Frankfurt Stock Exchange.
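As a point of reference for the Generalized Pareto tail handling described above, a standard peaks-over-threshold Value-at-Risk estimator can be sketched as follows. This is the textbook EVT approach, not the thesis's neural-network estimators; the threshold quantile and function name are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto

def var_pot(losses, threshold_quantile=0.95, p=0.99):
    """Value-at-Risk at level p via peaks-over-threshold:
    fit a Generalized Pareto Distribution to the exceedances over a
    high threshold u and invert the tail formula
      VaR_p = u + (sigma/xi) * ((n/n_u * (1-p))^(-xi) - 1),
    which assumes a fitted shape xi != 0.
    """
    losses = np.asarray(losses)
    u = np.quantile(losses, threshold_quantile)
    exc = losses[losses > u] - u
    xi, _, sigma = genpareto.fit(exc, floc=0.0)  # location fixed at 0
    n, n_u = len(losses), len(exc)
    return u + sigma / xi * ((n / n_u * (1.0 - p)) ** (-xi) - 1.0)
```

On heavy-tailed data the POT estimate typically tracks the empirical high quantile while extrapolating more stably beyond the sample.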
This thesis deals with the investigation of the dynamics of optically excited (hot) electrons in thin and ultra-thin layers. The main interest concerns the time behaviour of the dissipation of energy and momentum of the excited electrons. The relevant relaxation times lie in the femtosecond region. Two-photon photoemission is known to be an adequate tool for analysing such dynamical processes in real time. This work expands the knowledge in the fields of electron relaxation in ultra-thin silver layers on different substrates, as well as in adsorbate states in the bandgap of a semiconductor. It contributes to the comprehension of spin transport through an interface between a metal and a semiconductor. The primary goal was to test the predicted theory by reducing the observed crystal in at least one direction. One expects a change of the electron relaxation behaviour when altering the crystal's shape from a 3d bulk to a 2d (ultra-thin) layer, because below a certain layer thickness the electron gas becomes two-dimensional. This behaviour could be confirmed in this work. In a silver layer of about 3 nm on graphite, the hot electrons show a jump to longer relaxation times over the whole accessible energy range. It is the first time that the temporal evolution of the relaxation of excited electrons could be observed during the transition from a 3d to a 2d system. In order to reduce or even eliminate the influence of the substrate, the system of silver on the semiconductor GaAs, which has a bandgap of 1.5 eV at the Gamma point, was investigated. The observations of the relaxation behaviour of hot electrons in different ultra-thin silver layers on this semiconductor showed that at metal-insulator junctions, plasmons in the silver and at the interface, as well as cascading electrons from higher-lying energies, have a strong influence on the dissipation of momentum and energy.
This comes mainly from the band bending of the semiconductor and from the electrons which are excited in GaAs. The limitation of the silver layer on GaAs in one direction led to the expected generation of quantum well states (QWS) in the bandgap. These adsorbate states have quantised energy and momentum values, which are directly connected to the layer thickness and the standing electron wave therein. With the experiments of this work, published values could not only be completed and confirmed, but the time evolution of such a QWS could also be determined. It turned out that this QWS can only be filled by electrons which move from the lower edge of the conduction band of the semiconductor into the silver and undergo cascading steps there. By means of the system silver on GaAs, and the known fact that excitation of electrons in GaAs with circularly polarised light of energy 1.5 eV produces spin-polarised electrons in the conduction band, it became possible to contribute to the active topic of spin injection. The main target of spin injection is the transfer of spin-polarised electrons out of a ferromagnet into a semiconductor, in order to develop spin-dependent switches and memories. It could be demonstrated here that spin-polarised electrons from GaAs can move through the interface into silver and be photoemitted from there, with their spin polarisation still detectable. As a third system, ultra-thin silver layers were deposited on the insulator MgO, which has a bandgap of 7.8 eV. In this system as well, a change in the relaxation time could be recognised when reducing the silver layer from thick to ultra-thin. Additionally, an extremely long relaxation time was found at layer thicknesses of 0.6 – 1.2 nm.
This time is an order of magnitude longer than in thick films, which is a consequence of two factors: first, the reduction of the phase space due to the confinement of the electron gas in the z-direction, and second, the slower thermalisation of the electron gas due to fewer accessible scattering partners.
Microsystem technology has been a fast-evolving field over the last few years. Its ability to handle volumes in the sub-microliter range makes it very interesting for potential application in fields such as biology, medicine and pharmaceutical research. However, the use of micro-fabricated devices for the analysis of liquid biological samples still has to prove its applicability for many particular demands of basic research. This is particularly true for samples consisting of complex protein mixtures. The presented study therefore aimed at evaluating whether a commonly used glass-coating technique from the field of micro-fluidic technology can be used to fabricate an analysis system for molecular biology. It was ultimately motivated by the demand to develop a technique that allows the analysis of biological samples at the single-cell level. Gene expression at the transcription level is initiated and regulated by DNA-binding proteins. To fully understand these regulatory processes, it is necessary to monitor the interaction of specific transcription factors with other elements - proteins as well as DNA sites - in living cells. One well-established method to perform such analysis is the Chromatin Immunoprecipitation (ChIP) assay. To map protein-DNA interactions, living cells are treated with formaldehyde in vivo to cross-link DNA-binding proteins to their resident sites. The chromatin is then broken into small fragments, and specific antibodies against the protein of interest are used to immunopurify the chromatin fragments to which those factors are bound. After purification, the associated DNA can be detected and analyzed using Polymerase Chain Reaction (PCR). Current ChIP technology is limited in that it needs a relatively large number of cells, while there is increasing interest in monitoring DNA-protein interactions in very few, if not single, cells. Most notably this is the case in research on early organism development (embryogenesis).
To investigate if microsystem technology can be used to analyze DNA-protein complexes from samples containing chromatin from only few cells, a new setup for fluid transport in glass capillaries of 75 µm inner diameter has been developed, forming an array of micro-columns for parallel affinity chromatography. The inner capillary walls were antibody-coated using a silane-based protocol. The remaining surface was made chemically inert by saturating free binding sites with suitable biomolecules. Variations of this protocol have been tested. Furthermore, the sensitivity of the PCR method to detect immunoprecipitated protein-DNA complexes was improved, resulting in the reliable detection of about 100 DNA fragments from chromatin. The aim of the study was to successively decrease the amount of analyzed chromatin in order to investigate the lower limits of this technology in regard to sensitivity and specificity of detection. The Drosophila GAGA transcription factor was used as an established model system. The protein has already been analyzed in several large-scale ChIP experiments and antibodies of excellent specificity are available. The results of the study revealed that this approach is not easily applicable to "real-world" biological samples in regard to volume reduction and specificity. In particular, material that non-specifically adsorbed to capillary surfaces outweighed the specific antibody-antigen interaction that the system was designed for. It became clear that complex biological structures, such as chromatin-protein compositions, are not as easily accessible by techniques based on chemically modified glass surfaces as pre-purified samples. In the case of the investigated system, it became evident that there is a need for more research that goes beyond the scope of this work. It is necessary to develop novel coatings and materials to prevent non-specific adsorption.
In addition to improving existing techniques, fundamentally new concepts, such as microstructures in biocompatible polymers or liquid transport on hydrophobic stripes on planar substrates to minimize surface contact, may also help to advance the miniaturization of biological experiments.
One crucial assumption of continuous financial mathematics is that the portfolio can be rebalanced continuously and that there are no transaction costs. In reality, neither assumption holds: continuous rebalancing is impossible, and each transaction causes costs which have to be subtracted from the wealth. Therefore, we focus on trading strategies which are based on discrete rebalancing - at random or equidistant times - and which take transaction costs into account. These strategies are considered for various utility functions and are compared with the optimal strategies of continuous trading.
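The effect of discrete rebalancing with proportional transaction costs can be illustrated with a small Monte Carlo sketch. This is a generic constant-mix strategy under geometric Brownian motion, not the optimal strategies derived in the thesis; all parameter values and names are hypothetical.

```python
import numpy as np

def discrete_constant_mix(pi=0.5, cost=0.001, mu=0.08, sigma=0.2, r=0.02,
                          T=1.0, n_steps=12, x0=1.0, seed=0):
    """Terminal wealth of a constant-mix strategy rebalanced at
    equidistant times, paying proportional transaction costs
    (illustrative sketch only).
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    stock = pi * x0          # wealth held in the risky asset
    bond = (1.0 - pi) * x0   # wealth held in the riskless asset
    for _ in range(n_steps):
        z = rng.standard_normal()
        stock *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        bond *= np.exp(r * dt)
        wealth = stock + bond
        # rebalance back to fraction pi, paying proportional costs
        trade = abs(pi * wealth - stock)
        wealth -= cost * trade
        stock, bond = pi * wealth, (1.0 - pi) * wealth
    return stock + bond
```

Running the same random path with and without costs shows the wealth drag that the thesis weighs against the loss from rebalancing only at discrete times.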
The immiscible lattice BGK method for solving the two-phase incompressible Navier-Stokes equations is analysed in great detail. Equivalent moment analysis and local differential geometry are applied to examine how interface motion is determined and how surface tension effects can be included such that consistency with the two-phase incompressible Navier-Stokes equations can be expected. The results obtained from theoretical analysis are verified by numerical experiments. Since the intrinsic interface tracking scheme of immiscible lattice BGK is found to produce unsatisfactory results in two-dimensional simulations, several approaches to improving it are discussed, but all of them turn out to yield no substantial improvement. Furthermore, the intrinsic interface tracking scheme of immiscible lattice BGK is found to be closely connected to the well-known conservative volume tracking method. This result suggests coupling the conservative volume tracking method for determining interface motion with the Navier-Stokes solver of immiscible lattice BGK. Applied to simple flow fields, this coupled method yields much better results than plain immiscible lattice BGK.
In this work we present and estimate an explanatory model with a predefined system of explanatory equations, a so-called lag-dependent model. We present a locally optimal lag estimator based on blocked neural networks, together with consistency theorems. We define change points in the context of the lag-dependent model and present a powerful algorithm for change point detection in high-dimensional, highly dynamical systems. Finally, we present a special kind of bootstrap for approximating the distribution of statistics of interest in dependent processes.
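The thesis develops a specialised bootstrap for dependent processes; as a generic point of reference, a plain moving-block bootstrap can be sketched as follows. Block length, statistic and function name are illustrative choices, not the thesis's method.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot, stat=np.mean, seed=0):
    """Moving-block bootstrap for a weakly dependent series: draw
    overlapping blocks of length `block_len` with replacement,
    concatenate them to a pseudo-series of the original length, and
    recompute the statistic on each replicate.
    """
    x = np.asarray(x)
    rng = np.random.default_rng(seed)
    n = len(x)
    # all overlapping blocks of the requested length
    blocks = np.array([x[i:i + block_len] for i in range(n - block_len + 1)])
    k = int(np.ceil(n / block_len))  # blocks needed per pseudo-series
    out = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=k)
        series = np.concatenate(blocks[idx])[:n]
        out[b] = stat(series)
    return out
```

Resampling blocks rather than single observations preserves the short-range dependence structure within each block, which is why block schemes are the usual starting point for bootstrapping dependent data.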
Lung cancer, mainly caused by tobacco smoke, is the leading cause of cancer mortality. Large efforts in prevention and cessation have reduced smoking rates in the U.S. and other countries. Nevertheless, since 1990, rates have remained constant, and it is believed that most of those currently smoking (~25%) are addicted to nicotine and therefore unable to stop smoking. An alternative strategy to reduce lung cancer mortality is the development of chemopreventive mixtures used to reduce cancer risk. Before entering clinical trials, it is crucial to know the efficacy, toxicity and the molecular mechanism by which the active compounds prevent carcinogenesis. 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), N-nitrosonornicotine (NNN) and benzo[a]pyrene (B[a]P) are among the most carcinogenic compounds in tobacco smoke. All have been widely used as model carcinogens and their tumorigenic activities are well established. It is believed that formation of DNA adducts is a crucial step in carcinogenesis. NNK and NNN form 4-hydroxy-1-(3-pyridyl)-1-butanone (HPB)-releasing and methylating adducts, while B[a]P forms B[a]P-tetraol-releasing adducts. Different isothiocyanates (ITCs) are able to prevent NNK-, NNN- or B[a]P-induced tumor formation, but relatively little is known about the mechanism of these preventive effects. In this thesis, the influence of different ITCs on adduct formation from NNK plus B[a]P and from NNN was evaluated. Using an A/J mouse lung tumor model, it was first shown that the formation of HPB-releasing, O6-mG and B[a]P-tetraol-releasing adducts was not affected when NNK and B[a]P were given individually or in combination by gavage. Using the same model, the effects of different mixtures of PEITC and BITC, given by gavage or in the diet, on DNA adduct formation were evaluated. Dietary treatment with phenethyl isothiocyanate (PEITC) or PEITC plus benzyl isothiocyanate (BITC) reduced levels of HPB-releasing adducts by 40-50%.
This is consistent with a previously shown 40% inhibition of tumor multiplicity for the same treatment. In the gavage treatments with ITCs, it seemed that PEITC reduced HPB-releasing DNA adducts, while BITC counteracted these effects. Levels of O6-mG were minimally affected by any of the treatments. Levels of B[a]P-tetraol-releasing adducts were reduced by gavaged PEITC and BITC 120 h after the last carcinogen treatment, while dietary treatment had no effect. We then extended our investigation to F-344 rats, using a similar ITC treatment protocol as in the mouse model. NNK was given in the drinking water and B[a]P in the diet. Dietary PEITC reduced the formation of HPB-releasing globin and DNA adducts in lung but not in liver, while levels of B[a]P-tetraol-releasing adducts were unaffected. Additionally, the effects of dietary PEITC, 3-phenylpropyl isothiocyanate, and their N-acetylcysteine conjugates on adducts from NNN in drinking water were evaluated in rat esophageal DNA and globin. Using a protocol known to inhibit NNN-induced esophageal tumorigenesis, HPB-releasing adduct levels were unaffected by the ITC treatment. The observations that dietary PEITC inhibited the formation of HPB-releasing DNA adducts only in mice, where the control levels were above 1 fmol/µg DNA, and that adduct levels in rat lung were reduced to levels seen in liver, led to the conclusion that in mice and rats there are at least two activation pathways of NNK. One is PEITC-sensitive and responsible for the high adduct levels in lung, and presumably also for the higher carcinogenicity of NNK in lung. The other is PEITC-insensitive and responsible for the remaining adduct levels and tumorigenicity. In conclusion, our results demonstrated that the preventive mechanism by which ITCs inhibit carcinogenesis is only in part due to inhibition of DNA adduct formation and that other mechanisms are involved.
There is a large body of evidence indicating that induction of apoptosis may be a mechanism by which ITCs prevent tumor formation, but further studies are required.
In this work, the (Ti,Al,Si)N system was investigated. The main point was to study the possibility of obtaining nanocomposite coating structures by deposition of multilayer films of TiN and AlSiN, in order to understand the relation between the mechanical properties (hardness, Young's modulus) and the microstructure (nanocrystalline with individual phases). Special attention was given to the effects of annealing at 600 °C on microstructural changes in the coatings. Surface hardness, elastic modulus, and the diffusion and composition of the multilayers were the test tools for the comparison between the different coated samples with and without annealing at 600 °C. To this end, a rectangular aluminum vacuum chamber with three unbalanced sputtering magnetrons for the deposition of thin film coatings from different materials was constructed. The chamber consists mainly of two chambers: the pre-vacuum chamber to load the workpiece, and the main vacuum chamber where the sputter deposition of the thin film coatings takes place. The workpiece moves on a carriage travelling on a rail between the two chambers to the magnetron positions, driven by stepper motors. The chambers are separated by a self-constructed rectangular gate operated manually from outside the chamber. The chamber was sealed for vacuum use with glue and screws. Therefore, different types of glue were tested not only for their ability to form a uniform thin layer in the gap between the aluminum plates to seal the chamber, but also for the low outgassing rates required for vacuum use. An epoxy was able to fulfill these tasks. The evacuation characteristics of the constructed chamber were improved by minimizing the outgassing rate of the inner surface.
To this end, the throughput outgassing test method was used to compare the short-term (one hour) outgassing rates of samples of the two selected aluminum alloys (A2017 and A5353). Different machining methods and treatments for the inner surface of the vacuum chamber were tested. Machining the surface of material A (A2017) with ethanol as coolant reduced its outgassing rate by a factor of 6 compared with a non-machined surface of the same material. Removing the porous oxide layer on top of the aluminum surface by pickling with HNO3 acid, and protecting it by producing a passive, non-porous oxide layer in an anodizing process, protects the surface for a longer time and minimizes the outgassing rates even in a humid atmosphere. The residual gas analyzer (RGA) test showed that more than 85% of the gases inside the test chamber were water vapour (H2O), the rest being N2, H2 and CO, so a liquid-nitrogen water vapour trap can enhance the pump-down process. As a result, it was possible to construct a chamber that can be pumped down with a turbomolecular pump (450 L/s) to the range of 1x10-6 mbar within one hour of evacuation, where the chamber volume is 160 litres and the inner surface area is 1.6 m2. This is a good base pressure for the sputter deposition of hard thin film coatings. Multilayer thin film coatings were deposited to demonstrate that nanostructured thin films within the (Ti,Al,Si)N system can be prepared by reactive magnetron sputtering of multiple thin film layers of TiN and AlSiN. The SNMS spectrometry of the test samples showed that complete diffusion between the different deposited thin film layers takes place in each sample, even at low substrate temperature.
The high magnetic flux of the unbalanced magnetrons and the high sputtering power produced a high ion-to-atom flux, which gives the deposited atoms high mobility. The interaction between the high mobility of the deposited atoms and the ion-to-atom flux was sufficient to enhance the diffusion between the deposited thin layers. The XRD patterns for this system showed that the structure of the formed mixture consists of two phases: one phase identified as bulk TiN, and a second, amorphous phase, which may be SiNx, AlN, or a Ti-Al-Si-N combination. As a result, we were able to deposit nanocomposite coatings by the deposition of multilayers of TiN and AlSiN thin films using the constructed vacuum chamber.
Solid particle erosion is usually undesirable, as it leads to development of cracks and
holes, material removal and other degradation mechanisms that as a final
consequence reduce the durability of the structure exposed to erosion. The main aim
of this study was to characterise the erosion behaviour of polymers and polymer
composites, to understand the nature and the mechanisms of the material removal
and to suggest modifications and protective strategies for the effective reduction of
the material removal due to erosion.
In polymers, the effects of morphology and of mechanical, thermomechanical and
fracture-mechanical properties were discussed. It was established that there is no general
rule for high resistance to erosive wear. Because of the different erosive wear
mechanisms that can take place, wear resistance can be achieved by more than one
type of material. Difficulties with materials optimisation for wear reduction arise from
the fact that a material can show different behaviour depending on the impact angle
and the experimental conditions. Effects of polymer modification through mixing or
blending with elastomers and inclusion of nanoparticles were also discussed.
Toughness modification of epoxy resin with hygrothermally decomposed polyesterurethane
can be favourable for the erosion resistance. This type of modification
also changes the crosslinking characteristics of the modified EP, and it was
established that the crosslink density, along with the fracture energy, is a decisive
parameter for the erosion response. Melt blending of thermoplastic polymers with
functionalised rubbers, on the other hand, can also have a positive influence, whereas
the inclusion of nanoparticles deteriorates the erosion resistance at low oblique
impact angles (30°).
The effects of fibre length, orientation, fibre/matrix adhesion, stacking sequence,
number, position and existence of interleaves were studied in polymer composites.
Linear and inverse rules of mixture were applied in order to predict the erosion rate of
a composite system as a function of the erosion rate of its constituents and their
relative content. Best results were generally delivered with the inverse rule of
mixtures approach.
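The two rules of mixtures referred to above can be stated compactly: the linear rule gives ER_c = V_f·ER_f + V_m·ER_m, while the inverse rule gives 1/ER_c = V_f/ER_f + V_m/ER_m, with V_f and V_m the fibre and matrix fractions. A minimal sketch (all numeric values below are purely illustrative, not measured data from the thesis):

```python
def erosion_rate_rom(er_fiber, er_matrix, v_fiber, inverse=True):
    """Predict the composite erosion rate from the constituents'
    erosion rates via the linear or inverse rule of mixtures."""
    v_matrix = 1.0 - v_fiber
    if inverse:
        # inverse (harmonic) rule: 1/ER_c = V_f/ER_f + V_m/ER_m
        return 1.0 / (v_fiber / er_fiber + v_matrix / er_matrix)
    # linear rule: ER_c = V_f*ER_f + V_m*ER_m
    return v_fiber * er_fiber + v_matrix * er_matrix
```

Since the harmonic mean never exceeds the arithmetic mean, the inverse rule always predicts a lower (or equal) composite erosion rate, which matches its tendency to weight the more erosion-resistant constituent.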
A semi-empirical model, proposed to describe the property degradation and damage
growth characteristics and to predict residual properties after a single impact, was
applied to the case of solid particle erosion. Theoretical predictions and experimental
results were in very good agreement.
Solid particle erosion occurs when solid particles impinge on surfaces and is usually
characterised by material removal which, besides particle velocity and impact angle,
depends strongly on the particular material. In recent years, the use of polymers and
composites in place of traditional materials has increased strongly. Polymers and
polymer composites exhibit a relatively high erosion rate (ER), which considerably
limits the potential application of these materials in erosive environments.
Investigations of the erosion behaviour of selected polymers and polymer composites
have shown that these systems follow different wear mechanisms, which are very
complex and are not governed by a single material property. Based on the ER, the
erosion behaviour can be roughly divided into two categories: brittle and ductile.
Brittle erosion behaviour shows a maximum ER at 90°, while for ductile behaviour
the maximum lies at 30°. Whether a material exhibits one or the other behaviour
depends not only on its properties but also on the respective test parameters.
The aim of this research was to characterise the fundamental behaviour of polymers
and composites under erosion, to identify the various wear mechanisms, and to
determine the decisive material properties and parameters, in order to enable or
improve applications of these materials under erosive conditions. The main factors
influencing erosion were determined experimentally on a representative selection of
polymers, elastomers, modified polymers and fibre composites.
Thermoplastic polymers and thermoplastic and crosslinked elastomers
Attempts to correlate the erosion resistance of selected polymers (polyethylenes
and polyurethanes) with various material properties showed that there is no clear
dependence either on individual parameters or on combinations of properties.
Determining the material properties under the same experimental conditions as in
the erosion tests might lead to a better correlation between ER and material
parameters.
Modified epoxy resin
Using a modified epoxy resin (EP) with varying crosslink density as an example, a
correlation was found between erosion resistance and fracture energy, and between
erosion resistance and crosslink density. The modification was carried out with
various fractions of a hygrothermally decomposed polyurethane (HD-PUR). The
relation between ER and crosslinking parameters is consistent with the theory of
rubber elasticity.
Modification efficiency in thermosets, thermoplastics and elastomers
Furthermore, the influence of modifications of polymers and elastomers was
investigated. The above-mentioned system (i.e. EP/HD-PUR) also allows the
influence of toughness modification of the epoxy resin (EP) on the erosion behaviour
to be studied. It was shown that for HD-PUR fractions above 20 wt.% this
modification has a positive influence on the erosion resistance. By varying the
HD-PUR fraction, material properties ranging between those of a conventional
thermoset and those of a less elastic rubber can be produced for this EP. The
modified EP resin is therefore a very good model material for studying the influence
of the experimental conditions and for investigating whether different erodents lead
to the same erosion mechanisms. The transition from thermoset-like to ductile
behaviour was examined using four erodents. The experiments showed that such a
transition occurs when very fine, angular particles (corundum) serve as erodents.
Particle size and shape are of decisive importance for the respective wear mechanisms.
The efficiency of novel thermoplastic elastomers with a co-continuous phase
structure, consisting of thermoplastic polyester and rubber (functionalised NBR and
EPDM rubber), was investigated with respect to erosion resistance. Large fractions
of functionalised rubber (more than 20 wt.%) are advantageous for the erosion
resistance. It was further investigated whether the outstanding erosion resistance of
polyurethane (PUR) can be increased even further by the addition of nanosilicates.
The result was that the nanoparticles have a negative effect, especially at a low
impact angle (30°). The weak adhesion between matrix and particles facilitates the
initiation and growth of cracks, which leads to faster material removal from the
surface.
Fibre composites
Furthermore, fibre composites (FRP) with thermoplastic and thermoset matrices
were examined for their erosive wear behaviour. Of particular interest was the
influence of fibre length and orientation. Short-fibre-reinforced systems have a better
erosion resistance than unidirectional (UD) systems. The role of fibre orientation can
only be considered in conjunction with other parameters such as matrix toughness,
fibre content or fibre-matrix adhesion. For GF/PP composites, the systems eroded
parallel to the drawing direction show the lowest resistance, whereas for a GF/EP
system the maximum ER occurs in the perpendicular direction. An improvement of
the interfacial shear strength has a lasting influence on the erosive wear rate. If the
interfacial adhesion is sufficient, the erosion direction plays an insignificant role for
the ER. It was further shown that the presence of tough interleaves leads to a marked
improvement in the erosion resistance of CF/EP composites.
A further task was to determine the role of the fibre volume fraction. Linear, inverse
and modified rules of mixtures were applied, and it was found that the inverse rules
of mixtures describe the ER as a function of the fibre volume fraction better.
In the application domain of fibre composites, knowledge not only of the ER but also
of the residual properties is required. A semi-empirical model for predicting the
impact energy threshold (Uo) for the onset of strength reduction and the residual
tensile strength after an impact load was applied to the investigation of erosive wear.
Experimental results and theoretical predictions agreed very well, not only for
thermoset CF/EP composites but also for composites with a thermoplastic matrix
(GF/PP).
In the last decade, injection molding of long-fiber reinforced thermoplastics
(LFT) has been established as a low-cost, high volume technique for manufacturing
parts with complex shape without any post-treatment [1–3]. Applications
are mainly found in the automotive industry, with a volume growing
annually by 10% to 15% [4].
While first applications were based on polyamide (PA6 and PA6.6), the market
share of glass fiber reinforced polypropylene (PP) is growing due to cost savings
and ease of processing. With the use of polypropylene, different processing
techniques such as gas-assisted injection molding [5] or injection compression
molding [6] have emerged in addition to injection molding [7, 8].
In order to overcome or justify the higher material costs compared to short
fiber reinforced thermoplastics, the manufacturing techniques for LFT pellets
with fiber lengths greater than 10 mm have evolved, starting from pultrusion,
by improving impregnation and throughput [9] or by direct addition of fiber
strands in the mold [10–12].
The benefit of long glass fiber reinforcement either in PP or PA is mainly due
to the enhanced resistance to fiber pull-out resulting in an increase in impact
properties and strength [13–19], even at low temperature levels [20]. Creep
and fatigue resistance are also substantially improved [21, 22].
The performance of fiber reinforced thermoplastics manufactured by injection
molding strongly depends on the flow-induced microstructure which is
driven by materials composition, processing conditions and part geometry.
The anisotropic microstructure is characterized by fiber fraction and dispersion,
fiber length and fiber orientation.
Facing the complexity of this processing technique, simulation becomes a valuable
tool already in the concept phase for parts manufactured by injection
molding. Process simulation supports decisions with respect to the choice of concepts
and materials. The part design is determined in terms of mold filling
including location of gates, vents and weld lines. Tool design requires the
determination of melt feeding, logistics and mold heating. Subsequently, performance
including prediction of shrinkage and warpage as well as structural
analysis is evaluated [23].
While simulation based on two-dimensional representation of three-dimensional
part geometry has been extensively used during the last two decades, the
complexity of the parts as well as the trend towards solid modelling in CAD
and CAE demands the step towards three-dimensional process simulation.
The scope of this work is the prediction of flow-induced microstructure during
injection molding of long glass fiber reinforced polypropylene using three-dimensional
process simulation. Modelling of the injection molding process in
three dimensions is supported experimentally by rheological characterization
in both shear and extensional flow and by two- and three-dimensional evaluation
of microstructure.
In chapter 2 the fundamentals of rheometry and rheology are presented with
respect to long fiber reinforced thermoplastics. The influence of parameters
on microstructure is described and approaches for modelling the state of microstructure
and its dynamics are discussed.
Chapter 3 introduces a rheometric technique allowing for rheological characterization
of polymer melts at processing conditions as encountered during
manufacturing. Using this rheometer, both shear and extensional viscosity of
long glass fiber reinforced polypropylene are measured with respect to composition
of materials, processing conditions and geometry of the cavity.
Chapter 4 contains the evaluation of microstructure of long glass fiber reinforced
polypropylene in terms of two-dimensional fiber orientation and its dependence
on materials parameters and processing condition. For the evaluation
of three-dimensional microstructure, a technique based on x-ray tomography
is introduced.
In chapter 5, modelling of microstructural dynamics is addressed. One-way
coupling of interactions between fluid and fibers is described macroscopically.
The flow behavior of fibers in the vicinity of cavity walls is evaluated experimentally.
From these observations, a model for treatment of fiber-wall interaction
with respect to numerical simulation is proposed.
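A classical single-fiber building block underlying such macroscopic orientation models is Jeffery's equation; the following sketch integrates the in-plane rotation of a rigid fiber in simple shear (aspect ratio, shear rate, and step size are illustrative; the thesis's one-way coupled macroscopic model is not reproduced here):

```python
import math

def jeffery_period(aspect, gamma_dot=1.0, dt=1e-4):
    """Numerically integrate the in-plane Jeffery orbit
    d(phi)/dt = gamma_dot/(r^2 + 1) * (r^2 cos^2(phi) + sin^2(phi))
    for a rigid fiber of aspect ratio r in simple shear and return the
    time needed for one full rotation."""
    r2 = aspect ** 2
    phi, t = 0.0, 0.0
    while phi < 2.0 * math.pi:
        dphi_dt = gamma_dot / (r2 + 1.0) * (r2 * math.cos(phi) ** 2
                                            + math.sin(phi) ** 2)
        phi += dphi_dt * dt   # forward Euler step
        t += dt
    return t

# Analytical period: T = 2*pi*(r + 1/r)/gamma_dot
T_num = jeffery_period(aspect=5.0)
```

Slender fibers (large r) spend most of each period nearly aligned with the flow, which is one reason shear-dominated filling produces the strongly aligned microstructure discussed above.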
Chapter 6 presents the application of three-dimensional simulation of the injection
molding process. Mold filling simulation is performed using a commercial
code while prediction of 3D fiber orientation is based on a proprietary module.
The rheological and thermal properties derived in chapter 3 are tested by
simulation of the experiments and comparison of predicted pressure and temperature
profile versus recorded results. The performance of fiber orientation
prediction is verified using analytical solutions of test examples from literature.
The capability of three-dimensional simulation is demonstrated based on the
simulation of mold filling and prediction of fiber orientation for an automotive
part.
Extensions of Shallow Water Equations
The subject of the thesis of Michael Hilden is the simulation of floods in urban areas. In case of strong rain events, water can flow out of the overloaded sewer system onto the street and damage the connected houses. The dependable simulation of water flow out of a manhole ("manhole") and over a curb ("curb") is crucial for the assessment of flood risks. The incompressible 3D Navier-Stokes equations (3D-NSE) describe the free-surface flow of water accurately, but require expensive computations. Therefore, the less CPU-intensive (by a factor of about 1/100) Shallow Water Equations (SWE) are usually applied in hydrology. They can be derived from the 3D-NSE under the assumption of a hydrostatic pressure distribution via depth-integration and are applied successfully, in particular, to simulations of river flow processes. The SWE computations of the flow problems "manhole" and "curb" differ from the 3D-NSE results. Thus, the SWE need to be extended appropriately to give reliable forecasts for flood risks in urban areas at reduced computational effort. These extensions are developed based on physical effects not captured by the classical SWE. In one extension, a vortex layer on the ground is separated from the main flow, representing its new bottom. In a further extension, the hydrostatic pressure distribution is corrected by additional terms due to approximations of vertical velocities and their interaction with the flow. These extensions raise the quality of the SWE results for these flow problems to the quality level of the NSE results at a moderate increase in CPU effort.
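As a sketch of the classical SWE this thesis extends, here is a minimal 1D Lax-Friedrichs step in conservative variables (grid, time step, gravity constant, and dam-break initial data are illustrative; none of the thesis's extensions are included):

```python
import numpy as np

def swe_step(h, hu, dx, dt, g=9.81):
    """One Lax-Friedrichs step for the 1D shallow water equations
    in conservative form: variables h (depth) and hu (discharge)."""
    f_h = hu                                              # mass flux
    f_hu = hu**2 / np.maximum(h, 1e-12) + 0.5 * g * h**2  # momentum flux
    def lf(q, f):
        qn = q.copy()                                     # keep boundary values
        qn[1:-1] = 0.5 * (q[2:] + q[:-2]) - dt / (2 * dx) * (f[2:] - f[:-2])
        return qn
    return lf(h, f_h), lf(hu, f_hu)

# Dam-break style initial condition (illustrative)
x = np.linspace(0.0, 1.0, 101)
h0 = np.where(x < 0.5, 2.0, 1.0)
hu0 = np.zeros_like(h0)
h1, hu1 = swe_step(h0, hu0, dx=x[1] - x[0], dt=0.001)
```

After one step the depth jump starts to smear and a positive discharge develops towards the shallow side, as expected for a dam break.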
The thesis discusses discrete-time dynamic flows over a finite time horizon T. These flows take time, called travel time, to pass an arc of the network. Travel times, as well as other network attributes such as costs, arc and node capacities, and supply at the source node, can be constant or time-dependent. Here we review results on discrete-time dynamic network flow problems (DTDNFP) with constant attributes and develop new algorithms to solve several DTDNFPs with time-dependent attributes. Several dynamic network flow problems are discussed: maximum dynamic flow, earliest arrival flow, and quickest flow problems. We generalize the hybrid capacity scaling and shortest augmenting path algorithm of the static network flow problem to account for the time dependency of the network attributes. The result is used to solve the maximum dynamic flow problem with time-dependent travel times and capacities. We also develop a new algorithm to solve earliest arrival flow problems under the same assumptions on the network attributes. The possibility to wait (or park) at a node before departing on an outgoing arc is also taken into account. We prove that the complexity of the new algorithm is reduced when infinite waiting is allowed, and we report a computational analysis of this algorithm. The results are then used to solve quickest flow problems. Additionally, we discuss time-dependent bicriteria shortest path problems. Here we generalize the classical shortest path problem in two ways: we consider two - in general conflicting - objective functions and introduce a time dependency of the cost caused by a travel time on each arc. These problems have several interesting practical applications, but have not attained much attention in the literature. We develop two new algorithms, one of which requires weaker assumptions than previous research on the subject. Numerical tests show the superiority of the new algorithms.
We then apply dynamic network flow models and their associated solution algorithms to determine lower bounds of the evacuation time, evacuation routes, and maximum capacities of inhabited areas with respect to safety requirements. As a macroscopic approach, our dynamic network flow models are mainly used to produce good lower bounds for the evacuation time and do not consider any individual behavior during the emergency situation. These bounds can be used to analyze existing buildings or help in the design phase of planning a building.
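For constant attributes, the classical link between dynamic and static flows is the time-expanded network; a naive sketch follows (the graph, horizon, and helper names `max_flow`/`time_expanded_max_flow` are illustrative, and the scaling and augmenting-path refinements developed in the thesis are not shown):

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a capacity dict {(u, v): c}."""
    adj = defaultdict(set)
    for u, v in list(cap):
        adj[u].add(v)
        adj[v].add(u)
        cap.setdefault((v, u), 0)   # residual (reverse) arcs
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:       # walk back to the source
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)    # bottleneck capacity
        for u, v in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug

def time_expanded_max_flow(arcs, source, sink, T):
    """Maximum dynamic flow for arcs {(u, v): (capacity, travel_time)}
    over horizon T, with unlimited waiting at every node."""
    cap = {}
    nodes = {u for a in arcs for u in a}
    for t in range(T):                     # holdover (waiting) arcs
        for u in nodes:
            cap[((u, t), (u, t + 1))] = float('inf')
    for (u, v), (c, tau) in arcs.items():  # time-shifted movement arcs
        for t in range(T - tau + 1):
            cap[((u, t), (v, t + tau))] = c
    for t in range(T + 1):                 # collect all sink copies
        cap[((sink, t), 'super_sink')] = float('inf')
    return max_flow(cap, (source, 0), 'super_sink')
```

With arcs s→a→d of capacity 2 and travel time 1, a horizon of T=3 admits two staggered departures from the source, giving a dynamic flow value of 4. The expansion grows linearly in T, which is exactly why algorithms that avoid building it explicitly matter.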
The two main problems of continuous-time financial mathematics are option pricing and portfolio optimization. In this thesis, various new aspects of these major topics of financial mathematics are discussed. In all our considerations we assume the standard diffusion-type setting for security prices which is today well known under the term "Black-Scholes model". This setting and the basic results of option pricing and portfolio optimization are surveyed in the first chapter. The next three chapters deal with generalizations of the standard portfolio problem, also known as "Merton's problem". Here, we always use the stochastic control approach as introduced in the seminal papers by Merton (1969, 1971, 1990). One such problem is the very realistic setting of an investor who is faced with fixed monetary streams. More precisely, in addition to maximizing the utility from final wealth via choosing an investment strategy, the investor also has to fulfill certain consumption needs. The opposite situation, an additional income stream, can also be taken into account in our portfolio optimization problem. We consider various examples and solve them on the one hand via classical stochastic control methods and on the other hand by our new separation theorem. This, together with some numerical examples, forms Chapter 2. Chapter 3 is mainly concerned with the portfolio problem when the investor has different lending and borrowing rates. We give explicit solutions (where possible) and numerical methods to calculate the optimal strategy in the cases of log utility and HARA utility for three different modelling approaches of the dependence of the borrowing rate on the fraction of wealth financed by a credit. The further generalization of the standard Merton problem in Chapter 4 consists in considering simultaneously the possibilities for continuous and discrete consumption.
In our general approach there is a possibility of assigning different weights to the different consumption times, which is a generalization of the usual way of making them comparable via discounting. Chapter 5 deals with the special case of pricing basket options. Here, the main problem is not path-dependence but the multi-dimensionality, which makes it impossible to give useful analytical representations of the option price. We review the literature and compare six different numerical methods in a systematic way. Thereby we also look at the influence of various parameters such as strike, correlation, forwards or volatilities on the performance of the different numerical methods. The problem of pricing Asian options on average spot with average strike is the topic of Chapter 6. Here we apply the bivariate normal distribution to obtain an approximate option price. This method proves to be very reliable and efficient for the valuation of different variants of Asian options on average spot with average strike.
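As an illustration of basket-option valuation in the Black-Scholes setting, here is a plain Monte Carlo estimator for a European basket call (parameter values are illustrative, and this sketch is not necessarily one of the six methods compared in the thesis):

```python
import numpy as np

def basket_call_mc(s0, sigma, corr, weights, strike, r, T,
                   n_paths=100_000, seed=0):
    """Monte Carlo price of a European basket call under correlated
    geometric Brownian motions (Black-Scholes dynamics)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)              # correlate the Brownian drivers
    z = rng.standard_normal((n_paths, len(s0))) @ L.T
    st = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(st @ weights - strike, 0.0)
    return float(np.exp(-r * T) * payoff.mean())

price = basket_call_mc(
    s0=np.array([100.0, 100.0]), sigma=np.array([0.2, 0.3]),
    corr=np.array([[1.0, 0.5], [0.5, 1.0]]),
    weights=np.array([0.5, 0.5]), strike=100.0, r=0.05, T=1.0)
```

The basket payoff depends on a weighted sum of lognormals, which has no closed-form distribution; that is precisely the multi-dimensionality problem the abstract refers to.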
The focus of this work is the development of two families of wavelet solvers for the inner displacement boundary-value problem of elastostatics. Our methods are particularly suitable for the deformation analysis corresponding to geoscientifically relevant (regular) boundaries like the sphere, the ellipsoid or the actual Earth's surface. The first method, a spatial approach to wavelets on a regular (boundary) surface, is established for the classical (inner) displacement problem. Starting from the limit and jump relations of elastostatics, we formulate scaling functions and wavelets within the framework of the Cauchy-Navier equation. Based on numerical integration rules, a tree algorithm is constructed for fast wavelet computation. This method can be viewed as a first attempt at "short-wavelength modelling", i.e. high resolution of the fine structure of displacement fields. The second technique aims at a suitable wavelet approximation associated to Green's integral representation for the displacement boundary-value problem of elastostatics. The starting points are tensor product kernels defined on Cauchy-Navier vector fields. We arrive at scaling functions and a spectral approach to wavelets for the boundary-value problems of elastostatics associated to spherical boundaries. Again, a tree algorithm which uses a numerical integration rule for bandlimited functions is established to reduce the computational effort. For the numerical realization of both methods, multiscale deformation analysis is investigated for the geoscientifically relevant case of a spherical boundary using test examples. Finally, the applicability of our wavelet concepts is shown by considering the deformation analysis of a particular region of the Earth, viz. Nevada, using surface displacements provided by satellite observations. This represents a first step towards practical applications.
The central theme of this thesis is the development of enhanced methods and algorithms for appraising market and credit risks and their application within the context of standard and more advanced market models. Generally, methods and algorithms for analysing the market risk of complex portfolios involve detailed knowledge of option sensitivities, the so-called "Greeks". Based on an analysis of symmetries in financial market models, relations between option sensitivities are obtained which can be used for the efficient valuation of the Greeks. The relations are mainly derived within the Black-Scholes model; however, some are also valid for more general models, for instance the Heston model. Portfolios are usually influenced by many underlyings, so it is necessary to characterise the dependencies of these basic instruments. It is usual to describe such dependencies by correlation matrices. However, estimates of correlation matrices in practice are disturbed by statistical noise and usually suffer from rank deficiency due to missing data. A fast algorithm is presented which performs a generalized Cholesky decomposition of a perturbed correlation matrix. In contrast to the standard Cholesky algorithm, an advantage of the generalized method is that it also works for positive semi-definite, rank-deficient matrices. Moreover, it gives an approximate decomposition when the input matrix is indefinite. A comparison with known algorithms with similar features is performed, and it turns out that the new algorithm can be recommended in situations where computation time is the critical issue. The determination of a profit and loss distribution by Fourier inversion of its characteristic function is a powerful tool, but it can break down when the characteristic function is not integrable. In this thesis, methods for Fourier inversion of non-integrable characteristic functions are studied.
In this respect, two theorems are obtained which are based on a suitable approximation of the unknown distribution by one with known density and characteristic function. Further it is shown that straightforward fast Fourier inversion works when the corresponding density lives on a bounded interval. The above techniques are of crucial importance for determining the profit and loss (P&L) distribution of large portfolios efficiently. The so-called Delta-Gamma normal approach has become an industry standard for the estimation of market risk. It is shown that the performance of the Delta-Gamma normal approach can be improved substantially by application of the developed methods. The same optimization procedure also applies to the Delta-Gamma Student model. A standard tool for computing the P&L distribution of a loan portfolio is the CreditRisk+ model. Basically, the CreditRisk+ distribution is a discrete distribution which can be computed from its probability generating function. For this, a numerically stable method is presented and, as an alternative, a new algorithm based on Fourier inversion is proposed. Finally, an extension of the CreditRisk+ model to market risk is developed, whose distribution can likewise be obtained efficiently by the presented Fourier inversion methods.
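The idea behind a Cholesky decomposition that tolerates rank deficiency can be sketched as follows; this simplified variant merely skips zero pivots and does not reproduce the perturbation handling of the thesis's algorithm:

```python
import numpy as np

def semidefinite_cholesky(a, tol=1e-12):
    """Cholesky-type factor L with A = L @ L.T for a symmetric positive
    SEMI-definite A; zero pivots are skipped instead of raising."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    L = np.zeros((n, n))
    for j in range(n):
        d = a[j, j] - L[j, :j] @ L[j, :j]
        if d > tol:                      # regular pivot
            L[j, j] = np.sqrt(d)
            for i in range(j + 1, n):
                L[i, j] = (a[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        # else: rank-deficient pivot -> leave column j at zero
    return L

# A rank-1 "perfect correlation" matrix: standard Cholesky rejects it,
# the generalized variant still produces an exact factor
A = np.array([[1.0, 1.0], [1.0, 1.0]])
L = semidefinite_cholesky(A)
```

For a correlation matrix with missing-data rank deficiency, such a factor is exactly what is needed to simulate correlated scenarios despite the singular input.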
The thesis deals with subgradient optimization methods, which serve to solve nonsmooth optimization problems. We are particularly concerned with solving large-scale integer programming problems using the methodology of Lagrangian relaxation and dualization. The goal is to employ subgradient optimization techniques to solve large-scale optimization problems originating from radiation therapy planning. In the thesis, different kinds of zigzagging phenomena which hamper the speed of subgradient procedures are investigated and identified. Moreover, we establish a new procedure which can completely eliminate the zigzagging phenomena of subgradient methods. Procedures used to construct both primal and dual solutions within the subgradient schemes are also described. We apply the subgradient optimization methods to the problem of minimizing the total treatment time of radiation therapy. The problem is NP-hard and thus far no method exists for solving it to optimality. We present a new, efficient, and fast algorithm which combines exact and heuristic procedures to solve the problem.
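A plain subgradient iteration of the kind these schemes refine, with diminishing step sizes and best-iterate tracking (the test function and step rule are illustrative; the anti-zigzagging procedure of the thesis is not shown):

```python
def subgradient_descent(f, subgrad, x0, steps=2000):
    """Minimize a convex nonsmooth f by x_{k+1} = x_k - t_k g_k with
    diminishing steps t_k = 1/(k+1); the best iterate seen is returned."""
    x = list(x0)
    best_x, best_f = list(x), f(x)
    for k in range(steps):
        g = subgrad(x)
        x = [xi - gi / (k + 1) for xi, gi in zip(x, g)]
        fx = f(x)
        if fx < best_f:          # subgradient steps need not be monotone
            best_x, best_f = list(x), fx
    return best_x, best_f

# Nonsmooth test problem: f(x, y) = |x - 1| + |y + 2|, minimum 0 at (1, -2)
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
sg = lambda x: [1.0 if x[0] >= 1.0 else -1.0, 1.0 if x[1] >= -2.0 else -1.0]
x_best, f_best = subgradient_descent(f, sg, [0.0, 0.0])
```

The iterates visibly oscillate around the kink of the objective before settling, which is a small-scale version of the zigzagging behavior the thesis eliminates.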
The thesis is concerned with the modelling of ionospheric current systems and induced magnetic fields in a multiscale framework. Scaling functions and wavelets are used to realize a multiscale analysis of the function spaces under consideration and to establish a multiscale regularization procedure for the inversion of the considered operator equation. First, a general multiscale concept for vectorial operator equations between two separable Hilbert spaces is developed in terms of vector kernel functions. The equivalence to the canonical tensorial ansatz is proven and the theory is transferred to the case of multiscale regularization of vectorial inverse problems. As a first application, a special multiresolution analysis of the space of square-integrable vector fields on the sphere, e.g. the Earth's magnetic field measured on a spherical satellite orbit, is presented. By this, a multiscale separation of spherical vector-valued functions with respect to their sources can be established. The vector field is split up into a part induced by sources inside the sphere, a part which is due to sources outside the sphere, and a part which is generated by sources on the sphere, i.e. currents crossing the sphere. The multiscale technique is tested on a magnetic field data set of the satellite CHAMP, and it is shown that crustal field determination can be improved by previously applying our method. In order to reconstruct ionospheric current systems from magnetic field data, an inversion of Biot-Savart's law in terms of multiscale regularization is defined. The corresponding operator is formulated and the singular values are calculated. Based on the knowledge of the singular system, a regularization technique in terms of certain product kernels and corresponding convolutions can be formed. The method is tested on different simulations and on real magnetic field data of the satellite CHAMP and the proposed satellite mission SWARM.
We construct and study two surface measures on the space C([0,T],M) of paths in a compact Riemannian manifold M embedded into the Euclidean space R^n. The first one is induced by conditioning the usual Wiener measure on C([0,T],R^n) to the event that the Brownian particle does not leave the tubular epsilon-neighborhood of M up to time T, and passing to the limit. The second one is defined as the limit of the laws of reflected Brownian motions with reflection on the boundaries of the tubular epsilon-neighborhoods of M. We prove that both surface measures exist and compare them with the Wiener measure W_M on C([0,T],M). We show that the first one is equivalent to W_M and compute the corresponding density explicitly in terms of the scalar curvature and the mean curvature vector of M. Further, we show that the second surface measure coincides with W_M. Finally, we study the limit behavior of both surface measures as T tends to infinity.
In this thesis the combinatorial framework of toric geometry is extended to equivariant sheaves over toric varieties. The central questions are how to extract combinatorial information from the resulting description and whether equivariant sheaves can, like toric varieties, be considered as purely combinatorial objects. The thesis consists of three main parts. In the first part, by systematically extending the framework of toric geometry, a formalism is developed for describing equivariant sheaves by certain configurations of vector spaces. In the second part, homological properties of a certain class of equivariant sheaves are investigated, namely that of reflexive equivariant sheaves. Several kinds of resolutions for these sheaves are constructed which depend only on the configuration of their associated vector spaces. Thus a partially positive answer to the question of combinatorial representability is given. As a particular result, a new way of computing minimal resolutions for Z^n-graded modules over polynomial rings is obtained. In the third part a complete classification of the simplest nontrivial sheaves, equivariant vector bundles of rank two over smooth toric surfaces, is given. A combinatorial characterization is given and parameter spaces (moduli spaces) are constructed which depend only on this characterization. The appendices give an outlook on equivariant sheaves and the relation of Chern classes to their combinatorial classification, particularly focussing on the case of the projective plane, as well as a classification of equivariant vector bundles of rank three over the projective plane.
Semiparametric estimation of conditional quantiles for time series, with applications in finance
(2003)
The estimation of conditional quantiles has become an increasingly important issue in insurance and financial risk management. The stylized facts of financial time series data have rendered direct applications of extreme value theory methodologies to the estimation of extreme conditional quantiles inappropriate. On the other hand, quantile regression based procedures work well in the nonextreme parts of a given data set but break down at extreme probability levels. In order to solve this problem, we combine nonparametric regression for time series and extreme value theory approaches in the estimation of extreme conditional quantiles for financial time series. To do so, a class of time series models that is similar to nonparametric AR-(G)ARCH models, but which does not depend on distributional and moment assumptions, is introduced. We discuss estimation procedures for the nonextreme levels using the models and consider the estimates obtained by inverting conditional distribution estimators and by direct estimation using the Koenker-Bassett (1978) version for kernels. Under some regularity conditions, the asymptotic normality and uniform convergence, with rates, of the conditional quantile estimator for strongly mixing time series are established. We study the estimation of the scale function in the introduced models using similar procedures and show that under some regularity conditions the scale estimate is weakly consistent and asymptotically normal. The application of the introduced models to the estimation of extreme conditional quantiles is achieved by augmenting them with methods from extreme value theory. It is shown that the overall extreme conditional quantile estimator is consistent. A Monte Carlo study is carried out to illustrate the good performance of the estimates, and real data are used to demonstrate the estimation of Value-at-Risk and conditional expected shortfall in financial risk management; their multiperiod predictions are also discussed.
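The first of the two estimation routes, inverting a conditional distribution estimator, can be sketched with Nadaraya-Watson weights (kernel, bandwidth, and data are illustrative):

```python
import math

def kernel_conditional_quantile(xs, ys, x0, p, h=0.5):
    """Conditional p-quantile of Y given X = x0, obtained by inverting
    the Nadaraya-Watson (kernel-weighted) conditional CDF estimator."""
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]  # Gaussian kernel
    total = sum(w)
    acc = 0.0
    for y, wi in sorted(zip(ys, w)):     # weighted empirical CDF of Y
        acc += wi / total
        if acc >= p:
            return y
    return max(ys)

# Y jumps from ~0 to ~10 as X moves from -1 to 1 (illustrative data)
xs = [-1.0, -1.0, -1.0, 1.0, 1.0, 1.0]
ys = [0.0, 0.1, 0.2, 9.8, 10.0, 10.2]
median_at_1 = kernel_conditional_quantile(xs, ys, x0=1.0, p=0.5)
```

Such inversion works well at moderate p; the point of the thesis is that extreme p requires the extra extreme value theory step.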
The goal of this thesis is a physically motivated and thermodynamically consistent formulation of higher gradient inelastic material behavior. Thereby, the influence of the material microstructure is incorporated. Next to theoretical aspects, the thesis is complemented with the algorithmic treatment and numerical implementation of the derived model. Hereby, two major inelastic effects will be addressed: on the one hand elasto-plastic processes and on the other hand damage mechanisms, which will both be modeled within a continuum mechanics framework.
The present thesis deals with coupled steady-state laminar flows of isothermal incompressible viscous Newtonian fluids in plain and in porous media. The flow in the pure fluid region is usually described by the (Navier-)Stokes system of equations. The most popular models for the flow in porous media are those suggested by Darcy and by Brinkman. Interface conditions proposed in the mathematical literature for coupling Darcy and Navier-Stokes equations are briefly reviewed in the thesis. The coupling of Navier-Stokes and Brinkman equations in the literature is based on the so-called continuous stress tensor interface conditions. One of the main tasks of this thesis is to investigate another type of interface conditions, namely the recently suggested stress tensor jump interface conditions. The mathematical models based on these interface conditions had not been carefully investigated from the mathematical point of view, and their validity was also a subject of discussion. The considerations within this thesis are a step toward a better understanding of these interface conditions. Several aspects of the numerical simulation of such coupled flows are considered:
- the choice of proper interface conditions between the plain and porous media;
- analysis of the well-posedness of the arising systems of partial differential equations;
- developing a numerical algorithm for the stress tensor jump interface conditions, coupling Navier-Stokes equations in the pure liquid media with the Navier-Stokes-Brinkman equations in the porous media;
- validation of the macroscale mathematical models on the basis of a comparison with the results from a direct numerical simulation of model representative problems, allowing for grid resolution of the pore-level geometry;
- developing software and performing numerical simulation of 3-D industrial flows, namely of oil flows through car filters.
The question of how to model dependence structures between financial assets has been revolutionized over the last decade, since the copula concept was introduced into financial research. Even though the concept of splitting the marginal behavior and the dependence structure (described by a copula) of multidimensional distributions already goes back to Sklar (1959) and Hoeffding (1940), little empirical effort had been made to explore the potential of this approach. The aim of this thesis is to work out the possibilities of copulas for modelling, estimation and validation purposes. Therefore we extend the class of Archimedean copulas via a transformation rule to new classes and give an explicit suggestion covering the Frank and Gumbel families. We introduce a copula-based mapping rule leading to joint independence, and as results of this mapping we present an easy method of multidimensional chi²-testing and a new estimate for high-dimensional parametric distribution functions. Different ways of estimating the tail dependence coefficient, which describes the asymptotic probability of joint extremes, are compared and improved. The limitations of elliptical distributions are worked out and a generalized form of them, preserving their applicability, is developed. We state a method to split a (generalized) elliptical distribution into its radial and angular parts. This leads to a positive definite robust estimate of the dispersion matrix (here only given as a theoretical outlook). The impact of our findings is demonstrated by modelling and testing the return distributions of stock and currency portfolios as well as of oil-related commodity and LME metal baskets. In addition we show the crash stability of real-estate-based firms and the existence of nonlinear dependence within the yield curve.
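The separation of marginals and dependence structure can be illustrated by sampling from a Gaussian copula and attaching a marginal via inverse-CDF transformation (the correlation value and marginals are illustrative; the Archimedean families treated in the thesis require different sampling schemes):

```python
import math
import numpy as np

def gaussian_copula_sample(corr, n, seed=0):
    """Draw n samples of uniforms coupled by a Gaussian copula."""
    rng = np.random.default_rng(seed)
    corr = np.asarray(corr)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, corr.shape[0])) @ L.T
    # Push each margin through the standard normal CDF -> uniform marginals
    phi = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))
    return phi(z)

u = gaussian_copula_sample([[1.0, 0.8], [0.8, 1.0]], n=20_000)
x = -np.log(1.0 - u[:, 0])   # attach an Exp(1) marginal by inverse CDF
```

The dependence of `u` is fixed entirely by the copula, while any marginal (here exponential) can be attached afterwards; this is exactly the modelling freedom the copula concept provides.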
We present new algorithms and provide an overall framework for the interaction of the classically separate steps of logic synthesis and physical layout in the design of VLSI circuits. Due to the continuous development of smaller sized fabrication processes and the subsequent domination of interconnect delays, the traditional separation of logical and physical design results in increasingly inaccurate cost functions and aggravates the design closure problem. Consequently, the interaction of physical and logical domains has become one of the greatest challenges in the design of VLSI circuits. To address this challenge, we propose different solutions for the control and datapath logic of a design, and show how to combine them to reach design closure.
Clusters bridge the gap between single atoms or molecules and the condensed phase, and it is the challenge of cluster science to obtain a deeper understanding of the molecular foundation of the observed cluster-specific properties/reactivities and their dependence on size. The electronic structure of hydrated magnesium monocations [Mg,nH2O]+, n<20, exhibits a strong cluster size dependency. With increasing number of H2O ligands the singly occupied molecular orbital (SOMO) evolves from a quasi-valence state (n=3-5), in which the SOMO is not yet detached from the metal atom and has distinct sp-hybrid character, to a contact ion pair state. For larger clusters (n=17,19) these ion pair states are best described as solvent-separated ion pair states, which are formed by a hydrated dication and a hydrated electron. With growing cluster size the SOMO moves away from the magnesium ion to the cluster surface, where it is localized through mutual attractive interactions between the electron density and dangling H atoms of H2O ligands forming "molecular tweezers" HO-H (e-) H-OH. In the case of the hydrated aluminum monocations [Al,nH2O]+, n=20, different isomers of the formal stoichiometry [Al,20H2O]+ were investigated by using gradient-corrected DFT (BLYP), and three different basic structures for [Al,20H2O]+ were identified: (a) [AlI(H2O)20]+ with a threefold coordinated AlI; (b) [HAlIII(OH)(H2O)19]+ with a fourfold coordinated AlIII; (c) [HAlIII(OH)(H2O)19]+ with a fivefold coordinated AlIII. In the ground state [AlI(H2O)20]+ (a), which contains aluminum in oxidation state +1, the 3s2 valence electrons remain located at the aluminum monocation. In contrast to the open-shell magnesium monocations, no electron transfer into the hydration shell is observed for closed-shell AlI. However, clusters of type (a) are high-energy isomers (ΔE ≈ +190 kJ mol-1) and the activation barrier for the reaction into cluster type (b) or (c) is only approximately 14 kJ mol-1.
The performed ab initio calculations reveal that, unlike in [Mg,nH2O]+, n=7-17, for which H atom elimination is found to be the result of an intracluster redox reaction, in [Al,nH2O]+, n=20, H2 is formed in an intracluster acid-base reaction. In [Mg,nH2O]+, n>17, the magnesium dication was found to coexist with a hydrated electron in larger cluster sizes. This proves that intermolecular electron delocalization - previously almost exclusively studied in (H2O)n- and (NH3)n- clusters - can also be an important issue for water clusters doped with an open-shell metal cation or a metal anion. Structures and stabilities of hydrated magnesium water cluster anions with the formal stoichiometry [Mg,nH2O]-, n=1-11, were investigated by application of various correlated ab initio methods (MP2, CCSD, CCSD(T)). Metal cations are highly relevant in numerous biological processes, and as most biological processes take place in aqueous solution, hydrated metal ions will be involved. However, in biological systems solvent molecules (i.e. water) compete with different solvated chelate ligands for coordination sites at the metal ion, and the solvent and chelate ligands are in mutual interaction with each other and the metal ion. These interactions were investigated for the hydration of ZnII/carnosine complexes by application of FT-ICR-MS, gas-phase H/D exchange experiments and supporting ab initio calculations. In the last chapter of this work the Free Electron Laser IR Multi Photon Dissociation (FEL-IR-MPD) spectra of mass-selected cationic niobium acetonitrile complexes with the formal stoichiometry [Nb,nCH3CN]+, n=4-5, in the spectral range 780-2500 cm-1 are reported. In the case of n=4 the recorded vibrational bands are close to those of the free CH3CN molecule and the experimental spectra do not contain any evident indication of a potential reaction beyond complex formation.
By comparison with B3LYP-calculated IR absorption spectra, the recorded spectra are assigned to high-spin (quintet, S=2), planar [NbI(NCCH3)4]+. In [Nb,nCH3CN]+, n=5, new vibrational bands shifted away from those of the acetonitrile monomer are observed between 1300-1550 cm-1. These bands are evidence of a chemical modification due to an intramolecular reaction. Screening on the basis of B3LYP-calculated IR absorption spectra allows for an assignment of the recorded spectra to the metallacyclic species [NbIII(NCCH3)3(N=C(CH3)C(CH3)=N)]+ (triplet, S=1), which has formed in an internal reductive nitrile coupling reaction from [NbI(NCCH3)5]+. Calculated reaction coordinates explain the experimentally observed differences in reactivity between ground state [NbI(NCCH3)4]+ and [NbI(NCCH3)5]+. The reductive nitrile coupling reaction is exothermic and accessible (Ea=49 kJ mol-1) only in [NbI(NCCH3)5]+, whereas in [NbI(NCCH3)4]+ the reaction is found to be endothermic and retarded by significantly higher activation barriers (Ea>116 kJ mol-1).
As the sustained trend towards integrating more and more functionality into systems on a chip can be observed in all fields, their economic realization is a challenge for the chip-making industry. This is, however, barely possible today, as the ability to design and verify such complex systems could not keep up with the rapid technological development. Owing to this productivity gap, a design methodology mainly using pre-designed and pre-verified blocks is mandatory. The availability of such blocks, meeting the highest possible quality standards, is decisive for its success. Cost-effectively, this can only be achieved by formal verification on the block level, namely by checking properties ranging over finite intervals of time. As this verification approach is based on constructing and solving Boolean equivalence problems, it allows for using backtrack search procedures, such as SAT. Recent improvements of the latter are responsible for its high capacity. Still, the verification of some classes of hardware designs, featuring regular substructures or complex arithmetic data paths, is difficult and often intractable. For regular designs, this is mainly due to the individual treatment of symmetrical parts of the search space by the backtrack search procedures used. One approach to tackle these deficiencies is to exploit the regular structure for problem reduction on the register transfer level (RTL). This work describes a new approach for property checking on the RTL, preserving the problem-inherent structure for subsequent reduction. The reduction is based on eliminating symmetrical parts from bitvector functions, and hence from the search space. Several approaches for symmetry reduction in search problems, based on the invariance of a function under permutation of variables, have been proposed previously. Unfortunately, our investigations did not reveal this kind of symmetry in relevant cases.
Instead, we propose a reduction based on symmetrical values, as we encounter them much more frequently in our industrial examples. Let \(f\) be a Boolean function. The values \(0\) and \(1\) are symmetrical values for a variable \(x\) in \(f\) iff there is a permutation \(\pi\) of the variables of \(f\), fixing \(x\), such that \(f|_{x=0} = \pi(f|_{x=1})\). Then the question whether \(f=1\) holds is independent of this variable, and it can be removed. By iterative application of this approach to all variables of \(f\), either all of them are removed, leaving \(f=1\) or \(f=0\) trivially, or there is a variable \(x'\) with no such \(\pi\). The latter leads to the conclusion that \(f=1\) does not hold, as we found a counter-example either with \(x'=0\) or \(x'=1\). Extending this basic idea to vectors of variables allows elevating it to the RTL. There, self-similarities in the function representation, resulting from the preserved regular structure, can be exploited, and as a consequence, symmetrical bitvector values can be found syntactically. In particular, bitvector term-rewriting techniques, isomorphism procedures for specially manipulated term graphs, and combinations thereof are proposed. This approach dramatically reduces the computational effort needed for functional verification on the block level and, in particular, for the important problem class of regular designs. It allows the verification of industrial designs previously intractable. The main contributions of this work are a framework for dealing with bitvector functions algebraically, a concise description of bounded model checking on the register transfer level, as well as new reduction techniques and new approaches for finding and exploiting symmetrical values in bitvector functions.
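The definition of symmetrical values can be checked by brute force for small Boolean functions. The following sketch is a toy illustration of that definition only (not the thesis's syntactic RTL procedure, and all function names are hypothetical): a truth table is restricted to the two cofactors of a variable, and every permutation of the remaining variables is tried.

```python
from itertools import permutations, product

def cofactor(f, i, b):
    """Restrict f (a truth table keyed by bit tuples) to variable i = b."""
    return {a[:i] + a[i+1:]: v for a, v in f.items() if a[i] == b}

def permute(f, perm):
    """Return g with g(a) = f(a reordered by perm)."""
    return {a: f[tuple(a[p] for p in perm)] for a in f}

def has_symmetrical_values(f, nvars, i):
    """0 and 1 are symmetrical values for variable i iff some permutation
    of the remaining variables maps the cofactor f|x_i=1 onto f|x_i=0."""
    f0, f1 = cofactor(f, i, 0), cofactor(f, i, 1)
    return any(permute(f1, p) == f0 for p in permutations(range(nvars - 1)))

# f(x, y, z) = (x AND y) OR (NOT x AND z): swapping y and z maps the
# cofactor f|x=1 = y onto f|x=0 = z, so x can be removed.
f = {a: (a[0] & a[1]) | ((1 - a[0]) & a[2]) for a in product((0, 1), repeat=3)}
print(has_symmetrical_values(f, 3, 0))  # True
```

The brute-force search over permutations is of course exponential; the point of the thesis's approach is precisely to find such symmetrical bitvector values syntactically instead.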
In my doctoral thesis, I present new information about the developmental expression pattern of the potassium chloride cotransporter KCC2 in the rat auditory brain stem and the morphometrical effects caused by KCC2 gene silencing in mice. The thesis is divided into 3 chapters. Chapter 1 is a general introduction which gives a brief outline of the primary ascending auditory pathway in mammals. It also provides information about the presence of a large number of inhibitory inputs in the auditory system and how these inputs develop; the involvement of inhibition in acoustic processing is mentioned. In addition, the role of the KCC2 cotransporter in the shift of GABA/glycine transmission, and thus in maintaining the normal level of inhibition in the mature brain, is described. The focus of Chapter 2 was to investigate the KCC2 immunofluorescent signal from postnatal day (P) 0 to P60 in four major nuclei of the rat's superior olivary complex (SOC), namely the medial nucleus of the trapezoid body (MNTB), the medial superior olive (MSO), the lateral superior olive (LSO), and the superior paraolivary nucleus (SPN). The lack of a correlation between the continuous presence of KCC2 mRNA/protein in the postnatal rat brain stem on the one side, and the shift in GABA/glycinergic polarity (i.e. KCC2 functionality) on the other side, prompted me to search for a specific cellular expression pattern of the KCC2 protein that might correlate with the switch in GABA/glycine signalling. To do so, the KCC2 immunoreactivity was analysed using high-resolution confocal microscopy in three cellular regions of interest: the soma surface, the soma interior, and the neuropil. In the soma surface, I observed an increase of the KCC2 immunofluorescent signal intensity, yet of moderate magnitude (1.1- to 1.6-fold). Therefore, I conclude that the change in the soma surface signal is only of minor importance and does not explain the change in KCC2 functionality.
The KCC2 signal intensity in the soma interior decreased in all nuclei (1.4- to 2-fold), with the exception of the MNTB, where no statistically significant change was found. The decrease in the soma interior was probably related to the increase in the soma surface immunoreactivity and the proposed (weak) intracellular trafficking of the KCC2 protein. The main developmental reorganization (in qualitative as well as quantitative terms) of the KCC2 immunofluorescence in the SOC nuclei was observed in the neuropil. The signal changed its pattern from a diffusely stained neuropil early in development (P0-P4) to a crisp and membrane-confined signal later on (P8-P60), with single dendrites becoming apparent. The exception was found in the MNTB, where the neuropil became almost unlabeled. Quantification revealed a statistically significant decrease (2.2- to 3.8-fold) in the neuropil immunoreactivity in all four nuclei, although the remaining KCC2-stained dendrites became thicker and their signal became stronger. I suppose that, at least in part, the neuropil reorganization can be explained by an age-related reduction of dendritic branches via a pruning mechanism and by the absence of an abnormal Cl- load via extrasynaptic GABAA receptors. This is consistent with the proposed additional role of KCC2, namely to maintain cellular ionic homeostasis and to prevent dendritic swelling (Gulyás et al., 2001). In conclusion, neither the increase in the KCC2 soma surface signal intensity nor the reorganization in the neuropil can be strictly related to the developmental switch in GABA/glycine polarity and the onset of KCC2 function, although some correlation (the appearance of a specific membrane-confined dendritic pattern) between structure and function was found.
Further application of different molecular methods, addressing the proposed posttranslational modification of KCC2, will shed light upon the question of what leads to the functional activation of the cotransporter. In Chapter 3, loss-of-function KCC2 mice made it possible, via manipulating the duration of the depolarizing phase of GABA/glycine transmission, to analyse the effect of disturbed Cl- regulation and, thus, the effect of disrupted GABA/glycine neurotransmission (lack of inhibition). I asked the following question: how important is Cl- homeostasis to maintain general aspects (brain weight) and specific aspects (nucleus volume, neuron number, and soma cross-sectional area) of brain development? Brain stem slices from KCC2 knock-out animals (-/-), with a trace amount of transporter (~5%), as well as from wild-type animals (+/+) at P3 and P12 were stained for Nissl substance, and the analyses were performed with the help of basic morphometrical and stereological methods. In KCC2 (-/-) animals, body growth impairment was observed, in part related to the seizure activity preventing normal feeding (Woo et al., 2002). However, their brains, in terms of brain weight, were less affected. Therefore, I conclude that Cl- homeostasis is not essential per se to maintain the brain weight. Four auditory nuclei (MNTB, MSO, LSO, and ventral cochlear nucleus (VCN)) were compared with respect to the KCC2 null mutation. Considering the morphometric parameters, the SOC nuclei were not influenced by the lack of KCC2 at P3. A difference in the number of neurons occurred in the VCN at P3. I suggest performing additional immunohistochemical studies of glial presence, related to its involvement in the structural and functional support of the neurons and their survival. At P12, the volume of the auditory nuclei in KCC2 (-/-) animals was smaller than in (+/+) animals.
However, this is likely to be an epiphenomenon since the brain weight increase was also impaired with the same magnitude. Therefore, I suppose that the Cl- homeostasis is not crucial for the nucleus volume increase in the VCN, the MNTB and the MSO during development. An exception was found for the LSO. Regarding the other morphometric parameters at P12, the four nuclei behaved in a different way: (1) in the VCN, after P3, no parameter underwent a disproportional change due to impaired Cl- homeostasis; (2) the MNTB and the LSO showed less pronounced neuropil in mutants in comparison to age-matched controls and two reasons were proposed: first, the depolarizing GABA/glycine transmission in mutants may contribute to excessive Ca2+ load, excitotoxicity and dendrite damage; second, a decrease of some trophic factors may prevent dendrite development in addition to impaired normal body growth; (3) the MSO neurons in P12 (-/-) animals had smaller soma cross-sectional area than in P12 (+/+) animals. I conclude that the normal Cl- homeostasis is required in the MSO at older ages (P12) to achieve and maintain a proper soma size; (4) the lack of KCC2 did not prevent the process of neuronal differentiation in the VCN and the MNTB during development in both mutant and control animals. In conclusion, the various auditory nuclei have to be discussed independently regarding the influence of Cl- homeostasis on some morphometric parameters. Presumably, this is related to the different time of the shift in the GABA/glycine polarity i.e., the onset of KCC2 function (Srinivasan et al., 2004a). Taken together, my thesis accumulated data about the immunohistological expression pattern of KCC2 in various auditory brain stem nuclei and the influence of impaired Cl- homeostasis on some morphometric features in these nuclei. 
This information will be helpful for further investigations aimed at discovering the mechanisms and events that govern inhibition and the inhibitory pathways in the central auditory system.
In this thesis we propose an efficient method to compute the automorphism group of an arbitrary hyperelliptic function field over a given constant field of odd characteristic as well as over its algebraic extensions. Besides theoretical applications, knowing the automorphism group is also useful in cryptography: The Jacobians of hyperelliptic curves have been suggested by Koblitz as groups for cryptographic purposes, because the discrete logarithm is believed to be hard in this kind of group. In order to obtain "secure" Jacobians, it is necessary to prevent attacks like Pohlig/Hellman's and Duursma/Gaudry/Morain's. The latter is only feasible if the corresponding function field has an automorphism of large order. According to a theorem by Madan, automorphisms seem to allow the Pohlig/Hellman attack, too. Hence, the function field of a secure Jacobian will most likely have a trivial automorphism group. In other words: Computing the automorphism group of a hyperelliptic function field promises to be a quick test for insecure Jacobians. Let us outline our algorithm for computing the automorphism group Aut(F/k) of a hyperelliptic function field F/k. It is well known that Aut(F/k) is finite. For each possible subgroup U of Aut(F/k), Rolf Brandt has given a normal form for F if k is algebraically closed. Hence our problem reduces to deciding whether a given hyperelliptic function field F=k(x,y), y^2 = D(x), has a defining equation of the form given by Brandt. This question can be answered using theorem III.18: We have F=k(t,u), u^2 = D(t), iff x is a fraction of linear polynomials in t and y=pu, where the factor p is a rational function in t which can be determined explicitly from the coefficients of x. This condition can be checked efficiently using Gröbner basis techniques. With additional effort, it is also possible to compute Aut(F/k) if k is not algebraically closed.
Investigating a huge number of examples, one gets the impression that the above motivation of obtaining a quick test for insecure Jacobians is partially fulfilled: The computation of automorphism groups is quite fast using the suggested algorithm. Furthermore, fields with nontrivial automorphism groups seem to have insecure Jacobians. Only fields of small characteristic seem to have a reasonable chance of having nontrivial automorphisms. Hence, from a cryptographic point of view, computing Aut(F/k) seems to make sense whenever k has small characteristic.
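The substitution test behind the outlined algorithm can be illustrated on a toy instance. The sketch below only verifies one known fractional linear substitution symbolically for an assumed example curve y^2 = x^5 + x; it is not the Gröbner-basis procedure of the thesis.

```python
import sympy as sp

# A toy instance of the substitution test behind theorem III.18 (a sketch,
# not the thesis's Groebner-basis procedure): for y^2 = D(x) with
# D(x) = x^5 + x, the fractional linear substitution x = 1/t satisfies
# D(1/t) * t^6 = D(t), so u = t^3 * y obeys u^2 = D(t), and
# (x, y) -> (1/x, y/x^3) induces an automorphism of the function field.
t = sp.symbols('t')
D = lambda s: s**5 + s

assert sp.simplify(D(1/t) * t**6 - D(t)) == 0
print("x -> 1/t, y -> y/x^3 induces an automorphism of y^2 = x^5 + x")
```

The general algorithm must of course search for such a substitution rather than verify a given one, which is where the Gröbner basis techniques enter.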
In this thesis, the enhanced Galerkin (eG) finite element method in time is presented. The eG method leads to higher-order accurate energy- and momentum-conserving time integrators for the underlying finite-dimensional Hamiltonian systems. This thesis is concerned with particle dynamics and semi-discrete nonlinear elastodynamics. The conservation properties are generally related to the collocation property of the eG method: momentum conservation follows from the Gaussian quadrature, and energy conservation is obtained by using a new projection technique. An objective time discretisation of the strain measures used avoids artificial strains for large superimposed rigid body motions. The numerical examples show good long-term performance in the presence of stiffness as well as for calculating large-strain motions.
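As a point of comparison, exact energy conservation by a time integrator can be illustrated with the implicit midpoint rule on a quadratic Hamiltonian. This is a standard textbook scheme, not the eG method itself, and all parameter values below are illustrative.

```python
import numpy as np

# Illustration only: the implicit midpoint rule (not the thesis's eG method)
# conserves every quadratic invariant exactly, here the energy of a harmonic
# oscillator, H(q, p) = (p**2 + omega**2 * q**2) / 2.
omega, h, steps = 2.0, 0.1, 1000
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])     # z' = A z with z = (q, p)
M = np.linalg.solve(np.eye(2) - 0.5 * h * A,     # one-step map of the
                    np.eye(2) + 0.5 * h * A)     # implicit midpoint rule

z = np.array([1.0, 0.0])
H0 = 0.5 * (z[1]**2 + omega**2 * z[0]**2)
for _ in range(steps):
    z = M @ z
H = 0.5 * (z[1]**2 + omega**2 * z[0]**2)
print(abs(H - H0))  # round-off sized: the energy is conserved exactly
```

For nonlinear elastodynamics, exact energy conservation no longer comes for free from such a scheme, which is why a projection technique of the kind described above is needed.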
In this thesis we show that the theory of algebraic correspondences introduced by Deuring in the 1930s can be applied to construct non-trivial homomorphisms between the Jacobi groups of hyperelliptic function fields. Concretely, we deduce algorithms to add and multiply correspondences which perform in a reasonable time if the degrees of the associated divisors of the double field are small. Moreover, we show how to compute the differential matrices associated to prime divisors of the double field for arbitrary genus. These matrices give a representation of the homomorphisms or endomorphisms in the additive group (ring) of matrices, which is even faithful if the ground field has characteristic zero. As first examples of non-trivial correspondences we investigate multiplication-by-m endomorphisms. Afterwards we use factorisations of certain bivariate polynomials to construct prime divisors of the double field that are not equivalent to 0 in a coarser sense. Applying the theory of Deuring, these divisors yield homomorphisms between the Jacobi groups of special classes of hyperelliptic function fields. Finally, we generalise the Richelot isogeny to higher genus and in this way derive a class of hyperelliptic function fields, given in terms of their defining polynomials, which admit non-trivial homomorphisms. These include homomorphisms between the Jacobi groups of hyperelliptic curves of different as well as of equal genus. In addition we provide an explicit method to construct genus 2 function fields whose endomorphism ring contains a sqrt(2) multiplication, with the help of the Cholesky decomposition of a certain matrix.
Compared to our current knowledge of neuronal excitation, little is known about the development and maturation of inhibitory circuits. Recent studies show that inhibitory circuits develop and mature in a similar way as excitatory circuits. One such similarity is development through excitation, irrespective of their inhibitory nature. In the present study, I used the inhibitory projection between the medial nucleus of the trapezoid body (MNTB) and the lateral superior olive (LSO) as a model system to unravel some aspects of the development of inhibitory synapses. In LSO neurons of the rat auditory brainstem, glycine receptor-mediated responses change from depolarizing to hyperpolarizing during the first two postnatal weeks (Kandler and Friauf 1995, J. Neurosci. 15:6890-6904). The depolarizing effect of glycine is due to a high intracellular chloride concentration ([Cl-]i), which induces a reversal potential of glycine (EGly) more positive than the resting membrane potential (Vrest). In older LSO neurons, the hyperpolarizing effect is due to a low [Cl-]i (Ehrlich et al., 1999, J. Physiol. 520:121-137). The aim of the present study was to elucidate the molecular mechanism behind Cl- homeostasis in LSO neurons, which determines the polarity of the glycine response. To do so, the role and developmental expression of Cl- cotransporters such as NKCC1 and KCC2 were investigated. Molecular biological and gramicidin perforated patch-clamp experiments revealed the role of KCC2 as an outward Cl- cotransporter in mature LSO neurons (Balakrishnan et al., 2003, J Neurosci. 23:4134-4145). NKCC1, however, does not appear to be involved in accumulating chloride in immature LSO neurons. Further experiments indicated a role of the GABA and glycine transporters (GAT1 and GLYT2) in accumulating Cl- in immature LSO neurons. Finally, the experiments with hypothyroid animals suggest a possible role of thyroid hormone in the maturation of the inhibitory synapse.
Altogether, this thesis addressed the molecular mechanism underlying the Cl- regulation in LSO neurons and deciphered it to some extent.
The polydispersive nature of the turbulent droplet swarm in agitated liquid-liquid contacting equipment makes its mathematical modelling and the solution methodologies a rather sophisticated process. This polydispersion can be modelled as a population of droplets randomly distributed with respect to some internal properties at a specific location in space, using the population balance equation as a mathematical tool. However, the analytical solution of such a mathematical model is hard to obtain except for particular idealized cases, and hence numerical solutions are resorted to in general. This is due to the inherent nonlinearities in the convective and diffusive terms as well as the appearance of many integrals in the source term. In this work two conservative discretization methodologies for both internal (droplet state) and external (spatial) coordinates are extended and efficiently implemented to solve the population balance equation (PBE) describing the hydrodynamics of liquid-liquid contacting equipment. The internal-coordinate conservative discretization techniques of Kumar and Ramkrishna (1996a, b), originally developed for the solution of the PBE in simple batch systems, are extended to continuous flow systems and validated against analytical solutions as well as published experimental droplet interaction functions and hydrodynamic data. In addition to these methodologies, we present a conservative discretization approach for droplet breakage in batch and continuous flow systems, which is found to have identical convergence characteristics when compared to the method of Kumar and Ramkrishna (1996a). Apart from the specific discretization schemes, the numerical solution of droplet population balance equations by discretization is known to suffer from inherent finite domain errors (FDE).
Two approaches that minimize the total FDE during the solution of the discrete PBEs, using approximate optimal moving (for batch) and fixed (for continuous systems) grids, are introduced (Attarakih, Bart & Faqir, 2003a). As a result, significant improvements are achieved in predicting the number densities and the zeroth and first moments of the population. For spatially distributed populations (such as extraction columns) the resulting system of partial differential equations is spatially discretized in conservative form using a simplified first-order upwind scheme as well as first- and second-order nonoscillatory central differencing schemes (Kurganov & Tadmor, 2000). This spatial discretization avoids the characteristic decomposition of the convective flux based on approximate Riemann solvers and the operator splitting technique required by classical upwind schemes (Karlsen et al., 2001). The time variable is discretized using an implicit, strongly stable approach that is formulated by careful lagging of the nonlinear parts of the convective and source terms. The present algorithms are tested against analytical solutions of the simplified PBE through many case studies. In all these case studies the discrete models converge successfully to the available analytical solutions, and to solutions on relatively fine grids when the analytical solution is not available. This is accomplished by deriving five analytical solutions of the PBE in a continuous stirred tank and a liquid-liquid extraction column for special cases of breakage and coalescence functions. As a special case, these algorithms are implemented via a Windows computer code called LLECMOD (Liquid-Liquid Extraction Column Module) to simulate the hydrodynamics of general liquid-liquid extraction columns (LLEC).
The user input dialog makes LLECMOD a user-friendly program that enables the user to select grids, column dimensions, flow rates, velocity models, simulation parameters, dispersed- and continuous-phase chemical components, and droplet phase space-time solvers. The graphical output within the Windows environment adds a distinctive feature to the program and makes it very easy to examine and interpret the results quickly. Moreover, the dynamic model of the dispersed phase is carefully treated to correctly predict the oscillatory behavior of the LLEC holdup. In this context, a continuous velocity model corresponding to the manipulation of the inlet continuous flow rate through the control of the dispersed-phase level is derived to eliminate this behavior.
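The conservative allocation idea underlying the internal-coordinate discretization can be sketched as follows. This is a minimal illustration in the spirit of Kumar and Ramkrishna's fixed-pivot scheme; the breakage rate, daughter sizes, grid, and time stepping are assumptions for the sketch, not the settings used in the thesis.

```python
import numpy as np

# Minimal fixed-pivot sketch: droplets break into daughters of 0.4 v and
# 0.6 v, and each daughter is split between the two bracketing pivots so
# that droplet number AND droplet mass are conserved exactly.
pivots = 2.0 ** np.arange(-8, 1)               # pivot volumes x_0 < ... < x_M
nbins = len(pivots)
N = np.zeros(nbins); N[-1] = 1.0               # all droplets start largest
S = pivots.copy()                              # assumed breakage rate S(v) = v
can_break = 0.4 * pivots >= pivots[0]          # daughters must stay on grid
can_break[0] = False

def fixed_pivot(v):
    """Allocate one daughter of volume v to the bracketing pivots j, j+1
    with weights (a, 1-a) chosen so that number (a + (1-a) = 1) and mass
    (a*x_j + (1-a)*x_{j+1} = v) are both preserved."""
    j = np.searchsorted(pivots, v, side='right') - 1
    j = min(j, nbins - 2)
    a = (pivots[j + 1] - v) / (pivots[j + 1] - pivots[j])
    w = np.zeros(nbins)
    w[j], w[j + 1] = a, 1.0 - a
    return w

h, steps = 0.01, 400
mass0 = np.dot(N, pivots)
for _ in range(steps):                         # explicit Euler in time
    death = S * N * can_break
    birth = np.zeros(nbins)
    for i in np.nonzero(death)[0]:
        for frac in (0.4, 0.6):                # two daughters per breakup
            birth += death[i] * fixed_pivot(frac * pivots[i])
    N = N + h * (birth - death)
print(abs(np.dot(N, pivots) - mass0))  # mass conserved up to round-off
```

Because each allocation preserves number and mass of every daughter exactly, the total mass of the discrete population is conserved at every time step by construction, which is the defining property of such a conservative discretization.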
Herbivory is discussed as a key agent in maintaining the dynamics and stability of tropical forested ecosystems. Accordingly, increasing attention has been paid to the factors that structure tropical herbivore communities. The aim of this study was (1) to describe the diversity, density, distribution and host range of the phasmid community (Phasmatodea) of a moist neotropical forest in Panamá, and (2) to experimentally assess bottom-up and top-down factors that may regulate populations of the phasmid Metriophasma diocles. The phasmid community of Barro Colorado Island was poor in species and low in density. Phasmids mainly occurred along forest edges, and the restricted host ranges of phasmid species reflected the successional status of their host plants. Only M. diocles, which fed on early and late successional plants, occurred regularly in the forest understory. A long generation time combined with a comparably low fecundity resulted in a low biotic potential of M. diocles. However, the modeled potential population density increased exponentially and exceeded the realized densities of this species already after one generation, indicating that control factors continuously affect natural populations of M. diocles. Egg hatching failure decreased potential population growth by 10 % but had no marked effect at larger temporal scales. Interspecific differences in defensive physical and chemical leaf traits of M. diocles host plants, among them leaf toughness, supposedly the most effective anti-herbivore defense, seemed not to affect adult female preference and nymph performance. As an alternative to these defenses, I suggest that the pattern of differential preference and performance may be based on interspecific differences in qualitative toxic compounds or in the nutritive quality of leaves. The significant rejection by nymphs of leaf tissue with a low artificial increase of natural phenol contents indicated a qualitative defensive pathway in Piper evolution. In M. diocles, oviposition may not be linked to nymph performance, because the evolutionarily predicted relation between adult female preference and nymph performance was missing. Consequently, the recruitment of nymphs into the reproductive adult phase may be crucially affected by differential nymph performance. Neonate M. diocles nymphs suffered strong predation pressure when exposed to natural levels of predation. Concluding from the significantly increased predation-related mortality at night, I argue that arthropods may be the main predators of this nocturnal herbivore. The migratory behavior of nymphs seemed not to reflect predation avoidance. Instead, I provide first evidence that host plant quality may trigger off-plant migration. In conclusion, I suggest that predation pressure, with its direct effects on nymph survival, may be a stronger factor regulating M. diocles populations than the direct and indirect effects of host plant quality, particularly because slow growth and off-host migration may both feed back into an increase of predation-related mortality.
In this dissertation we consider complex, projective hypersurfaces with many isolated singularities. The leading questions concern the maximal number of prescribed singularities of such hypersurfaces in a given linear system, and geometric properties of the equisingular stratum. In the first part a systematic introduction to the theory of equianalytic families of hypersurfaces is given. Furthermore, the patchworking method for constructing hypersurfaces with singularities of prescribed types is described. In the second part we present new existence results for hypersurfaces with many singularities. Using the patchworking method, we show asymptotically proper results for hypersurfaces in P^n with singularities of corank less than two. In the case of simple singularities, the results are even asymptotically optimal. These statements improve all previous general existence results for hypersurfaces with these singularities. Moreover, the results are also transferred to hypersurfaces defined over the real numbers. The last part of the dissertation deals with the Castelnuovo function for studying the cohomology of ideal sheaves of zero-dimensional schemes. Parts of the theory of this function for schemes in P^2 are generalized to the case of schemes on general surfaces in P^3. As an application we show an H^1-vanishing theorem for such schemes.
In the present work, various aspects of the mixed continuum-atomistic modelling of materials are studied, most of which are related to the problems arising from the development of microstructures during the transition from an elastic to a plastic description within the framework of continuum-atomistics. By virtue of the so-called Cauchy-Born hypothesis, which is an essential part of continuum-atomistics, a localization criterion has been derived in terms of the loss of infinitesimal rank-one convexity of the strain energy density. According to this criterion, a numerical yield condition has been computed for two different interatomic energy functions. Therewith, the range of validity of the Cauchy-Born rule has been defined, since the strain energy density remains quasiconvex only within the computed yield surface. To provide a possibility to continue the simulation of the material response after the loss of quasiconvexity, a relaxation procedure proposed by Tadmor et al., necessarily leading to the development of microstructures, has been used. Thereby, the various notions of convexity have been reviewed in detail. As an alternative to the above-mentioned criterion, a stability criterion has been applied to detect the critical deformation. For the study in the postcritical region, the path-change procedure proposed by Wagner and Wriggers has been adapted to continuum-atomistics and modified. To capture the deformation inhomogeneity arising due to the relaxation, the Cauchy-Born hypothesis has been extended by the assumption that it represents only the first term in the Taylor series expansion of the deformation map. The introduction of the second, quadratic term results in a higher-order materials theory. Based on a simple computational example, the relevance of this theory in the postcritical region has been shown. For all simulations, including the finite element examples, the development tool MATLAB 6.5 has been used.
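In the simplest one-dimensional setting, the localization criterion can be sketched directly: for a scalar deformation gradient, loss of rank-one convexity reduces to loss of ordinary convexity of the stored energy. The double-well energy below is an illustrative stand-in, not one of the interatomic potentials used in the thesis.

```python
import math

# 1D illustration of the localization criterion: for a scalar deformation
# gradient F, rank-one convexity reduces to ordinary convexity, W''(F) >= 0.
# W(F) = (F**2 - 1)**2 is a classical double-well stand-in energy.
def W(F):
    return (F**2 - 1.0)**2

def W_pp(F):
    """Second derivative of W (the 1D analogue of the acoustic tensor)."""
    return 12.0 * F**2 - 4.0

def localizes(F):
    """True once the tangent stiffness loses positivity, i.e. the
    homogeneous state at F admits microstructured solutions."""
    return W_pp(F) < 0.0

# Convexity is lost exactly for |F| < 1/sqrt(3):
print(localizes(0.9), localizes(1.0 / math.sqrt(3) - 0.1))  # False True
```

In higher dimensions the scalar condition W''(F) >= 0 is replaced by the positivity of the acoustic tensor over all direction pairs, which is what makes the numerical evaluation of the yield surface nontrivial.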
Characterization of neuronal activity in the auditory brainstem of rats: An optical imaging approach
(2004)
In this doctoral thesis, several aspects of neuronal activity in the rat superior olivary complex (SOC), an auditory brainstem structure, were analyzed using optical imaging with voltage-sensitive dyes (VSD). The thesis is divided into five chapters. Chapter 1 is a general introduction, which gives an overview of the auditory brainstem and VSD imaging. In Chapter 2, an optical imaging method for the SOC was standardized, using the VSD RH795. To do so, the following factors were optimized: (1) An extracellular potassium concentration of 5 mM is necessary during incubation and recording to observe synaptically evoked responses in the SOC. (2) Employing different power supplies reduced the noise. (3) Averaging of 10 subsequent trials yielded a better signal-to-noise ratio. (4) RH795 at 100 µM with a 50-min prewash was optimal for imaging SOC slices for more than one hour. (5) Stimulus-evoked optical signals were TTX sensitive, revealing action potential-driven input. (6) Synaptically evoked optical signals were shown to be composed of pre- and postsynaptic components. (7) Optical signals were well correlated with anatomical structures. Overall, this method allows the comparative measurement of the electrical activity of cell ensembles with high spatio-temporal resolution. In Chapter 3, the nature of the functional inputs to the lateral superior olive (LSO), the medial superior olive (MSO), and the superior paraolivary nucleus (SPN) was analyzed using the glycine receptor blocker strychnine and the AMPA/kainate receptor blocker CNQX. In the LSO, the known glutamatergic inputs from the ipsilateral side, and the glycinergic inputs from the ipsilateral and contralateral sides, were confirmed. Furthermore, a CNQX-sensitive input from the contralateral side was identified. In the MSO, the glutamatergic and glycinergic inputs from the ipsilateral and contralateral sides were corroborated. 
In the SPN, besides the known glycinergic input from the contralateral side, I found a glycinergic input from the ipsilateral side, and I also identified CNQX-sensitive inputs from the contralateral and ipsilateral sides. Together, my results thus corroborate findings obtained with different preparations and methods, and provide additional information on the pharmacological nature of the inputs. In Chapter 4, the development of glycinergic inhibition in the LSO, the MSO, the SPN, and the medial nucleus of the trapezoid body (MNTB) was studied by characterizing the polarity of strychnine-sensitive responses. In the LSO, the high frequency region displayed a shift in the polarity at P4, whereas the low frequency region displayed it at P6. In the MSO, both regions displayed the shift at P5. The SPN displayed a shift in the polarity at E18-20 without any regional differences. The MNTB lacked a shift between P3-10. Together, these results demonstrate differential timing in the development of glycinergic inhibition in these nuclei. In Chapter 5, the role of the MSO in processing bilateral time differences (Δt) was investigated. This was done by stimulating ipsilateral and contralateral inputs to the MSO with different Δt values. In preliminary experiments, the postsynaptic responses showed a differential pattern in the spread of activity for different Δt values. These data demonstrate the possible presence of delay lines as proposed by Jeffress in his interaural time difference model of sound localization. In conclusion, this study demonstrates the use of VSD imaging to analyze neuronal activity in auditory brainstem slices. Moreover, this study expands the knowledge of the inputs to the SOC, having identified one novel glycinergic input and three novel AMPA/kainate glutamatergic inputs to the SOC nuclei.
In this text we survey some large deviation results for diffusion processes. The first chapters present results from the literature such as the Freidlin-Wentzell theorem for diffusions with small noise. We use these results to prove a new large deviation theorem about diffusion processes with strong drift. This is the main result of the thesis. In the later chapters we give another application of large deviation results, namely to determine the exponential decay rate for the Bayes risk when separating two different processes. The final chapter presents techniques which help to experiment with rare events for diffusion processes by means of computer simulations.
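The simulation techniques mentioned in the final chapter can be illustrated, under strong simplifications, by naive Monte Carlo estimation of an exit probability for a small-noise diffusion in the Freidlin-Wentzell setting. The process, domain and all parameter values below are hypothetical and are not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative small-noise diffusion dX = -X dt + sqrt(eps) dW started at 0;
# we estimate the probability that X leaves (-1, 1) before time T by naive
# Monte Carlo over Euler-Maruyama paths.
eps, T, dt, n_paths = 0.5, 1.0, 0.01, 2000
n_steps = int(T / dt)

x = np.zeros(n_paths)
exited = np.zeros(n_paths, dtype=bool)
for _ in range(n_steps):
    x += -x * dt + np.sqrt(eps * dt) * rng.normal(size=n_paths)
    exited |= np.abs(x) >= 1.0

p_hat = exited.mean()
print(p_hat)
```

As eps shrinks, the event becomes exponentially rare and the relative variance of this crude estimator explodes, which is precisely what motivates large-deviation-based techniques for such experiments.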
The present thesis deals with a novel air interface concept for beyond 3G mobile radio systems. Signals received at a certain reference cell in a cellular system which originate in neighboring cells of the same cellular system are undesired and constitute the intercell interference. Due to intercell interference, the spectrum capacity of cellular systems is limited and therefore the reduction of intercell interference is an important goal in the design of future mobile radio systems. In the present thesis, a novel service area based air interface concept is investigated in which interference is combated by joint detection and joint transmission, providing an increased spectrum capacity as compared to state-of-the-art cellular systems. Various algorithms are studied, with the aid of which intra service area interference can be combated. In the uplink transmission, by optimum joint detection the probability of erroneous decision is minimized. Alternatively, suboptimum joint detection algorithms can be applied offering reduced complexity. By linear receive zero-forcing joint detection interference in a service area is eliminated, while by linear minimum mean square error joint detection a trade-off is performed between interference elimination and noise enhancement. Moreover, iterative joint detection is investigated and it is shown that convergence of the data estimates of iterative joint detection without data estimate refinement towards the data estimates of linear joint detection can be achieved. Iterative joint detection can be further enhanced by the refinement of the data estimates in each iteration. For the downlink transmission, the reciprocity of uplink and downlink channels is used by joint transmission eliminating the need for channel estimation and therefore allowing for simple mobile terminals. 
A novel algorithm for optimum joint transmission is presented and it is shown how transmit signals can be designed which result in the minimum possible average bit error probability at the mobile terminals. By linear transmit zero-forcing joint transmission interference in the downlink transmission is eliminated, whereas by iterative joint transmission transmit signals are constructed in an iterative manner. In a next step, the performance of joint detection and joint transmission in service area based systems is investigated. It is shown that the price to be paid for the interference suppression in service area based systems is the suboptimum use of the receive energy in the uplink transmission and of the transmit energy in the downlink transmission, with respect to the single user reference system. In the case of receive zero-forcing joint detection in the uplink and transmit zero-forcing joint transmission in the downlink, i.e., in the case of linear unbiased data transmission, it is shown that the same price, quantified by the energy efficiency, has to be paid for interference elimination in both uplink and downlink. Finally, it is shown that if the system load is fixed, the number of active mobile terminals in a service area and hence the spectrum capacity can be increased without any significant reduction in the average energy efficiency of the data transmission.
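The contrast between zero-forcing and minimum mean square error joint detection can be sketched for a generic linear uplink model y = Hd + n. The matrix dimensions, channel and noise level below are purely illustrative assumptions, not the system parameters of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy uplink model y = H d + n with K = 4 data symbols and Q = 8 receive samples.
K, Q, sigma2 = 4, 8, 0.1
H = rng.normal(size=(Q, K))            # illustrative channel matrix
d = rng.choice([-1.0, 1.0], size=K)    # BPSK data symbols
y = H @ d + np.sqrt(sigma2) * rng.normal(size=Q)

# Zero-forcing joint detection: eliminates interference completely,
# at the price of possible noise enhancement.
d_zf = np.linalg.solve(H.T @ H, H.T @ y)

# MMSE joint detection: trades off interference elimination against
# noise enhancement via the regularizing term sigma2 * I.
d_mmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(K), H.T @ y)

print(np.sign(d_zf), np.sign(d_mmse))
```

In the noiseless case the zero-forcing estimate recovers the data exactly, while for low signal-to-noise ratios the MMSE estimate typically yields the smaller mean square error.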
In traditional portfolio optimization under the threat of a crash, the investment horizon or time to maturity is neglected. In developing the so-called crash hedging strategies (portfolio strategies which make an investor indifferent to the occurrence of an uncertain (downward) jump of the price of the risky asset), the time to maturity turns out to be essential. The crash hedging strategies are derived as solutions of non-linear differential equations which themselves are consequences of an equilibrium strategy. Here, the situation of changing market coefficients after a possible crash is considered for the case of logarithmic utility as well as for the case of general utility functions. A benefit-cost analysis of the crash hedging strategy is carried out, as well as a comparison of the crash hedging strategy with the optimal portfolio strategies given in traditional crash models. Moreover, it is shown that the crash hedging strategies optimize the worst-case bound for the expected utility from final wealth subject to some restrictions. Another application is to model crash hedging strategies in situations where both the number and the height of the crashes are uncertain but bounded. Taking into account the additional information of the probability of a possible crash leads to the development of the q-quantile crash hedging strategy.
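As a hedged illustration of the crash-free baseline against which such strategies are compared: for logarithmic utility, the classical optimal (Merton) fraction invested in the risky asset is (mu - r) / sigma^2, and a crash of relative height k costs an investor holding the risky fraction pi the relative wealth pi * k. The market coefficients below are invented for illustration; the actual crash hedging strategies of the thesis solve non-linear differential equations and are not reproduced here:

```python
# Hypothetical market coefficients (drift mu, riskless rate r, volatility sigma)
# and a hypothetical maximal crash height k_max.
mu, r, sigma = 0.08, 0.02, 0.25
k_max = 0.2

# Classical log-utility (Merton) fraction: optimal when no crash can occur.
pi_merton = (mu - r) / sigma**2

# Relative wealth lost by a Merton investor if the worst admissible crash occurs.
crash_loss = pi_merton * k_max

print(pi_merton, crash_loss)  # 0.96 and 0.192 with these illustrative values
```

The crash hedging strategies of the thesis reduce this exposure as the time to maturity shrinks, which is exactly the horizon dependence the classical solution ignores.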
The hypoxia inducible factor-1 (HIF-1), a heterodimer composed of HIF-1alpha and HIF-1beta, is activated in response to low oxygen tension and serves as the master regulator for cells to adapt to hypoxia. HIF-1 is usually considered to be regulated via degradation of its alpha-subunit. Recent findings, however, point to the existence of alternative mechanisms of HIF-1 regulation which appear to be important for down-regulating HIF-1 under prolonged and severe oxygen depletion. The aims of my Ph.D. thesis, therefore, were to further elucidate mechanisms involved in such down-regulation of HIF-1. The first part of the thesis addresses the impact of the severity and duration of oxygen depletion on HIF-1alpha protein accumulation and HIF-1 transcriptional activity. A special focus was put on the influence of the transcription factor p53 on HIF-1. I found that p53 only accumulates under prolonged anoxia (but not hypoxia), thus limiting its influence on HIF-1 to severe hypoxic conditions. At low expression levels, p53 inhibits HIF-1 transactivity. I attributed this effect to a competition between p53 and HIF-1alpha for binding to the transcriptional co-factor p300, since p300 overexpression reverses this inhibition. This assumption is corroborated by competitive binding of IVTT-generated p53 and HIF-1alpha to the CH1-domain of p300 in vitro. High p53 expression, on the other hand, affects HIF-1alpha protein negatively, i.e., p53 provokes pVHL-independent degradation of HIF-1alpha. Therefore, I conclude that low p53 expression attenuates HIF-1 transactivation by competing for p300, while high p53 expression negatively affects HIF-1alpha protein, thereby eliminating HIF-1 transactivity. Thus, once p53 becomes activated under prolonged anoxia, it contributes to terminating HIF-1 responses. 
In the second part of my study, I intended to further characterize the effects induced by prolonged periods of low oxygen, i.e., hypoxia, as compared to anoxia, with respect to alterations in HIF-1alpha mRNA. Prolonged anoxia, but not hypoxia, showed pronounced effects on HIF-1alpha mRNA. Long-term anoxia induced destabilization of HIF-1alpha mRNA, which manifests itself in a dramatic reduction of the half-life. The mechanistic background points to natural anti-sense HIF-1alpha mRNA, which is induced in a HIF-1-dependent manner, and additional factors, which most likely influence HIF-1alpha mRNA indirectly via anti-sense HIF-1alpha mRNA mediated trans-effects. In summary, the data provide new information concerning the impact of p53 on HIF-1, which might be of importance for the decision between pro- and anti-apoptotic mechanisms depending upon the severity and duration of hypoxia. Furthermore, the results of this project give further insights into a novel mechanism of HIF-1 regulation, namely mRNA down-regulation under prolonged anoxic incubations. These mechanisms appear to be activated only in response to prolonged anoxia, but not to hypoxia. These considerations regarding HIF-1 regulation should be taken into account when prolonged incubations under hypoxic or anoxic conditions are analyzed at the level of HIF-1 stability regulation.
Nowadays one of the major objectives in geosciences is the determination of the gravitational field of our planet, the Earth. A precise knowledge of this quantity is not just interesting on its own but is indeed a key point for a vast number of applications. The important question is how to obtain a good model for the gravitational field on a global scale. The only applicable solution - both in costs and in data coverage - is the use of satellite data. We concentrate on the highly precise measurements which will be obtained by GOCE (Gravity Field and Steady State Ocean Circulation Explorer, launch expected 2006). This satellite has a gradiometer onboard which returns the second derivatives of the gravitational potential. From a mathematical point of view, we have to deal with several obstacles. The first one is that the noise in the different components of these second derivatives differs over several orders of magnitude, i.e. a straightforward solution of this outer boundary value problem will not work properly. Furthermore, we are not interested in the data at satellite height but want to know the field at the Earth's surface; thus we need a regularization (downward continuation) of the data. These two problems are tackled in the thesis and are now described briefly. Split Operators: We have to solve an outer boundary value problem at the height of the satellite track. Classically, one can handle first order side conditions which are not tangential to the surface, and second derivatives pointing in the radial direction, employing integral equation and pseudodifferential equation methods. We present a different approach: We classify all first and purely second order operators with the property that a harmonic function stays harmonic under their application. This task is done by using modern algebraic methods for solving systems of partial differential equations symbolically. Now we can look at the problem with oblique side conditions as if we had ordinary, i.e. non-derived, side conditions. 
The only additional work which has to be done is an inversion of the differential operator, i.e. integration. In particular, we are capable of dealing with derivatives which are tangential to the boundary. Auto-Regularization: The second obstacle is finding a proper regularization procedure. This is complicated by the fact that we are facing stochastic rather than deterministic noise. The main question is how to find an optimal regularization parameter, which is impossible without any additional knowledge. However, we could show that with a very limited amount of additional information, which is obtainable also in practice, we can regularize in an asymptotically optimal way. In particular, we showed that the knowledge of two input data sets allows an order-optimal regularization procedure even under the hard conditions of Gaussian white noise and an exponentially ill-posed problem. A last but rather simple task is combining data from different derivatives, which can be done by a weighted least squares approach using the information we obtained from the regularization procedure. A practical application to the downward-continuation problem for simulated gravitational data is shown.
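A minimal sketch of the regularization issue, not of the thesis's own order-optimal procedure: a toy exponentially ill-posed "downward continuation" is inverted naively and with classical Tikhonov regularization. The forward operator, noise level and regularization parameter are all assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy exponentially ill-posed problem: the forward ("upward continuation")
# operator damps the k-th coefficient by exp(-k); recovering the field from
# noisy data therefore amplifies noise exponentially.
n = 20
k = np.arange(n)
A = np.diag(np.exp(-k))                      # illustrative smoothing operator
f_true = 1.0 / (1.0 + k)                     # hypothetical field at the surface
g = A @ f_true + 1e-6 * rng.normal(size=n)   # noisy data at "satellite height"

def tikhonov(A, g, alpha):
    # Classical Tikhonov solution (A^T A + alpha I)^{-1} A^T g.
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ g)

f_naive = np.linalg.solve(A, g)              # unregularized: noise blows up
f_reg = tikhonov(A, g, alpha=1e-10)

print(np.linalg.norm(f_naive - f_true), np.linalg.norm(f_reg - f_true))
```

Even a tiny regularization parameter tames the exponentially amplified noise, but its optimal choice depends on information about the noise, which is the point of the auto-regularization result above.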
In the filling process of a car tank, the formation of foam plays an unwanted role, as it may prevent the tank from being completely filled or at least delay the filling. Therefore it is of interest to optimize the geometry of the tank using numerical simulation in such a way that the influence of the foam is minimized. In this dissertation, we analyze the behaviour of the foam mathematically on the mesoscopic scale, that is for single lamellae. The most important goals are on the one hand to gain a deeper understanding of the interaction of the relevant physical effects, on the other hand to obtain a model for the simulation of the decay of a lamella which can be integrated in a global foam model. In the first part of this work, we give a short introduction into the physical properties of foam and find that the Marangoni effect is the main cause of its stability. We then develop a mathematical model for the simulation of the dynamical behaviour of a lamella based on an asymptotic analysis using the special geometry of the lamella. The result is a system of nonlinear partial differential equations (PDE) of third order in two spatial and one time dimension. In the second part, we analyze this system mathematically and prove an existence and uniqueness result for a simplified case. For some special parameter domains the system can be further simplified, and in some cases explicit solutions can be derived. In the last part of the dissertation, we solve the system using a finite element approach and discuss the results in detail.
Inappropriate speed is the most common cause of road traffic accidents worldwide. Thus, a necessity for speed management exists. The so-called SUNflower states Sweden, the United Kingdom and the Netherlands - each investing considerable effort in traffic safety policy - have had great success in reducing mean road speeds and speed variances through speed management. However, the effect is still insufficient to achieve truly safe road traffic. Thus, there is a discussion about making use of technical in-vehicle devices. One of these technologies, called Intelligent Speed Adaptation (ISA), reduces vehicle speeds. This is done either by warning the driver that he is speeding, by activating the accelerator pedal with a counterforce, or by reducing the fuel supply to the engine. These three ways of reducing the speed are called versions 1-3. The EC project for research on speed adaptation policies on European roads (PROSPER) deals with strategic proposals for the implementation of the different ISA versions. This thesis includes selected results of PROSPER. In this thesis, two empirical surveys were conducted in order to give an overview of the basic conditions (e.g. social, economic, technical aspects) for an ISA implementation in Germany. On the one hand, a stakeholder analysis and questionnaire using the Delphi method was carried out in two rounds. On the other hand, a questionnaire among speed offenders was likewise carried out in two rounds. In addition, the author created an expert pool consisting of 23 experts representing the most important fields of science and practice in which ISA is involved. The author conducted phone or personal interviews with most of the experts. Twelve experts also produced a detailed publication on their professional point of view towards ISA. The two surveys and the professional comments on ISA led to four possible implementation scenarios for ISA in Germany. 
However, due to strong political opposition against ISA, it is also conceivable that ISA is not implemented or that the implementation process starts after 2015 (i.e. outside the period of time considered). The scenarios are as follows: A) Implementation of version 1 by market forces with governmental subsidies. B) Implementation of version 2 by market forces supported by traffic safety institutions and image-making processes. C) Implementation of a modified version 3 by law for speed offenders instead of revocation of the driving licence. D) Implementation of various versions in Germany because of a broad implementation of ISA in the SUNflower states. X) Non-implementation of ISA, leading to the necessity of alternative speed management measures. The author prefers scenario B because - ceteris paribus - it seems to be the most likely way to implement the technology. As soon as ISA reaches technical maturity, the implementation process has to be accomplished in three steps: 1) marketing and image making, 2) market introduction, 3) market penetration. This implementation process for ISA by market forces could result in at least 15% of all vehicles being equipped with ISA before the year 2015.
The fact that long fibre reinforced thermoplastic composites (LFT) have higher tensile
strength, modulus and even toughness, compared to short fibre reinforced
thermoplastics with the same fibre loading has been well documented in literature.
These are the underlying factors that have made LFT materials one of the most
rapidly growing sectors of the plastics industry. New developments in the manufacturing of
LFT composites have led to improvements in mechanical properties and price
reduction, which has made these materials an attractive choice as a replacement for
metals in automobile parts and other similar applications. However, there are still
several open scientific questions concerning the material selection leading to the
optimal property combinations. The present work is an attempt to clarify some of
these questions. The target was to develop tools that can be used to modify, or to
“tailor”, the properties of LFT composite materials, according to the requirements of
automobile and other applications.
The present study consisted of three separate case studies, focusing on the current
scientific issues on LFT material systems. The first part of this work focused on
long glass fibre (LGF) reinforced thermoplastic styrenic resins. The target was to
find suitable maleic anhydride (MAH) based coupling agents in order to improve
the fibre-matrix interfacial strength and, in this way, to develop an LGF
concentrate suitable for
thermoplastic styrenic resins. It was shown that the mechanical properties of LGF
reinforced “styrenics” were considerably improved when a small amount of MAH
functionalised polymer was added to the matrix. This could be explained by the better fibre-matrix adhesion, revealed by scanning electron microscopy of fracture surfaces.
A novel LGF concentrate concept showed that one particular base material can be
used to produce parts with different mechanical and thermal properties by diluting the
fibre content with different types of thermoplastic styrenic resins. Therefore, this
concept allows a flexible production of parts, and it can be used in the manufacturing
of interior parts for automobile components. The second material system dealt with so-called hybrid composites, consisting of
long glass fibre reinforced polypropylene (LGF-PP) and mineral fillers like calcium
carbonate and talcum. The aim was to get more information about the fracture
behaviour of such hybrid composites under tensile and impact loading, and to
observe the influence of the fillers on properties. It was found that, in general, the
addition of fillers in LGF-PP, increased stiffness but the strength and fracture
toughness were decreased. However, calcium carbonate and talcum fillers resulted
in different mechanical properties, when added to LGF-PP: better mechanical
properties were achieved by using talcum, compared to calcium carbonate. This
phenomenon could be explained by the different nucleation effect of these fillers,
which resulted in a different crystalline morphology of polypropylene, and by the
particle orientation during the processing when talc was used. Furthermore, the
acoustic emission study revealed that the fracture mode of LGF-PP changed when
calcium carbonate was added. The characteristic acoustic signals revealed that the
addition of filler led to fibre debonding at an earlier stage of the fracture sequence
when compared to unfilled LGF-PP.
In the third material system, the target was to develop a novel long glass fibre
reinforced composite material based on the blend of polyamide with thermoset
resins. In this study a blend of polyamide-66 (PA66) and phenol formaldehyde resin
(PFR) was used. The chemical structure of the PA66-PFR resin was analysed by
using small molecular weight analogues corresponding to PA66 and PFR
components, as well as by carrying out experiments using the macromolecular
system. Theoretical calculations and experiments showed that there exists a strong
hydrogen bonding between the carboxylic groups of PA66 and the hydroxylic groups
of PFR, exceeding even the strength of amide-water hydrogen bonds. This was
shown to lead to miscible blends when PFR was not crosslinked. It was also
found that the morphology of such thermoplastic-thermoset blends can be controlled
by altering the ratio of the blend components (PA66, PFR and crosslinking agent). In the
next phase, PA66-PFR blends were reinforced by long glass fibres. The studies
showed that the water absorption of the blend samples was considerably decreased,
which was also reflected in improved mechanical properties in the equilibrium state.
As numerous studies and application examples show, long fibre reinforced thermoplastics (LFT) possess better tensile strength, flexural strength and impact toughness than short fibre reinforced thermoplastics. These advantages in mechanical properties have made LFT a rapidly growing sector of the plastics industry. In recent years, new developments in LFT manufacturing have brought additional improvements in mechanical properties as well as a price reduction, which makes LFT an attractive choice, among other things, as a replacement for metals in automobile parts. However, several scientific questions remain open, for example concerning the material composition required to achieve optimal property combinations. The present work attempts to answer some of these questions. The aim was to develop procedures with which the properties of LFT can be deliberately influenced and thus adapted, or "tailored", to the requirements of automobiles or other applications.
The present work consists of three parts, each focusing on a different material system, chosen according to the current needs and interests of industry.
The first part of the work addresses the property optimization of long glass fibre (LGF) reinforced thermoplastic styrene copolymers and of blends of these materials. Suitable coupling agents based on maleic anhydride (MAH) were found in order to optimize the fibre-matrix adhesion. Furthermore, an LGF concentrate was developed which is compatible with various thermoplastic styrene copolymers and can thus be used as a "reinforcing additive". The concept for a new LGF concentrate based on this compatible material system focuses in particular on providing a single base material for the production of parts, from which different mechanical and thermomechanical properties can be obtained in a targeted way by blending in various styrene copolymers and blends. This concept enables a very flexible production of parts and will find its application in the manufacture of parts, e.g. in automobile interiors.
The second material system is based on so-called hybrid composites, composed of long glass fibres and mineral fillers such as calcium carbonate and talcum in a polypropylene (PP) matrix. The aim was to obtain, through detailed fracture mechanics analyses, precise information on the fracture behaviour of these hybrid composites under tensile and impact loading, and then to document the differences between the various fillers with respect to their properties. It was observed that adding the fillers to LGF-PP generally further improved the stiffness, whereas the strength and impact toughness decreased. Furthermore, the different fillers calcium carbonate and talcum led to different mechanical properties when used together with LGF reinforcement: with the addition of talcum, a clearly better impact toughness was found than with the addition of calcium carbonate. This phenomenon could be explained by the different nucleation behaviour of the PP, which resulted in a different crystalline morphology of the polypropylene. Moreover, measurements of acoustic emissions during the tensile loading of a fracture mechanics specimen showed that the higher fracture toughness of unfilled LGF-PP results from fibre pull-out already occurring at lower loads.
Materials in general can be divided into insulators, semiconductors and conductors,
depending on their degree of electrical conductivity. Polymers are classified as
electrically insulating materials, having electrical conductivity values lower than
10^-12 S/cm. Due to their favourable characteristics, e.g. their good physical properties,
their low density, which results in weight reduction, etc., polymers are also
considered for applications where a certain degree of conductivity is required. The
main aim of this study was to develop electrically conductive composite materials
based on epoxy (EP) matrix, and to study their thermal, electrical, and mechanical
properties. The target values of electrical conductivity were mainly in the range of
electrostatic discharge protection (ESD, 10^-9 to 10^-6 S/cm).
Carbon fibres (CF) were the first type of conductive filler used. It was established that
there is a significant influence of the fibre aspect ratio on the electrical properties of
the fabricated composite materials. With longer CF the percolation threshold value
could be achieved at lower concentrations. Additional to the homogeneous CF/EP
composites, graded samples were also developed. By the use of a centrifugation
method, the CF created a graded distribution along one dimension of the samples.
The effect of the different processing parameters on the resulting graded structures
and consequently on their gradients in the electrical and mechanical properties were
systematically studied.
An intrinsically conductive polyaniline (PANI) salt was also used for enhancing the
electrical properties of the EP. In this case, a much lower percolation threshold was
observed compared to that of CF. PANI was found to have, up to a particular
concentration, a minimal influence on the thermal and mechanical properties of the
EP system.
Furthermore, the two above-mentioned conductive fillers were jointly added to the EP
matrix. Improved electrical and mechanical properties were observed by this
incorporation. A synergy effect between the two fillers took place regarding the
electrical conductivity of the composites.
The last part of this work dealt with the application of existing theoretical
models for the prediction of the electrical conductivity of the developed polymer composites. A good correlation between the simulation and the experiments was
observed.
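One standard theoretical model for the conductivity of filled polymers above the percolation threshold is the power law sigma = sigma0 * (p - pc)^t; whether this specific model is among those applied in the work is not stated here, so the following fit of invented, noise-free data is purely illustrative:

```python
import numpy as np

# Classical percolation power law sigma = sigma0 * (p - pc)^t above the
# threshold pc. All values below are invented for illustration only.
pc, sigma0, t_true = 0.05, 10.0, 2.0
p = np.array([0.08, 0.10, 0.15, 0.20, 0.30])   # filler volume fractions
sigma = sigma0 * (p - pc) ** t_true            # synthetic conductivities

# Linearize: log(sigma) = log(sigma0) + t * log(p - pc), then least squares.
X = np.log(p - pc)
t_fit, log_s0 = np.polyfit(X, np.log(sigma), 1)

print(t_fit, np.exp(log_s0))  # recovers t = 2 and sigma0 = 10 on exact data
```

With experimental data, pc itself is usually unknown and must be estimated jointly with t, which is where the fit quality against measurements matters.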
In general, materials are classified with respect to their electrical conductivity into insulators, semiconductors and conductors. With an electrical conductivity lower than 10^-12 S/cm, polymers belong to the group of insulators. Owing to advantageous characteristics of polymers, such as their good physical properties and their low density, which contributes to weight reduction, polymers are also considered for applications in which a certain degree of conductivity is required. The main aim of this study was to develop electrically conductive composite materials based on an epoxy resin (EP) matrix and to study their electrical, mechanical and thermal properties. The target values of the electrical conductivity lay mainly in the range of electrostatic discharge protection (ESD, 10^-9 to 10^-6 S/cm).
For the production of electrically conductive plastics, carbon fibres (CF) were first used as conductive fillers. The experiments showed that the fibre aspect ratio has a significant influence on the electrical properties of the fabricated composites. With longer CF, the percolation threshold was already reached at a lower concentration. In addition to the homogeneous CF/EP composites, graded materials were also developed. By means of centrifugation, a graded distribution of the CF along the length axis of the specimens could be achieved. The effects of the different centrifugation parameters on the resulting graded materials and on the resulting graded electrical and mechanical properties were systematically studied.
An intrinsically conductive polyaniline salt (PANI) was also used to enhance the electrical properties of the EP. In this case, a much lower percolation threshold was observed compared to that of CF. Up to a certain concentration, the use of PANI has only a minimal influence on the thermal and mechanical properties of the EP system.
In a third step, the two conductive fillers mentioned above were added jointly to the EP matrix. Improved electrical and mechanical properties were observed in this case, with a synergy effect between the two fillers with respect to the electrical conductivity of the composites.
In the last part of this work, theoretical models were applied to predict the electrical conductivity of the developed composites. A good agreement with the experimental results was found.
Competing Neural Networks as Models for Non-Stationary Financial Time Series - Changepoint Analysis -
(2005)
The problem of structural changes (variations) plays a central role in many scientific fields. One of the most current debates is about climatic change; politicians, environmentalists, scientists, etc. are involved in this debate, and almost everyone is concerned with the consequences of climatic change. In this thesis, however, we will not move in the latter direction, i.e. the study of climatic change. Instead, we consider models for analyzing changes in the dynamics of observed time series, assuming these changes are driven by a non-observable stochastic process. To this end, we consider a first-order stationary Markov chain as the hidden process and define the Generalized Mixture of AR-ARCH model (GMAR-ARCH), an extension of the classical ARCH model suited to modeling dynamical changes. For this model we provide sufficient conditions that ensure its geometric ergodicity. Further, we define a conditional likelihood given the hidden process and, in turn, a pseudo conditional likelihood. For the pseudo conditional likelihood we assume that at each time instant the autoregressive and volatility functions can be suitably approximated by given feedforward networks. Under this setting the consistency of the parameter estimates is derived, and versions of the well-known Expectation Maximization algorithm and of the Viterbi algorithm are designed to solve the problem numerically. Moreover, taking the volatility functions to be constant, we establish the consistency of the estimates of the autoregressive functions for some parametric classes of functions in general and for some classes of single-layer feedforward networks in particular. Besides this hidden-Markov-driven model, we define as an alternative a weighted least squares estimator for the time of change and the autoregressive functions.
For the latter formulation, we consider a mixture of independent nonlinear autoregressive processes and assume once more that the autoregressive functions can be approximated by given single-layer feedforward networks. We derive the consistency and asymptotic normality of the parameter estimates. Further, we prove the convergence of backpropagation for this setting under some regularity assumptions. Last but not least, we consider a mixture of nonlinear autoregressive processes with a single abrupt, unknown changepoint and design a statistical test that can validate such changes.
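The regime-decoding step mentioned above can be sketched with a generic Viterbi algorithm for a discrete-state hidden Markov model; the two-regime toy data below are hypothetical and stand in for the AR-ARCH observation log-likelihoods:

```python
import math

def viterbi(obs_loglik, log_pi, log_trans):
    """Most likely hidden regime path for a discrete-state hidden Markov
    model: obs_loglik[t][k] is the log-likelihood of the observation at
    time t under regime k; log_pi holds the initial and log_trans the
    transition log-probabilities."""
    T, K = len(obs_loglik), len(log_pi)
    delta = [log_pi[k] + obs_loglik[0][k] for k in range(K)]
    back = []
    for t in range(1, T):
        new_delta, ptr = [], []
        for k in range(K):
            best = max(range(K), key=lambda j: delta[j] + log_trans[j][k])
            ptr.append(best)
            new_delta.append(delta[best] + log_trans[best][k] + obs_loglik[t][k])
        delta, back = new_delta, back + [ptr]
    path = [max(range(K), key=lambda k: delta[k])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# hypothetical two-regime data: the first two observations favor regime 0,
# the last one favors regime 1; the transition matrix is 'sticky'
obs = [[0.0, -5.0], [0.0, -5.0], [-5.0, 0.0]]
start = [math.log(0.5)] * 2
trans = [[math.log(0.9), math.log(0.1)], [math.log(0.1), math.log(0.9)]]
path = viterbi(obs, start, trans)
```

In the GMAR-ARCH setting, the entries of `obs_loglik` would come from the fitted networks' conditional densities rather than from fixed numbers.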
In conventional radio communication systems, the system design generally starts from the transmitter (Tx), i.e. the signal processing algorithm in the transmitter is a priori selected, and then the signal processing algorithm in the receiver is a posteriori determined to obtain the corresponding data estimate. Therefore, in these conventional communication systems, the transmitter can be considered the master and the receiver the slave. Consequently, such systems can be termed transmitter (Tx) oriented. In the case of Tx orientation, the a priori selected transmitter algorithm can be chosen with a view to arriving at particularly simple transmitter implementations. This advantage has to be countervailed by a higher implementation complexity of the a posteriori determined receiver algorithm. As opposed to the conventional scheme of Tx orientation, the design of communication systems can alternatively start from the receiver (Rx). Then, the signal processing algorithm in the receiver is a priori determined, and the transmitter algorithm results a posteriori. Such an unconventional approach to system design can be termed receiver (Rx) oriented. In the case of Rx orientation, the receiver algorithm can be a priori selected in such a way that the receiver complexity is minimal, and the a posteriori determined transmitter has to tolerate more implementation complexity. In practical communication systems the implementation complexity corresponds to the weight, volume, cost, etc. of the equipment. Therefore, complexity is an important aspect which should be taken into account when building practical communication systems. In mobile radio communication systems, the complexity of the mobile terminals (MTs) should be as low as possible, whereas more complicated implementations can be tolerated in the base station (BS).
Keeping in mind the above-mentioned complexity features of the rationales Tx orientation and Rx orientation, the quasi-natural choice in the uplink (UL), i.e. in the radio link from the MT to the BS, is Tx orientation, which leads to low-cost transmitters at the MTs, whereas in the downlink (DL), i.e. in the radio link from the BS to the MTs, the rationale Rx orientation is the favorite alternative, because it results in simple receivers at the MTs. Mobile radio downlinks with the rationale Rx orientation are considered in the thesis. Modern mobile radio communication systems are cellular systems, in which both intracell and intercell interference exist. These interferences are the limiting factors for the performance of mobile radio systems. The intracell interference can be eliminated, or at least reduced, by joint signal processing which considers all the signals in the cell under study. However, such joint signal processing is not feasible for the elimination of intercell interference in practical systems. Knowing that the detrimental effect of intercell interference grows with its average energy, the transmit energy radiated from the transmitter should be as low as possible to keep the intercell interference low. Low transmit energy is required also with respect to the growing electro-phobia of the public. The transmit energy reduction for multi-user mobile radio downlinks by the rationale Rx orientation is dealt with in the thesis. Among the questions still open in this research area, two questions of major importance are considered here. MIMO is an important feature with respect to the transmit power reduction of mobile radio systems. Therefore, the first question concerns the linear Rx oriented transmission schemes combined with MIMO antenna structures. The benefit of MIMO for linear Rx oriented transmission schemes is studied in the thesis.
The utilization of unconventional multiply connected quantization schemes at the receiver also has great potential to reduce the transmit energy. Therefore, the second question concerns the design of non-linear Rx oriented transmission schemes combined with multiply connected quantization schemes.
The thesis is focused on the modelling and simulation of a Joint Transmission and Detection Integrated Network (JOINT), a novel air interface concept for B3G mobile radio systems. Besides the utilization of the OFDM transmission technique, which is a promising candidate for future mobile radio systems, and of the duplexing scheme time division duplexing (TDD), the subdivision of the geographical domain to be supported by mobile radio communications into service areas (SAs) is a highlighted concept of JOINT. A SA consists of neighboring sub-areas, which correspond to the cells of conventional cellular systems. The signals in a SA are jointly processed in a Central Unit (CU) in each SA. The CU performs joint channel estimation (JCE) and joint detection (JD) in the form of the receive zero-forcing (RxZF) filter for the uplink (UL) transmission, and joint transmission (JT) in the form of the transmit zero-forcing (TxZF) filter for the downlink (DL) transmission. By these algorithms, intra-SA multiple access interference (MAI) can be eliminated within the limits of the used model, so that unbiased data estimates are obtained, and most of the computational effort is moved from the mobile terminals (MTs) to the CU, so that the MTs can make do with low complexity. A simulation chain of JOINT has been established by the author in the software MLDesigner, based on time-discrete equivalent lowpass modelling. In this simulation chain, all key functionalities of JOINT are implemented. The simulation chain is designed for link-level investigations. A number of channel models are implemented for both the single-SA scenario and the multiple-SA scenario, so that the system performance of JOINT can be studied comprehensively. It is shown that in JOINT a duality or symmetry of the MAI elimination in the UL and in the DL exists. Therefore, the typical noise enhancement going along with the MAI elimination by JD and JT, respectively, is the same in both links.
In the simulations, the impact of channel estimation errors on the system performance is also studied. In the multiple-SA scenario, due to the existence of inter-SA MAI, which cannot be suppressed by the algorithms of JD and JT, the system performance in terms of the average bit error rate (BER) and the BER statistics degrades. A collection of simulation results shows the potential of JOINT with respect to the improvement of the system performance and the enhancement of the spectrum efficiency as compared to conventional cellular systems.
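The transmit-zero-forcing idea used in the JOINT downlink can be illustrated by the standard TxZF construction s = Hᴴ (H Hᴴ)⁻¹ d for a flat channel matrix H, so that H s = d and the intra-SA MAI vanishes within the model. The 2x2 channel below is a hypothetical toy example, not taken from the simulation chain:

```python
def herm(M):
    """Conjugate transpose of a complex matrix given as nested lists."""
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of a 2x2 complex matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def txzf(H, d):
    """Transmit zero-forcing: s = H^H (H H^H)^(-1) d, so that H s = d."""
    G = matmul(herm(H), inv2(matmul(H, herm(H))))
    return matmul(G, [[x] for x in d])

H = [[1 + 0j, 0.5j], [0.2 + 0.1j, 1 - 0.3j]]  # hypothetical channel matrix
s = txzf(H, [1 + 0j, -1 + 0j])                # data symbols for two MTs
```

By construction each MT receives exactly its own data symbol; the price is the transmit-power increase (the counterpart of the noise enhancement of RxZF in the uplink).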
In the thesis the task of channel estimation in beyond 3G service area based mobile radio air interfaces is considered. A system concept named Joint Transmission and Detection Integrated Network (JOINT) forms the target platform for the investigations. A single service area of JOINT is considered, in which a number of mobile terminals is supported by a number of radio access points, which are connected to a central unit responsible for the signal processing. The modulation scheme of JOINT is OFDM. Pilot-aided channel estimation is considered, which has to be performed only in the uplink of JOINT, because the duplexing scheme TDD is applied. In this way, the complexity of the mobile terminals is reduced, because they do not need a channel estimator. Based on the signals received by the access points, the central unit estimates the channel transfer functions jointly for all mobile terminals. This is done by resorting to the a priori knowledge of the radiated pilot signals and by applying the technique of joint channel estimation, which is developed in the thesis. The quality of the gained estimates is judged by the degradation of their signal-to-noise ratio as compared to the signal-to-noise ratio of the respective estimates gained in the case of a single mobile terminal radiating its pilots. In the case of single-element receive antennas at the access points, said degradation depends solely on the structure of the applied pilots. In the thesis it is shown how by a proper design of the pilots the SNR degradation can be minimized. Besides using appropriate pilots, the performance of joint channel estimation can be further improved by the inclusion of additional a-priori information in the estimation process. An example of such additional information would be the knowledge of the directional properties of the radio channels. This knowledge can be gained if multi-element antennas are applied at the access points. 
Further, a priori channel state information in the form of the power delay profiles of the radio channels can be included in the estimation process by applying the minimum mean square error estimation principle to joint channel estimation. After the problem of joint channel estimation in JOINT has been studied intensively, the thesis concludes by considering the impact of the unavoidable channel estimation errors on the performance of data estimation in JOINT. For the case of small channel estimation errors occurring due to the presence of noise at the access points, the performance of joint detection in the uplink and of joint transmission in the downlink of JOINT is investigated based on simulations. For the uplink, which utilizes joint detection, it is shown to which degree the bit error probability increases due to channel estimation errors. For the downlink, which utilizes joint transmission, channel estimation errors lead to an increase of the required transmit power, which is quantified by the simulation results.
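For the single-terminal reference case mentioned above, pilot-aided channel estimation per OFDM subcarrier reduces to a least-squares division by the known pilot symbol. The noise-free sketch below uses hypothetical pilot and channel values; joint estimation for several terminals stacks such relations into one larger least-squares system:

```python
import cmath

def ls_channel_estimate(pilots, received):
    """Per-subcarrier least-squares channel estimate H_k = R_k / P_k for a
    single terminal radiating known pilots (the reference case in the text)."""
    return [r / p for p, r in zip(pilots, received)]

n = 8
# constant-magnitude pilots (CAZAC-like), so no subcarrier is over-weighted
pilots = [cmath.exp(1j * cmath.pi * k * k / n) for k in range(n)]
channel = [1.0 + 0.3j * k for k in range(n)]          # hypothetical transfer function
received = [h * p for h, p in zip(channel, pilots)]   # noise-free uplink
estimate = ls_channel_estimate(pilots, received)
```

With additive noise at the access points, the SNR degradation discussed in the abstract is governed by the structure (here: the constant magnitude) of the chosen pilots.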
In many industrial applications, fast and accurate solutions of linear elliptic partial differential equations are needed as one of the building blocks of more complex problems. The domains are often highly complex, and meshing turns out to be expensive and difficult to achieve with sufficient quality. In such cases, methods with a regular grid that is not adapted to the boundary offer an attractive alternative. The Explicit Jump Immersed Interface Method (EJIIM) is one of these algorithms. The main interest of this work lies in solving the linear elasticity equations. For this purpose the existing EJIIM algorithm has been extended to three dimensions. The Poisson equation is considered in parallel throughout as the most typical representative of elliptic PDEs. During the work it became clear that EJIIM can have very high memory requirements. To overcome this problem an improvement, Reduced EJIIM, is proposed. The main theoretical result of this work is the proof of the smoothing property of inverses of elliptic finite difference operators in two and three space dimensions. It is an often observed phenomenon that the local truncation error is allowed to be of lower order along some lower-dimensional manifold without influencing the global convergence order of the solution.
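The regular-grid finite-difference setting that EJIIM builds on can be illustrated by the standard second-order discretization of the one-dimensional Poisson problem; this minimal sketch (without the jump corrections that constitute EJIIM itself) solves -u'' = f with homogeneous Dirichlet conditions:

```python
import math

def poisson_1d(f, n):
    """Second-order central differences for -u'' = f on (0,1), u(0)=u(1)=0,
    on a regular grid with n interior points; the tridiagonal (-1, 2, -1)
    system is solved by the Thomas algorithm."""
    h = 1.0 / (n + 1)
    rhs = [f((i + 1) * h) * h * h for i in range(n)]
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = -0.5
    dp[0] = rhs[0] / 2.0
    for i in range(1, n):          # forward elimination
        m = 2.0 + cp[i - 1]
        cp[i] = -1.0 / m
        dp[i] = (rhs[i] + dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1): # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# f chosen so that the exact solution is u(x) = sin(pi x)
u = poisson_1d(lambda x: math.pi ** 2 * math.sin(math.pi * x), n=99)
```

On such a regular grid the scheme converges with second order; EJIIM adds explicit jump terms so that the same grid can be kept when the boundary cuts through it.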
An autoregressive ARCH model with possible exogenous variables is treated. We estimate the conditional volatility of the model by applying feedforward networks to the residuals and prove consistency and asymptotic normality of the estimates under a growth rate on the feedforward network complexity. Recurrent neural network estimates of GARCH and Value-at-Risk are studied. We prove consistency and asymptotic normality of the recurrent neural network ARMA estimator under a growth rate on the recurrent network complexity. We also overcome the estimation problem in stochastic variance models in discrete time by feedforward networks and the introduction of new distributions for the innovations. We use the method to calculate market risk measures such as expected shortfall and Value-at-Risk. We tested these distributions, together with other new distributions in the GARCH family of models, against distributions commonly used in the financial market, such as the Normal Inverse Gaussian, the normal and the Student's t-distributions. As an application of the models, some German stocks are studied and the different approaches are compared with the most common method, a GARCH(1,1) fit.
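The GARCH(1,1) benchmark mentioned at the end can be sketched by its conditional variance recursion; the parameter values in the example are hypothetical:

```python
def garch11_volatility(returns, omega, alpha, beta):
    """Conditional variance recursion of the classical GARCH(1,1) model:
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1},
    initialized at the stationary variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2

# hypothetical daily returns and parameters (alpha + beta < 1 for stationarity)
vols = garch11_volatility([0.0, 0.01, -0.02, 0.015], omega=0.1, alpha=0.05, beta=0.9)
```

The network-based estimators of the thesis replace this fixed parametric recursion by learned autoregressive and volatility functions.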
The HMG-CoA reductase inhibitors SIM, LOV, ATV, PRA, FV and NKS were investigated for their effects on human skeletal muscle cells (hSkMCs). We were able to demonstrate that statins can induce oxidative stress (ROS formation, GSH depletion, TBARS), apoptosis (caspase-3 activity, nuclear morphology) and necrosis (LDH leakage) in hSkMCs. After incubation with statins, the sequence of cellular events starts with the increased formation of ROS (30 min), followed by caspase-3 activation (2-4 hours) and by necrosis (LDH leakage) and the formation of condensed and fragmented nuclei after 24-72 hours. It was shown that antioxidants (NAC, DTT, TPGS, M-2 and M-3) and the HMG-CoA reductase downstream metabolites (MVA, F, FPP, GG and GGPP) protected against statin-induced ROS formation and caspase-3 activation, and partially against necrosis. The caspase-3 inhibitor Ac-DEVD-CHO rescues cells partially from necrosis. These results suggest that the statin-induced necrosis is HMG-CoA dependent and occurs secondary to apoptosis, which is driven into necrosis by the decrease of ATP. The increase of ATP observed at low concentrations and early time points suggests an increased glycolytic activity. This was confirmed by increased PDK-4 gene expression and increased PFK2/F-2,6-BPase expression, both activators of glycolysis. Glycolysis was also confirmed for some statins by increased cellular lactate concentrations. The consequence of the PDK-4-mediated pyruvate dehydrogenase inactivation is the metabolic switch from fatty acids to amino acids from proteins as the energy source. The oxidative stress hypothesis was further supported by the induction of the FOXO3A transcription factor, which is involved in regulating MnSOD-2 expression in the mitochondrium. The mechanism by which statins produce ROS is still not resolved.
There is indirect evidence from our experiments as well as from the literature that, immediately after the statin treatment, intracellular Ca2+ is mobilized due to HMG-CoA reductase inhibition, which after mitochondrial uptake could lead to increased ROS formation.
In the first part of this work, called Simple node singularity, matrix factorizations of all isomorphism classes, up to shifts, of rank one and rank two, graded, indecomposable maximal Cohen--Macaulay (shortly MCM) modules over the affine cone of the simple node singularity are computed. Subsection 2.2 contains a description, by their matrix factorizations, of all rank two graded MCM R-modules whose sheafification on the projective cone of R is stable. A general description of such modules of any rank over a projective curve of arithmetic genus 1 is also given, using their matrix factorizations. The non-locally free rank two MCM modules are computed using an algorithm presented in the Introduction of this work, which gives a matrix factorization of any extension of two MCM modules over a hypersurface. In the second part, called Fermat surface, all graded, rank two MCM modules over the affine cone of the Fermat surface are classified. For the classification of the orientable rank two graded MCM R-modules, a description of the orientable modules (over normal rings) with the help of codimension two Gorenstein ideals, due to Herzog and Kühl, is used. It is proven (in section 4) that they have skew-symmetric matrix factorizations (over any normal hypersurface ring). For the classification of the non-orientable rank two MCM R-modules, we use a similar idea as in the orientable case, only that the ideal is no longer Gorenstein.
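A matrix factorization of a hypersurface f is a pair of matrices (A, B) with AB = BA = f·Id. The sketch below spot-checks this identity numerically for the cusp f = y² - x³, a standard textbook example chosen purely for illustration (it is not one of the modules classified in the thesis):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mf_pair(x, y):
    """A rank-one matrix factorization (A, B) of f = y^2 - x^3, evaluated
    at the point (x, y): A * B = B * A = f * Id holds identically."""
    A = [[y, x * x], [x, y]]
    B = [[y, -x * x], [-x, y]]
    return A, B

x, y = 2.0, 3.0
f = y * y - x ** 3      # value of the defining polynomial at (x, y)
A, B = mf_pair(x, y)
P = matmul(A, B)        # should equal f times the identity matrix
```

The extension algorithm described in the Introduction of the thesis glues two such pairs block-wise into a factorization of the extension module.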
In this dissertation a model of melt spinning (by Doufas, McHugh and Miller) has been investigated. The model (DMM model), which takes into account the effects of inertia, air drag, gravity and surface tension in the momentum equation and heat exchange between air and fibre surface, viscous dissipation and crystallization in the energy equation, also has a complicated coupling with the microstructure. The model has two parts, before onset of crystallization (BOC) and after onset of crystallization (AOC), with the point of onset of crystallization as the unknown interface. Mathematically the model has been formulated as a free boundary value problem. Changes have been introduced in the model with respect to the air drag and an interface condition at the free boundary. The mathematical analysis of the nonlinear, coupled free boundary value problem shows that the solution of this problem depends heavily on initial conditions and parameters, which renders a global analysis impossible. By defining a physically acceptable solution, it is shown that, for a more restricted set of initial conditions, if a unique solution exists for the IVP BOC then it is physically acceptable. For this, the important property of the positivity of the conformation tensor variables has been proved. Further it is shown that if a physically acceptable solution exists for the IVP BOC then under certain conditions it also exists for the IVP AOC. This gives an important relation between the initial conditions of the IVP BOC and the existence of a physically acceptable solution of the IVP AOC. A new investigation of the melt spinning process has been carried out in the framework of classical mechanics. A Hamiltonian formulation has been given for the melt spinning process, for which appropriate Poisson brackets have been derived for the one-dimensional elongational flow of a viscoelastic fluid.
From the Hamiltonian, the cross-sectionally averaged mass and momentum balance equations of melt spinning can be derived along with the microstructural equations. These studies show that the complicated problem of melt spinning can also be studied in the framework of classical mechanics. This work provides the basic groundwork on which further investigations of the dynamics of a fibre could be carried out. The free boundary value problem has been solved numerically using a shooting method. Matlab routines have been used to solve the IVPs arising in the problem. Some numerical case studies have been carried out to study the sensitivity of the ODE systems with respect to the initial guess and the parameters. These experiments support the analysis and shed more light on the stiff nature and ill-posedness of the ODE systems. To validate the model, simulations have been performed on sets of data provided by the company. The numerical results (axial velocity profiles) have been compared with the experimental profiles provided by the company and have been found to be in excellent agreement.
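The shooting method used for the free boundary value problem can be sketched on a generic two-point boundary value problem: integrate the IVP with a guessed initial slope and bisect until the far boundary condition is met. The ODE u'' = -u below is a hypothetical stand-in for the (much stiffer) DMM system:

```python
import math

def _rhs(u, v):
    """First-order system for the toy ODE u'' = -u."""
    return v, -u

def shoot(slope, n=1000):
    """Integrate u'' = -u with u(0) = 0, u'(0) = slope up to x = 1 (RK4)."""
    h = 1.0 / n
    u, v = 0.0, slope
    for _ in range(n):
        k1u, k1v = _rhs(u, v)
        k2u, k2v = _rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = _rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = _rhs(u + h * k3u, v + h * k3v)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return u

def shooting_bvp(target=1.0, lo=0.0, hi=5.0, tol=1e-10):
    """Bisect on the unknown initial slope until u(1) matches the target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this toy problem the exact answer is u'(0) = 1/sin(1); the sensitivity to the initial guess observed for the DMM system is exactly what the case studies in the thesis probe.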
Metallocenes containing diarylethene-type photochromic switches were synthesized, characterized and tested in polyolefin catalysis. Propylene polymerizations using unbridged bis(2,3-dibenzo[b]thiophen-3-yl)cyclopenta[b]thien-3-yl)zirconium dichloride/MAO (80) treated with 254 nm UV irradiation produced bimodal polymer distributions by GPC. This was due to an increase in the low molecular weight fractions when the closed form of the catalyst/photoswitch was generated. Comparison with a similarly structured catalyst without photoisomerization properties did not produce bimodal polymer under identical conditions. Propylene polymerizations carried out with dimethylsilyl[(1,5-dimethyl-3-phenylcyclopenta[b]thien-6-yl)][(2,3-dibenzothien-3-yl)cyclopenta[b]thien-6-yl)]zirconium dichloride/MAO (86) under 254 nm UV irradiation showed a threefold increase in the polymer molecular weight. Polymers made from ethylene and ethylene/hexene using (80) after UV irradiation did not show differences in the measured polymer properties. Polymerizations with ethylene/hexene mixtures using (86) showed increased activity and co-monomer (hexene) incorporation under UV irradiation.
In modern textile manufacturing, the function of the human eye in detecting disturbances of the production process which yield defective products has been taken over by cameras. The camera images are analyzed with various methods to detect these disturbances automatically. There are, however, still problems, in particular with semi-regular textures, which are typical for weaving patterns. We study three parts of the problem of automatic texture analysis: image smoothing, texture synthesis and defect detection. In image smoothing, we develop a two-dimensional kernel smoothing method with locally and directionally adaptive bandwidths, allowing correlation in the errors. Two approaches are used in synthesizing texture. The first is based on constructing a generalized Ising energy function in the Markov random field setup; for the second, we use two-dimensional periodic bootstrap methods for semi-regular texture synthesis. We treat defect detection as a multihypothesis testing problem, with the null hypothesis representing the absence of defects and the other hypotheses representing various types of defects. We develop a test based on a nonparametric regression setup, and we use the bootstrap to approximate the distribution of our test statistic.
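The kernel smoothing step can be sketched with a plain two-dimensional Nadaraya-Watson smoother using a Gaussian kernel; the locally and directionally adaptive bandwidth selection developed in the thesis is omitted here, and the fixed bandwidths are illustrative:

```python
import math

def kernel_smooth_2d(img, hx, hy):
    """Two-dimensional Nadaraya-Watson kernel smoother with a Gaussian
    kernel and fixed global bandwidths hx, hy (the thesis makes these
    locally and directionally adaptive, which this sketch omits)."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            num = den = 0.0
            for k in range(rows):
                for l in range(cols):
                    w = math.exp(-((i - k) / hx) ** 2 / 2 - ((j - l) / hy) ** 2 / 2)
                    num += w * img[k][l]
                    den += w
            out[i][j] = num / den
    return out
```

A direction-dependent bandwidth would replace the product kernel above by one aligned with the local orientation of the weaving pattern.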
Since its invention by Sir Alastair Pilkington in 1952, the float glass process has been used to manufacture long thin flat sheets of glass. Today, float glass is very popular due to its high quality and relatively low production costs. When producing thinner glass, the main concern is to retain its optical quality, which can deteriorate during the manufacturing process. The most important stage of this process is the floating part, which is hence considered to be responsible for the loss in optical quality. A series of investigations performed on the finished products showed the existence of many short-wave patterns, which strongly affect the optical quality of the glass. Our work is concerned with finding the mechanism of wave development, taking into account all possible factors. In this thesis, we model the floating part of the process by a theoretical study of the stability of two superposed fluids confined between two infinite plates and subjected to a large horizontal temperature gradient. Our approach is to take into account the mixed convection effects (viscous shear and buoyancy), neglecting on the other hand the thermo-capillarity effects, owing to the length of our domain and the presence of a small stabilizing vertical temperature gradient. Both fluids are treated as Newtonian with constant viscosity. They are immiscible, incompressible, have very different properties and share a free surface between them. The lower fluid is a liquid metal with a very small kinematic viscosity, whereas the upper fluid is less dense. The two fluids move with different velocities: the speed of the upper fluid is imposed, whereas the lower fluid moves as a result of buoyancy effects. We examine the problem by means of a small perturbation analysis and obtain a system of two Orr-Sommerfeld equations coupled with two energy equations, together with general interface and boundary conditions.
We solve the system analytically in the long- and short-wave limits, using asymptotic expansions with respect to the wave number. Moreover, we write the system in the form of a general eigenvalue problem and solve it numerically using Chebyshev spectral methods for fluid dynamics. The results (both analytical and numerical) show the existence of small-amplitude travelling waves, which move with constant velocity, for wave numbers in the intermediate range. We show that the stability of the system is ensured in the long-wave limit, a fact which is in agreement with the real float glass process. We analyze the stability for a wide range of wave numbers, Reynolds, Weber and Grashof numbers, and explain the physical implications for the dynamics of the problem. The consequences of the linear stability results are discussed. In the real float glass process, the temperature strongly influences the viscosity of both the molten metal and the hot glass, with direct consequences for the stability of the system. We therefore investigate the linear stability of two superposed fluids with temperature-dependent viscosities, considering a different model for the viscosity dependence of each fluid. Although the temperature-viscosity relationships for glass and metal are more complex than those used in our computations, our intention is to emphasize the effects of this dependence on the stability of the system. It is known from the literature that, in the case of one fluid, heating that causes the viscosity to decrease along the domain usually destabilizes the flow. For the two superposed fluids we investigate this behaviour and discuss the consequences of the linear stability in this new case.
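The Chebyshev spectral discretization used for the eigenvalue problem rests on the Chebyshev differentiation matrix on Gauss-Lobatto nodes; a minimal construction (standard formulas, not the coupled Orr-Sommerfeld system itself) looks like this:

```python
import math

def cheb(N):
    """Chebyshev differentiation matrix D and nodes x_j = cos(j*pi/N)
    (Gauss-Lobatto points), using the standard off-diagonal formula and
    the negative-sum trick for the diagonal entries."""
    x = [math.cos(j * math.pi / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return D, x
```

Applying D twice (and imposing the interface and boundary conditions) turns the Orr-Sommerfeld operators into dense matrices, and the stability problem into a generalized matrix eigenvalue problem.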
Non-commutative polynomial algebras appear in a wide range of applications, from quantum groups and theoretical physics to linear differential and difference equations. In the thesis, we have developed a framework unifying many important algebras within the classes of \(G\)- and \(GR\)-algebras and have studied their ring-theoretic properties. Let \(A\) be a \(G\)-algebra in \(n\) variables. We establish necessary and sufficient conditions for \(A\) to have a Poincaré-Birkhoff-Witt (PBW) basis. Further on, we show that besides the existence of a PBW basis, \(A\) shares some other properties with the commutative polynomial ring \(\mathbb{K}[x_1,\ldots,x_n]\). In particular, \(A\) is a Noetherian integral domain of Gel'fand-Kirillov dimension \(n\). Both the Krull and the global homological dimension of \(A\) are bounded by \(n\); we provide examples of \(G\)-algebras where these inequalities are strict. Finally, we prove that \(A\) is Auslander-regular and a Cohen-Macaulay algebra. In order to perform symbolic computations with modules over \(GR\)-algebras, we generalize Gröbner bases theory, and develop new and enhance existing algorithms. We unite the most fundamental algorithms in a suite of applications, called "Gröbner basics" in the literature. Furthermore, we discuss algorithms appearing in the non-commutative case only, among others two-sided Gröbner bases for bimodules, annihilators of left modules and operations with opposite algebras. An important role in Representation Theory is played by various subalgebras, like the center and the Gel'fand-Zetlin subalgebra. We discuss their properties and their relations to Gröbner bases, and briefly comment on some aspects of their computation. We proceed with these subalgebras in the chapter devoted to the algorithmic study of morphisms between \(GR\)-algebras.
We provide new results and algorithms for computing the preimage of a left ideal under a morphism of \(GR\)-algebras and show both the merits and the limitations of several methods that we propose. We use this technique for the computation of the kernel of a morphism, the decomposition of a module into central characters and the algebraic dependence of pairwise commuting elements. We give an algorithm for computing the set of one-dimensional representations of a \(G\)-algebra \(A\) and prove, moreover, that if the set of finite-dimensional representations of \(A\) over a ground field \(K\) is not empty, then the homological dimension of \(A\) equals \(n\). All the algorithms are implemented in the kernel extension Plural of the computer algebra system Singular. We discuss the efficiency of computations and provide a comparison with other computer algebra systems. We propose a collection of benchmarks for testing the performance of algorithms; the comparison of timings shows that our implementation outperforms all modern systems by combining broad functionality with a fast implementation. The thesis contains many new non-trivial examples as well as solutions to various problems arising in different fields of mathematics. All of them were obtained with the developed theory and its implementation in Plural; most of them are treated computationally in this thesis for the first time.
We work in the setting of time series of financial returns. Our starting point are the GARCH models, which are very common in practice. We introduce the possibility of crashes into such GARCH models. A crash is modeled by drawing innovations from a distribution with much mass on extremely negative events, while in ''normal'' times the innovations are drawn from a normal distribution. The probability of a crash is modeled as time dependent, depending on the past of the observed time series and/or exogenous variables. The aim is a splitting of risk into ''normal'' risk, coming mainly from the GARCH dynamic, and extreme event risk, coming from the modeled crashes. We present several incarnations of this modeling idea and give some basic properties, such as the conditional first and second moments. For the special case of a pure ARCH dynamic we can establish geometric ergodicity and thus stationarity and mixing conditions. Also in the ARCH case, we formulate (quasi) maximum likelihood estimators and derive conditions for the consistency and asymptotic normality of the parameter estimates. In a special case of genuine GARCH dynamic we are able to establish L_1-approximability and hence laws of large numbers for the processes themselves. We can formulate a conditional maximum likelihood estimator in this case, but cannot completely establish its consistency. On the practical side, we examine the outcome of estimating models with genuine GARCH dynamic and compare the results to classical GARCH models. We apply the models to Value at Risk estimation and see that, in comparison to the classical models, many of ours seem to work better, although we chose the crash distributions quite heuristically.
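The crash mechanism described above can be sketched by an ARCH(1) recursion whose innovations are drawn, with a time-dependent probability driven by the last observed return, from a heavy left-tail "crash" distribution. All parameter values and the particular form of the crash probability below are hypothetical, not the specifications from the thesis:

```python
import math
import random

def simulate_arch_with_crashes(n, omega, alpha, base_crash_prob, seed=0):
    """ARCH(1) returns where each innovation comes either from N(0, 1) or,
    with a time-dependent probability that rises after a negative return,
    from a heavy left-tail 'crash' distribution (illustrative choices)."""
    rng = random.Random(seed)
    returns, crash_times = [0.0], []
    for t in range(1, n):
        sigma2 = omega + alpha * returns[-1] ** 2
        # crash probability driven by the past of the series (capped at 0.1)
        p_crash = min(0.1, base_crash_prob * (1.0 + 5.0 * max(0.0, -returns[-1])))
        if rng.random() < p_crash:
            eps = -(4.0 + abs(rng.gauss(0.0, 1.0)))  # extreme negative event
            crash_times.append(t)
        else:
            eps = rng.gauss(0.0, 1.0)                # 'normal' times
        returns.append(math.sqrt(sigma2) * eps)
    return returns, crash_times
```

Separating the simulated crash times from the ordinary ARCH innovations mirrors the intended splitting into "normal" risk and extreme event risk.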
This thesis deals with the development of thermoplastic polyolefin elastomers using recycled polyolefins and ground tyre rubber (GTR). The disposal of worn tyres and their economic recycling pose a great challenge nowadays. Material recycling is the preferred route in Europe owing to legislative actions and ecological arguments. A first step in this direction has already been taken, as GTR is available in different fractions of guaranteed quality. As the traditional applications of GTR are saturated, there is a great demand for new, value-added products containing GTR. The objective of this work was therefore to convert GTR by reactive blending with polyolefins into thermoplastic elastomers (TPE) of suitable mechanical and rheological properties. It has been established that bituminous reclamation of GTR prior to extrusion melt compounding with polyolefins is a promising way of TPE production. In this way the sol content (acetone-soluble fraction) of the GTR increases and the GTR particles can be better incorporated into the corresponding polyolefin matrix. The adhesion between GTR and matrix is provided by molecular intermingling in the resulting interphase. GTR particles of various origins and mean particle sizes were involved in this study. Recycled low-density polyethylene (LDPE), recycled high-density polyethylene (HDPE) and polypropylene (PP) were selected as polyolefins. First, the optimum conditions for the GTR reclamation in bitumen were established (160 °C < T < 180 °C; time ca. 4 hours). Polyolefin-based TPEs were then produced by extrusion compounding after GTR reclamation. Their mechanical (tensile behaviour, set properties), thermal (dynamic-mechanical thermal analysis, differential scanning calorimetry) and rheological properties (at both low and high shear rates) were determined. The PE-based blends contained an ethylene/propylene/diene (EPDM) rubber as compatibilizer and their composition was as follows: PE/EPDM/GTR:bitumen = 50/25/25:25.
The selected TPEs met the most important criteria, i.e. elongation at break > 100 % and compression set < 50 %. The LDPE-based TPE (TPE(LDPE)) showed better mechanical performance than the TPE(HDPE), which was assigned to the higher crystallinity of the HDPE. The PP-based blends of the compositions PP/(GTR-bitumen) 50/50 and 25/75, whereby the ratio of GTR/bitumen was 60/40, outperformed those containing non-reclaimed GTR. The related blends also showed better compatibility with a PP-based commercial thermoplastic dynamic vulcanizate (TDV). Surprisingly, the mean particle size of the GTR, varied between < 0.2 and 0.4-0.7 mm, had only a small effect on the mechanical properties, though a somewhat larger one on the rheological behaviour of the TPEs produced.
Within the last decades, a remarkable development in materials science took place: nowadays, materials are not merely constructed as inert structures but rather designed for certain predefined functions. This innovation was accompanied by the appearance of smart materials with reliable recognition, discrimination and capability of action as well as reaction. Even though ferroelectric materials serve smartly in real applications, they also possess several restrictions at high-performance usage. The behavior of these materials is almost linear under the action of low electric fields or low mechanical stresses, but exhibits a strongly non-linear response under high electric fields or mechanical stresses. High electromechanical loading conditions result in a change of the spontaneous polarization direction within individual domains, which is commonly referred to as domain switching. The aim of the present work is to develop a three-dimensional coupled finite element model to study the rate-independent and rate-dependent behavior of piezoelectric materials, including domain switching, based on a micromechanical approach. The proposed model is first elaborated within a two-dimensional finite element setting for piezoelectric materials. Subsequently, the developed two-dimensional model is extended to the three-dimensional case. This work starts with developing a micromechanical model for ferroelectric materials. Ferroelectric materials exhibit ferroelectric domain switching, which refers to the reorientation of domains and occurs under purely electrical loading. For the simulation, a bulk piezoceramic material is considered and each grain is represented by one finite element. In reality, the grains in the bulk ceramic material are randomly oriented. This property is taken into account by applying a random orientation with uniform distribution to the individual elements.
Poly-crystalline ferroelectric materials in the unpoled virgin state can consequently be characterized by randomly oriented polarization vectors. The energy reduction of individual domains is adopted as a criterion for the initiation of domain switching processes. The macroscopic response of the bulk material is predicted by classical volume-averaging techniques. In general, domain switching does not only depend on external loads but also on the neighboring grains, which is commonly denoted as the grain boundary effect. These effects are incorporated into the developed framework via a phenomenologically motivated probabilistic approach by relating the actual energy level to a critical energy level. Subsequently, the order of the chosen polynomial function is optimized so that simulations closely match measured data. A rate-dependent polarization framework is proposed, which is applied to cyclic electrical loading at various frequencies. The reduction in free energy of a grain is used as a criterion for the onset of the domain switching processes. Nucleation in new grains and propagation of the domain walls during domain switching are modeled by a linear kinetics theory. The simulated results show that for increasing loading frequency the macroscopic coercive field also increases, and the remanent polarization increases at lower loading amplitudes. The second part of this work is focused on ferroelastic domain switching, which refers to the reorientation of domains under purely mechanical loading. Under sufficiently high mechanical loading, the strain directions within single domains reorient with respect to the applied loading direction. The reduction in free energy of a grain is used as a criterion for the domain switching process. The macroscopic response of the bulk material is computed for the hysteresis curve (stress vs. strain), whereby uni-axial and quasi-static loading conditions are applied to the bulk material specimen.
Grain boundary effects are addressed by incorporating the developed probabilistic approach into this framework, and the order of the polynomial function is optimized so that simulations match measured data. Rate-dependent domain switching effects are captured for various frequencies and mechanical loading amplitudes by means of the developed volume fraction concept, which relates the particular time interval to the switching portion. The final part of this work deals with ferroelectric and ferroelastic domain switching, i.e. the reorientation of domains under coupled electromechanical loading. If the free energy for combined electromechanical loading exceeds the critical energy barrier, elements are allowed to switch. Firstly, hysteresis and butterfly curves under purely electrical loading are discussed. Secondly, additional mechanical loads in axial and lateral directions are applied to the specimen. The simulated results show that an increasing compressive stress results in enlarged domain switching ranges and that the hysteresis and butterfly curves flatten at higher mechanical loading levels.
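A widely used energy-based switching criterion in micromechanical models of this kind, going back to Hwang, Lynch and McMeeking, reads as follows; we include it only for orientation, as the exact form of the criterion and its probabilistic modification used in the thesis may differ:

```latex
E_i \,\Delta P_i \;+\; \sigma_{ij}\,\Delta\varepsilon_{ij} \;\ge\; 2\,P_s\,E_c ,
```

where \(\Delta P_i\) and \(\Delta\varepsilon_{ij}\) are the changes in spontaneous polarization and strain caused by switching, \(P_s\) is the spontaneous polarization and \(E_c\) the coercive field.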
This thesis aims at an overall improvement of diffusion coefficient predictions. For this reason the theoretical determination of diffusion, viscosity, and thermodynamics in liquid systems is discussed. Furthermore, the experimental determination of diffusion coefficients is also part of this work. All investigations presented are carried out for organic binary liquid mixtures. Diffusion coefficient data of 9 highly nonideal binary mixtures are reported over the whole concentration range at various temperatures, (25, 30, and 35) °C. All mixtures investigated in a Taylor dispersion apparatus consist of an alcohol (ethanol, 1-propanol, or 1-butanol) dissolved in hexane, cyclohexane, carbon tetrachloride, or toluene. The uncertainty of the reported data is estimated to be within \(3 \times 10^{-11}\,\mathrm{m^2\,s^{-1}}\). To compute the thermodynamic correction factor an excess Gibbs energy model is required. Therefore, the applicability of COSMOSPACE to binary VLE predictions is thoroughly investigated. For this purpose a new method is developed to determine the required molecular parameters such as segment types, areas, volumes, and interaction parameters. So-called sigma profiles, which describe the screening charge densities appearing on a molecule's surface, form the basis of this approach. To improve the prediction results a constrained two-parameter fitting strategy is also developed. These approaches are crucial to guarantee the physical significance of the segment parameters. Finally, the prediction quality of this approach is compared to the findings of the Wilson model, UNIQUAC, and the a priori predictive method COSMO-RS for a broad range of thermodynamic situations. The results show that COSMOSPACE yields results of similar quality to the Wilson model, while both perform much better than UNIQUAC and COSMO-RS. Since viscosity also influences the diffusion process, a new mixture viscosity model has been developed on the basis of Eyring's absolute reaction rate theory.
The nonidealities of the mixture are accounted for with the thermodynamically consistent COSMOSPACE approach. The required model and component parameters are derived from sigma profiles, which form the basis of the a priori predictive method COSMO-RS. To improve the model performance two segment parameters are determined from a least-squares fit to experimental viscosity data, whereby a constrained optimisation procedure is applied. In this way the parameters retain their physical meaning. Finally, the viscosity calculations of this approach are compared to the findings of the Eyring-UNIQUAC model for a broad range of chemical mixtures. These results show that the new Eyring-COSMOSPACE approach is superior to the frequently employed Eyring-UNIQUAC method. Finally, on the basis of Eyring's absolute reaction rate theory a new model for the Maxwell-Stefan diffusivity has been developed. This model, an extension of the Vignes equation, describes the concentration dependence of the diffusion coefficient in terms of the diffusivities at infinite dilution and an additional excess Gibbs energy contribution. This energy part allows the explicit consideration of thermodynamic nonidealities within the modelling of this transport property. If the same set of interaction parameters, derived from VLE data, is applied both for this part and for the thermodynamic correction, a theoretically sound modelling of VLE and diffusion can be achieved. The influence of viscosity and thermodynamics on the model accuracy is thoroughly investigated. For this purpose diffusivities of 85 binary mixtures consisting of alkanes, cycloalkanes, halogenated alkanes, aromatics, ketones, and alcohols are computed. The average relative deviation between experimental data and computed values is approximately 8 %, depending on the choice of the \(g^E\)-model. These results indicate that this model is superior to some widely used methods.
In summary, it can be said that the new approach facilitates the prediction of diffusion coefficients. The final equation is mathematically simple and universally applicable, and its prediction quality is as good as that of other recently developed models, without the need for additional parameters such as pure-component physical property data, self-diffusion coefficients, or mixture viscosities. In contrast to many other models, the influence of the mixture viscosity can be omitted. Though a viscosity model is not required for predicting diffusion coefficients with the new equation, the models presented in this work allow a consistent modelling approach for diffusion, viscosity, and thermodynamics in liquid systems.
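For reference, the classical relations being extended can be written in standard notation as follows (the thesis's model adds an excess Gibbs energy contribution to the Vignes form):

```latex
D_{12} \;=\; \text{Đ}_{12}\,\Gamma,
\qquad
\Gamma \;=\; 1 + \frac{\partial \ln \gamma_1}{\partial \ln x_1}\bigg|_{T,p},
\qquad
\text{Đ}_{12} \;=\; \bigl(\text{Đ}_{12}^{\,x_2 \to 1}\bigr)^{x_2}
                    \bigl(\text{Đ}_{12}^{\,x_1 \to 1}\bigr)^{x_1},
```

where \(D_{12}\) is the Fick diffusivity, \(\text{Đ}_{12}\) the Maxwell-Stefan diffusivity, \(\Gamma\) the thermodynamic correction factor obtained from an excess Gibbs energy model, and the superscripts denote the infinite-dilution limits (Vignes equation).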
The aim of the thesis is the numerical investigation of saturated, stationary, incompressible Newtonian flow in porous media when inertia is not negligible. We focus our attention on the Navier-Stokes system with two pressures derived by two-scale homogenization. The thesis is subdivided into five chapters. After introductory remarks on porous media, filtration laws and upscaling methods, the first chapter closes with the basic terminology and mathematical fundamentals. In Chapter 2, we start by formulating the Navier-Stokes equations on a periodic porous medium. By two-scale expansions of the velocity and pressure, we formally derive the Navier-Stokes system with two pressures. For the sake of completeness, known existence and uniqueness results are recalled and a convergence proof is given. Finally, we consider Stokes and Navier-Stokes systems with two pressures with respect to their relation to Darcy's law. Chapters 3 and 4 are devoted to the numerical solution of the nonlinear two-pressure system, for which we follow two approaches. The first approach, developed in Chapter 3, is based on a splitting of the Navier-Stokes system with two pressures into micro and macro problems. The splitting is achieved by Taylor expanding the permeability function or by computing the permeability function discretely. The problems to be solved are a series of Stokes and Navier-Stokes problems on the periodicity cell. The Stokes problems are solved by an Uzawa conjugate gradient method. The Navier-Stokes equations are linearized by a least-squares conjugate gradient method, which leads to the solution of a sequence of Stokes problems. The macro problem consists of solving a nonlinear uniformly elliptic equation of second order. The least-squares linearization is applied to the macro problem, leading to a sequence of Poisson problems. All equations are discretized by finite elements. Numerical results are presented at the end of Chapter 3.
The second approach presented in Chapter 4 relies on the variational formulation in a certain Hilbert space setting of the Navier-Stokes system with two pressures. The nonlinear problem is again linearized by the least-squares conjugate gradient method. We obtain a sequence of Stokes systems with two pressures. For the latter systems, we propose a fast solution method which relies on pre-computing Stokes systems on the periodicity cell for finite element basis functions acting as right hand sides. Finally, numerical results are discussed. In Chapter 5 we are concerned with modeling and simulation of the pressing section of a paper machine. We state a two-dimensional model of a press nip which takes into account elasticity and flow phenomena. Nonlinear filtration laws are incorporated into the flow model. We present a numerical solution algorithm and the chapter is closed by a numerical investigation of the model with special focus on inertia effects.
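In standard notation, the two-scale ansatz underlying the derivation and the linear (Stokes) limit it generalizes can be sketched as follows; this is only an orientation sketch, since the thesis's Navier-Stokes system with two pressures retains the inertial nonlinearity:

```latex
u^\varepsilon(x) \;\approx\; \varepsilon^2\, u_0\!\left(x, \tfrac{x}{\varepsilon}\right),
\qquad
p^\varepsilon(x) \;\approx\; p_0(x) + \varepsilon\, p_1\!\left(x, \tfrac{x}{\varepsilon}\right),
```

and averaging \(u_0\) over the periodicity cell yields, in the Stokes case, Darcy's law

```latex
\bar u(x) \;=\; -\frac{K}{\mu}\bigl(\nabla p_0(x) - f(x)\bigr),
\qquad
\operatorname{div} \bar u \;=\; 0 ,
```

with permeability tensor \(K\), viscosity \(\mu\) and body force \(f\).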
This thesis contains the mathematical treatment of a special class of analog microelectronic circuits called translinear circuits. The goal is to provide foundations of a new coherent synthesis approach for this class of circuits. The mathematical methods of the suggested synthesis approach come from graph theory, combinatorics, and from algebraic geometry, in particular symbolic methods from computer algebra. Translinear circuits form a very special class of analog circuits, because they rely on nonlinear device models, but still allow a very structured approach to network analysis and synthesis. Thus, translinear circuits play the role of a bridge between the "unknown space" of nonlinear circuit theory and the very well exploited domain of linear circuit theory. The nonlinear equations describing the behavior of translinear circuits possess a strong algebraic structure that is nonetheless flexible enough for a wide range of nonlinear functionality. Furthermore, translinear circuits offer several technical advantages like high functional density, low supply voltage and insensitivity to temperature. This unique profile is the reason that several authors consider translinear networks as the key to systematic synthesis methods for nonlinear circuits. The thesis proposes the usage of a computer-generated catalog of translinear network topologies as a synthesis tool. The idea to compile such a catalog has grown from the observation that on the one hand, the topology of a translinear network must satisfy strong constraints which severely limit the number of "admissible" topologies, in particular for networks with few transistors, and on the other hand, the topology of a translinear network already fixes its essential behavior, at least for static networks, because the so-called translinear principle requires the continuous parameters of all transistors to be the same. 
Even though the admissible topologies are heavily restricted, it is a highly nontrivial task to compile such a catalog. Combinatorial techniques have been adapted to undertake this task. In a catalog of translinear network topologies, prototype network equations can be stored along with each topology. When a circuit with a specified behavior is to be designed, one can search the catalog for a network whose equations can be matched with the desired behavior. In this context, two algebraic problems arise: to set up a meaningful equation for a network in the catalog, an elimination of variables must be performed, and to test whether a prototype equation from the catalog and a specified equation of desired behavior can be "matched", a complex system of polynomial equations must be solved, where the solutions are restricted to a finite set of integers. Sophisticated algorithms from computer algebra are applied in both cases to perform the symbolic computations. All mentioned algorithms have been implemented using C++, Singular, and Mathematica, and have been successfully applied to actual design problems of humidity sensor circuitry at Analog Microelectronics GmbH, Mainz. As a result of the research conducted, an exhaustive catalog of all static formal translinear networks with at most eight transistors is available. The application to the humidity sensor system proves the applicability of the developed synthesis approach. The details and implementations of the algorithms are worked out only for static networks, but can easily be adapted to dynamic networks as well. While the implementation of the combinatorial algorithms is stand-alone software written "from scratch" in C++, the implementation of the algebraic algorithms, namely the symbolic treatment of the network equations and the match finding, relies heavily on the sophisticated Gröbner basis engine of Singular and thus on more than a decade of experience contained in a special-purpose computer algebra system.
It should be pointed out that the thesis contains the new observation that the translinear loop equations of a translinear network are precisely represented by the toric ideal of the network's translinear digraph. Altogether, this thesis confirms and strengthens the key role of translinear circuits as systematically designable nonlinear circuits.
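For orientation, the toric ideal mentioned above is, in the textbook definition, attached to a digraph with incidence matrix \(B \in \mathbb{Z}^{V \times E}\) and one variable \(x_e\) per edge:

```latex
I_B \;=\; \bigl\langle\, x^{u^+} - x^{u^-} \;:\; u \in \ker_{\mathbb{Z}} B \,\bigr\rangle,
\qquad
u = u^+ - u^-,\quad u^+, u^- \in \mathbb{Z}_{\ge 0}^{E},
```

so its binomials encode precisely the cycle relations of the digraph, which is what connects it to the translinear loop equations.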
In this thesis we have discussed the problem of decomposing an integer matrix \(A\) into a weighted sum \(A=\sum_{k \in {\mathcal K}} \alpha_k Y^k\) of 0-1 matrices with the strict consecutive ones property. We have developed algorithms to find decompositions which minimize the decomposition time \(\sum_{k \in {\mathcal K}} \alpha_k\) and the decomposition cardinality \(|\{ k \in {\mathcal K}: \alpha_k > 0\}|\). In the absence of additional constraints on the 0-1 matrices \(Y^k\) we have given an algorithm that finds the minimal decomposition time in \({\mathcal O}(NM)\) time. For the case that the matrices \(Y^k\) are restricted to shape matrices -- a restriction which is important in the application of our results in radiotherapy -- we have given an \({\mathcal O}(NM^2)\) algorithm. This is achieved by solving an integer programming formulation of the problem by a very efficient combinatorial algorithm. In addition, we have shown that the problem of minimizing decomposition cardinality is strongly NP-hard, even for matrices with one row (and thus for the unconstrained as well as the shape matrix decomposition). Our greedy heuristics are based on the results for the decomposition time problem and produce better results than previously published algorithms.
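As an illustration of the unconstrained problem, the minimal decomposition time admits a simple closed-form expression: it is the maximum over rows of the sum of positive left-to-right increments, a classical bound achieved by sweep-type constructions. The sketch below is consistent with the \({\mathcal O}(NM)\) result stated above, but it is our own illustrative code, not the thesis's algorithm verbatim.

```python
def min_decomposition_time(A):
    """Minimal sum of coefficients for writing the nonnegative integer
    matrix A as a weighted sum of 0-1 matrices whose rows have the
    (strict) consecutive ones property, without further constraints.
    Each row contributes the sum of its positive jumps when scanned
    left to right; the answer is the maximum of these row complexities."""
    best = 0
    for row in A:
        prev, t = 0, 0
        for a in row:
            if a > prev:
                t += a - prev     # a new interval must be opened here
            prev = a
        best = max(best, t)
    return best
```

For example, the single row [1, 2, 1] decomposes as [1, 1, 1] + [0, 1, 0] with total coefficient 2, which the formula reproduces.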
Automated theorem proving is a search problem and, by its undecidability, a very difficult one. The challenge in the development of a practically successful prover is the mapping of the extensively developed theory into a program that runs efficiently on a computer. Starting from a level-based system model for automated theorem provers, in this work we present different techniques that are important for the development of powerful equational theorem provers. The contributions can be divided into three areas: Architecture. We present a novel prover architecture that is based on a set-based compression scheme. With moderate additional computational costs we achieve a substantial reduction of the memory requirements. Further benefits are architectural clarity, the easy provision of proof objects, and a new way to parallelize a prover, which shows respectable speed-ups in practice. The compact representation paves the way to new applications of automated equational provers in the area of verification systems. Algorithms. To improve the speed of a prover we need efficient solutions for the most time-consuming sub-tasks. We demonstrate improvements of several orders of magnitude for two of the most widely used term orderings, LPO and KBO. Other important contributions are a novel generic unsatisfiability test for ordering constraints and, based on that, a sufficient ground reducibility criterion with an excellent cost-benefit ratio. Redundancy avoidance. The notion of redundancy is of central importance to justify simplifying inferences which are used to prune the search space. In our experience with unfailing completion, the usual notion of redundancy is not strong enough. In the presence of associativity and commutativity, the provers often get stuck enumerating equations that are permutations of each other. By extending and refining the proof ordering, many more equations can be shown redundant.
Furthermore, our refinement of the unfailing completion approach allows us to use redundant equations for simplification without the need to consider them for generating inferences. We describe the efficient implementation of several redundancy criteria and experimentally investigate their influence on the proof search. The combination of these techniques results in a considerable improvement of the practical performance of a prover, which we demonstrate with extensive experiments for the automated theorem prover Waldmeister. The progress achieved allows the prover to solve problems that were previously out of reach. This considerably enhances the potential of the prover and opens up the way for new applications.
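To illustrate one of the orderings concerned, here is a minimal, hypothetical sketch of the lexicographic path ordering (LPO) comparison; the thesis's contribution is precisely that a naive textbook version like this can be made orders of magnitude faster. Variables are strings, compound terms are `(symbol, [args])` pairs, and `prec` maps function symbols to integer precedences.

```python
def lpo_gt(s, t, prec):
    """Strict lexicographic path ordering: is s >_lpo t?"""
    if s == t:
        return False
    if isinstance(t, str):                 # t is a variable:
        return occurs(t, s)                # s > x iff x occurs in s and s != x
    if isinstance(s, str):                 # a variable is never greater
        return False
    f, ss = s
    g, ts = t
    # case 1: some argument of s is >= t
    if any(si == t or lpo_gt(si, t, prec) for si in ss):
        return True
    # in the remaining cases s must dominate every argument of t
    if not all(lpo_gt(s, tj, prec) for tj in ts):
        return False
    if prec[f] > prec[g]:                  # case 2: larger head symbol
        return True
    if f == g:                             # case 3: lexicographic on arguments
        for si, ti in zip(ss, ts):
            if si != ti:
                return lpo_gt(si, ti, prec)
    return False

def occurs(x, s):
    """Does variable x occur in term s?"""
    if isinstance(s, str):
        return s == x
    return any(occurs(x, a) for a in s[1])
```

With the precedence i > f > e, for instance, the group-theory rule i(f(x, y)) -> f(i(y), i(x)) is oriented left to right by this ordering.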
Sterically demanding cyclopentadienyl ligands were employed for the stabilization of new mono(cyclopentadienyl) compounds of the heavy alkaline earth metals, and the functionalizability of these species was demonstrated exemplarily by the synthesis of neutral triple-decker sandwich complexes. The molecular structures formed can be reliably predicted by DFT calculations. In this context the cyclononatetraenyl ligand, whose coordination properties had so far been investigated only insufficiently, was also employed. Within this work, the synthesis of bis(cyclononatetraenyl)barium, Ba(C9H9)2, and its spectroscopic characterization were achieved. DFT calculations predict for this complex a metallocene structure with nearly parallel rings and a Ba-ring distance of 2.37 Å. Using the tetraisopropylcyclopentadienyl (4Cp) and tri(tert-butyl)cyclopentadienyl (Cp') ligands, bis- and mono(cyclopentadienyl) compounds of the early and late lanthanides were synthesized. Particularly interesting in this context is the successful preparation of the azido cluster [Na(dme)3]2[4Cp6Yb6(N3)14] (4Cp = (Me2CH)4C5H), which unites the different coordination modes of the azido ligand in a single complex. Comparable complexes were hitherto unknown in organolanthanide chemistry. Substitution at the cyclopentadienyl system allows its electronic and steric properties to be changed significantly. The consequences of these effects can be demonstrated very impressively with manganocene complexes, in which the low-spin and high-spin states differ only slightly in energy. The electronic ground state of a series of differently substituted manganocene complexes was determined by means of solid-state magnetism, ESR, X-ray structure analysis, EXAFS and variable-temperature UV-Vis spectroscopy, and correlated with the substitution pattern of the cyclopentadienyl system.
Spin equilibria could be demonstrated for [(Me3C)C5H4]2Mn, [(Me3C)2C5H3]2Mn and [(Me3C)(Me3Si)C5H3]2Mn. Theoretical calculations postulate that cerocene, Ce(C8H8)2, is an example of molecules with a mixed-configuration ground state, which can be described by 80 % [(Ce)f1e2u(cot)e2u3] and 20 % [(Ce)f0e2u(cot)e2u4]. Although this molecule has been known since 1976, its electronic structure is still highly controversial today. Within this work, new synthetic concepts for this compound were developed and its electronic structure was investigated by means of magnetic measurements in the solid state as well as EXAFS and XANES studies. The data obtained are in very good agreement with the theoretical calculations and document the importance of a mixed-configuration ground state for the bonding in organometallic complexes of the f-block metals. While in cerocene only a temperature-independent paramagnetism (TIP) can be observed, a strong temperature dependence of the magnetic susceptibility is found in ytterbium systems of the type Cp'2Yb(bipy') [Cp' and bipy' are substituted cyclopentadienyl or 4,4'-substituted 2,2'-bipyridyl ligands]. Temperature-dependent XANES experiments document that a mixed-configuration ground state is also present in these systems, which can be described by [(Yb)f14(bipy)b1(π*)0] and [(Yb)f13(bipy)b1(π*)1]. The relative contribution of the two wave functions to the ground state is influenced significantly by substitution at the 2,2'-bipyridyl or cyclopentadienyl system. Models with which this behaviour can be described qualitatively were developed within this work. A kinetically stabilized, adduct-free titanocene was prepared using the di(tert-butyl)cyclopentadienyl ligand, and its reactivity towards small molecules, e.g. CO, N2 and H2, was investigated.
Within these reactivity studies, 2,2'-bipyridyl adducts of the Cp'2Ti fragment were also synthesized and their magnetic properties investigated. By variations at the 2,2'-bipyridyl system, the singlet-triplet splitting in this system can be tuned in a targeted manner.
The scientific and industrial interest devoted to polymer/layered silicate nanocomposites, owing to their outstanding properties and novel applications, has resulted in numerous studies in the last decade. These cover mostly thermoplastic- and thermoset-based systems. Recently, studies on rubber/layered silicate nanocomposites have been started as well, and it has become apparent how complex nanocomposite formation may be for such systems. The rules governing their structure-property relationships therefore have to be clarified. In this thesis, the related aspects were addressed.
For the investigations, several ethylene propylene diene rubbers (EPDM) of polar and non-polar nature were selected, as well as the more polar hydrogenated acrylonitrile butadiene rubber (HNBR). Polarity was found to be beneficial for nanocomposite formation, as it assisted the intercalation of the polymer chains within the clay galleries and thus favored the development of exfoliated structures. By finding an appropriate processing procedure, i.e. compounding in a kneader instead of on an open mill, the mechanical performance of the nanocomposites was significantly improved. The complexity of nanocomposite formation in rubber/organoclay systems was demonstrated. The observed deintercalation of the organoclay was traced to the vulcanization system used. It was evidenced indirectly that during sulfur curing the primary amine clay intercalant leaves the silicate surface and migrates into the rubber matrix, which was explained by its participation in the sulfur-rich Zn complexes created. Thus, by using quaternary amine clay intercalants (as presented for EPDM and HNBR compounds), the deintercalation was eliminated. The organoclay intercalation/deintercalation detected for the primary amine clay intercalants could be controlled by means of peroxide curing (as presented for HNBR compounds), whose vulcanization mechanism differs from that of sulfur curing.
The analysis showed that by selecting the appropriate organoclay type, the properties of the nanocomposites can be tailored. This occurs via the generation of different nanostructures (i.e. exfoliated, intercalated or deintercalated). In all cases, the rubber/organoclay nanocomposites exhibited better performance than vulcanizates with traditional fillers, like silica or unmodified (pristine) layered silicates. The mechanical and gas permeation behavior of the respective nanocomposites was modelled. It was shown that models (e.g. Guth's or Nielsen's equations) developed for "traditional" vulcanizates can be used when specific aspects are taken into consideration. These involve characteristics related to the platy structure of the silicates, i.e. their aspect ratio after compounding (appearance of platelet stacks) or their orientation in the rubber matrix (order parameter).
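For reference, commonly cited forms of the two models mentioned are the following, in standard notation with filler volume fraction \(\varphi\) and aspect ratio \(f = L/W\); the thesis may employ modified variants:

```latex
\frac{E}{E_0} \;=\; 1 + 0.67\, f \varphi + 1.62\, f^2 \varphi^2
\quad \text{(Guth, anisometric fillers)},
\qquad
\frac{P}{P_0} \;=\; \frac{1-\varphi}{1 + (f/2)\,\varphi}
\quad \text{(Nielsen, tortuosity model for gas permeation)},
```

where \(E/E_0\) is the relative modulus and \(P/P_0\) the relative permeability of the filled compound.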
We consider an analytical model of a defaultable bond portfolio in terms of its face value process. The face value process evolves dynamically in time and incorporates changes caused by recovery payments on default, followed by the purchase of new bonds. Further studies involve the properties, the distribution and the control of the face value process.
Channel estimation is of great importance in many wireless communication systems, since it influences the overall performance of a system significantly. Especially in multi-user and/or multi-antenna systems, i.e. generally in multi-branch systems, the requirements on channel estimation are very high, since the training signals, or so-called pilots, that are used for channel estimation suffer from multiple access interference. Recently, in the context of such systems, more and more attention is paid to concepts for joint channel estimation (JCE), which have the capability to eliminate the multiple access interference as well as the interference between the channel coefficients. The performance of JCE can be evaluated in noise-limited systems by the SNR degradation and in interference-limited systems by the variation coefficient. Theoretical analysis carried out in this thesis verifies that both performance criteria are closely related to the patterns of the pilots used for JCE, regardless of whether the signals are represented in the time domain or in the frequency domain. Optimum pilots such as disjoint pilots, Walsh-code-based pilots or CAZAC-code-based pilots, whose constructions are described in this thesis, do not show any SNR degradation when applied to multi-branch systems. It is shown that optimum pilots constructed in the time domain become optimum pilots in the frequency domain after a discrete Fourier transformation. Correspondingly, optimum pilots in the frequency domain become optimum pilots in the time domain after an inverse discrete Fourier transformation. However, even for optimum pilots, different variation coefficients are obtained in interference-limited systems. Furthermore, especially for OFDM-based transmission schemes, the peak-to-average power ratio (PAPR) of the transmit signal is an important decision criterion for choosing the most suitable pilots.
CAZAC code based pilots are the only pilots among the regarded pilot constructions that result in a PAPR of 0 dB for the part of the transmit signal that originates from the transmitted pilots. Summarizing the analysis regarding the SNR degradation, the variation coefficient and the PAPR with respect to a single service area, and considering the interference from adjacent service areas that occurs for a certain choice of the pilots, one can conclude that CAZAC codes are the most suitable pilots for the application in JCE of multi-carrier multi-branch systems, especially if CAZAC codes that originate from different mother codes are assigned to different adjacent service areas. The theoretical results of the thesis are verified by simulation results. The choice of the parameters for the frequency domain or time domain JCE is guided by the evaluated implementation complexity. For the chosen parameterization of the regarded OFDM-based and FMT-based systems, it is shown that a frequency domain JCE is the best choice for OFDM and a time domain JCE is the best choice for FMT when applying CAZAC codes as pilots. The results of this thesis can be used as a basis for further theoretical research and also for future JCE implementations in wireless systems.
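The CAZAC property discussed above can be illustrated with a Zadoff-Chu sequence, a standard example of a CAZAC code (the thesis' own pilot constructions may differ; length and root index below are illustrative). The sketch checks the constant amplitude (0 dB PAPR), the zero periodic autocorrelation, and the fact that the constant-amplitude property survives a discrete Fourier transform:

```python
import numpy as np

def zadoff_chu(u, N):
    """Zadoff-Chu sequence of odd length N and root index u with gcd(u, N) = 1."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N, u = 63, 1
zc = zadoff_chu(u, N)

# Constant amplitude: every sample has unit magnitude, hence a PAPR of 0 dB.
papr_db = 10 * np.log10(np.max(np.abs(zc) ** 2) / np.mean(np.abs(zc) ** 2))

# Zero periodic autocorrelation for every non-zero cyclic shift.
autocorr = np.array([abs(np.vdot(zc, np.roll(zc, s))) for s in range(1, N)])

# The DFT of a Zadoff-Chu sequence is again constant-amplitude, matching the
# statement that optimum time domain pilots stay optimum in the frequency domain.
zc_freq = np.fft.fft(zc) / np.sqrt(N)
```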
Under physiological conditions, oxygen is constantly being converted to reactive oxygen intermediates in mitochondria, peroxisomes, cytochrome P450 systems, macrophages, neutrophils and plasma membranes. These reactive oxygen species (ROS) are toxic and alter cell integrity, leading to cell damage. To protect themselves against this toxic effect of ROS, living systems have developed defence systems that scavenge ROS. These systems include enzymes, transport proteins and small antioxidant molecules, for instance vitamins C and E. This thesis describes a study on the antioxidant chemistry and activity of vitamin C in in vivo and in vitro systems using ESR spectroscopy. In addition, a new method was designed to label ascorbic acid with a fluorescent marker. Moreover, important criteria were considered for the evaluation and quantification of ascorbyl radicals in human blood plasma using two types of ESR spectrometers.
Fragmentation of tropical rain forests is pervasive and results in various modifications of ecosystem functioning such as … It has long been noticed that the colony densities of a dominant herbivore in the neotropics, the leaf-cutting ant (LCA), increase in fragmentation-related habitats like forest edges and small fragments; however, the reasons for this increase are not clear. The aim of the study was to test the hypothesis that bottom-up control of LCA populations is less effective in fragmented than in continuous forests and thus explains the increase in LCA colony densities in these habitats. In order to test for less effective bottom-up control, I proposed four working hypotheses. I hypothesized that LCA colonies in fragmented habitats (1) find more palatable vegetation due to low plant defences, (2) forage on few dominant species, resulting in a narrow diet breadth, (3) possess small foraging areas and (4) show an increased herbivory rate at the colony level. The study was conducted in remnants of the Atlantic rainforest in NE Brazil. Two fragmentation-related forest habitats were included: the edge of a 3500-ha continuous forest and the interior of a 50-ha forest fragment. The interior of the continuous forest served as a control habitat for the study. All working hypotheses can be generally accepted. The results indicate that the abundance of LCA host plant species in the habitats created by forest fragmentation, along with the weaker chemical defence of those species (especially the lack of terpenoids), allows ants to forage predominantly on palatable species and thus to reduce foraging costs. This is supported by the narrower ant diet breadth in these habitats. Similarly, the small foraging areas in edge habitats and in small forest fragments indicate that there the ants do not have to travel far to find suitable host species and thus save foraging costs.
The increased LCA herbivory rates indicate that the damage (i.e., the amount of harvested foliage) caused by LCA is more important in fragmentation-related habitats, which are more vulnerable to LCA herbivory due to the high availability of palatable plants and a low total amount of foliage (LAI). (1) Few plant defences, (2) a narrower ant diet breadth, (3) reduced colony foraging areas, and (4) increased herbivory rates clearly indicate a weaker bottom-up control of LCA in fragmented habitats. Weak bottom-up control in the fragmentation-related habitats decreases the foraging costs of an LCA colony, and the colonies might use the surplus of energy resulting from reduced foraging costs to increase colony growth, reproduction and turnover. If correct, this explains why fragmented habitats support more LCA colonies at a given time than continuous forest habitats. Further studies are urgently needed to estimate LCA colony growth and turnover rates. There are indications that edge effects of forest fragmentation might be more important in regulating LCA populations than area or isolation effects. This emphasizes the need to conserve large forest fragments so that they do not fall below a critical size, and to retain their regular shape. Weak bottom-up control of LCA populations has various consequences for forested ecosystems. I suggest a feedback loop between forest fragmentation and LCA population dynamics: the increased LCA colony densities, along with weaker bottom-up control, increase LCA herbivory pressure on the forest and thus inevitably amplify the deleterious effects of fragmentation. These effects include the direct consequences of leaf removal by ants and various indirect effects on ecosystem functioning. This study contributes to our understanding of how primary fragmentation effects, via the alteration of trophic interactions, may translate into higher-order effects on ecosystem functions.
This thesis investigates the constrained forms of the spherical minimax location problem and the spherical Weber location problem. Specifically, we consider the problem of locating a new facility on the surface of the unit sphere in the presence of convex spherical polygonal restricted regions and forbidden regions, such that either the maximum weighted distance from the new facility to m existing facilities is minimized (minimax problem) or the sum of the weighted distances from the new facility to m existing facilities is minimized (Weber problem). It is assumed that a forbidden region is an area on the surface of the unit sphere where travel and facility location are not permitted, and that distance is measured by the great circle arc distance. We present a polynomial time algorithm for the spherical minimax location problem for the special case where all the existing facilities are located on the surface of a hemisphere. Further, we have developed algorithms for the spherical Weber location problem using barrier distances on a hemisphere as well as on the unit sphere.
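The minimax objective described above can be sketched as follows, assuming facilities given as latitude/longitude pairs on the unit sphere; the helper names are illustrative, not from the thesis:

```python
import math

def great_circle(p, q):
    """Great circle arc distance between points on the unit sphere,
    each given as (latitude, longitude) in radians."""
    (la1, lo1), (la2, lo2) = p, q
    c = (math.sin(la1) * math.sin(la2)
         + math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))
    return math.acos(max(-1.0, min(1.0, c)))   # clamp against rounding

def weighted_minimax(x, facilities, weights):
    """Objective of the spherical minimax problem: the largest weighted
    great circle distance from candidate site x to the existing facilities."""
    return max(w * great_circle(x, f) for f, w in zip(facilities, weights))

facilities = [(0.0, 0.0), (0.0, math.pi / 2), (math.pi / 4, math.pi / 4)]
weights = [1.0, 1.0, 1.0]
value = weighted_minimax((0.1, 0.8), facilities, weights)
```

A constrained algorithm would minimize this objective over the feasible part of the sphere, excluding the forbidden regions.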
The present thesis deals with a novel approach to increasing the resource usage in digital communications. In digital communication systems, each information bearing data symbol is associated with a waveform that is transmitted over a physical medium. The time or frequency separations among the waveforms associated with the information data have traditionally been chosen to avoid or limit the interference among them. By doing so, in the presence of a distortionless ideal channel, a single received waveform is affected as little as possible by the presence of the other waveforms. The conditions necessary to guarantee the absence of any interference among the waveforms are well known and consist of a relationship between the minimum time separation among the waveforms and their bandwidth occupation or, equivalently, between the minimum frequency separation and their time occupation. These conditions are referred to as the Nyquist assumptions. The key idea of this work is to relax the Nyquist assumptions and to transmit with a time and/or frequency separation between the waveforms smaller than the minimum required to avoid interference. The reduction of the time and/or frequency separation generates not only an increment of the resource usage, but also a degradation in the quality of the received data. Therefore, to maintain a certain quality of the received signal, we have to increase the amount of transmitted power. We investigate the trade-off between the increment of the resource usage and the corresponding performance degradation in three different cases. The first is the single carrier case, in which all waveforms have the same spectrum but different temporal locations. The second is the multi-carrier case, in which each waveform has its own distinct spectrum and occupies all the available time. The third is the hybrid case, in which each waveform has its own unique time and frequency location.
These different cases are framed within the general system model developed in the thesis so that they can be easily compared. We evaluate the potential of the key idea of the thesis by choosing a set of four possible waveforms with different characteristics. By doing so, we study the influence of the waveform characteristics in the three system configurations. We propose an interpretation of the results by modifying the well-known Shannon capacity formula and by explicitly expressing its dependency on the increment of resource usage and on the performance degradation. The results are very promising. We show that both in the case of a single carrier system with a time limited waveform and in the case of a multi-carrier system with a frequency limited waveform, the reduction of the time or frequency separation, respectively, has a positive effect on the channel capacity. The latter, depending on the actual SNR, can double or increase even more significantly.
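The trade-off discussed above can be sketched with the classical Shannon formula; the modification below is only an illustrative stand-in for the thesis' modified capacity expression. Packing waveforms at a fraction tau of the Nyquist separation scales the number of signalling dimensions by 1/tau, while the induced interference is modelled here simply as an SNR degradation in dB:

```python
import math

def capacity_bits_per_dim(snr_db):
    """Classical Shannon capacity per complex signalling dimension."""
    return math.log2(1 + 10 ** (snr_db / 10))

def compressed_rate(snr_db, tau, degradation_db):
    """Illustrative trade-off: a separation of tau < 1 times the Nyquist
    minimum packs 1/tau as many dimensions into the same resource, while
    the induced interference costs degradation_db of SNR."""
    return capacity_bits_per_dim(snr_db - degradation_db) / tau

snr_db = 10.0
baseline = capacity_bits_per_dim(snr_db)              # Nyquist spacing, tau = 1
faster = compressed_rate(snr_db, tau=0.5, degradation_db=3.0)
# At this SNR the doubled resource usage outweighs the 3 dB penalty.
```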
Over the last decades, mathematical modeling has reached nearly all fields of natural science. The abstraction and reduction to a mathematical model has proven to be a powerful tool for gaining a deeper insight into physical and technical processes. The increasing computing power has made numerical simulations available for many industrial applications. In recent years, mathematicians and engineers have turned their attention to modeling solid materials. New challenges have been found in the simulation of solids and fluid-structure interactions. In this context, it is indispensable to study the dynamics of elastic solids. Elasticity is a main feature of solid bodies, but it places great demands on the numerical treatment. There exists a multitude of commercial tools to simulate the behavior of elastic solids. However, the majority of these software packages consider quasi-stationary problems. In the present work, we are interested in highly dynamical problems, e.g. the rotation of a solid. The applicability to free-boundary problems is a further emphasis of our considerations. In recent years, meshless or particle methods have attracted more and more attention. In many fields of numerical simulation these methods are on a par with classical methods or superior to them. In this work, we present the Finite Pointset Method (FPM), which uses a moving least squares particle approximation operator. The application of this method to various industrial problems at the Fraunhofer ITWM has shown that FPM is particularly suitable for highly dynamical problems with free surfaces and strongly changing geometries. Thereby, FPM offers exactly the features that we require for the analysis of the dynamics of solid bodies. In the present work, we provide a numerical scheme capable of simulating the behavior of elastic solids. We present the system of partial differential equations describing the dynamics of elastic solids and show its hyperbolic character.
In particular, we focus our attention on the constitutive law for the stress tensor and provide evolution equations for the deviatoric part of the stress tensor in order to circumvent limitations of the classical Hooke's law. Furthermore, we present the basic principle of the Finite Pointset Method. In particular, we provide the concept of upwinding in a given direction as a key ingredient for stabilizing hyperbolic systems. The main part of this work describes the design of a numerical scheme based on FPM and an operator splitting that takes the different processes within a solid body into account. Each resulting subsystem is treated separately in an adequate way. Hereby, we introduce the notion of system-inherent directions and dimensional upwinding. Finally, a coupling strategy for the subsystems and results are presented. We close this work with some final conclusions and an outlook on future work.
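The moving least squares particle approximation at the core of FPM can be sketched in one dimension as follows; the Gaussian weight and linear basis are illustrative choices, not necessarily those used in FPM:

```python
import numpy as np

def mls_eval(x_eval, x_pts, f_pts, h):
    """Moving least squares approximation with linear basis (1, x - x_eval)
    and a Gaussian weight of smoothing length h: at every evaluation point
    a small weighted least squares problem over nearby particles is solved."""
    w = np.exp(-((x_pts - x_eval) / h) ** 2)           # particle weights
    B = np.column_stack([np.ones_like(x_pts), x_pts - x_eval])
    A = B.T @ (w[:, None] * B)                         # 2x2 moment matrix
    b = B.T @ (w * f_pts)
    coeff = np.linalg.solve(A, b)
    return coeff[0]                                    # local fit evaluated at x_eval

# Scattered "particles" carrying samples of a smooth function.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 50))
f = np.sin(2 * np.pi * x)
approx = mls_eval(0.5, x, f, h=0.1)   # should be close to sin(pi) = 0
```

With a linear basis the operator reproduces linear fields exactly, which is the property that makes it usable as a derivative operator on moving point clouds.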
The use of polymers in various tribological situations has become state of the art. Owing to the advantages of self-lubrication and superior cleanliness, more and more polymer composites are now being used as sliding elements, which were formerly composed of metallic materials only. The feature that makes polymer composites so promising in industrial applications is the opportunity to tailor their properties with special fillers. The main aim of this study was to stress the importance of integrating various functional fillers in the design of wear-resistant polymer composites and to understand the role of fillers in modifying the wear behaviour of the materials. Special emphasis was placed on the enhancement of the wear resistance of thermosetting and thermoplastic matrix composites by nano-TiO2 particles (with a diameter of 300 nm).

In order to optimize the content of the various fillers, the tribological performance of a series of epoxy-based composites, filled with short carbon fibres (SCF), graphite, PTFE and nano-TiO2 in different proportions and combinations, was investigated. The evolution of the frictional coefficient, the wear resistance and the contact temperature was examined with a pin-on-disc apparatus under dry sliding conditions at different contact pressures and sliding velocities. The experimental results indicated that the addition of nano-TiO2 effectively reduced the frictional coefficient, and consequently the contact temperature, of short-fibre reinforced epoxy composites. Based on scanning electron microscopy (SEM) and atomic force microscopy (AFM) observations of the worn surfaces, a positive rolling effect of the nanoparticles between the material pairs was proposed, which led to a remarkable reduction of the frictional coefficient. In particular, this rolling effect protected the SCF from more severe wear mechanisms, especially at high sliding pressures and speeds. As a result, the load carrying capacity of the materials was significantly improved. In addition, the different contributions of two solid lubricants, PTFE powder and graphite flakes, to the tribological performance of epoxy nanocomposites were compared. It seems that graphite contributes to the improved wear resistance in general, whereas PTFE can easily form a transfer film and reduce the wear rate, especially in the running-in period. A combination of SCF and solid lubricants (PTFE and graphite) together with TiO2 nanoparticles can achieve a synergistic effect on the wear behaviour of the materials.

The favourable effect of nanoparticles detected in epoxy composites was also found in the investigations of thermoplastic matrices, e.g. polyamide (PA) 6,6. It was found that nanoparticles could remarkably reduce the friction coefficient and the wear rate of the PA 6,6 composite when additionally incorporated with short carbon fibres and graphite flakes. In particular, the addition of nanoparticles contributed to an obvious enhancement of the tribological performance of short-fibre reinforced, high-temperature resistant polymers, e.g. polyetherimide (PEI), especially under extreme sliding conditions.

A procedure was proposed to correlate the contact temperature and the wear rate with the frictional dissipated energy. Based on this energy consideration, a better interpretation of the different performance of distinct tribo-systems is possible. The validity of the model was illustrated for various sliding tests under different conditions. Although simple quantitative formulations cannot be expected at present, the study may lead to a fundamental understanding of the mechanisms controlling friction and wear from a general system point of view. Moreover, using the energy-based models, the artificial neural network (ANN) approach was applied to the experimental data. The well-trained ANN has the potential to be further used for online monitoring and prediction of wear progress in practical applications.
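The ANN regression step can be sketched as follows; the network size, the training scheme and the synthetic surrogate data are illustrative assumptions, not the thesis' experimental setup:

```python
import numpy as np

# One-hidden-layer perceptron trained by plain gradient descent, as a sketch
# of mapping sliding conditions to a wear rate.  The "data" are synthetic
# stand-ins (an assumed smooth wear law), not the thesis' measurements.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (200, 2))            # normalized (pressure, velocity)
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2         # surrogate wear-rate law

W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                   # hidden layer activations
    pred = (h @ W2 + b2).ravel()               # linear output layer
    err = pred - y
    gW2 = h.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    gh = (err[:, None] @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((pred - y) ** 2))          # training error after descent
```

A network trained this way on measured (load, speed, energy) inputs is what would then be used for online wear prediction.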
In contrast to the spatial motion setting, the material motion setting of continuum mechanics is concerned with the response to variations of material placements of particles with respect to the ambient material. The material motion point of view is thus extremely prominent when dealing with defect mechanics, to which it was originally introduced by Eshelby more than half a century ago. Its primary unknown, the material deformation map, is governed by the material motion balance of momentum, i.e. the balance of material forces on the material manifold in the sense of Eshelby. Material (configurational) forces are concerned with the response to variations of material placements of 'physical particles' with respect to the ambient material. Opposed to that, the common spatial (mechanical) forces in the sense of Newton are considered as the response to variations of spatial placements of 'physical particles' with respect to the ambient space. Material forces as advocated by Maugin are especially suited for the assessment of general defects such as inhomogeneities, interfaces, dislocations and cracks, where the material forces are directly related to the classical J-integral of fracture mechanics, see also Gross & Seelig. Another classical example of a material - or rather configurational - force is emblematized by the celebrated Peach-Koehler force, see e.g. the discussion in Steinmann. The present work is mainly divided into four parts. In the first part we introduce the basic notions of the mechanics and numerics of material forces for a quasi-static conservative mechanical system. In this case the internal potential energy density per unit volume characterizes a hyperelastic material behaviour. In the first numerical example we discuss the reliability of the material force method for calculating the vectorial J-integral of a crack in a Ramberg-Osgood type material under mode I loading and superimposed T-stresses.
Secondly, we study the direction of the single material force acting as the driving force of a kinked crack in a geometrically nonlinear hyperelastic Neo-Hooke material. In the second part we focus on material forces in the case of geometrically nonlinear thermo-hyperelastic material behaviour. To this end we adapt the theory and numerics to a transient coupled problem, and elaborate the format of the Eshelby stress tensor as well as the internal material volume forces induced by the gradient of the temperature field. We study numerically the material forces in a bimaterial bar under tension load and the time dependent evolution of material forces in a cracked specimen. The third part discusses the material force method in the case of geometrically nonlinear isotropic continuum damage. The basic equations are similar to those of the thermo-hyperelastic problem, but we introduce an alternative numerical scheme, namely an active set search algorithm, to calculate the damage field as an additional degree of freedom. With this at hand, it is an easy task to obtain the gradient of the damage field, which induces the internal material volume forces. Numerical examples in this part are a specimen with an elliptic hole with different semi-axes, a center cracked specimen and a cracked disc under pure mode I loading. In the fourth part of this work we elaborate the format of the Eshelby stress tensor and the internal material volume forces for geometrically nonlinear multiplicative elasto-plasticity. Concerning the numerical implementation we restrict ourselves to the case of geometrically linear single slip crystal plasticity and compare two different numerical methods to calculate the gradient of the internal variable, which enters the format of the internal material volume forces.
The two numerical methods are, firstly, a node point based approach, where the internal variable is treated as an additional degree of freedom, and secondly, a standard approach, where the internal variable is only available at the integration point level. In the latter, a least squares projection scheme is employed to calculate the necessary gradients of this internal variable. As numerical examples we discuss a specimen with an elliptic inclusion and an elliptic hole, respectively, and, in addition, a crack under pure mode I loading in a material with different slip angles. Here we focus on the comparison of the two different methods to calculate the gradient of the internal variable. As a second class of numerical problems we elaborate and implement a geometrically linear von Mises plasticity with isotropic hardening. Here the necessary gradients of the internal variables are calculated by the already mentioned projection scheme. The results for a crack in a material with different hardening behaviour under various additional T-stresses are given.
Matter-wave Optics of Dark-state Polaritons: Applications to Interferometry and Quantum Information
(2006)
The present work "Matter-wave Optics of Dark-state Polaritons: Applications to Interferometry and Quantum Information" deals in a broad sense with the subject of dark states and in particular with the so-called dark-state polaritons introduced by M. Fleischhauer and M. D. Lukin. The dark-state polaritons can be regarded as a combined excitation of electromagnetic fields and spin/matter waves. Within the framework of this thesis the special optical properties of this combined excitation are studied. On the one hand, a new procedure to spatially manipulate and to increase the excitation density of stored photons is described, and on the other hand, these properties are used to construct a new type of Sagnac hybrid interferometer. The thesis is divided into four parts. In the introduction all notions necessary to understand the work are described, e.g.: electromagnetically induced transparency (EIT), dark-state polaritons and the Sagnac effect. The second chapter considers the method developed by A. Andre and M. D. Lukin to create stationary light pulses in specially dressed EIT media. In a first step a set of field equations is derived and simplified by introducing a new set of normal modes. The absorption of one of the normal modes leads to the phenomenon of pulse matching for the other mode and thereby to a diffusive spreading of its field envelope. All these considerations are based on a homogeneous field setup of the EIT preparation laser. If this restriction is dropped, one finds that a drift motion is superimposed on the diffusive spreading. By choosing a special laser configuration the drift motion can be tailored such that an effective force is created that counteracts the spreading. Moreover, this force can not only be strong enough to compensate the diffusive spreading but can even dominate the dynamics and hence compress the field envelope of the excitation. The compression can be described using a Fokker-Planck equation of the Ornstein-Uhlenbeck type.
The investigations show that the compression leads to an excitation of higher-order modes which decay very fast. In the last section of the chapter this excitation is discussed in more detail and conditions are given under which the excitation of higher-order modes can be avoided or even suppressed. All results given in the chapter are supported by numerical simulations. In the third chapter the matter-wave optical properties of the dark-state polaritons are studied. They are used to construct a light-matter-wave hybrid Sagnac interferometer. First the principal setup of such an interferometer is sketched and the relevant equations of motion of light-matter interaction in a rotating frame are derived. These form the basis of the following considerations of the dark-state polariton dynamics with and without the influence of external trapping potentials on the matter-wave part of the polariton. It is shown that a sensitivity enhancement compared to a passive laser gyroscope can be anticipated if the gaseous medium is initially in a superfluid quantum state in a ring-trap configuration. To achieve this enhancement, a simultaneous coherence and momentum transfer is furthermore necessary. In the last part of the chapter the quantum sensitivity limit of the hybrid interferometer is derived using the one-particle density matrix equations incorporating the motion of the particles. To this end the Maxwell-Bloch equations are considered perturbatively in the rotation rate of the noninertial frame of reference, and the susceptibility of the considered 3-level \(\Lambda\)-type system is derived in arbitrary order of the probe field in order to determine the optimum operation point. With its help the anticipated quantum sensitivity of the light-matter-wave hybrid Sagnac interferometer is calculated at the shot-noise limit, and the results are compared to state-of-the-art laser and matter-wave Sagnac interferometers.
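The Ornstein-Uhlenbeck dynamics invoked for the pulse compression can be sketched numerically: a linear restoring drift balancing a constant diffusion drives any initial spread towards the stationary variance D/gamma. All parameter values below are illustrative, not taken from the thesis:

```python
import numpy as np

# Euler-Maruyama simulation of the Ornstein-Uhlenbeck process
#   dx = -gamma * x dt + sqrt(2 D) dW :
# the restoring drift counteracts diffusion, so an initially broad ensemble
# contracts to a stationary Gaussian of variance D / gamma.
rng = np.random.default_rng(2)
gamma, D = 2.0, 0.5
dt, steps = 2.5e-3, 4000                       # total time T = 10
x = rng.normal(0.0, 3.0, 20_000)               # broad initial spread (variance 9)
for _ in range(steps):
    x += -gamma * x * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=x.size)

stationary_var = D / gamma                     # = 0.25
final_var = float(np.var(x))                   # compressed ensemble width
```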
The last chapter of the thesis originates from a joint theoretical and experimental project with the AG Bergmann. This chapter no longer considers the dark-state polaritons of the previous two chapters but deals with the more general concept of dark states, and in particular with the transient velocity selective dark states introduced by E. Arimondo et al. In the experiment these states could be measured for the first time. The chapter starts with an introduction to the concept of velocity selective dark states as they occur in a \(\Lambda\)-configuration. Then we introduce the transient velocity selective dark states as they occur in a particular extension of the \(\Lambda\)-system. For later use in the simulations the relevant equations of motion are derived in detail. The simulations are based on the solution of the generalized optical Bloch equations. Finally the experimental setup and procedure are explained and the theoretical and experimental results are compared.
In this thesis, we have dealt with two approaches to modeling credit risk, namely the structural (firm value) approach and the reduced form approach. In the former, the firm value is modeled by a stochastic process, and the first hitting time of this stochastic process to a given boundary defines the default time of the firm. In the existing literature, the stochastic process driving the firm value has generally been chosen as a diffusion process. Therefore, on the one hand it is possible to obtain closed form solutions for the pricing problems of credit derivatives, and on the other hand the optimal capital structure of a firm can be analysed by obtaining closed form solutions for the firm's corporate securities, such as the equity value, the debt value and the total firm value; see Leland (1994). We have extended this approach by modeling the firm value as a jump-diffusion process. The choice of the jump-diffusion process was a crucial step in obtaining closed form solutions for corporate securities. As a result, we have chosen a jump-diffusion process with double exponentially distributed jump heights, which enabled us to analyse the effects of jumps on the optimal capital structure of a firm. In the second part of the thesis, following the reduced form models, we have assumed that the default is triggered by the first jump of a Cox process. Further, following Schönbucher (2005), we have modeled the forward default intensity of a firm as a geometric Brownian motion and derived pricing formulas for credit default swap options in a more general setup than the ones in Schönbucher (2005).
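The firm-value dynamics with double exponentially distributed jump heights (a Kou-type jump-diffusion) can be sketched by Monte Carlo; all parameters are illustrative and not calibrated to the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_kou(v0, mu, sigma, lam, p, eta1, eta2, T, n_steps):
    """One path of a jump-diffusion in log-value: a Brownian part plus
    compound Poisson jumps whose sizes are exponential(eta1) upward with
    probability p and exponential(eta2) downward otherwise."""
    dt = T / n_steps
    logv = np.log(v0)
    for _ in range(n_steps):
        logv += (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal()
        for _ in range(rng.poisson(lam * dt)):
            if rng.random() < p:
                logv += rng.exponential(1.0 / eta1)   # upward jump
            else:
                logv -= rng.exponential(1.0 / eta2)   # downward jump
    return float(np.exp(logv))

# Terminal firm values for a batch of paths (illustrative parameters).
paths = [simulate_kou(100.0, 0.05, 0.2, 1.0, 0.4, 10.0, 5.0, 1.0, 250)
         for _ in range(2000)]
mean_value = float(np.mean(paths))
```

In a structural model the default time would be the first instant such a path crosses a given boundary; the memorylessness of the exponential jump sizes is what makes closed form solutions tractable in this setting.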
The fast development of the financial markets in the last decade has led to the creation of a variety of innovative interest-rate-related products that require advanced numerical pricing methods. Examples in this respect are products with a complicated strong path-dependence, such as a Target Redemption Note, a Ratchet Cap, a Ladder Swap and others. On the other hand, the one-factor Hull and White (1990) type of short rate models, standard in the literature, allows only for a perfect correlation between all continuously compounded spot rates or Libor rates and is thus not suited for pricing innovative products depending on several Libor rates, such as, for example, a "steepener" option. Two-factor short rate models deliver one possible solution to this problem, and in this thesis we consider a two-factor Hull and White (1990) type of short rate process derived from the Heath, Jarrow, Morton (1992) framework by limiting the volatility structure of the forward rate process to a deterministic one. Throughout the thesis, we often use a variety of modified (binomial, trinomial and quadrinomial) tree constructions as the main numerical pricing tool, due to their flexibility and fast convergence, and (when there is no closed-form solution) compare their results with fine-grid Monte Carlo simulations. For the purpose of pricing the already mentioned innovative short-rate-related products, we offer and examine two different lattice construction methods for the two-factor Hull-White type of short rate process, both of which deal easily with modeling the mean reversion of the underlying process and with the strong path-dependence of the priced options. Additionally, we prove that the so-called rotated lattice construction method overcomes the problem, typical of existing two-factor tree constructions, of obtaining negative "risk-neutral probabilities".
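A minimal Monte Carlo sketch of an additive two-factor Gaussian short rate (two mean-reverting factors with correlated Brownian drivers, in the spirit of the two-factor Hull-White family), used here to price a zero-coupon bond. The specific dynamics and all parameters are illustrative assumptions, not the lattice constructions of the thesis:

```python
import numpy as np

def mc_zcb_price(T=1.0, n_paths=20000, n_steps=100,
                 a=0.1, b=0.3, sx=0.01, sy=0.008, rho=-0.7, r0=0.03):
    """Monte Carlo zero-coupon bond price P(0,T) = E[exp(-int_0^T r(s) ds)]
    for r = r0 + x + y with dx = -a x dt + sx dW1, dy = -b y dt + sy dW2,
    corr(dW1, dW2) = rho (Euler discretisation)."""
    rng = np.random.default_rng(1)
    dt = T / n_steps
    x = np.zeros(n_paths)
    y = np.zeros(n_paths)
    integral = np.zeros(n_paths)   # Euler approximation of int_0^T r(s) ds
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        x += -a * x * dt + sx * np.sqrt(dt) * z1
        y += -b * y * dt + sy * np.sqrt(dt) * z2
        integral += (r0 + x + y) * dt
    return np.exp(-integral).mean()
```

With both factors starting at zero, the price stays close to the deterministic value exp(-r0*T), with a small convexity correction from the factor volatilities.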
With a variety of numerical examples, we show that this leads to stable results, especially in cases of high volatility parameters and negative correlation between the base factors (which is typically the case in reality). Further, since Chan et al. (1992) and Ritchken and Sankarasubramanian (1995) showed that option prices are sensitive to the level of the short rate volatility, we examine the pricing of European and American options where the short rate process has a volatility structure of the Cheyette (1994) type. In this context, we examine the application of the two offered lattice construction methods and compare their results with Monte Carlo simulation results for a variety of examples. Additionally, for the pricing of American options with the Monte Carlo method, we expand and implement the simulation algorithm of Longstaff and Schwartz (2000). With a variety of numerical examples, we again compare the stability and the convergence of the different lattice construction methods. Dealing with the problems of pricing strongly path-dependent options, we come across the cumulative Parisian barrier option pricing problem. We notice that in their classical form, cumulative Parisian barrier options have been priced both analytically (in a quasi-closed form) and with a tree approximation (based on the Forward Shooting Grid algorithm; see e.g. Hull and White (1993), Kwok and Lau (2001) and others). However, we offer an additional tree construction method which can be seen as a direct binomial tree integration that uses the analytically calculated conditional survival probabilities. The advantage of the offered method is, on the one hand, that the conditional survival probabilities are easier to calculate than the closed-form solution itself and, on the other hand, that this tree construction is very flexible in the sense that it allows easy incorporation of additional features, such as e.g. a forward-starting one.
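The Longstaff-Schwartz least-squares Monte Carlo idea mentioned above can be sketched for a plain American put under Black-Scholes dynamics, regressing the continuation value on a quadratic polynomial of the stock price. This is the generic textbook version under illustrative parameters, not the extended algorithm of the thesis:

```python
import numpy as np

def lsm_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=20000, seed=0):
    """Least-squares Monte Carlo (Longstaff-Schwartz) price of an
    American put; backward induction with polynomial regression."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)
    # simulate geometric Brownian motion paths (rows = time steps)
    z = rng.standard_normal((n_steps, n_paths))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=0))
    payoff = np.maximum(K - S[-1], 0.0)        # cashflows at maturity
    for t in range(n_steps - 2, -1, -1):
        payoff *= disc                         # discount one step back
        itm = K - S[t] > 0                     # regress only in-the-money paths
        if itm.any():
            x = S[t, itm]
            coeffs = np.polyfit(x, payoff[itm], 2)
            continuation = np.polyval(coeffs, x)
            exercise = K - x
            ex = exercise > continuation       # exercise where immediate value wins
            payoff[np.where(itm)[0][ex]] = exercise[ex]
    return disc * payoff.mean()
```

For the classic benchmark parameters above (S0=36, K=40, r=0.06, sigma=0.2, T=1), the estimate lands near the reference value of roughly 4.48 reported in the original Longstaff-Schwartz study.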
The obtained results are better than the Forward Shooting Grid tree results and are very close to the analytical quasi-closed-form solution. Finally, we turn our attention to pricing another type of innovative interest-rate-related product, namely the Longevity bond, whose coupon payments depend on the survival function of a given cohort. Due to the lack of a market for mortality, for the pricing of Longevity bonds we develop (following Korn, Natcheva and Zipperer (2006)) a framework that combines principles from both insurance and financial mathematics. Further on, we calibrate the existing models for the stochastic mortality dynamics to historical German data and additionally offer new stochastic extensions of the classical (deterministic) models of mortality, such as the Gompertz and Makeham models. Finally, we compare and analyze the results of applying all considered models to the pricing of a Longevity bond on the longevity of German males.
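The deterministic Gompertz law underlying the stochastic extensions can be sketched together with a stylised longevity bond whose coupons are scaled by the cohort's survival probability. Parameter values and the flat discount rate are illustrative assumptions, not calibrated figures from the thesis:

```python
import numpy as np

def gompertz_survival(x, t, B=5e-5, c=1.09):
    """Survival probability t_p_x under the Gompertz force of mortality
    mu(x) = B * c**x:  t_p_x = exp(-B / ln(c) * c**x * (c**t - 1))."""
    return np.exp(-B / np.log(c) * c**x * (c**t - 1.0))

def longevity_bond_price(x=65, maturities=range(1, 11), coupon=1.0, r=0.03,
                         **gompertz_params):
    """Price of a stylised longevity bond: the coupon paid at time t is
    proportional to the surviving fraction t_p_x of the cohort aged x,
    discounted at a flat rate r."""
    return sum(coupon * gompertz_survival(x, t, **gompertz_params)
               * np.exp(-r * t) for t in maturities)
```

Because every coupon is weighted by a survival probability below one, the price is strictly smaller than that of an otherwise identical annuity-certain.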
The topic of this thesis is the coupling of an atomistic and a coarse scale region in molecular dynamics simulations, with the focus on the reflection of waves at the interface between the two scales and the velocity of waves in the coarse scale region for a non-equilibrium process. First, two models from the literature for such a coupling, the concurrent coupling of length scales and the bridging scales method, are investigated for a one-dimensional system with harmonic interaction. It turns out that the concurrent coupling of length scales method leads to the reflection of fine scale waves at the interface, while the bridging scales method gives an approximated system that is not energy conserving. In both models, the velocity of waves in the coarse scale region is incorrect. To circumvent these problems, we present a coupling based on the displacement splitting of the bridging scales method together with choosing appropriate variables in orthogonal subspaces. This coupling allows the derivation of evolution equations for the fine and coarse scale degrees of freedom, together with a reflectionless boundary condition at the interface, directly from the Lagrangian of the system. This leads to an energy-conserving approximated system with a clear separation between modeling errors and errors due to the numerical solution. Possible approximations in the Lagrangian, the numerical computation of the memory integral and other numerical errors are discussed. We further present a method to choose the interpolation from the coarse to the atomistic scale in such a way that the fine scale degrees of freedom in the coarse scale region can be neglected. The interpolation weights are computed by comparing the dispersion relations of the coarse scale equations and the fully atomistic system. With these new interpolation weights, the number of degrees of freedom can be drastically reduced without creating an error in the velocity of the waves in the coarse scale region.
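The dispersion-relation comparison can be illustrated for the 1D harmonic chain, whose exact dispersion relation is omega(k) = 2*sqrt(K/m)*|sin(k a / 2)|; a coarse (continuum) description reproduces only the long-wavelength limit omega ≈ c|k| with sound speed c = a*sqrt(K/m), and the mismatch grows toward the Brillouin zone edge. A sketch with illustrative unit parameters, not the thesis' weight-fitting procedure:

```python
import numpy as np

def dispersion(k, K=1.0, m=1.0, a=1.0):
    """Exact dispersion relation of the 1D harmonic chain with spring
    constant K, particle mass m and lattice spacing a."""
    return 2.0 * np.sqrt(K / m) * np.abs(np.sin(0.5 * k * a))

# Continuum limit: omega ~ c * |k| with sound speed c = a * sqrt(K/m).
k = np.linspace(1e-4, np.pi, 200)   # positive half of the first Brillouin zone
c = 1.0 * np.sqrt(1.0 / 1.0)
relative_error = np.abs(dispersion(k) - c * k) / (c * k)
```

The relative error is negligible for long waves and largest at the zone edge k = pi/a, which is exactly the regime where a naive coarse model propagates waves at the wrong speed.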
We give an alternative derivation of the new coupling with the Mori-Zwanzig projection operator formalism and explain how the method can be extended to non-zero temperature simulations. For the comparison of the results of the approximated system with the fully atomistic one, we use a local stress tensor and the energy in the atomistic region. Examples of the numerical solution of the approximated system for harmonic potentials are given in one and two dimensions.
This thesis introduces so-called cone scalarising functions. By construction, they are compatible with a partial order on the outcome space given by a cone. The quality of the parametrisations of the efficient set given by the cone scalarising functions is then investigated. Here, the focus lies on the (weak) efficiency of the generated solutions, the reachability of efficient points and the continuity of the solution set. Based on cone scalarising functions, Pareto Navigation, a novel interactive multiobjective optimisation method, is proposed. It changes the ordering cone to realise bounds on partial tradeoffs. Besides, its use of an equality constraint for the changing component of the reference point is a new feature. The efficiency of its solutions, the reachability of efficient solutions and continuity are then analysed. Potential problems are demonstrated using a critical example. Furthermore, the use of Pareto Navigation in a two-phase approach and for nonconvex problems is discussed. Finally, its application to intensity-modulated radiotherapy planning is described, and its realisation in a graphical user interface is shown.
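The idea of ordering outcomes by a cone larger than the Pareto cone (which is how bounds on partial tradeoffs are realised) can be sketched with a simple dominance check. The polyhedral cone representation {y : A y >= 0} and the matrix values below are illustrative assumptions, not the thesis' cone scalarising functions:

```python
import numpy as np

def dominates(y1, y2, cone_matrix=None):
    """Return True if outcome y1 dominates y2 (minimisation) with respect
    to the ordering cone {y : A y >= 0}. A = identity recovers the usual
    componentwise Pareto ordering; a wider cone encodes tradeoff bounds."""
    y1 = np.asarray(y1, dtype=float)
    y2 = np.asarray(y2, dtype=float)
    A = np.eye(len(y1)) if cone_matrix is None else np.asarray(cone_matrix, float)
    d = A @ (y2 - y1)              # y1 dominates iff y2 - y1 lies in the cone
    return bool(np.all(d >= 0) and np.any(d > 0))

# A cone wider than the Pareto cone: worsening one objective is acceptable
# if the other improves at least twice as fast (tradeoff bound of 2).
wide_cone = [[1.0, 0.5],
             [0.5, 1.0]]
```

With `wide_cone`, a point like (1.4, 2.0) dominates (2.0, 1.8) even though it is worse in the second objective, because the improvement in the first objective outweighs it under the bounded tradeoff; under the plain Pareto cone the two points are incomparable.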