Termination of Rewriting
(1994)
More and more, term rewriting systems are applied in computer science as well as in mathematics. They are based on directed equations which may be used as non-deterministic functional programs. Termination is a key property for computing with term rewriting systems. In this thesis, we deal with different classes of so-called simplification orderings which are able to prove the termination of term rewriting systems. Above all, we focus on the problem of applying these termination methods to examples occurring in practice. We introduce a formalism that allows clear representations of orderings. The power of classical simplification orderings - namely recursive path orderings, path and decomposition orderings, Knuth-Bendix orderings and polynomial orderings - is improved. Further, we restrict these orderings such that they are compatible with underlying AC-theories, by extending well-known methods as well as by developing new techniques. For automatically generating all these orderings, heuristic-based algorithms are given. A comparison of these orderings with respect to their power and their time complexity concludes the theoretical part of this thesis. Finally, not only a detailed statistical evaluation of examples but also a brief introduction to the design of a software tool integrating the specified approaches is given.
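To illustrate how such orderings operate, the following is a minimal sketch of a recursive path ordering with multiset status, one of the classical simplification orderings named above. The term representation and the precedence are hypothetical choices for illustration, not the formalism of the thesis.

```python
# Minimal recursive path ordering (RPO) with multiset status. Terms are
# tuples ('f', arg1, arg2, ...) for function applications and plain
# strings for variables. The precedence below is an assumed example.

PREC = {'*': 2, '+': 1, 's': 0, '0': 0}   # hypothetical precedence

def is_var(t):
    return isinstance(t, str)

def occurs(x, t):
    """Does variable x occur in term t?"""
    if is_var(t):
        return t == x
    return any(occurs(x, a) for a in t[1:])

def rpo_greater(s, t):
    """True if s > t in the recursive path ordering."""
    if is_var(s):
        return False                      # a variable is minimal
    if is_var(t):
        return occurs(t, s)               # s > x iff x occurs in s
    f, g = s[0], t[0]
    # (1) some argument of s already dominates (or equals) t
    if any(si == t or rpo_greater(si, t) for si in s[1:]):
        return True
    # (2) f > g in the precedence and s dominates every argument of t
    if PREC[f] > PREC[g]:
        return all(rpo_greater(s, tj) for tj in t[1:])
    # (3) equal precedence: compare the argument multisets
    if PREC[f] == PREC[g]:
        return multiset_greater(list(s[1:]), list(t[1:]))
    return False

def multiset_greater(ms, mt):
    """Multiset extension of the ordering: remove common elements, then
    every remaining element of mt must be dominated by one of ms."""
    ms, mt = ms[:], mt[:]
    for e in list(ms):
        if e in mt:
            ms.remove(e)
            mt.remove(e)
    return bool(ms) and all(any(rpo_greater(m, n) for m in ms) for n in mt)

# Orient the distributivity rule  x*(y+z) -> x*y + x*z
lhs = ('*', 'x', ('+', 'y', 'z'))
rhs = ('+', ('*', 'x', 'y'), ('*', 'x', 'z'))
print(rpo_greater(lhs, rhs))  # True: the rule terminates under this RPO
```

Since an RPO is a simplification ordering, orienting every rule of a rewriting system this way suffices to prove its termination.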
In this dissertation, the concept of Gröbner bases is generalized to finitely generated monoid and group rings. Reduction methods are used both for representing the monoid or group elements and for describing the right-ideal congruence in the corresponding monoid or group rings. Since, in general, monoids and in particular groups no longer admit admissible orderings, substantial problems arise in defining a suitable reduction relation: on the one hand, it is difficult to guarantee the termination of a reduction relation; on the other hand, reduction steps are no longer compatible with multiplication, and hence reductions no longer necessarily describe a right-ideal congruence. In this work, various ways of defining reduction relations are presented and examined with respect to the problems described. The concept of saturation, i.e., extending a set of polynomials in such a way that the right-ideal congruence it generates can be captured by reduction, is used to characterize Gröbner bases with respect to the various reductions via s-polynomials. Using these concepts it has been possible, for special classes of monoids, e.g. finite, commutative or free ones, and for various classes of groups, e.g. finite, free, plain, context-free or nilpotent ones, to exploit structural properties in order to define special reduction relations and to develop terminating algorithms for computing Gröbner bases with respect to these reduction relations.
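For readers unfamiliar with the s-polynomial criterion mentioned above, here is a small illustrative sketch in the ordinary commutative polynomial ring, not in the monoid- or group-ring setting of the thesis; the example polynomials are hypothetical.

```python
# s-polynomial of two polynomials in Q[x, y] under the lexicographic
# ordering. A polynomial is a dict {exponent_tuple: coefficient}.
from fractions import Fraction

def lead(p):
    return max(p)  # lex-largest exponent tuple

def spoly(f, g):
    """spoly(f, g) = (lcm/lt(f)) * f / lc(f) - (lcm/lt(g)) * g / lc(g):
    the leading terms cancel by construction."""
    a, b = lead(f), lead(g)
    lcm = tuple(max(i, j) for i, j in zip(a, b))
    r = {}
    for (p, sign, m, lc) in ((f, 1, a, f[a]), (g, -1, b, g[b])):
        shift = tuple(i - j for i, j in zip(lcm, m))
        for e, coef in p.items():
            key = tuple(i + j for i, j in zip(e, shift))
            r[key] = r.get(key, Fraction(0)) + sign * coef / lc
    return {e: c for e, c in r.items() if c}   # drop cancelled terms

# f = x^2 - y,  g = x*y - 1  ->  s-polynomial eliminates x^2*y
f = {(2, 0): Fraction(1), (0, 1): Fraction(-1)}
g = {(1, 1): Fraction(1), (0, 0): Fraction(-1)}
print({e: int(c) for e, c in spoly(f, g).items()})  # {(0, 2): -1, (1, 0): 1}, i.e. x - y^2
```

In Buchberger's setting, a set of polynomials is a Gröbner basis precisely when every such s-polynomial reduces to zero; the thesis develops analogues of this criterion for the reduction relations in monoid and group rings.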
Structure and Construction of Instanton Bundles on P3
At present the standardization of third generation (3G) mobile radio systems is the subject of worldwide research activities. These systems will cope with the market demand for high data rate services and the system requirement for flexibility concerning the offered services and the transmission qualities. However, there will be deficiencies with respect to high capacity if 3G mobile radio systems exclusively use single antennas. A very promising technique for increasing the capacity of 3G mobile radio systems is the application of adaptive antennas. In this thesis, the benefits of using adaptive antennas are investigated for 3G mobile radio systems based on Time Division CDMA (TD-CDMA), which forms part of the European 3G mobile radio air interface standard adopted by ETSI, and is intensively studied within the standardization activities towards a worldwide 3G air interface standard directed by the 3GPP (3rd Generation Partnership Project). One of the most important issues related to adaptive antennas is the analysis of their benefits compared to single antennas. In this thesis, these benefits are explained theoretically and illustrated by computer simulation results for both data detection, which is performed according to the joint detection principle, and channel estimation, which is applied according to the Steiner estimator, in the TD-CDMA uplink. The theoretical explanations are based on well-known solved mathematical problems. The simulation results illustrating the benefits of adaptive antennas are produced by employing a novel simulation concept, which offers a considerable reduction of the simulation time and complexity, as well as increased flexibility concerning the use of different system parameters, compared to the existing simulation concepts for TD-CDMA.
Furthermore, three novel techniques are presented which can be used in systems with adaptive antennas for additionally improving the system performance compared to single antennas. These techniques concern the problems of code-channel mismatch, of user separation in the spatial domain, and of intercell interference, which, as is shown in the thesis, play a critical role in the performance of TD-CDMA with adaptive antennas. Finally, a novel approach for illustrating the performance differences between the uplink and downlink of TD-CDMA based mobile radio systems in a straightforward manner is presented. Since a cellular mobile radio system with adaptive antennas is considered, the ultimate goal is the investigation of the overall system efficiency rather than the efficiency of a single link. In this thesis, the efficiency of TD-CDMA is evaluated through its spectrum efficiency and capacity, which are two closely related performance measures for cellular mobile radio systems. Compared to the use of single antennas, the use of adaptive antennas allows impressive improvements of both spectrum efficiency and capacity. Depending on the mobile radio channel model and the user velocity, improvement factors range from 6.0 to 10.7 for the spectrum efficiency, and from 6.7 to 12.6 for the spectrum capacity of TD-CDMA. Thus, adaptive antennas constitute a promising technique for capacity increase of future mobile communications systems.
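The basic reason multi-element antennas improve the carrier-to-interference situation can be sketched with a toy beamforming calculation. This is a generic illustration, not the simulation concept of the thesis; the array size M = 8, the direction of arrival, and the noise level are all assumed values.

```python
# Coherent combining across M antennas: the desired signal adds in
# amplitude (power ~ M^2) while independent noise adds in power (~ M),
# yielding an M-fold SNR gain after normalization.
import cmath
import math
import random

random.seed(1)
M = 8                                    # assumed number of antenna elements
theta = 0.3                              # assumed direction of arrival (rad)
d = 0.5                                  # element spacing in wavelengths
# steering vector of a uniform linear array
a = [cmath.exp(2j * math.pi * d * m * math.sin(theta)) for m in range(M)]

s = 1.0                                  # unit-power pilot symbol
noise_var = 0.1                          # assumed per-element noise power
x = [s * a[m]
     + random.gauss(0, (noise_var / 2) ** 0.5)
     + 1j * random.gauss(0, (noise_var / 2) ** 0.5)
     for m in range(M)]

# matched-filter (conjugate) beamforming towards theta
y = sum(a[m].conjugate() * x[m] for m in range(M)) / M
print(abs(y))   # close to 1: signal combines coherently, noise averages out
```

The residual noise power at the combiner output is noise_var / M, which is the SNR gain that, on the system level, translates into the carrier-to-interference improvements discussed in the abstract.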
In this thesis a new family of codes for use in optical high bit rate transmission systems with a direct sequence code division multiple access scheme component was developed and its performance examined. These codes were then used as orthogonal sequences for the coding of the different wavelength channels in a hybrid OCDMA/WDMA system. The overall performance was finally compared to a pure WDMA system. The common codes known to date have the problem of needing very long sequence lengths in order to accommodate an adequate number of users. Thus, code sequence lengths of 1000 or more were necessary to reach acceptable bit error ratios with only about 10 simultaneous users. However, these sequence lengths are unacceptable if signals with data rates higher than 100 MBit/s are to be transmitted, let alone for larger numbers of simultaneous users. Starting from the well-known optical orthogonal codes (OOC) and under the assumption of synchronization among the participating transmitters - justified for high bit rate WDM transmission systems - a new code family called "modified optical orthogonal codes" (MOOC) was developed by minimizing the crosscorrelation products of each two sequences. By this, the number of simultaneous users could be increased by several orders of magnitude compared to the codes known so far. The obtained code sequences were then introduced in numerical simulations of an 80 GBit/s DWDM transmission system with 8 channels, each carrying a 10 GBit/s payload. Usual DWDM systems are characterized by enormous efforts to minimize the spectral spacing between the various wavelength channels. These small spacings in combination with the high bit rates lead to very strict demands on the system components like laser diodes, filters, multiplexers etc.
Continuous channel monitoring and temperature regulation of sensitive components are inevitable, but often cannot prevent degradations of the bit error ratio due to aging effects or outer influences like mechanical stress. The obtained results show that - very different from the pure WDM system - by orthogonally coding adjacent wavelength channels with the proposed MOOC, the overall system performance becomes widely independent of system parameters like input powers, channel spacings and link lengths. Nonlinear effects like XPM that introduce interchannel crosstalk are effectively suppressed. Furthermore, one can entirely dispense with the bandpass filters, thus simplifying the receiver structure, which is especially interesting for broadcast networks. A DWDM system upgraded with the OCDMA subsystem shows a very robust behavior against a variety of influences.
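The design criterion behind such code families can be illustrated with a toy calculation of in-phase crosscorrelation: with synchronized transmitters, only the zero-shift correlation between two code words matters, so codes are chosen to make it small. The two sparse 0/1 sequences below are hypothetical examples, not codes from the thesis.

```python
# Periodic correlation of two on-off (0/1) code sequences.
def crosscorr(a, b, shift=0):
    """Number of chip positions where both sequences carry a pulse."""
    n = len(a)
    return sum(a[i] * b[(i + shift) % n] for i in range(n))

c1 = [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0]   # weight-3 code word
c2 = [0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # weight-3 code word

# zero-shift crosscorrelation: the interference one user causes another
print(crosscorr(c1, c2))   # 0: the pulses never collide in phase
# autocorrelation peak: the matched-filter output for the intended user
print(crosscorr(c1, c1))   # 3: equals the code weight
```

Classical OOC design must keep the crosscorrelation small for every relative shift, which forces very long sequences; restricting the requirement to the zero shift, as the synchronization assumption permits, is what allows far more users for a given sequence length.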
Urban design guidelines have been used in Jakarta for controlling the form of the built environment. This planning instrument has been implemented in several central city redevelopment projects, particularly in superblock areas. The instrument has gained popularity and has been implemented in new development and conservation areas as well. Despite its popularity, there is no formal literature on the Indonesian urban design guideline that systematically explains its contents, structure and formulation process. This dissertation attempts to explain the substance of the urban design guideline and the way to control its implementation. Various streams of urban design theories are presented and evaluated in terms of their suitability for attaining a high urbanistic quality in major Indonesian cities. The explanation of the form and the practical application of this planning instrument is elaborated in a comparative investigation of similar instruments in other countries, namely the USA, Britain and Germany. A case study of a superblock development in Jakarta demonstrates the application of the urban design theories and guideline. Currently, the role of the computer in the process of formulating the urban design guideline in Indonesia is merely that of a replacement of the manual method, particularly in the areas of worksheet calculation and design presentation. Further support of computers for urban planning and design tasks has been researched in developed countries, which shows their potential in supporting the decision-making process, enabling public participation, team collaboration, and the documentation and publication of urban design decisions. It is hoped that computer usage in the Indonesian urban design process can catch up with the global trend of multimedia, networking (Internet/Intranet) and interactive functions, which is presented with examples from developed countries.
The study of families of curves with prescribed singularities has a long tradition. Its foundations were laid by Plücker, Severi, Segre, and Zariski at the beginning of the 20th century. Leading to interesting results with applications in singularity theory and in the topology of complex algebraic curves and surfaces, it has attracted the continuous attention of algebraic geometers since then. Throughout this thesis we examine the varieties V(D,S1,...,Sr) of irreducible reduced curves in a fixed linear system |D| on a smooth projective surface S over the complex numbers having precisely r singular points of types S1,...,Sr. We are mainly interested in the following three questions: 1) Is V(D,S1,...,Sr) non-empty? 2) Is V(D,S1,...,Sr) T-smooth, that is, smooth of the expected dimension? 3) Is V(D,S1,...,Sr) irreducible? We would like to answer the questions in such a way that we present numerical conditions depending on invariants of the divisor D and of the singularity types S1,...,Sr which ensure a positive answer. The main conditions which we derive will be of the type inv(S1)+...+inv(Sr) < aD^2+bD.K+c, where inv is some invariant of singularity types, a, b and c are some constants, and K is some fixed divisor. The case that S is the projective plane has been very well studied by many authors, and on other surfaces some results for curves with nodes and cusps have been derived in the past. We, however, consider arbitrary singularity types, and the results which we derive apply to large classes of surfaces, including surfaces in projective three-space, K3-surfaces, products of curves and geometrically ruled surfaces.
Abstract
The main theme of this thesis is graph coloring applications and defining sets in graph theory.
As in the case of block designs, finding defining sets seems to be a difficult problem, and there is no general conclusion. Hence we confine ourselves here to some special types of graphs such as bipartite graphs, complete graphs, etc.
In this work, four new concepts of defining sets are introduced:
• Defining sets for perfect (maximum) matchings
• Defining sets for independent sets
• Defining sets for edge colorings
• Defining sets for maximal (maximum) cliques
Furthermore, some algorithms to find and construct the defining sets are introduced. A review of some known kinds of defining sets in graph theory is also incorporated. In Chapter 2, the basic definitions and some relevant notation used in this work are introduced.
Chapter 3 discusses maximum and perfect matchings and a new concept of a defining set for perfect matchings.
Different kinds of graph colorings and their applications are the subject of Chapter 4.
Chapter 5 deals with defining sets in graph coloring. New results are discussed along with already existing research results, and an algorithm is introduced which makes it possible to determine a defining set of a graph coloring.
In Chapter 6, cliques are discussed, and an algorithm for the determination of cliques using their defining sets is presented. Several examples are included.
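The notion of a defining set for colorings can be made concrete with a small sketch: a partial vertex coloring is a defining set precisely when it extends to exactly one proper k-coloring. The 5-cycle and the partial colorings below are toy examples, not results from the thesis.

```python
# Count the proper k-colorings of a graph that extend a given partial
# coloring; a count of exactly 1 means the partial coloring is defining.
def count_extensions(adj, k, partial):
    """adj: {vertex: [neighbors]}, partial: {vertex: color}."""
    free = [v for v in adj if v not in partial]

    def rec(i, col):
        if i == len(free):
            return 1
        v = free[i]
        total = 0
        for c in range(k):
            if all(col.get(u) != c for u in adj[v]):  # respect all edges
                col[v] = c
                total += rec(i + 1, col)
                del col[v]
        return total

    return rec(0, dict(partial))

# 5-cycle 0-1-2-3-4-0
C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(count_extensions(C5, 3, {0: 0, 1: 1, 2: 0}))        # 2 -> not defining
print(count_extensions(C5, 3, {0: 0, 1: 1, 2: 0, 3: 1}))  # 1 -> defining
```

The backtracking count is exponential in the worst case, which reflects the remark above that finding (small) defining sets is hard in general and motivates restricting attention to special graph classes.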
In the present work, the behavior of thermoplastic composites is examined by means of experimental and numerical investigations. The aim of these investigations is the identification and quantification of the failure behavior and the energy absorption mechanisms of layered, quasi-isotropic thermoplastic fiber-reinforced composites, and the translation of the insights gained into the properties and behavior of a material model for predicting the crash behavior of these materials in transient analyses. Representatives of the investigated classes are undrawn and medium-drawn circular knits and glass-fiber-reinforced thermoplastics (GMT). The investigations on circular-knitted glass-fiber (GF) reinforced polyethylene terephthalate (PET) were part of a research project on the characterization of both the processability and the mechanical behavior. Experiments on GMT and chopped-fiber GMT were likewise carried out for comparison with the knitted fabric and serve to confirm the behavior observed for the knit.
Particular attention is paid to the influence of the specimen geometry on the results, because the crash characteristics depend substantially on the geometry of the tested specimen. For this purpose, a round hat profile was defined to investigate this influence. This special geometry has advantages particularly with respect to energy absorption capacity as well as the manufacturability of thermoplastic composites (TPCs). Impact and perforation tests were carried out to investigate the damage propagation and to characterize the toughness of the materials examined.
Layered TPCs fail mainly in a laminate bending mode with combined intra- and interlaminar shear (transverse shear between plies, partly with transverse shear fractures in individual plies). By coupling the actual failure modes with crash characteristics such as the mean crash stress, indications of the relation between material parameters and absolute energy absorption could be obtained.
Numerical investigations were carried out with an explicit finite element program for the simulation of three-dimensional large deformations. With respect to the cross-sectional lay-up, the model consists of a mesoscopic representation which distinguishes between matrix interlayers and mesoscopic composite plies. The model geometry represents a simplified longitudinal cross-section through the specimen. Effects of friction between impactor and material as well as between individual plies were taken into account. The locally prevailing strain rate as well as the energy and stress-strain distribution across the mesoscopic phases could also be observed. This model clearly shows the various effects arising from the heterogeneous character of the laminate and also provides hints towards explanations of these effects.
Based on the results of the investigations mentioned above, a phenomenological model with a-priori information on the inherent material behavior was proposed. Since the crash behavior is dominated by the heterogeneous character of the material, the phases are considered separately in the model. A simple method for determining the mesoscopic properties is discussed.
To describe the behavior of the thermoplastic matrix system during crushing, a strain-rate- and temperature-dependent plasticity law would suffice. For the description of the behavior of the composite plies, a coupled plasticity and damage formulation is proposed. Such a model can describe both the plastic contribution of the matrix system and the softening caused by fiber-matrix interface failure and fiber fractures. The proposed model distinguishes between load cases with axial crushing and failure without crushing. This subdivision enables an explicit modeling of the material taking into account the specific material state and the geometry for the extraordinary load case leading to progressive failure.
The development of recombinant DNA techniques opened a new era for protein production both in scientific research and industrial application. However, the purification of recombinant proteins is very often quite difficult and inefficient. Therefore, we tried to employ novel techniques for the expression and purification of three pharmacologically interesting proteins: the plant toxin gelonin; a fusion protein of gelonin and the extracellular domain of the α-subunit of the acetylcholine receptor (gelonin-AchR); and human neurotrophin 3 (hNT3). Recombinant gelonin, the acetylcholine receptor α-subunit and their fusion product, gelonin-AchR, were constructed and expressed. The gelonin gene, a 753 bp polynucleotide, was chemically synthesized by Ya-Wei Shi et al. and was kindly provided to us. The gene was first inserted into the vector pUC118, yielding pUC-gel. It was subsequently transferred into pET28a, and pET-gel was expressed in E. coli. The product, gelonin, was soluble and was purified in two steps, showing a homogeneous band corresponding to 28 kD on SDS-PAGE. The expression of the extracellular domain of the α-subunit of AchR always led to insoluble aggregates, and even upon coexpression with the chaperonin GroESL, only very small and hardly reproducible amounts of soluble material were formed. Therefore, recombinant AchR-gelonin was cloned and expressed in the same host. The corresponding fusion protein, gelonin-AchR, again formed aggregates and had to be solubilized in 6 M Gu-HCl for further purification and refolding. The final product, however, was recognized by several monoclonal antibodies directed against the extracellular domain of the α-subunit of AchR as well as by a polyclonal serum against gelonin. Expression and purification of recombinant hNT3 was achieved by the use of a protein self-splicing system. Based on the reported hNT3 DNA sequence, a 380 bp fragment corresponding to a 14 kD protein was amplified from genomic DNA of human whole blood by PCR.
The DNA fragment was cloned into the pTXB1 vector, which contains a DNA fragment of intein and chitin binding domain (CBD). A further construct, pJLA-hNT3, is temperature-inducible. Both constructs expressed the target protein, hNT3-intein-CBD, in E. coli upon induction with IPTG or temperature, however, as aggregates. After denaturation and renaturation, the soluble fusion protein was slowly loaded on an affinity column of chitin beads. A 14 kD hNT3 could be isolated after cleavage with DTT either at 4 °C or 25 °C for 48 h. Based on nerve fiber outgrowth of the dorsal root ganglia of chicken embryos, both hNT3-intein-CBD and hNT3 itself exhibit almost the same biological activity.
Contributions to the application of adaptive antennas and CDMA code pooling in the TD CDMA downlink
(2002)
TD (Time Division)-CDMA is one of the partial standards adopted by 3GPP (3rd Generation Partnership Project) for 3rd Generation (3G) mobile radio systems. An important issue when designing 3G mobile radio systems is the efficient use of the available frequency spectrum, that is, the achievement of a spectrum efficiency as high as possible. It is well known that the spectrum efficiency can be enhanced by utilizing multi-element antennas instead of single-element antennas at the base station (BS). Concerning the uplink of TD-CDMA, the benefits achievable by multi-element BS antennas have been quantitatively studied to a satisfactory extent. However, corresponding studies for the downlink are still missing. This thesis has the goal of making contributions to fill this lack of information. For near-to-reality directional mobile radio scenarios, the TD-CDMA downlink utilizing multi-element antennas at the BS is investigated both on the system level and on the link level. The system level investigations show how the carrier-to-interference ratio can be improved by applying such antennas. As the result of the link level investigations, which rely on the detection scheme Joint Detection (JD), the improvement of the bit error rate by utilizing multi-element antennas at the BS can be quantified. Concerning the link level of TD-CDMA, a number of improvements are proposed which allow considerable performance enhancement of the TD-CDMA downlink in connection with multi-element BS antennas.
These improvements include:
• the concept of partial joint detection (PJD), in which at each mobile station (MS) only a subset of the arriving CDMA signals, including those being of interest to this MS, are jointly detected,
• a blind channel estimation algorithm,
• CDMA code pooling, that is, assigning more than one CDMA code to certain connections in order to offer these users higher data rates,
• maximizing the Shannon transmission capacity by an interleaving concept termed CDMA code interleaving and by advantageously selecting the assignment of CDMA codes to mobile radio channels,
• specific power control schemes, which tackle the problem of different transmission qualities of the CDMA codes.
As a comprehensive illustration of the advantages achievable by multi-element BS antennas in the TD-CDMA downlink, quantitative results concerning the spectrum efficiency for different numbers of antenna elements at the BS conclude the thesis.
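The joint detection principle invoked above can be sketched as a least-squares problem: the stacked received chips obey r = A d + n, and a zero-forcing block linear equalizer recovers d as (A^T A)^{-1} A^T r. The real-valued chips, the two length-4 codes, and the ideal (distortion-free) channel below are simplifying assumptions for illustration, not the receiver of the thesis.

```python
# Zero-forcing joint detection for two synchronous CDMA users.
codes = [[+1, +1, -1, -1],     # hypothetical CDMA code of user 1
         [+1, -1, +1, -1]]     # hypothetical CDMA code of user 2
d = [1.0, -1.0]                # transmitted data symbols

# system matrix A: one column per data symbol, one row per chip
A = [[codes[u][i] for u in range(2)] for i in range(4)]
# noiseless received chip vector r = A d
r = [sum(A[i][u] * d[u] for u in range(2)) for i in range(4)]

# normal equations, solved in closed form for the 2x2 case
g = [[sum(A[i][u] * A[i][v] for i in range(4)) for v in range(2)]
     for u in range(2)]                                    # Gram matrix A^T A
b = [sum(A[i][u] * r[i] for i in range(4)) for u in range(2)]  # A^T r
det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
d_hat = [(g[1][1] * b[0] - g[0][1] * b[1]) / det,
         (g[0][0] * b[1] - g[1][0] * b[0]) / det]
print(d_hat)   # [1.0, -1.0]: both symbols recovered jointly
```

In the partial joint detection concept, each MS would restrict the columns of A to a subset of the active CDMA codes, shrinking the system of normal equations and hence the receiver complexity.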
The immiscible lattice BGK method for solving the two-phase incompressible Navier-Stokes equations is analysed in great detail. Equivalent moment analysis and local differential geometry are applied to examine how interface motion is determined and how surface tension effects can be included such that consistency with the two-phase incompressible Navier-Stokes equations can be expected. The results obtained from the theoretical analysis are verified by numerical experiments. Since the intrinsic interface tracking scheme of immiscible lattice BGK is found to produce unsatisfactory results in two-dimensional simulations, several approaches to improving it are discussed, but all of them turn out to yield no substantial improvement. Furthermore, the intrinsic interface tracking scheme of immiscible lattice BGK is found to be closely connected to the well-known conservative volume tracking method. This result suggests coupling the conservative volume tracking method for determining interface motion with the Navier-Stokes solver of immiscible lattice BGK. Applied to simple flow fields, this coupled method yields much better results than plain immiscible lattice BGK.
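For orientation, the single-phase BGK relaxation rule that underlies the method can be sketched on a D1Q3 lattice; this is an illustrative toy, not the two-phase immiscible scheme analysed in the thesis, and all parameters are assumed.

```python
# One lattice BGK time step: collide, f_i <- f_i - (f_i - f_i^eq)/tau,
# then stream each population along its lattice velocity (periodic domain).
N, tau = 32, 0.8               # lattice size and relaxation time (assumed)
w = [4/6, 1/6, 1/6]            # D1Q3 weights for velocities 0, +1, -1
c = [0, 1, -1]

# initialize at rest with a small density bump
rho0 = [1.0 + (0.1 if i == N // 2 else 0.0) for i in range(N)]
f = [[w[k] * rho0[i] for i in range(N)] for k in range(3)]

def step(f):
    rho = [sum(f[k][i] for k in range(3)) for i in range(N)]
    u = [sum(c[k] * f[k][i] for k in range(3)) / rho[i] for i in range(N)]
    # collision: relax towards the (linearized) equilibrium distribution
    feq = [[w[k] * rho[i] * (1 + 3 * c[k] * u[i]) for i in range(N)]
           for k in range(3)]
    post = [[f[k][i] - (f[k][i] - feq[k][i]) / tau for i in range(N)]
            for k in range(3)]
    # streaming: population k at site i came from site i - c_k
    return [[post[k][(i - c[k]) % N] for i in range(N)] for k in range(3)]

for _ in range(50):
    f = step(f)
rho = [sum(f[k][i] for k in range(3)) for i in range(N)]
print(round(sum(rho), 6))      # 32.1: total mass is conserved exactly
```

The immiscible variant adds a second (color) distribution plus a recoloring step at the interface; the equivalent moment analysis of the thesis examines precisely how those extra steps perturb the moments that appear in this basic update.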
Lung cancer, mainly caused by tobacco smoke, is the leading cause of cancer mortality. Large efforts in prevention and cessation have reduced smoking rates in the U.S. and other countries. Nevertheless, since 1990, rates have remained constant, and it is believed that most of those currently smoking (~25%) are addicted to nicotine and therefore unable to stop smoking. An alternative strategy to reduce lung cancer mortality is the development of chemopreventive mixtures used to reduce cancer risk. Before entering clinical trials, it is crucial to know the efficacy, toxicity and the molecular mechanism by which the active compounds prevent carcinogenesis. 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), N-nitrosonornicotine (NNN) and benzo[a]pyrene (B[a]P) are among the most carcinogenic compounds in tobacco smoke. All have been widely used as model carcinogens and their tumorigenic activities are well established. It is believed that formation of DNA adducts is a crucial step in carcinogenesis. NNK and NNN form 4-hydroxy-1-(3-pyridyl)-1-butanone (HPB)-releasing and methylating adducts, while B[a]P forms B[a]P-tetraol-releasing adducts. Different isothiocyanates (ITCs) are able to prevent NNK-, NNN- or B[a]P-induced tumor formation, but relatively little is known about the mechanism of these preventive effects. In this thesis, the influence of different ITCs on adduct formation from NNK plus B[a]P and from NNN was evaluated. Using an A/J mouse lung tumor model, it was first shown that the formation of HPB-releasing, O6-mG and B[a]P-tetraol-releasing adducts was not affected when NNK and B[a]P were given individually or in combination by gavage. Using the same model, the effects of different mixtures of PEITC and BITC, given by gavage or in the diet, on DNA adduct formation were evaluated. Dietary treatment with phenethyl isothiocyanate (PEITC) or PEITC plus benzyl isothiocyanate (BITC) reduced levels of HPB-releasing adducts by 40-50%.
This is consistent with a previously shown 40% inhibition of tumor multiplicity for the same treatment. In the gavage treatments with ITCs it seemed that PEITC reduced HPB-releasing DNA adducts, while BITC counteracted these effects. Levels of O6-mG were minimally affected by any of the treatments. Levels of B[a]P-tetraol-releasing adducts were reduced by gavaged PEITC and BITC, 120 h after the last carcinogen treatment, while dietary treatment had no effect. We then extended our investigation to F-344 rats by using a similar ITC treatment protocol as in the mouse model. NNK was given in the drinking water and B[a]P in the diet. Dietary PEITC reduced the formation of HPB-releasing globin and DNA adducts in lung but not in liver, while levels of B[a]P-tetraol-releasing adducts were unaffected. Additionally, the effects of PEITC, 3-phenylpropyl isothiocyanate, and their N-acetylcysteine conjugates in the diet on adducts from NNN in drinking water were evaluated in rat esophageal DNA and globin. Using a protocol known to inhibit NNN-induced esophageal tumorigenesis, the levels of HPB-releasing adducts were unaffected by the ITC treatment. The observations that dietary PEITC inhibited the formation of HPB-releasing DNA adducts only in mice, where the control levels were above 1 fmol/µg DNA, and that adduct levels in rat lung were reduced to levels seen in liver, led to the conclusion that in mice and rats there are at least two activation pathways of NNK. One is PEITC-sensitive and responsible for the high adduct levels in lung, and presumably also for the higher carcinogenicity of NNK in lung. The other is PEITC-insensitive and responsible for the remaining adduct levels and tumorigenicity. In conclusion, our results demonstrated that the preventive mechanism by which ITCs inhibit carcinogenesis is only in part due to inhibition of DNA adduct formation and that other mechanisms are involved.
There is a large body of evidence indicating that induction of apoptosis may be a mechanism by which ITCs prevent tumor formation, but further studies are required.
In this work a (Ti,Al,Si)N system was investigated. The main point of the investigation was to study the possibility of obtaining nanocomposite coating structures by deposition of multilayer films of TiN and AlSiN. This helps to understand the relation between the mechanical properties (hardness, Young's modulus) and the microstructure (nanocrystalline with individual phases). Special attention was given to the effects of annealing at 600 °C on microstructural changes in the coatings. The surface hardness, the elastic modulus, and the multilayer diffusion and compositions were the test tools for the comparison between the different coated samples with and without annealing at 600 °C. To achieve this objective, a rectangular aluminum vacuum chamber with three unbalanced sputtering magnetrons for the deposition of thin film coatings from different materials was constructed. The chamber consists mainly of two chambers: the pre-vacuum chamber to load the workpiece, and the main vacuum chamber where the sputtering deposition of the thin film coatings takes place. The workpiece moves on a carriage travelling on rails between the two chambers to the position of the magnetrons, driven by step motors. The chambers are divided by a self-constructed rectangular gate controlled manually from outside the chamber. The chamber was sealed for vacuum use using glue and screws. Therefore, different types of glue were tested not only for the ability to form a uniform thin layer in the gap between the aluminum plates to seal the chamber for vacuum use, but also for low outgassing rates making them suitable for vacuum use. An epoxy was able to fulfill these tasks. The evacuation characteristics of the constructed chamber were improved by minimizing the inner surface outgassing rate.
To this end, the throughput outgassing rate test method was used to compare the short-term (one hour) outgassing rates of samples of the two selected aluminum materials (A2017 and A5353). Different machining methods and treatments for the inner surface of the vacuum chamber were tested. Machining the surface of material A (A2017) with ethanol as coolant fluid reduced its outgassing rate by a factor of 6 compared with a non-machined sample surface of the same material. The reduction of the porous oxide layer on top of the aluminum surface by a pickling process with HNO3 acid, and its protection by producing another passive non-porous oxide layer using an anodizing process, will protect the surface for a longer time and minimize the outgassing rates even under a humid atmosphere. The residual gas analyzer (RGA) test shows that more than 85% of the gases inside the test chamber were water vapour (H2O) and the rest are N2, H2 and CO, so a liquid nitrogen water vapour trap can enhance the chamber pump-down process. As a result it was possible to construct a chamber that can be pumped down using a turbomolecular pump (450 l/s) to the range of 1x10-6 mbar within one hour of evacuation, where the chamber volume is 160 litres and the inner surface area is 1.6 m2. This is a good base pressure for the process of sputtering deposition of hard thin film coatings. Multilayer thin film coatings were deposited to demonstrate that nanostructured thin films within the (Ti,Al,Si)N system can be prepared by reactive magnetron sputtering of multiple thin film layers of TiN and AlSiN. The SNMS spectrometry of the test samples shows that complete diffusion between the different deposited thin film coating layers in each sample takes place, even at low substrate deposition temperature.
The high magnetic flux of the unbalanced magnetrons and the high sputtering power produced a high ion-to-atom flux, which gave high mobility to the deposited atoms. The interaction between the high mobility of the deposited atoms and the ion-to-atom flux was sufficient to enhance the diffusion between the different deposited thin layers. The XRD patterns for this system showed that the structure of the formed mixture consists of two phases: one identified as bulk TiN, and a second, unidentified amorphous phase, which may be SiNx, AlN, or a Ti-Al-Si-N combination. As a result, we were able to deposit nanocomposite coatings by depositing multilayers of TiN and AlSiN thin films using the constructed vacuum chamber.
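The reported base pressure can be put in relation to the chamber figures given above (160 L volume, 1.6 m2 inner surface, 450 L/s turbo pump). A minimal sketch of the outgassing-limited pressure estimate follows; the specific outgassing rate q is an assumed, typical value for treated aluminum after about one hour of pumping, not a value from the thesis.

```python
# Outgassing-limited base pressure estimate for the sputtering chamber.
# Chamber figures from the text: A = 1.6 m^2 inner surface, S = 450 L/s pump.
# q is an ASSUMED typical specific outgassing rate for treated aluminum
# after ~1 h of pumping -- it is not taken from the thesis.

S = 450.0        # effective pumping speed at the chamber, L/s
A_cm2 = 1.6e4    # inner surface area, cm^2 (= 1.6 m^2)
q = 1.0e-8       # assumed specific outgassing rate, mbar*L/(s*cm^2)

Q = q * A_cm2          # total outgassing throughput, mbar*L/s
p_ultimate = Q / S     # outgassing-limited pressure, mbar
print(f"outgassing-limited pressure ~ {p_ultimate:.1e} mbar")
```

With these assumed numbers the estimate lands in the 10-7 mbar range, consistent with the reported 1x10-6 mbar after one hour.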
One crucial assumption of continuous-time financial mathematics is that the portfolio can be rebalanced continuously and that there are no transaction costs. In reality, neither assumption holds: continuous rebalancing is impossible, and each transaction causes costs which have to be subtracted from the wealth. Therefore, we focus on trading strategies which are based on discrete rebalancing - at random or equidistant times - and which take transaction costs into account. These strategies are considered for various utility functions and are compared with the optimal strategies of continuous trading.
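The setting described above can be sketched in a few lines: a constant-proportion strategy is rebalanced only at equidistant dates, and a proportional cost is subtracted at each trade. All parameters (drift, volatility, interest rate, stock fraction, cost rate) are illustrative choices, not values from the thesis.

```python
# Sketch: discrete rebalancing to a constant stock fraction pi with
# proportional transaction costs.  Parameters are illustrative only.
import math
import random

def simulate(pi=0.5, mu=0.08, sigma=0.2, r=0.03, cost=0.001,
             T=1.0, n_rebalance=12, seed=0):
    rng = random.Random(seed)
    dt = T / n_rebalance
    bond, stock = (1 - pi), pi          # initial wealth 1, split pi:(1-pi)
    for _ in range(n_rebalance):
        z = rng.gauss(0.0, 1.0)
        # geometric Brownian motion step for the stock, riskless growth for the bond
        stock *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        bond *= math.exp(r * dt)
        wealth = bond + stock
        trade = abs(pi * wealth - stock)   # amount shifted into/out of the stock
        wealth -= cost * trade             # proportional transaction cost
        bond, stock = (1 - pi) * wealth, pi * wealth
    return bond + stock

print(f"terminal wealth: {simulate():.4f}")
```

Running the same path with `cost=0.0` bounds the frictionless wealth from above, which is the kind of comparison the abstract refers to.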
Matrix Compression Methods for the Numerical Solution of Radiative Transfer in Scattering Media
(2002)
Radiative transfer in scattering media is usually described by the radiative transfer equation, an integro-differential equation which describes the propagation of the radiative intensity along a ray. The high dimensionality of the equation leads to a very large number of unknowns when the equation is discretized. This is the major difficulty in its numerical solution. In the case of isotropic scattering and diffuse boundaries, the radiative transfer equation can be reformulated as a system of integral equations of the second kind, where the position is the only independent variable. By employing the so-called moment equation, we derive an integral equation which is also valid in the case of linearly anisotropic scattering. This equation is very similar to the equation for the isotropic case: no additional unknowns are introduced, and the integral operators involved have very similar mapping properties. The discretization of an integral operator leads to a full matrix. Therefore, due to the large dimension of the matrix in practical applications, it is not feasible to assemble and store the entire matrix. The so-called matrix compression methods circumvent the assembly of the matrix. Instead, the matrix-vector multiplications needed by iterative solvers are performed only approximately, thus reducing the computational complexity tremendously. The kernels of the integral equation describing the radiative transfer are very similar to the kernels of the integral equations occurring in the boundary element method. Therefore, with only slight modifications, the matrix compression methods developed for the latter are readily applicable to the former. As opposed to the boundary element method, the integral kernels for radiative transfer in absorbing and scattering media involve an exponential decay term. We examine how this decay influences the efficiency of the matrix compression methods.
Further, a comparison with the discrete ordinate method shows that discretizing the integral equation may lead to reductions in CPU time and to an improved accuracy especially in case of small absorption and scattering coefficients or if local sources are present.
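The compression idea above can be illustrated with a minimal sketch: the discretized integral operator is never assembled as one full matrix; each well-separated block is replaced by a truncated-SVD low-rank factor pair, and matrix-vector products are performed blockwise. The 1-D exponentially decaying kernel below is only an illustration, not the actual radiative-transfer kernel, and the block/rank parameters are arbitrary choices.

```python
# Sketch of matrix compression: low-rank factorization of well-separated
# blocks of a kernel matrix, with a blockwise matrix-vector product as
# needed by an iterative solver.  The kernel is illustrative only.
import numpy as np

n, bs, rank = 256, 32, 4
x = np.linspace(0.0, 1.0, n)
kernel = lambda s, t: np.exp(-5.0 * np.abs(s[:, None] - t[None, :]))

blocks = []
for i in range(0, n, bs):
    for j in range(0, n, bs):
        A = kernel(x[i:i + bs], x[j:j + bs])
        if abs(i - j) > bs:                  # well separated: compress
            U, s, Vt = np.linalg.svd(A)
            blocks.append((i, j, U[:, :rank] * s[:rank], Vt[:rank]))
        else:                                # near field: keep dense
            blocks.append((i, j, A, None))

def matvec(v):
    """Approximate A @ v without ever forming the full matrix A."""
    y = np.zeros(n)
    for i, j, L, R in blocks:
        y[i:i + bs] += L @ (R @ v[j:j + bs]) if R is not None else L @ v[j:j + bs]
    return y

# full matrix assembled here only to verify the compressed product
v = np.random.default_rng(0).standard_normal(n)
exact = kernel(x, x) @ v
rel_err = np.linalg.norm(matvec(v) - exact) / np.linalg.norm(exact)
print("relative matvec error:", rel_err)
```

The exponential decay of the kernel is precisely what makes the far-field blocks nearly low-rank, which is the effect the thesis examines for the efficiency of these methods.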
This thesis deals with the investigation of the dynamics of optically excited (hot) electrons in thin and ultra-thin layers. The main interest concerns the temporal behaviour of the dissipation of energy and momentum of the excited electrons. The relevant relaxation times lie in the femtosecond region. Two-photon photoemission is known to be an adequate tool for analysing such dynamical processes in real time. This work expands the knowledge in the fields of electron relaxation in ultra-thin silver layers on different substrates, as well as in adsorbate states in the bandgap of a semiconductor, and it contributes to the understanding of spin transport through an interface between a metal and a semiconductor. The primary goal was to test the theoretical prediction by reducing the observed crystal in at least one direction: one expects a change of the electron relaxation behaviour when the crystal's shape is altered from a 3d bulk to a 2d (ultra-thin) layer, since below a certain layer thickness the electron gas becomes two-dimensional. This behaviour could be confirmed in this work. In an approximately 3 nm thin silver layer on graphite, the hot electrons show a jump to longer relaxation times over the whole accessible energy range. This is the first time that the temporal evolution of the relaxation of excited electrons could be observed during the transition from a 3d to a 2d system. In order to reduce or even eliminate the influence of the substrate, the system of silver on the semiconductor GaAs, which has a bandgap of 1.5 eV at the Gamma point, was investigated. The observations of the relaxation behaviour of hot electrons in ultra-thin silver layers of different thicknesses on this semiconductor showed that at metal-insulator junctions, plasmons in the silver and at the interface, as well as electrons cascading down from higher energies, have a large influence on the dissipation of momentum and energy.
This comes mainly from the band bending of the semiconductor and from the electrons which are excited in the GaAs. Limiting the silver layer on GaAs in one direction led to the expected formation of quantum well states (QWS) in the bandgap. These adsorbate states have quantised energy and momentum values which are directly connected to the layer thickness and the standing electron wave therein. With the experiments of this work, published values could not only be completed and confirmed, but the time evolution of such a QWS could also be determined. It turned out that this QWS can only be filled by electrons which move from the lower edge of the conduction band of the semiconductor into the silver and undergo cascading steps there. By means of the system silver on GaAs, and the known fact that excitation of electrons in GaAs with circularly polarised light of 1.5 eV produces spin-polarised electrons in the conduction band, it became possible to contribute to the much-discussed topic of spin injection. The main target of spin injection is the transfer of spin-polarised electrons out of a ferromagnet into a semiconductor, in order to develop spin-dependent switches and memories. It could be demonstrated here that spin-polarised electrons from GaAs can move through the interface into the silver and be photoemitted from there, with their spin polarisation still detectable. As a third system, ultra-thin silver layers were deposited on the insulator MgO, which has a bandgap of 7.8 eV. In this system, too, a change in the relaxation time was observed when the silver layer was reduced from thick to ultra-thin. Additionally, an extremely large relaxation time was found at layer thicknesses of 0.6 - 1.2 nm.
This time is an order of magnitude longer than in thick films, which is a consequence of two factors: first, the reduction of the phase space due to the confinement of the electron gas in the z-direction, and second, the slower thermalisation of the electron gas due to fewer accessible scattering partners.
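The relaxation times discussed above are extracted from time-resolved two-photon-photoemission traces, where the pump-induced population decays roughly exponentially with the pump-probe delay. A minimal sketch of such an extraction follows; the synthetic trace (tau = 120 fs plus noise) merely stands in for measured data and is not from the thesis.

```python
# Sketch: extracting a femtosecond relaxation time from a (synthetic)
# exponentially decaying population trace via a log-linear fit.
import numpy as np

rng = np.random.default_rng(1)
delay = np.linspace(0.0, 600.0, 40)        # pump-probe delay, fs
tau_true = 120.0                           # assumed "true" relaxation time, fs
signal = np.exp(-delay / tau_true) * (1 + 0.02 * rng.standard_normal(delay.size))

# linear fit of log(signal) vs delay: slope = -1/tau
slope, intercept = np.polyfit(delay, np.log(signal), 1)
tau_fit = -1.0 / slope
print(f"fitted relaxation time: {tau_fit:.0f} fs")
```

In practice the measured trace is a convolution with the laser cross-correlation, so real analyses deconvolve or fit a convolved model rather than a bare exponential.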
Microsystem technology has been a fast evolving field over the last few years. Its ability to handle volumes in the sub-microliter range makes it very interesting for potential application in fields such as biology, medicine and pharmaceutical research. However, the use of micro-fabricated devices for the analysis of liquid biological samples still has to prove its applicability for many particular demands of basic research. This is particularly true for samples consisting of complex protein mixtures. The presented study therefore aimed at evaluating whether a commonly used glass-coating technique from the field of micro-fluidic technology can be used to fabricate an analysis system for molecular biology. It was ultimately motivated by the demand to develop a technique that allows the analysis of biological samples at the single-cell level. Gene expression at the transcription level is initiated and regulated by DNA-binding proteins. To fully understand these regulatory processes, it is necessary to monitor the interaction of specific transcription factors with other elements - proteins as well as DNA sites - in living cells. One well-established method for such analysis is the Chromatin Immunoprecipitation (ChIP) assay. To map protein-DNA interactions, living cells are treated with formaldehyde in vivo to cross-link DNA-binding proteins to their resident sites. The chromatin is then broken into small fragments, and specific antibodies against the protein of interest are used to immunopurify the chromatin fragments to which those factors are bound. After purification, the associated DNA can be detected and analyzed using Polymerase Chain Reaction (PCR). Current ChIP technology is limited, as it needs a relatively large number of cells, while there is increasing interest in monitoring DNA-protein interactions in very few, if not single, cells. Most notably, this is the case in research on early organism development (embryogenesis).
To investigate whether microsystem technology can be used to analyze DNA-protein complexes from samples containing chromatin from only a few cells, a new setup for fluid transport in glass capillaries of 75 µm inner diameter was developed, forming an array of micro-columns for parallel affinity chromatography. The inner capillary walls were antibody-coated using a silane-based protocol. The remaining surface was made chemically inert by saturating free binding sites with suitable biomolecules. Variations of this protocol were tested. Furthermore, the sensitivity of the PCR method for detecting immunoprecipitated protein-DNA complexes was improved, resulting in the reliable detection of about 100 DNA fragments from chromatin. The aim of the study was to successively decrease the amount of analyzed chromatin in order to investigate the lower limits of this technology with regard to sensitivity and specificity of detection. The Drosophila GAGA transcription factor was used as an established model system. The protein has already been analyzed in several large-scale ChIP experiments, and antibodies of excellent specificity are available. The results of the study revealed that this approach is not easily applicable to "real-world" biological samples with regard to volume reduction and specificity. In particular, material that non-specifically adsorbed to the capillary surfaces outweighed the specific antibody-antigen interaction the system was designed for. It became clear that complex biological structures, such as chromatin-protein compositions, are not as easily accessible by techniques based on chemically modified glass surfaces as pre-purified samples are. For the investigated system, it became evident that there is a need for more research that goes beyond the scope of this work. It is necessary to develop novel coatings and materials to prevent non-specific adsorption.
In addition to improving existing techniques, fundamentally new concepts, such as microstructures in biocompatible polymers or liquid transport on hydrophobic stripes on planar substrates to minimize surface contact, may also help to advance the miniaturization of biological experiments.
In this work we present and estimate an explanatory model with a predefined system of explanatory equations, a so-called lag-dependent model. We present a locally optimal lag estimator based on blocked neural networks, together with consistency theorems. We define change points in the context of the lag-dependent model and present a powerful algorithm for change point detection in high-dimensional, highly dynamic systems. We also present a special kind of bootstrap for approximating the distribution of statistics of interest in dependent processes.
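The bootstrap for dependent processes mentioned above resamples contiguous blocks of the series rather than individual observations, so that short-range dependence is preserved inside each block. A minimal sketch of such a block bootstrap follows; the block length, the AR(1)-style toy series, and the choice of the mean as the statistic are illustrative, not the thesis's actual construction.

```python
# Sketch: moving-block bootstrap for a serially dependent time series.
# Block length, toy data, and statistic are illustrative choices only.
import random

def block_bootstrap(series, block_len, n_boot, seed=0):
    rng = random.Random(seed)
    n = len(series)
    starts = range(n - block_len + 1)
    stats = []
    for _ in range(n_boot):
        sample = []
        while len(sample) < n:                  # glue random blocks together
            s = rng.choice(starts)
            sample.extend(series[s:s + block_len])
        sample = sample[:n]
        stats.append(sum(sample) / n)           # statistic of interest: the mean
    return stats

# AR(1)-like toy series with serial dependence
rng = random.Random(42)
x = [0.0]
for _ in range(199):
    x.append(0.7 * x[-1] + rng.gauss(0.0, 1.0))

boot_means = block_bootstrap(x, block_len=10, n_boot=500)
center = sum(boot_means) / len(boot_means)
boot_std = (sum((m - center) ** 2 for m in boot_means) / len(boot_means)) ** 0.5
print(f"bootstrap std of the sample mean: {boot_std:.3f}")
```

An i.i.d. bootstrap on the same series would understate this standard deviation, because it destroys the positive serial correlation that the blocks retain.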