A new and systematic basic approach to force- and vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation between the deformable object and rigid obstacles and are derived from the object image and its features. We give an enumeration of possible contact states and discuss the main characteristics of each state. We investigate the performance of robust transitions between the contact states and derive criteria and conditions for each of the states and for two sensor systems, i.e. a vision sensor and a force/torque sensor. This results in a new, task-independent approach to the handling of deformable objects and in a sensor-based implementation of manipulation primitives for industrial robots; sensor processing thus proves an appropriate solution for our problem. Finally, we apply the concept of contact states and state transitions to the description of a typical assembly task. Experimental results show the feasibility of our approach: a robot performs several contact state transitions which can be combined for solving a more complex task.
Dynamics of Excited Electrons in Copper and Ferromagnetic Transition Metals: Theory and Experiment
(2000)
Both theoretical and experimental results for the dynamics of photoexcited electrons at surfaces of Cu and the ferromagnetic transition metals Fe, Co, and Ni are presented. A model for the dynamics of excited electrons is developed, which is based on the Boltzmann equation and includes effects of photoexcitation, electron-electron scattering, secondary electrons (cascade and Auger electrons), and transport of excited carriers out of the detection region. From this we determine the time-resolved two-photon photoemission (TR-2PPE) signal. Thus a direct comparison of calculated relaxation times with experimental results by means of TR-2PPE becomes possible. The comparison indicates that the magnitudes of the spin-averaged relaxation time t and of the ratio t_up/t_down of majority and minority relaxation times for the different ferromagnetic transition metals result not only from density-of-states effects, but also from different Coulomb matrix elements M. Taking M_Fe > M_Cu > M_Ni = M_Co we get reasonable agreement with experiments.
The increasing parallelisation of development processes as well as the ongoing trends towards virtual product development and outsourcing of development activities strengthen the need for 3D co-operative design via communication networks. In the field of CAx, none of the existing systems meets all the requirements of a very complex process chain. This leads to a tremendous need for the integration of heterogeneous CAx systems. Therefore, MACAO, a platform-independent client for a distributed CAx component system, the so-called ANICA CAx object bus, is presented. The MACAO client is able to access objects and functions provided by different CAx servers distributed over a communication network. Thus, MACAO is a new solution for engineering design and visualisation in shared distributed virtual environments. This paper describes the underlying concepts, the current prototype implementation, as well as possible application scenarios in the area of co-operative design and visualisation.
In recent years, the trend in the CAx field has clearly been towards 3D modelling. However, the use of this technology only pays off economically if the generated data do not merely serve as a replacement for 2D drawings, but are used throughout the entire product development process, thereby ensuring data continuity. Meanwhile, a broad spectrum of applications is in use; examples include calculation and simulation programs or 3D product visualisation in non-technical areas (e.g. marketing, sales). Many CA systems offer a large selection of modules for almost all areas of product development, but no system, regardless of its complexity, is able to meet all the requirements of its users. Therefore, special programs for individual problems are used to an ever greater extent. The user is confronted with difficulties, however, when trying to employ special applications from different system vendors for special problems. To enable the integration of the various programs, he has to rely on neutral standard interfaces for product data exchange (IGES, VDAFS, STEP), where loss of information has to be expected. In addition, he has to familiarise himself with differing user interfaces. Aware of these problems, the working group "CAD/CAM-Strategien der deutschen Automobilindustrie" developed a proposal for an open CAx system architecture /1/, /2/, /3/. This architecture should be able to integrate all CAx components used in the course of the product development process. Among other things, it should fulfil the following requirements: openness, interoperability, investment security, elimination of the user's forced dependence on a single system vendor, and avoidance of redundant systems. Taking into account the international standards STEP for product data modelling and CORBA for distributed object-oriented systems, both briefly described in the following sections, was an important prerequisite for meeting these requirements.
For the next generation of high data rate magnetic recording above 1 Gbit/s, a better understanding of the switching processes for both recording heads and media will be required. In order to maximize the switching speed for such devices, the magnetization precession after the magnetic field pulse termination needs to be suppressed to a maximum degree. It is demonstrated experimentally for ferrite films that the appropriate adjustment of the field pulse parameters and/or the static applied field may lead to a full suppression of the magnetization precession immediately upon termination of the field pulse. The suppression is explained by taking into account the actual direction of the magnetization with respect to the static field direction at the pulse termination.
We describe a method for approximating stress gradients from discrete stress data. A conventional discretization of the derivatives from function values leads to stability problems, so a means of controlling the derivatives is necessary (regularization). We first determine the functional of the potential energy and additionally introduce an error functional that enables the fit to the prescribed discrete values. By weighting the two functionals and minimizing the total functional, the desired balance between error control in differentiation on the one hand and control of the errors in the boundary values on the other is obtained.
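A schematic of such a weighted total functional, in illustrative notation (the symbols Π, α, x_i, d_i are assumptions, not taken from the paper):

```latex
% Potential-energy term \Pi(u) controls the smoothness of the reconstructed field,
% the error term penalizes the misfit to the discrete stress data (x_i, d_i),
% and the weight \alpha balances differentiation error against data-fitting error.
\begin{equation*}
  J_\alpha(u) \;=\; \Pi(u) \;+\; \alpha \sum_{i=1}^{N} \bigl(u(x_i) - d_i\bigr)^2
  \;\longrightarrow\; \min_u .
\end{equation*}
```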
Within this paper we review image distortion measures. A distortion measure is a criterion that assigns a "quality number" to an image. We distinguish between mathematical distortion measures and distortion measures incorporating a priori knowledge about the imaging devices (e.g. satellite images), image processing algorithms, or human physiology. We consider representative examples of the different kinds of distortion measures and discuss them.
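As a concrete illustration of the purely mathematical kind of distortion measure, the following sketch computes the mean squared error and the derived PSNR of a distorted grayscale image; the function names and the 8-bit value range are assumptions for illustration, not measures taken from the paper:

```python
import numpy as np

def mse(original, distorted):
    """Mean squared error between two equally sized grayscale images."""
    diff = original.astype(np.float64) - distorted.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, distorted, max_value=255.0):
    """Peak signal-to-noise ratio in dB; larger values mean less distortion."""
    m = mse(original, distorted)
    if m == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / m)
```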
The quality of freeform surfaces is one of the major topics of CAD/CAM. Aesthetic and technical demands require the construction of high quality surfaces with strong shape conditions. Quality-diminishing properties like dents or flat points have to be eliminated while approximation conditions must hold at the same time. Our approach combines quality and approximation criteria into a nonlinear multicriteria optimization problem and achieves an automatic approximation and fitting process.
In multicriteria optimization problems the connectedness of the set of efficient solutions (the Pareto set) is of special interest, since it would allow the efficient solutions to be determined without considering non-efficient solutions in the process. For the multicriteria problem of minimizing matchings, the set of efficient solutions is not connected. The set of minimal solutions E_pot with respect to the power-ordered set contains the Pareto set. In this work, theorems about the connectedness of E_pot are given. These lead to an automated process to detect all efficient solutions.
We consider the problem of locating a line or a line segment in three-dimensional space, such that the sum of distances from the linear facility to a given set of points is minimized. An example is planning the drilling of a mine shaft, with access to ore deposits through horizontal tunnels connecting the deposits and the shaft. Various models of the problem are developed and analyzed, and efficient solution methods are given.
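For orientation, a minimal numerical sketch of the objective: the sum of Euclidean distances from given points to a line in 3D, minimized here with a generic derivative-free optimizer. This only illustrates the problem statement; it is not one of the efficient solution methods developed in the paper, and the parametrization of the line by a point and two spherical angles is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def sum_of_distances(params, points):
    """Sum of distances from 'points' to the line a + t*d, where d is the unit
    direction given by spherical angles (theta, phi)."""
    ax, ay, az, theta, phi = params
    a = np.array([ax, ay, az])
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    diff = points - a
    perp = diff - np.outer(diff @ d, d)   # components orthogonal to the line
    return np.linalg.norm(perp, axis=1).sum()

points = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.5], [2.0, 1.0, 3.0], [4.0, 0.0, 1.0]])
res = minimize(sum_of_distances, x0=np.zeros(5) + 0.1, args=(points,), method="Nelder-Mead")
print(res.fun)   # locally optimal sum of distances
```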
Many polynomially solvable combinatorial optimization problems (COP) become NP-hard when we require solutions to satisfy an additional cardinality constraint. This family of problems has been considered only recently. We study a new problem of this family: the k-cardinality minimum cut problem. Given an undirected edge-weighted graph, the k-cardinality minimum cut problem is to find a partition of the vertex set V into two sets V_1, V_2 such that the number of edges between V_1 and V_2 is exactly k and the sum of the weights of these edges is minimal. A variant of this problem is the k-cardinality minimum s-t cut problem, where s and t are fixed vertices and we have the additional requirement that s belongs to V_1 and t belongs to V_2. We also consider other variants where the number of edges of the cut is constrained to be either less or greater than k. For all these problems we show complexity results in the most significant graph classes.
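A brute-force reference implementation of the k-cardinality minimum cut, useful only on very small instances, makes the problem statement concrete (the function and variable names are illustrative; the paper is concerned with complexity results, not with this enumeration):

```python
import itertools

def k_cardinality_min_cut(vertices, weighted_edges, k):
    """Enumerate all partitions (V1, V2) of the vertex set and return the one whose
    cut has exactly k edges and minimal total weight.  Exponential in |V|."""
    best = None
    vs = list(vertices)
    for r in range(1, len(vs)):                       # V1 is a proper, non-empty subset
        for subset in itertools.combinations(vs, r):
            v1 = set(subset)
            crossing = [(u, v, w) for (u, v, w) in weighted_edges
                        if (u in v1) != (v in v1)]
            if len(crossing) == k:
                weight = sum(w for _, _, w in crossing)
                if best is None or weight < best[0]:
                    best = (weight, v1, set(vs) - v1)
    return best

# toy instance: find a minimum-weight cut with exactly 2 crossing edges
print(k_cardinality_min_cut({1, 2, 3, 4},
                            [(1, 2, 3.0), (2, 3, 1.0), (3, 4, 2.0), (1, 4, 5.0)], 2))
```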
It is well-known that some of the classical location problems with polyhedral gauges can be solved in polynomial time by finding a finite dominating set, i.e. a finite set of candidates guaranteed to contain at least one optimal location. In this paper it is first established that this result holds for a much larger class of problems than currently considered in the literature. The model for which this result can be proven includes, for instance, location problems with attraction and repulsion, and location-allocation problems. Next, it is shown that the approximation of general gauges by polyhedral ones in the objective function of our general model can be analyzed with regard to the subsequent error in the optimal objective value. For the approximation problem two different approaches are described, the sandwich procedure and the greedy algorithm. Both of these approaches lead - for fixed epsilon - to polynomial approximation algorithms with accuracy epsilon for solving the general model considered in this paper.
The paper concerns the equilibrium state of ultra-small semiconductor devices. In the quantum drift diffusion model, electrons and holes behave as a mixture of charged quantum fluids. Typically, the scaled Planck constant of holes, \(\xi\), is significantly smaller than the scaled Planck constant of electrons. By formally setting \(\xi=0\), a well-posed differential-algebraic system arises. Existence and uniqueness of an equilibrium solution is proved. A rigorous asymptotic analysis shows that this equilibrium solution is the limit (in a rather strong sense) of the quantum systems as \(\xi \to 0\). In particular, the ground state energies of the quantum systems converge to the ground state energy of the differential-algebraic system as \(\xi \to 0\).
Mean field equations arise as steady state versions of convection-diffusion systems where the convective field is determined as solution of a Poisson equation whose right hand side is affine in the solutions of the convection-diffusion equations. In this paper we consider the repulsive coupling case for a system of 2 convection-diffusion equations. For general diffusivities we prove the existence of a unique solution of the mean field equation by a variational technique. Also we analyse the small-Debye-length limit and prove convergence to either the so-called charge-neutral case or to a double obstacle problem for the limiting potential depending on the data.
The mechanical properties of composite materials and material compounds are determined to a considerable extent by the properties of the interface; often the interface is even the weakest element. A reliable description of the mechanical interface quality is therefore of great importance for the choice of optimal material combinations and contact formation processes. With mechanical-technological characterization methods, the target quantities, such as the interfacial shear strength, are often subject to strong scatter. In the present work, the concept of linear-elastic fracture mechanics is therefore employed for interface characterization. Finite element models are set up for the stress analysis, required for this purpose, of the specimen containing an opening-dominated interface crack. Following experiment and data reduction, the prerequisites for the applicability of the linear-elastic concept are verified.

Since the interface toughness G_c depends sensitively on the biaxiality ψ (mode mixity) of the local stress state, a loading device is designed with which ψ can be varied continuously over the entire mixed-mode interval accessible to linear-elastic fracture mechanics. In addition to the determination of the G_c(ψ) interface fracture curve, the crack growth was followed by light microscopy and the influence of thermal residual stresses was estimated.

Nonlinear finite element models are used to investigate the influence of crack face contact and of plastic flow, as a small-scale perturbation, on the mode dependence of the interface fracture energy. In both examples, a link between strength theory and fracture mechanics is established by assuming strain criteria inside the respective nonlinearity zone. For the case of small-scale plasticity, the ligament normal stresses are additionally evaluated within a weakest-link model for cracked bodies. It turns out that the U-shape of the G_c(ψ) interface fracture curve can be reproduced qualitatively if the ligament normal stresses are taken as the crack-driving force.
The aim of this article is to show that moment approximations of kinetic equations based on a Maximum Entropy approach can suffer from severe drawbacks if the kinetic velocity space is unbounded. As an example, we study the Fokker-Planck equation, for which explicit expressions for the moments of solutions to Riemann problems can be derived. The quality of the closure relation obtained from the Maximum Entropy approach as well as the Hermite/Grad approach is studied in the case of five moments. It turns out that the Maximum Entropy closure is even singular in equilibrium states, while the Hermite/Grad closure behaves reasonably. In particular, the admissible moments may lead to arbitrarily large speeds of propagation, even for initial data arbitrarily close to global equilibrium.
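To make the construction concrete, here is the generic five-moment setting in schematic notation (not the paper's exact notation). The Maximum Entropy ansatz needs a negative highest Lagrange multiplier to be integrable on an unbounded velocity space, and that multiplier vanishes for a Maxwellian, which is one way to see why the closure can degenerate near equilibrium, as stated above.

```latex
% Five moments of the kinetic density f(v) on an unbounded velocity space
\begin{align*}
  m_k &= \int_{\mathbb{R}} v^k\, f(v)\,\mathrm{d}v, \qquad k = 0,\dots,4, \\[2pt]
  % Maximum Entropy closure: f maximizes entropy under the five moment constraints
  f_{\mathrm{ME}}(v) &= \exp\!\Bigl(\textstyle\sum_{k=0}^{4} \lambda_k v^k\Bigr),
  \qquad \lambda_4 < 0 \text{ required for integrability}, \\[2pt]
  % Hermite/Grad closure: expansion of a Maxwellian M(v) in Hermite polynomials
  f_{\mathrm{Grad}}(v) &= M(v)\,\textstyle\sum_{k=0}^{4} c_k H_k(v).
\end{align*}
```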
Performance of some preconditioners for the p - and hp -version of the finite element method in 3D
(2000)
The balance space approach (introduced by Galperin in 1990) provides a new view on multicriteria optimization. Looking at deviations from global optimality of the different objectives, balance points and balance numbers are defined when either different or equal deviations for each objective are allowed. Apportioned balance numbers allow the specification of proportions among the deviations. Through this concept the decision maker can be involved in the decision process. In this paper we prove that the apportioned balance number can be formulated by a min-max operator. Furthermore, we prove some relations between apportioned balance numbers and the balance set, and show how balance numbers are represented in the balance set. The main results are necessary and sufficient conditions for the balance set to be exhaustive, which means that by multiplying a vector of weights (proportions of deviation) with its corresponding apportioned balance number a balance point is attained. The results are used to formulate an interactive procedure for multicriteria optimization. All results are illustrated by examples.
This paper provides an annotated bibliography of multiple objective combinatorial optimization, MOCO. We present a general formulation of MOCO problems, describe the main characteristics of MOCO problems, and review the main properties and theoretical results for these problems. One section is devoted to a brief description of the available solution methodology, both exact and heuristic. The main part of the paper is devoted to an annotation of the existing literature in the field organized problem by problem. We conclude the paper by stating open questions and areas of future research. The list of references comprises more than 350 entries.
In this paper we address the question of how many objective functions are needed to decide whether a given point is a Pareto optimal solution for a multicriteria optimization problem. We extend earlier results showing that the set of weakly Pareto optimal points is the union of Pareto optimal sets of subproblems and show their limitations. We prove that for strictly quasi-convex problems in two variables Pareto optimality can be decided by consideration of at most three objectives at a time. Our results are based on a geometric characterization of Pareto, strict Pareto and weak Pareto solutions and Helly's Theorem. We also show that a generalization to quasi-convex objectives is not possible, and state a weaker result for this case. Furthermore, we show that a generalization to strictly Pareto optimal solutions is impossible, even in the convex case.
In this paper we investigate the problem of finding the Nadir point for multicriteria optimization problems (MOP). The Nadir point is characterized by the componentwise maximal values of efficient points for (MOP). It can be easily computed in the bicriteria case. However, in general this problem is very difficult. We review some existing methods and heuristics and propose some new ones. We propose a general method to compute Nadir values for the case of three objectives, based on theoretical results valid for any number of criteria. We also investigate the use of the Nadir point for compromise programming, when the goal is to be as far away as possible from the worst outcomes. We prove some results about (weak) Pareto optimality of the resulting solutions. The results are illustrated by examples.
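For a finite, explicitly enumerated outcome set the Nadir point is straightforward to compute, in contrast to the general situation described above where the efficient set is not available; the following sketch (all objectives to be minimized, names illustrative) simply applies the componentwise-maximum definition:

```python
import numpy as np

def nadir_point(outcomes):
    """Componentwise maximum over the Pareto-optimal outcomes of a finite set
    (all objectives are minimized)."""
    outcomes = np.asarray(outcomes, dtype=float)
    efficient = [y for y in outcomes
                 if not any(np.all(z <= y) and np.any(z < y) for z in outcomes)]
    return np.max(np.array(efficient), axis=0)

# bicriteria toy example: efficient points are (1,4), (2,2), (4,1) -> Nadir = (4,4)
print(nadir_point([[1, 4], [2, 2], [4, 1], [3, 3], [5, 5]]))
```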
One aspect of formal logic consists in investigating how the logical consequences (in particular the tautologies) of a given set of formulas can be derived step by step using certain rules. Here, logic is characterized by a consistent separation of syntax and semantics. This treatise presents, as examples, the tableau calculus and the calculus of natural deduction.
Due to the interconnected structures and interdependencies of dynamical systems, the underlying mathematical models usually become very complex and demand a high level of mathematical understanding and skill. With the use of special software, however, complex cause-effect networks of dynamical systems can be created interactively even without deep expertise in mathematics or computer science. As an example, we design, step by step, the model of a miniature world and make statements about its population development.
We consider some continuous-time Markowitz-type portfolio problems that consist of maximizing expected terminal wealth under the constraint of an upper bound for the Capital-at-Risk. In a Black-Scholes setting we obtain closed-form explicit solutions and compare their form and implications to those of the classical continuous-time mean-variance problem. We also consider more general price processes which allow for larger fluctuations in the returns.
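Schematically, the problem class has the following form; the symbols are illustrative and not taken from the paper (in the Capital-at-Risk literature, CaR is typically defined via a low quantile of terminal wealth measured against the pure bond investment):

```latex
% Maximize expected terminal wealth over admissible portfolio strategies \pi,
% subject to an upper bound C on the Capital-at-Risk of the strategy.
\begin{equation*}
  \max_{\pi}\; \mathbb{E}\bigl[X_T^{\pi}\bigr]
  \qquad \text{subject to} \qquad \mathrm{CaR}(x_0,\pi,T) \le C .
\end{equation*}
```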
For transferring existing knowledge into new projects, reuse has become an important factor in today's software industry. However, to put reuse into practice, reusable artifacts have to be stored somewhere and must be offered to (re-)users on demand. For this purpose, advanced reuse repository systems, like, for instance, instantiations of the Experience Base concept, are quite frequently used. Many people, from different projects, have to access such a repository at various phases of software development processes to retrieve or store reusable data. In order to fulfill the given tasks, each of these users has specific needs. Taking this into account, a reuse repository has to offer tailored user interfaces and functions for different user groups. Furthermore, since the contents of such a repository usually represent the state of the art of an organization's (core) competencies, not everyone should be allowed to freely access each and every repository entry. This is especially true for persons that are not part of the organization. This report discusses role concepts that can be applied to reuse repository systems to overcome some of the stated access problems. Commonly used roles for software development and reuse repository management are listed. Based on these roles, a basic set of roles, as implemented in the SFB 501 Experience Base, is introduced.
Comprehensive reuse and systematic evolution of reuse artifacts as proposed by the Quality Improvement Paradigm (QIP) do not only require tool support for mere storage and retrieval. Rather, an integrated management of (potentially reusable) experience data as well as project-related data is needed. This paper presents an approach exploiting object-relational database technology to implement QIP-driven reuse repositories. Requirements, concepts, and implementational aspects are discussed and illustrated through a running example, namely the reuse and continuous improvement of SDL patterns for developing distributed systems. Our system is designed to support all phases of a reuse process and the accompanying improvement cycle by providing adequate functionality. Its implementation is based on object-relational database technology along with an infrastructure well suited for these purposes.
Software development organizations are recognizing the increasing importance of investing in the build-up of core competencies for their competitiveness in software system development. This is supported by reuse and experience repository systems that assist in capturing and reusing all kinds of software artifacts (e.g., code, patterns, frameworks) and processes, as well as experiences related to these artifacts and processes. To justify such an investment and guide its improvement, it must be evaluated according to the business case; that is, a measurement program has to be developed that is oriented towards the business goals of such a reuse and experience repository system. In this paper, we suggest an approach to iteratively build up measurement programs for gaining feedback and, thereby, controlling and improving such a reuse and experience repository system. The focus is placed on guidelines for the evolution of such measurement programs over time, rather than on providing directly applicable metrics or questionnaires. In order to illustrate the feasibility of the approach, examples of running measurement programs at different stages of evolution are given.
Abstract: The recently proposed idea to generate entanglement between photon states via exchange interactions in an ensemble of atoms (J. D. Franson and T. B. Pittman, Phys. Rev. A 60, 917 (1999) and J. D. Franson et al., (quant-ph/9912121)) is discussed using an S-matrix approach. It is shown that if the nonlinear response of the atoms is negligible and no additional atom-atom interactions are present, exchange interactions cannot produce entanglement between photon states in a process that returns the atoms to their initial state. Entanglement generation requires the presence of a nonlinear atomic response or atom-atom interactions.
Abstract: Local field effects on the rate of spontaneous emission and the Lamb shift in a dense gas of atoms are discussed, taking into account correlations of atomic center-of-mass coordinates. For this, the exact retarded propagator in the medium is calculated in the independent scattering approximation and employing a virtual-cavity model. The resulting changes of the atomic polarizability lead to modifications of the medium response which can be of the same order of magnitude as, but of opposite sign to, those due to local field corrections of the dielectric function derived by Morice, Castin, and Dalibard [Phys. Rev. A 51, 3896 (1995)].
Abstract: We identify form-stable coupled excitations of light and matter ("dark-state polaritons") associated with the propagation of quantum fields in Electromagnetically Induced Transparency. The properties of the dark-state polaritons such as the group velocity are determined by the mixing angle between light and matter components and can be controlled by an external coherent field as the pulse propagates. In particular, light pulses can be decelerated and "trapped" in which case their shape and quantum state are mapped onto metastable collective states of matter. Possible applications of this reversible coherent-control technique are discussed.
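The standard form of the dark-state polariton field and its group velocity, as commonly written in the EIT literature, illustrates the mixing-angle control mentioned above (conventions may differ in detail from the paper; here g is the atom-field coupling, N the atom number, and Ω the external control field):

```latex
\begin{align*}
  \Psi(z,t) &= \cos\theta(t)\,\hat{\mathcal{E}}(z,t)
               \;-\; \sin\theta(t)\,\sqrt{N}\,\hat{\sigma}_{bc}(z,t),\\
  \tan^2\theta(t) &= \frac{g^{2} N}{\Omega^{2}(t)}, \qquad
  v_{g} = c\,\cos^{2}\theta(t).
\end{align*}
```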
Abstract: We analyze the above-threshold behavior of a mirrorless parametric oscillator based on resonantly enhanced four-wave mixing in a coherently driven dense atomic vapor. It is shown that, in the ideal limit, an arbitrarily small flux of pump photons is sufficient to reach the oscillator threshold. We demonstrate that due to the large group velocity delays associated with coherent media, an extremely narrow oscillator linewidth is possible, making a narrow-band source of non-classical radiation feasible.
Abstract: We analyze systematic (classical) and fundamental (quantum) limitations of the sensitivity of optical magnetometers resulting from ac-Stark shifts. We show that, in contrast to absorption-based techniques, the signal reduction associated with classical broadening can be compensated in magnetometers based on phase measurements using electromagnetically induced transparency (EIT). However, due to ac-Stark-associated quantum noise, the signal-to-noise ratio of EIT-based magnetometers attains a maximum value at a certain laser intensity. This value is independent of the quantum statistics of the light and defines a standard quantum limit of sensitivity. We demonstrate that an EIT-based optical magnetometer in Faraday configuration is the best candidate to achieve the highest sensitivity of magnetic field detection and give a detailed analysis of such a device.
The basic idea behind selective multiscale reconstruction of functions from error-affected data is outlined on the sphere. The selective reconstruction mechanism is based on the premise that multiscale approximation can be well-represented in terms of only a relatively small number of expansion coefficients at various resolution levels. An attempt is made within a tree algorithm (pyramid scheme) to remove the noise component from each scale coefficient using a priori statistical information (provided by an error covariance kernel of a Gaussian, stationary stochastic model).
Spherical Tikhonov Regularization Wavelets in Satellite Gravity Gradiometry with Random Noise
(2000)
This paper considers a special class of regularization methods for satellite gravity gradiometry based on Tikhonov spherical regularization wavelets with particular emphasis on the case of data blurred by random noise. A convergence rate is proved for the regularized solution, and a method is discussed for choosing the regularization level a posteriori from the gradiometer data.
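For orientation, the generic (non-spherical) Tikhonov functional behind any such regularization scheme reads as follows; the paper's construction replaces this plain penalty by spherical regularization wavelets with a scale-dependent regularization level, so the formula below is only a schematic reference point:

```latex
% Regularized solution of an ill-posed linear problem A x = y from noisy data y^\delta
\begin{equation*}
  x_{\alpha}^{\delta}
  \;=\; \operatorname*{arg\,min}_{x}\;
  \bigl\|A x - y^{\delta}\bigr\|^{2} \;+\; \alpha\,\|x\|^{2}.
\end{equation*}
```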
Being interested in (rotation-)invariant pseudodifferential equations of satellite problems corresponding to spherical orbits, we are reasonably led to generating kernels that depend only on the spherical distance, i.e., in the language of modern constructive approximation, form spherical radial basis functions. In this paper, approximate identities generated by such (rotation-invariant) kernels which are additionally locally supported are investigated in detail from a theoretical as well as a numerical point of view. So-called spherical difference wavelets are introduced. The wavelet transforms are evaluated by the use of a numerical integration rule that is based on Weyl's law of equidistribution. This approximate formula is constructed such that it can cope with millions of (satellite) data. The approximation error is estimated on the orbital sphere. Finally, we apply the developed theory to the problems of satellite-to-satellite tracking (SST) and satellite gravity gradiometry (SGG).
The satellite-to-satellite tracking (SST) problems are characterized from a mathematical point of view. Uniqueness results are formulated. Moreover, the basic relations are developed between (scalar) approximation of the earth's gravitational potential by "scalar basis systems" and (vectorial) approximation of the gravitational field by "vectorial basis systems". Finally, the mathematical justification is given for approximating the external geopotential field by finite linear combinations of certain gradient fields (for example, gradient fields of multi-poles) consistent with a given set of SST data.
In this short note we prove some general results on semi-stable sheaves on P_2 and P_3 with arbitrary linear Hilbert polynomial. Using Beilinson's spectral sequence, we compute free resolutions for this class of semi-stable sheaves and deduce that the smooth moduli spaces M_{r m + s}(P_2) and M_{r m + r - s}(P_2) are birationally equivalent if r and s are coprime.
A simple method of calculating the Wannier-Stark resonances in 2D lattices is suggested. Using this method we calculate the complex Wannier-Stark spectrum for a non-separable 2D potential realized in optical lattices and analyze its general structure. The dependence of the lifetime of Wannier-Stark states on the direction of the static field (relative to the crystallographic axis of the lattice) is briefly discussed.
The paper studies the dynamics of transitions between the levels of a Wannier-Stark ladder induced by a resonant periodic driving. The analysis of the problem is done in terms of resonance quasienergy states, which take into account the metastable character of the Wannier-Stark states. It is shown that the periodic driving creates from a localized Wannier-Stark state an extended Bloch-like state with a spatial length varying in time as ~ t^1/2. Such a state can find applications in the field of atomic optics because it generates a coherent pulsed atomic beam.
The statistics of the resonance widths and the behavior of the survival probability are studied in a particular model of quantum chaotic scattering (a particle in a periodic potential subject to static and time-periodic forces) introduced earlier in Refs. [5,6]. The coarse-grained distribution of the resonance widths is shown to be in good agreement with the prediction of Random Matrix Theory (RMT). The behavior of the survival probability shows, however, some deviation from RMT.
The quasienergy spectrum of a Bloch electron affected by dc-ac fields is known to have a fractal structure as a function of the so-called electric matching ratio, which is the ratio of the ac field frequency and the Bloch frequency. This paper studies a manifestation of the fractal nature of the spectrum in the system "atom in a standing laser wave", which is a quantum optical realization of a Bloch electron. It is shown that for an appropriate choice of the system parameters the atomic survival probability (a quantity measured in laboratory experiments) also develops a fractal structure as a function of the electric matching ratio. Numerical simulations under classically chaotic scattering conditions show good agreement with theoretical predictions based on random matrix theory.
The paper studies the effect of a weak periodic driving on metastable Wannier-Stark states. The decay rate of the ground Wannier-Stark states as a continuous function of the driving frequency is calculated numerically. The theoretical results are compared with experimental data of Wilkinson et al. [Phys. Rev. Lett. 76, 4512 (1996)] obtained for cold sodium atoms in an accelerated optical lattice.
We study the transitions between the ground and excited Wannier states induced by a weak ac field. Because the upper Wannier states are several orders of magnitude less stable than the ground states, these transitions decrease the global stability of the system, characterized by the rate of probability leakage or decay rate. Using non-Hermitian resonant perturbation theory we obtain an analytical expression for this induced decay rate. The analytical results are compared with exact numerical calculations of the system decay rate.
Polyurethanes are an extremely diverse class of plastics; moreover, in terms of production costs, they belong to the higher-value plastics. The former complicates the development of recycling processes, the latter is the reason why work on recovery methods for polyurethanes has nevertheless been going on for a long time. A whole series of processes already exists and is applied successfully. Since, however, there are still PUR types that cannot yet be recycled successfully, there remains a need for additional processes.

In the hydrolysis of polyurethane waste, the material is largely broken down into its starting components by the addition of water. Because of some process steps that are difficult to master, hydrolysis has so far only been applied on the laboratory and pilot-plant scale. In this work, a hydrolysis process was developed in which the separation into the constituents is carried out only up to a certain degree, i.e. a partial decomposition takes place.

The partial hydrolytic decomposition was carried out in a twin-screw extruder. The products ("hygrothermisch abgebautes Polyurethan", hygrothermally decomposed polyurethane; HA-PUR) were characterized by determination of the insoluble residue and the viscosity, by infrared spectroscopy, and by thermogravimetry coupled with mass spectrometry.

Starting from the properties of the intermediate product HA-PUR, possible applications were sought. HA-PUR mixes excellently with thermosets. This fact was exploited to replace the functionalized liquid rubbers, which are commonly used today as toughening modifiers for thermosets but are expensive, by an inexpensive recycling product. Indeed, the addition of HA-PUR to thermosets had a favourable effect on their mechanical properties, such as fracture toughness, fracture energy and impact strength. Furthermore, HA-PUR could also be used as a hardener for epoxy resins.

The rubber-like properties of HA-PUR suggested its use as a filler in rubber compounds. In proportions of 10-20 wt.%, HA-PUR led to accelerated vulcanization and improved mechanical properties for some types of rubber. Where the property profile deteriorated, this could be compensated by minor variations of the recipe.

HA-PUR possesses certain thermoplastic properties. Therefore, the possibility of using it as a toughening modifier for polyoxymethylene (POM) was also tested. (The use of thermoplastic polyurethane for this purpose is already state of the art today.) With an addition of 5 wt.% HA-PUR, a slight increase in impact strength was found.
This work was aimed at studying the hygrothermal decomposition of polyester urethanes and the usability of the products in thermosets, rubbers and thermoplastics.

Polyurethanes (PUR) are one of the most versatile groups of plastic materials. The variety of PUR types ranges from flexible foams and rigid foams over thermoplastic elastomers to adhesives, paints and varnishes. This variety is one of the reasons why the development of cost-efficient recycling methods is very difficult. On the other hand, the production of PUR is rather expensive compared to mass-produced plastic materials like the polyolefins. This fact has driven the development of recycling methods for PUR since the 1960s. The recycling routes for PUR can be divided into mechanical and chemical methods. Mechanical methods cover, e.g., grinding of PUR waste, compression moulding, adhesive pressing and bonding. Chemical methods (also called feedstock recycling) change the chemistry of the material. A third group of recycling methods is the recovery of energy. This can mean simple incineration of the PUR waste or decomposition by pyrolysis or hydrogenation followed by combustion of the products.

Chemical methods are, e.g., glycolysis and hydrolysis. Glycolysis, which is already used on a commercial scale, means the decomposition of PUR by diols (e.g. glycol) at elevated temperatures through a transesterification reaction. The reaction products are polyols which are similar to the virgin components and can be directly used for the manufacture of new PUR. Amines can be products of side reactions of the glycolysis.

Hydrolysis of polyurethane waste means decomposition of the material to its virgin components by treatment with water at elevated temperatures. The products are polyols and amines which are related to the virgin isocyanates. After purification, the polyols can be used for the production of new PUR, as can the amines after conversion into isocyanates by phosgenation. Since there are still some problems with the processing (e.g. the separation of the amines), the hydrolysis of PUR waste has not yet been used on a commercial scale.

In this work, a hydrolysis process has been worked out which does not lead to the virgin components. The formation of these virgin components can be avoided by stopping the process before the state of complete decomposition is reached.

This partial hygrothermal decomposition was carried out in a twin-screw extruder at temperatures between 150 and 250 °C with the addition of 10 wt.% of water. The material used for this process was polyester-PUR waste from the footwear industry, ground into particles of 1-3 mm size. The products ("hygrothermally decomposed polyurethane"; HD-PUR) were characterized by determination of the insoluble residue and the melt viscosity. The hygrothermal decomposition was traced by infrared spectroscopy and by thermogravimetry combined with mass spectrometry. These examinations allowed the degree of decomposition to be monitored. Furthermore, some information about the chemical processes during decomposition could be obtained.

Based on the specific properties of HD-PUR (consistency depending on the decomposition stage, a compound containing primary and secondary amines), attempts were made to check its use in selected thermoset, rubber and thermoplastic combinations.

HD-PUR is quite well miscible with thermosets such as epoxy resins (EP), phenolic resins (PF) and unsaturated polyester resins (UP). This fact was utilized for replacing the expensive functionalized liquid rubbers, which are used for toughening of thermosets, by this cost-efficient recycling product. Mixing HD-PUR, especially with EP, leads to a clear improvement of mechanical properties such as fracture toughness, fracture energy and impact toughness. Due to these promising results, the emphasis of further investigations was placed on experiments with HD-PUR in EP. Two EPs (one trifunctional and one tetrafunctional) from Ciba were used. Examination of fracture surfaces by scanning electron microscopy gave some information about the phase structure and the toughening mechanism. Dynamic-mechanical thermoanalysis made it possible, apart from the investigation of other mechanical properties, to determine the crosslink density, which was then correlated with the fracture mechanical data. The addition of HD-PUR in small amounts (up to 20 wt.%) led to improved toughness along with only slightly reduced stiffness. It should be noted that even mixtures with 80 wt.% HD-PUR gave a curable resin, albeit with reduced stiffness and temperature resistance. HD-PUR alone could act as a hardener for epoxy resins. Furthermore, one phenolic resin, one unsaturated polyester resin and one vinylester-urethane hybrid resin were examined. The results were, compared to the experiments with EP, less promising.

Due to its rubber-like properties, especially when extruded at lower temperatures, HD-PUR seemed qualified for use as a polymeric filler and extender in rubber recipes. Five sorts of rubber (natural rubber, nitrile-butadiene rubber, styrene-butadiene rubber, epoxidized rubber and fluoro rubber) were mixed with HD-PUR in ratios of 10-20 wt.%. Where possible, standard recipes without further additives were used. The changes of the rheological properties and the vulcanization behaviour were checked. The results showed that HD-PUR could be regarded not only as a neutral filler, but also as a kind of reactive plasticizer which could influence the vulcanization behaviour and the mechanical properties. Indeed, the vulcanization rate and the tear strength of natural rubber were increased. If there was any deterioration of the performance, this could be compensated by small variations of the related recipes. Some experiments were conducted to compare two different vulcanization systems and two different grades of carbon black.

The applicability of HD-PUR as a modifier for thermoplastics was checked by adding HD-PUR to poly(oxymethylene) (POM). The modification of POM with thermoplastic PUR is already state of the art. Due to its thermoplastic properties, HD-PUR should be suitable for this application. Mixing of HD-PUR with POM was possible in amounts from 5 to 40 wt.%. If 5 wt.% of HD-PUR was added, the impact toughness of POM was slightly increased. Higher amounts of HD-PUR led to a decrease of impact toughness, tensile strength and Young's modulus.

Future work could provide a complete clarification of the chemical reactions during the hygrothermal decomposition. The related information could serve for improved process control and for extending the decomposition to PURs of the polyether type. Furthermore, the applicability of HD-PUR as a toughening agent for other (brittle) materials should be checked. The modification of thermoplastics still offers a wide field of applications. Also, the use of HD-PUR as a reactive filler in rubber recipes could be worked out. Finally, some other applications for HD-PUR, e.g. as a pressure-sensitive adhesive, as a sealing material, or for sound and vibration damping, could be tested.
Lineare Algebra I & II
(2000)
Contents of the introductory courses Lineare Algebra I and II in the winter and summer semesters 1999/2000: groups, rings, fields, vector spaces, linear maps, determinants, systems of linear equations, the polynomial ring, eigenvalues, Jordan normal form, finite-dimensional Hilbert spaces, principal axis transformation, multilinear algebra, dual space, tensor product, exterior product, introduction to Singular.
In the present work we show how masses for the leptons, in particular for the neutrinos, can be generated. To this end we extend the Standard Model of elementary particle physics by an additional abelian gauge group. This extension allows us to avoid introducing a Higgs boson to generate the mass. We show that the requirement that the theory be anomaly-free imposes no restrictions on the hypercharges of the additional abelian gauge group. The lepton masses are generated by so-called dynamical mass generation. We show that, although no mass term for the leptons exists in the Lagrangian density, a mass can nevertheless be found. To investigate whether dynamical mass generation occurs, we use the Dyson-Schwinger equation for the leptons. This yields a system of coupled integral equations, which we attempt to solve with the help of various approximations. As a guide to whether our approximations are sensible, we require that the Ward-Takahashi identities be satisfied. All solutions we find are also compatible with the assumption of vanishing lepton mass. In the massless quenched rainbow approximation, in which we replace the vertex by the bare vertex and set the number of lepton flavours in the boson propagator to zero, we succeed in showing that the wave function renormalization vanishes in Landau gauge. In two approximations of the resulting integral equation we succeed in showing that a mass for the leptons arises. The first approximation uses the Ward-Takahashi identity and solves the integral directly. In the second solution method we convert the integral equation into a differential equation, which we analyze by means of bifurcation analysis. We find a critical coupling such that dynamical mass generation occurs only for couplings larger than this critical coupling. In the massive quenched rainbow approximation, in which we replace the vertex by the bare vertex and set the number of neutrino flavours in the massive boson propagator to zero, we point out the problems that an extension to this model raises. In particular, the problem that the wave function renormalization does not vanish in any gauge is discussed. A generalization of this approximation that is common in the literature, the "Landau-like" gauge, is also discussed. We then indicate how our results from the quenched rainbow approximation can be used to compute a mass hierarchy for the neutrinos from the mass ratios of the charged leptons.
A main goal of the present work was the establishment of in vitro test systems for detecting androgenic and, above all, antiandrogenic activity. Three in vitro test systems, new to the group and of different types, were established and validated using the known antiandrogens. First, a transient transactivation assay using the androgen receptor expression plasmid pSG5AR, the reporter gene plasmid pMamneoLuc and the control plasmid pSV was set up in COS-7 monkey kidney cells. This enabled a first investigation of potential androgens/antiandrogens. A stably transfected test system, needed for more efficient screening, was obtained at the same time in T47D breast cancer cells after stable transfection of the reporter gene plasmid pMamneoLuc and appropriate selection. By contrast, a test system obtained after transfection of CV-1 monkey kidney cells with pMamneoLuc and pSG5AR was not stable over a longer period. The plasmid pSG5AR contains no selection marker and was therefore presumably expelled from the cells again during the selection phase. A transgenic reporter gene test system in the osteosarcoma cells SaOS-2, as developed by Wiren et al. 1997, could not be reproduced because the androgen receptor content, determined by RT-PCR, was too low. The two established transgenic reporter gene assays (the transient transactivation assay in COS-7 cells and the transactivation assay in stable T47D-Luc cells) showed roughly comparable sensitivity and precision. The non-transgenic reporter gene assay based on the endogenous reporter PSA, successfully established and validated for comparison, was considerably more sensitive and precise. A proliferation assay could be established neither in MFM-223 breast cancer cells nor in SaOS-2 cells; in the case of the SaOS-2 cells this is due to an altered steroid hormone receptor repertoire. In the three established in vitro test systems, compounds identified by an online structure database search starting from two preselected lead structures were investigated. Among the benzophenone analogues, oxybenzone, p,p'-dihydroxybenzophenone, p,p'-dichlorobenzophenone, p,p'-dibromobenzophenone, p,p'-dimethoxybenzophenone, benzophenone and xanthone were selected as representatives; among the phenylurea analogues, linuron, monolinuron, metobromuron, diuron, fluometuron and phenylurea. The benzophenone analogues were investigated in the transient transactivation assay and in the PSA assay, the phenylurea analogues in the stable T47D-Luc transactivation assay and in the PSA assay. In the transient transactivation assay, all benzophenones investigated showed clear antiandrogenic activity. This was confirmed in the PSA assay with one exception: for the unsubstituted benzophenone, no antiandrogenic activity could be detected in this test system even at high concentration (10 microM). The phenylurea analogues investigated likewise showed clear antiandrogenic action in the test systems used, the PSA assay and the stable transactivation assay, although weaker than that of the established compounds hydroxyflutamide and vinclozolin M2. The unsubstituted phenylurea was not antiandrogenically active in either test system, even at 10 microM. Androgenic effects were investigated in the transient transactivation assay in COS-7 cells.
Using a significance threshold, only vinclozolin M2, p,p'-DDE and hydroxyflutamide were identified as clearly androgenically active. The results obtained were used to establish three-dimensional structure-activity relationships (3D-QSAR). To generate the single numerical value required for this, the IC50 or IC30 values of the measured dose-response curves were calculated; a considerable error had to be accepted in doing so. Using the data from the PSA assay, a significant model with a cross-validated q² value of 0.36 could nevertheless be obtained for the phenylurea analogues. For the benzophenone analogues, on the other hand, the model obtained was not reliable owing to a negative q² value. In this case, however, the data from the transient transactivation assay yielded a meaningful model with a q² value of 0.47. The two significant models can be used to predict the antiandrogenic activity of structurally similar compounds that have not yet been tested. Overall, this work makes an important contribution to the identification of endocrine disruptors. With the help of the validated in vitro test systems, the selected lead structures could be confirmed and a QSAR approach for the rapid theoretical screening of antiandrogens could be developed.
Linearized flows past slender bodies can be asymptotically described by a linear Fredholm integral equation. A collocation method to solve this equation is presented. In cases where the spectral representation of the integral operator is explicitly known, the collocation method recovers the spectrum of the continuous operator. The approximation error is estimated for two discretizations of the integral operator and the convergence is proved. The collocation scheme is validated in several test cases and extended to situations where the spectrum is not explicit.
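A minimal collocation (Nyström-type) sketch for a generic Fredholm integral equation of the second kind shows the structure of such a scheme; the kernel, the trapezoidal quadrature, and the toy problem below are assumptions for illustration and are unrelated to the slender-body kernel analyzed in the paper:

```python
import numpy as np

def solve_fredholm(kernel, f, lam=1.0, n=200):
    """Collocation on a uniform grid for  u(x) - lam * int_0^1 K(x,t) u(t) dt = f(x)."""
    x = np.linspace(0.0, 1.0, n)
    h = 1.0 / (n - 1)
    w = np.full(n, h); w[0] = w[-1] = h / 2.0       # trapezoidal quadrature weights
    K = kernel(x[:, None], x[None, :])              # kernel matrix K(x_i, t_j)
    A = np.eye(n) - lam * K * w[None, :]            # (I - lam K W) u = f at the collocation points
    return x, np.linalg.solve(A, f(x))

# toy check: K(x,t) = x*t and f(x) = 2x/3 give the exact solution u(x) = x
x, u = solve_fredholm(lambda x, t: x * t, lambda x: 2.0 * x / 3.0)
print(np.max(np.abs(u - x)))    # small (trapezoidal quadrature error)
```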