Refine
Year of publication
- 2005 (126)
Document Type
- Doctoral Thesis (58)
- Report (23)
- Periodical Part (15)
- Preprint (14)
- Working Paper (5)
- Diploma Thesis (4)
- Master's Thesis (3)
- Conference Proceeding (2)
- Habilitation (1)
- Lecture (1)
Has Fulltext
- yes (126)
Keywords
- Mobilfunk (4)
- Mehrskalenanalyse (3)
- Wavelet (3)
- mobile radio (3)
- Ambient Intelligence (2)
- Approximation (2)
- Computeralgebra (2)
- Elastoplastizität (2)
- Empfängerorientierung (2)
- Flüssig-Flüssig-Extraktion (2)
- Galerkin-Methode (2)
- Geometric Ergodicity (2)
- Jiang's model (2)
- Jiang-Modell (2)
- MIMO (2)
- MIMO-Antennen (2)
- Modellierung (2)
- Navier-Stokes-Gleichung (2)
- Poisson-Gleichung (2)
- Randwertproblem / Schiefe Ableitung (2)
- Rotordynamik (2)
- Scheduling (2)
- Sobolev-Raum (2)
- Stoffübergang (2)
- air interface (2)
- receiver orientation (2)
- Ab-initio-Rechnung (1)
- Abgasnachbehandlung (1)
- Ableitung höherer Ordnung (1)
- Acinetobacter calcoaceticus (1)
- Agenda 21 (1)
- Aggregation (1)
- Akquisition (1)
- Algebraic dependence of commuting elements (1)
- Algebraische Abhängigkeit der kommutierende Elementen (1)
- Algebraische Geometrie (1)
- Algorithmus (1)
- Alkylaromaten (1)
- Allgemeine Mikrobiologie (1)
- Ammoniumcarbamat (1)
- Analyse (1)
- Anisotropic Gaussian filter (1)
- Anthocyanidine (1)
- Antioxidans (1)
- Apfel (1)
- Apfelsorte (1)
- Apoptosis (1)
- Arbeitsgedächtnis (1)
- Arc distance (1)
- Armierung (1)
- Ascorbat (1)
- Ascorbinsäure (1)
- Ascorbylradikal (1)
- Audiodeskription (1)
- Ausfallrisiko (1)
- Automatische Differentiation (1)
- Automatische Klassifikation (1)
- Automatisches Beweisverfahren (1)
- Axialschub (1)
- Barriers (1)
- Basisband (1)
- Bauindustrie (1)
- Bauplanung (1)
- Beere (1)
- Beerenobst (1)
- Bernstejn-Polynom (1)
- Beschleunigung (1)
- Beteiligung (1)
- Betriebsfestigkeit (1)
- Blattschneiderameisen (1)
- Bottom-up (1)
- Boundary Value Problem (1)
- Box Algorithms (1)
- Brombeere (1)
- CAE-Kette zur Strukturoptimierung (1)
- CDMA (1)
- CHAMP <Satellitenmission> (1)
- Channel estimation (1)
- Computer Algebra System (1)
- Computeralgebra System (1)
- Container (1)
- Controlling (1)
- Crane (1)
- Crash modelling (1)
- Crashmodellierung (1)
- Cyclopentadienylliganden (1)
- DNA damage (1)
- DNA-Schäden (1)
- Darm (1)
- Das Urbild von Ideal unter einen Morphismus der Algebren (1)
- Derivatives (1)
- Differentialinklusionen (1)
- Diffusionskoeffizient (1)
- Diffusionsmessung (1)
- Diffusionsmodell (1)
- Digitalmodulation (1)
- Discrete Bicriteria Optimization (1)
- Diskrete Mathematik (1)
- Domänenumklappen (1)
- Drei-Säulen-Konzept (1)
- Dreieck (1)
- Drosselspalt (1)
- Duftstoffanalyse (1)
- Dynamic Network Flow Problem (1)
- Dynamische Topographie (1)
- EAG (1)
- EM algorithm (1)
- EPR (1)
- ESR (1)
- Effizienter Algorithmus (1)
- Eigenfrequenz (1)
- Eigenfrequenzbeeinflussung (1)
- Eigenspannungen (1)
- Elastizität (1)
- Elastoplasticity (1)
- Elektronenspinresonanz (1)
- Eliminationsverfahren (1)
- Ellagsäure (1)
- Emissionsverringerung (1)
- Energieerzeugung (1)
- Epoxidklebstoffe Härtungskinetik Oberflächenvorbehandlung Aluminium Netzwerkstrukturen (1)
- Erdbeere (1)
- Evacuation Planning (1)
- Extreme Events (1)
- FPM (1)
- Fahrrad (1)
- Faserverbundwerkstoff (1)
- Federgelenk (1)
- Feedfoward Neural Networks (1)
- Fertigungsverfahren (1)
- Feststoff-Dosiersystem (1)
- Filippov theory (1)
- Filippov-Theorie (1)
- Filtergesetz (1)
- Finite Elemente Methode (1)
- Finite Pointset Method (1)
- Finite-Punktmengen-Methode (1)
- Firmwertmodell (1)
- Flavonoide (1)
- Fliehkraftinvariant (1)
- Flooding (1)
- Folgar-Tucker model (1)
- Fruchtsaft (1)
- Funkdienst (1)
- GARCH (1)
- GARCH Modelle (1)
- GOCE <Satellitenmission> (1)
- GOCE <satellite mission> (1)
- GRACE (1)
- GRACE <Satellitenmission> (1)
- GRACE <satellite mission> (1)
- Gemeinsame Kanalschaetzung (1)
- Genregulation (1)
- Gentherapie (1)
- Geodäsie (1)
- Geodätischer Satellit (1)
- Geographical Information Systems (1)
- Geometrische Ergodizität (1)
- Gießprozesssimulation (1)
- Gießtechnische Restriktionen (1)
- Glasfaserverstärkter Thermoplast (1)
- GlucDOR (1)
- Glucosedehydrogenase (1)
- Gravitational Field (1)
- Gravitationsfeld (1)
- Grenzflächenpolarisation (1)
- Gröbner-Basis (1)
- HPLC (1)
- Halbfrequenzwirbel (1)
- Harmonische Spline-Funktion (1)
- Hidden Markov models for Financial Time Series (1)
- Higher Order Differentials as Boundary Data (1)
- Himbeere (1)
- Hochspannungsfeld (1)
- Homogenisierung <Mathematik> (1)
- Hybridlager (1)
- Hydrological Gravity Variations (1)
- Hydrologie (1)
- Implementierung (1)
- Imprägnierung (1)
- Indicators (1)
- Indikatoren (1)
- Industrielle Mikrobiologie (1)
- Information Theory (1)
- Informationstechnologie (1)
- Informationstheorie (1)
- Intensität (1)
- Interferenz (1)
- Interferenzklassifizierung (1)
- Inverses Problem (1)
- Isotrope Geometrie (1)
- Isotropes System (1)
- Isotropie (1)
- Java (1)
- Johannisbeere (1)
- Joint Transmission (1)
- Kanalschätzung (1)
- Kinderbeteiligung (1)
- Kinetik (1)
- Knuth-Bendix completion (1)
- Knuth-Bendix-Vervollständigung (1)
- Koaleszenz (1)
- Koexistenz (1)
- Kombinatorik (1)
- Kommunikationstechnik (1)
- Kommutative Algebra (1)
- Konstruktive Approximation (1)
- Kontinuum <Mathematik> (1)
- Kontinuumsphysik (1)
- Krafteinleitungen (1)
- Kreiselpumpe (1)
- Kreisverkehr (1)
- Kreitderivaten (1)
- Kugel (1)
- Kugelflächenfunktion (1)
- Kugelfunktion (1)
- Kunststoff / Verbundwerkstoff (1)
- Large-Scale Problems (1)
- Lattice-Boltzmann method (1)
- Level-Set Methode (1)
- Lineare Elastizitätstheorie (1)
- Linienbus (1)
- Lokale Agenda 21 (1)
- Lokalkompakte Kerne (1)
- Luftlager (1)
- Luftschnittstellen (1)
- MIDI <Musikelektronik> (1)
- MIR (1)
- MP3 (1)
- Marangoni (1)
- Maschinelles Lernen (1)
- Mathematical Physics (1)
- Maximal Cohen-Macaulay modules (1)
- Maximale Cohen-Macaulay Moduln (1)
- Maximum Likelihood Estimation (1)
- Maximum-Likelihood-Schätzung (1)
- Mehrbenutzer-Informationstheorie (1)
- Mehrtraegeruebertragungsverfahren (1)
- Migration (1)
- Mikrobiologie (1)
- Mikroelektronik (1)
- Milchsäurebakterien (1)
- Minimum Cost Network Flow Problem (1)
- Modulationsübertragungsfunktion (1)
- Molekulargenetik (1)
- Morphismus (1)
- Multi-user information theory (1)
- Multileaf collimator (1)
- Multiobjective programming (1)
- Multiple objective optimization (1)
- Music Information Retrieval (1)
- Musik / Artes liberales (1)
- NOx (1)
- Nachbarkanalinterferenz (1)
- Nachhaltige Entwicklung (1)
- Nachhaltigkeit (1)
- Nachhaltigkeits-Dreieck (1)
- Nachhaltigkeitsstrategie (1)
- Nanocomposites (1)
- Nekrose (1)
- Netzwerksynthese (1)
- New Towns (1)
- Nichtkommutative Algebra (1)
- Nichtlineare Zeitreihenanalyse (1)
- Nonlinear multigrid (1)
- Numerische Mathematik (1)
- OFDM (1)
- OFDM mobile radio systems (1)
- OFDM-Mobilfunksysteme (1)
- Operator (1)
- Optischer Sensor (1)
- P2P (1)
- Palindrom (1)
- Papiermaschine (1)
- Parameter identification (1)
- Parameteridentifikation (1)
- Partizipation (1)
- Pfadintegral (1)
- Phasengleichgewicht (1)
- Phasengrenzfläche (1)
- Planungsbeteiligung (1)
- Poisson line process (1)
- Polymere (1)
- Polyphenole (1)
- Portfoliomanagement (1)
- Poröser Stoff (1)
- Precoding (1)
- Preimage of an ideal under a morphism of algebras (1)
- Projektplanung (1)
- Quantenmechanik (1)
- ROS (1)
- RSK-Werte (1)
- Ratenabhängigkeit (1)
- Reaktive Sauerstoffspezies (1)
- Reaktivextraktion (1)
- Reduktion (1)
- Redundanzvermeidung (1)
- Regelung (1)
- Regularisierung (1)
- Rekonstruktion (1)
- Representation Theory (1)
- Restricted Regions (1)
- Retroviren (1)
- Retrovirus (1)
- Rhabdomyolyse (1)
- Richtungsableitung (1)
- Risiko (1)
- Risikocontrolling (1)
- Schulgelände (1)
- Screening (1)
- Sendesignalvorverarbeitung (1)
- Sensitivitäten (1)
- Simulation (1)
- Skelettmuskel (1)
- Software (1)
- Software-Architektur (1)
- Soziales (1)
- Spannungs-Dehn (1)
- Spherical Harmonics (1)
- Spherical Location Problem (1)
- Spherical Wavelets (1)
- Sphäre (1)
- Sphärische Wavelets (1)
- Spiralrillenlager (1)
- Spline-Wavelets (1)
- Split Operator (1)
- Split-Operator (1)
- Spumaviren (1)
- Stabile Vektorbundle (1)
- Stabilität (1)
- Stable vector bundles (1)
- Stadtbahn (1)
- Statine (1)
- Stochastisches Feld (1)
- Substitutionsreaktion (1)
- Sustainability (1)
- Sustainability Strategy (1)
- Sustainable Development (1)
- TDP1 (1)
- TEAC (1)
- Taylor-Couette (1)
- Telekommunikation (1)
- Test for Changepoint (1)
- Thermodynamik (1)
- Three-Pillar-Approach (1)
- Time Series (1)
- Time-motion-Ultraschallkardiographie (1)
- Titanium complex (1)
- Tonsignal (1)
- Topoisomerasegifte (1)
- Topoisomerasehemmstoffe (1)
- Topoisomerasen (1)
- Topologieoptimierung (1)
- Training (1)
- Transportation Problem (1)
- Triangle (1)
- Tropenökologie (1)
- Tyrosyl-DNA-Phosphodiesterase 1 (TDP1) (1)
- Ultraschallkardiographie (1)
- Unschärferelation (1)
- Upwind-Verfahren (1)
- Vektor <Genetik> (1)
- Verbundwerkstoffe (1)
- Viskosität (1)
- Visualisierung (1)
- Vitamin C (1)
- Vitamin C-Derivate (1)
- Wahrscheinlichkeitsfunktion (1)
- Waldfragmentierung (1)
- Wavelet-Analyse (1)
- Wavelets auf der Kugel und der Sphäre (1)
- Zeitliche Veränderungen (1)
- Zelle / Physiologie (1)
- Zellulares Mobilfunksystem (1)
- Zentrifugalkraft (1)
- Zirconium complex (1)
- [2.2.1]-bicyclic substituents (1)
- [2.2.1]-bicyclisch (1)
- acoustic absorption (1)
- adaptive refinement (1)
- adhesives cure-behaviour aluminium (1)
- aftertreatment (1)
- air drag (1)
- air-bearing (1)
- algebraic constraints (1)
- analoge Mikroelektronik (1)
- anthocyanidins (1)
- apoptosis (1)
- apple (1)
- ascorbate (1)
- ascorbic acid (1)
- ascorbyl radical (1)
- automated theorem proving (1)
- automatic differentiation (1)
- ball (1)
- berry (1)
- beyond 3G (1)
- bottom-up (1)
- centrifugal force (1)
- classification of interference (1)
- coexistence (1)
- combinatorics (1)
- composite materials (1)
- computeralgebra (1)
- constructive approximation (1)
- cre-Sequenz (1)
- cyclopentadienyl ligands (1)
- default time (1)
- derivative-free iterative method (1)
- differential inclusions (1)
- diffusion coefficient (1)
- diffusion measurement (1)
- diffusion model (1)
- distributed computing (1)
- domain switching (1)
- durability (1)
- dynamical topography (1)
- effective elastic moduli (1)
- efficient solution (1)
- elastoplasticity (1)
- electric field (1)
- epidemic algorithms (1)
- epidemische Algorithmen (1)
- epsilon-constraint method (1)
- explicit jump immersed interface method (1)
- extreme solutions (1)
- face value (1)
- facets (1)
- fiber orientation (1)
- fiber-turbulence interaction scales (1)
- finite element method (1)
- flexible fibers (1)
- float glass (1)
- flow resistivity (1)
- foamy virus (1)
- forest fragmentation (1)
- gene therapy (1)
- heat radiation (1)
- hub covering (1)
- hub location (1)
- implementation (1)
- initial temperature (1)
- initial temperature reconstruction (1)
- integer programming (1)
- intensity (1)
- interface (1)
- invariant (1)
- invariants (1)
- inverse problem (1)
- isotropical (1)
- jenseits der dritten Generation (1)
- joint channel estimation (1)
- juice (1)
- large scale integer programming (1)
- leaf-cutting ants (1)
- level-set (1)
- linear filtering (1)
- linear kinetics theory (1)
- lineare kinetische Theorie (1)
- liquid-liquid-extraction (1)
- locally compact kernels (1)
- lokalisierende Kerne (1)
- mass transfer (1)
- mechanism (1)
- mehreren Uebertragungszweigen (1)
- migration (1)
- mixed convection (1)
- multi-carrier (1)
- multi-user (1)
- multicriteria optimization (1)
- nD image processing (1)
- necrosis (1)
- network flows (1)
- network synthesis (1)
- nichtlineare Netzwerke (1)
- non-Newtonian flow in porous media (1)
- non-conventional (1)
- non-woven (1)
- nonlinear circuits (1)
- nonlinear heat equation (1)
- nonlinear inverse problem (1)
- numerics (1)
- odour mixtures (1)
- optically active (1)
- optimization (1)
- optimization algorithms (1)
- optisch aktiv (1)
- optisch aktiver Titankomplex (1)
- optisch aktiver Zirkoniumkomplex (1)
- orientation space (1)
- other-channel interference (1)
- peer-to-peer (1)
- phase space (1)
- political districting (1)
- polyphenols (1)
- portfolio (1)
- probabilistic approach (1)
- processing (1)
- properly efficient solution (1)
- radiative heat transfer (1)
- random -Gaussian aerodynamic force (1)
- random system of fibers (1)
- rate-dependency (1)
- real-time (1)
- reduction (1)
- regular surface (1)
- regularization (1)
- reguläre Fläche (1)
- representative systems (1)
- retroviral vector (1)
- retroviraler Vektor (1)
- retrovirus (1)
- rhabdomyolysis (1)
- sales territory alignment (1)
- scalarization (1)
- sensitivities (1)
- separable filters (1)
- service area (1)
- shape optimization (1)
- simulation (1)
- skeletal muscle cells (1)
- solid-dosing-system (1)
- spiral-groove (1)
- spline-wavelets (1)
- statin (1)
- stochastic dif (1)
- superposed fluids (1)
- territory design (1)
- thermodynamic model (1)
- thermoplastische Bandhalbzeuge (1)
- topoisomerases (1)
- topological sensitivity (1)
- topology optimization (1)
- trace stability (1)
- translinear circuits (1)
- translineare Schaltungen (1)
- tropical ecology (1)
- tropical rainforest (1)
- tropischer Regenwald (1)
- unbeschränktes Potential (1)
- unbounded potential (1)
- urban elevation (1)
- verteilte Berechnung (1)
- viral vector (1)
- viraler Vektor (1)
- virtual material design (1)
- viscosity model (1)
- white noise (1)
- wireless communications system (1)
- working memory (1)
- ÖPNV-Beschleunigung (1)
- Öffentlicher Personennahverkehr (1)
- Ökologie (1)
- Ökonomie (1)
Faculty / Organisational entity
- Kaiserslautern - Fachbereich Mathematik (38)
- Kaiserslautern - Fachbereich Maschinenbau und Verfahrenstechnik (17)
- Fraunhofer (ITWM) (13)
- Kaiserslautern - Fachbereich Sozialwissenschaften (13)
- Kaiserslautern - Fachbereich Chemie (12)
- Kaiserslautern - Fachbereich Elektrotechnik und Informationstechnik (10)
- Kaiserslautern - Fachbereich Informatik (8)
- Kaiserslautern - Fachbereich Biologie (4)
- Kaiserslautern - Fachbereich Wirtschaftswissenschaften (4)
- Kaiserslautern - Fachbereich ARUBI (3)
Zur Eigenspannungsausbildung bei der wickeltechnischen Verarbeitung thermoplastischer Bandhalbzeuge
(2005)
Filament winding is today a well-established production technique for fiber reinforced pressure vessels. Most of these parts are still made with thermosets as matrix material, but parts with thermoplastic matrices are now on the verge of mass production. Usually such parts are made from fully consolidated, unidirectional fiber reinforced thermoplastic tapes. During processing the matrix material is melted and the tapes are placed on the substrate, where they re-solidify. A wide range of material combinations is available on the market. The materials used in the present investigation are semi-crystalline thermoplastics with glass or carbon fibers, namely carbon fiber reinforced polyetheretherketone, glass fiber reinforced polyetheretherketone and glass fiber reinforced polypropylene.
Applications can be found in the field of medium- and high-pressure vessels, such as those used for natural gas and hydrogen storage, and in tubes and pipes for their transport. In the design of such parts, mostly idealized properties such as tensile strength are used; residual stresses, which are inherent to composite materials, are only accounted for as part of the safety factor.
The present work investigates the generation of residual stresses during in-situ consolidation in filament winding. In this process, consolidation of the tape material with the substrate takes place immediately after the tapes are placed. This is contrary to the normal curing of thermoset materials and has a large influence on the generation of residual stress. The impact of these stresses on the in-service behavior of the produced parts is one of the topics of this investigation. Therefore the background of thermal residual stresses in semi-crystalline thermoplastic parts is discussed, and a closer look is taken at the crystallization behavior of the matrix materials, since the onset of crystal growth is a major factor in the generation of thermal residual stress.
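The scale of such thermal residual stresses can be illustrated with a simple mismatch estimate between the thermal expansion of fiber and matrix. The sketch below is a back-of-the-envelope calculation only; the moduli, expansion coefficients and temperature interval are assumed illustrative values, not data from this thesis.

```python
# Order-of-magnitude estimate of the thermal residual stress caused by the
# mismatch of thermal expansion between fiber and matrix on cooling.
# All material values are illustrative assumptions (roughly PEEK / glass).
E_MATRIX = 3.5e9       # Pa, Young's modulus of the matrix (assumed)
ALPHA_MATRIX = 50e-6   # 1/K, matrix thermal expansion coefficient (assumed)
ALPHA_FIBER = 5e-6     # 1/K, fiber thermal expansion coefficient (assumed)

def thermal_stress(delta_t_kelvin: float) -> float:
    """Mismatch stress sigma = E_m * (alpha_m - alpha_f) * dT (fully
    constrained matrix, the simplest possible model)."""
    return E_MATRIX * (ALPHA_MATRIX - ALPHA_FIBER) * delta_t_kelvin

# Cooling by 200 K from the crystallization onset to room temperature:
print(f"{thermal_stress(200.0) / 1e6:.1f} MPa")  # prints "31.5 MPa"
```

Even this crude model yields stresses in the tens of MPa, which is why the onset temperature of crystal growth matters so much for the locked-in stress.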
The aim of the present work is to find process parameter combinations that compensate the thermal residual stresses and generate a residual stress profile that, unlike the thermal residual stresses, brings a structural benefit. Ring samples with a defined geometry were made to measure the generated stresses. The geometry of the samples was chosen so that the boundary conditions at the free edges do not influence the measuring point.
In the investigations the residual stresses were measured in the circumferential direction by cutting the ring samples in the radial direction and measuring the resulting deformation with strain gages. From the measured strain, the local stress can be determined.
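The conversion from released strain to the stress present before the cut can be sketched with Hooke's law. This is a deliberately simplified uniaxial sketch; the effective modulus value is an assumption, and the thesis's actual evaluation of the ring-slitting data may be more elaborate.

```python
# Hypothetical ring-slitting evaluation: the strain released when the ring
# is cut radially is converted back to the residual hoop stress that was
# locked in before the cut (uniaxial Hooke's law sketch).
def released_stress(strain: float, e_modulus: float) -> float:
    """Residual circumferential stress from the released strain.
    The sign is flipped: relief strain is opposite to the locked-in stress."""
    return -e_modulus * strain

# Example: a gage reading of -500 microstrain with an assumed effective
# laminate modulus of 40 GPa indicates a tensile hoop stress:
sigma = released_stress(-500e-6, 40e9)
print(f"{sigma / 1e6:.0f} MPa")  # prints "20 MPa"
```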
To minimize the number of experiments, the influence of the filament winding process parameters on the residual stress was investigated using a Design of Experiments approach, in which the main influences on residual stress generation can be identified from a relatively small number of experiments, e.g. 8 instead of 128. These experiments showed that the winding angle, the mandrel temperature, the annealing, the wall thickness and the tape tension have a significant influence on residual stresses. With increasing winding angle, the influence on the measured circumferential stresses increases regardless of the kind of residual stress. The mandrel temperature has a large influence on the temperature difference that causes the stress between fiber and matrix, which results from their different thermal expansion coefficients. A structural benefit through annealing is only theoretically possible, because the required outside temperatures combined with internal cooling of the parts cannot be realized in an industrial process. Increasing wall thickness also increases the residual stress, but building oversized parts for the sake of residual stresses cannot be the aim. The applied tape tension was identified as a parameter that can be used to achieve the desired residual stress state with reasonable effort.
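The reduction from 128 to 8 runs corresponds to a two-level fractional factorial screening design: 7 factors would need 2^7 = 128 runs in a full factorial, but a 2^(7-4) design aliases the extra factors onto interaction columns of a full 2^3 design. The sketch below generates such a design in plain Python; the generator choice (D=AB, E=AC, F=BC, G=ABC) is the textbook one and the factor labels are illustrative, not the thesis's actual parameter assignment.

```python
from itertools import product

# 2^(7-4) fractional factorial: 8 runs screening 7 two-level factors.
# Columns A, B, C form a full 2^3 factorial; the remaining four factors
# are assigned to interaction columns via the generators
# D = AB, E = AC, F = BC, G = ABC (a standard resolution-III choice).
def fractional_factorial_8x7():
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        runs.append((a, b, c, a * b, a * c, b * c, a * b * c))
    return runs

design = fractional_factorial_8x7()
assert len(design) == 8                             # 8 runs, not 2**7 = 128
assert all(sum(col) == 0 for col in zip(*design))   # every factor balanced
```

Each main effect can then be estimated from only 8 winding trials, at the cost of confounding main effects with interactions, which is acceptable for a first screening of which parameters dominate.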
Different ways of varying the tape tension with increasing wall thickness were investigated: the tape tension was either increased with every layer up to a chosen maximum value, or raised to the maximum value in one step after half of the layers were placed. Furthermore, a continuously high tape tension and a variant without tape tension were investigated. The experiments led to the conclusion that increasing the tape tension with increasing wall thickness is a viable way to obtain a structural benefit from residual stress; the single-step increase gave the best results.
The impact of the thermal history during production is discussed as well. Temperatures must not exceed the softening point of the matrix, otherwise part of the tape tension is lost through relaxation. In one particular case the relaxation reached a level at which the compensation of the thermal stresses failed. Thermodynamic calculations led to the conclusion that the energy transferred into the material by mandrel heating and melt energy raised the temperature above the softening point.
The impact of tape tension on material quality is documented. Very low tape tension cannot guarantee proper consolidation; excessive tape tension, on the other hand, can lead to matrix squeeze-out and, in particular cases, to cracks caused by excessive residual stresses. The tape tension profile should therefore be well adapted to the work load, the composite and its properties.
Investigations of the relaxation behavior showed that part of the residual stress relaxes when the samples are exposed to elevated temperatures. Tests at room temperature showed no significant sign of relaxation, but when the temperature was raised, in this case to 80 °C, the samples clearly relaxed: the induced residual stress dropped to half of its initial value.
Investigations of the structural benefit showed that weight savings of up to 23 % are possible for high-pressure applications and fiber reinforcements with relatively low fiber volume content. Higher fiber volume contents, which also mean higher strengths, reduce the benefit: as the strength of the material increases, the benefit decreases accordingly. Nevertheless, there is a potential for material savings, and the cost of the equipment needed to control the tape tension is low in comparison with the achievable result.
External DC electric fields can significantly influence both physical and reactive mass transfer in liquid-liquid extraction, so that an enhancement of mass transfer can be achieved in the electric field. The reasons are field-enhanced interfacial turbulence and field-induced concentration polarization near the phase interface, caused by migration interactions. With respect to reactive mass transfer, the electric field has no influence on the chemical equilibrium, in either single- or multi-component systems; however, the field accelerates the kinetics, so that equilibrium is reached faster. The maximum separation selectivity in the multi-component system, which is reached at equilibrium, is likewise not changed by the field; it depends primarily on concentration and acid strength. Only in the case of very weak acids can the equilibrium be shifted beyond its natural position. This mass-transfer enhancement can be explained by the field-enhanced dissociation of the transferring component according to the second Wien effect. Moreover, the field-induced mass-transfer enhancement depends strongly on the direction of the field: the field effect is greatest when the field acts directly in the direction of mass transfer, which can be achieved with stationary (e.g. planar) interfaces. In planar mass-transfer cells and at the pendant drop, a strong mass-transfer acceleration on the order of about 1000 % was thus achieved. At the moving drop, mass transfer could be enhanced through the field-modified hydrodynamic operating parameters (such as drop size and residence time), but no further mass-transfer acceleration beyond this was obtained.
This can be explained by the fact that, for a moving spherical interface, the field does not act solely in the direction of mass transfer and field-induced polarization effects largely cancel each other out. Consequently, continuously operated extraction in a high-voltage field cannot be implemented efficiently in classical extraction apparatus, which works with dispersion and drop formation. It does succeed in a special centrifugal extractor, the Taylor-Couette electro-extractor: in its annular gap between two concentric cylinders acting as electrodes, the rotation produces a cylindrical, quasi-planar phase interface, so that the field can act directly in the direction of mass transfer. The steady operating state is reached within a few minutes. In addition, Taylor vortices form near the phase interface, which also enhance mass transfer. For the theoretical description, mass-transfer models were developed that account for the field-induced polarization effects. The reactive mass transfer is calculated with an electrostatically extended kinetic model that accounts for the chemical reaction, the interfacial adsorption of the ion exchanger and the reaction equilibrium, as well as migration via the Nernst-Planck equation and electro-dissociation via an approach after Onsager. The interfacial turbulence enhanced in the electric field is captured by an electrostatically extended approach after Maroudas and Sawistowski. A model for calculating mass transfer in the Taylor-Couette extractor was also presented. The applied electric fields are calculated with the finite element method based on Maxwell's equations, or in simplified form via the Laplace equation.
Essentially, it is not the electrode potential difference but the computed potential at the phase interface that determines mass transfer in the electric field, which was confirmed by the simulations.
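The migration contribution mentioned above enters such models through the Nernst-Planck flux. The standard form is sketched below in common notation (diffusion, migration, convection terms); the symbols follow textbook convention and are not quoted from the thesis:

```latex
% Nernst-Planck flux of ionic species i:
%   diffusion        migration (electric field)    convection
N_i \;=\; -\,D_i \nabla c_i
      \;-\; z_i \,\frac{F}{RT}\, D_i \, c_i \,\nabla \phi
      \;+\; c_i \,\mathbf{v}
```

Here $D_i$ is the diffusion coefficient, $c_i$ the concentration and $z_i$ the charge number of species $i$, $\phi$ the electric potential, $F$ the Faraday constant and $\mathbf{v}$ the convective velocity; the middle term is the field-driven migration that the electrostatically extended kinetic model adds to the classical description.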
Anthocyanins, a subgroup of the flavonoids, are widespread natural color pigments in foods of plant origin. A number of beneficial health effects have been attributed to them, and as a result more and more anthocyanin-based products are appearing on the dietary supplement market. As a possible adverse factor for a potential genotoxic effect of flavonoids, their interaction with human topoisomerases is under discussion. For a risk/benefit evaluation it matters not only whether flavonoids/anthocyanins interact with these enzymes, but also the mode of this interaction and its possible consequences, particularly with regard to the integrity of the DNA. The aim of the present work was to investigate the anthocyanidins delphinidin (Del), cyanidin (Cy), pelargonidin (Pg), peonidin (Pn) and malvidin (Mv) with respect to their influence on human topoisomerase I and II. One focus was the elucidation of the mechanism of action of these compounds with regard to stabilization of the cleavable complex, possible interactions with the DNA, and the relevance of these effects for the integrity of cellular DNA. It was shown that only the compounds with vicinal hydroxyl groups in the B ring of the anthocyanin skeleton, Del and Cy, effectively inhibit the catalytic activity of isolated human topoisomerase I and topoisomerases IIalpha and IIbeta. Unlike the classical topoisomerase poisons camptothecin and etoposide, however, they do not act via stabilization of the covalent topoisomerase-DNA complex but are purely catalytic inhibitors. Del and Cy might even protect the DNA against topoisomerase I poisons, since, at least on the isolated enzyme, they effectively prevent the stabilization of the topoisomerase I cleavable complex by camptothecin.
All anthocyanidins tested were shown, in the low micromolar range between 15 µM and 50 µM, both to bind to the minor groove of DNA and to intercalate into DNA, and these DNA-interacting properties make no substantial contribution to the inhibition of the topoisomerases. Even though direct DNA interaction is of only minor importance for the topoisomerase inhibition by the anthocyanidins, it does appear relevant for the integrity of cellular DNA: a one-hour incubation of HT29 colon carcinoma cells with the anthocyanidins at concentrations above 50 µM induced significant DNA damage. With regard to the DNA integrity of living cells, the respective concentration range therefore appears to be of decisive importance. A further focus of the present work was the influence of overexpression of tyrosyl-DNA phosphodiesterase 1 (TDP1) after incubation with topoisomerase poisons. Within a cooperation with Prof. Boege, University Hospital Düsseldorf, several cell clones were made available to us that overexpressed a fusion protein of TDP1 and GFP, as well as a catalytically inactive variant of the enzyme (TDP1-H263A). In these cell lines, the cytotoxicity of topoisomerase poisons was investigated, as well as the influence of TDP1 on the induction of DNA damage by topoisomerase poisons. Growth studies of the different cell lines using the MTT cytotoxicity assay showed no significant growth advantage from TDP1 overexpression after 72 h of incubation with the topoisomerase poisons camptothecin (topo I) and etoposide (topo II).
However, looking at the induction of DNA damage after short-term (1 h) incubation with camptothecin in the comet assay, a significant reduction of DNA damage is observed upon TDP1 overexpression. If the catalytic histidine 263 of TDP1 is replaced by alanine, DNA damage rises to the same level as in cells not overexpressing TDP1, i.e. no repair takes place. Surprisingly, incubation with the topoisomerase II poison etoposide, originally intended as a negative control, showed the same repair effect. An up-regulation of DNA-repairing enzymes is unlikely, since all cell lines showed comparable amounts of DNA damage upon incubation with the DNA-methylating agent N-methyl-N'-nitro-N-nitrosoguanidine (MNNG). These results open up for the first time the possibility of using the comet assay, with the different TDP1 clones, to screen for TDP1 inhibitors.
This work is dedicated to the wavelet modelling of regional and temporal variations of the Earth's gravitational potential observed by GRACE. In the first part, all required mathematical tools and methods involving spherical wavelets are introduced. Then we apply our method to monthly GRACE gravity fields. A strong seasonal signal can be identified, which is restricted to areas where large-scale redistributions of continental water mass are expected. This assumption is analyzed and verified by comparing the time series of regionally obtained wavelet coefficients of the gravitational signal derived from hydrology models with that of the gravitational potential observed by GRACE. The results are in good agreement with previous studies and illustrate that wavelets are an appropriate tool to investigate regional time-variable effects in the gravitational field.
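The regional analysis described above rests on a wavelet transform on the sphere. A common form of such a transform is sketched below in standard notation; this is the generic definition used in the spherical-wavelet literature, not a formula quoted from the thesis:

```latex
% Spherical wavelet coefficients of a field F at scale j and position eta:
(WT)(F)(j;\eta) \;=\; \int_{\Omega} \Psi_j(\eta \cdot \xi)\, F(\xi)\, d\omega(\xi),
\qquad \eta \in \Omega,
```

where $\Omega$ is the unit sphere, $\Psi_j$ a zonal wavelet kernel at scale $j$ depending only on the spherical distance $\eta\cdot\xi$, and $d\omega$ the surface measure. Evaluating these coefficients over a fixed region for each monthly gravity field yields exactly the kind of regional time series that is compared against the hydrology models.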
Since the beginning of the 1990s, roundabouts have increasingly been built. Converting existing signalized intersections into roundabouts in particular is seen as advantageous for private motorized traffic, but these advantages come at the expense of public transport. The effects of this intersection type on public transport (ÖPNV) vehicles, investigated here for the first time, are in part considerable: waiting times for public transport vehicles at roundabouts fluctuate and are difficult to integrate into the timetable; the changes of travel direction while driving through the circulatory roadway reduce ride comfort; transit priority measures are rarely applied at roundabouts; and priority through intervention in traffic-signal control is no longer available. Both empirical evaluations of existing roundabouts and simulations showed that priority along the bus route is possible in many cases. At low traffic volumes, priority measures are generally not necessary, and their effect is often in an unfavorable relation to the required effort. At high approach volumes, on the other hand, a dedicated bus lane, for example, provides excellent priority for public transport. With a two-lane, parallel arrangement of car and bus lanes approaching a single-lane roundabout, however, conflicts between cars and buses were frequently observed. No conflicts were observed when the car and bus lanes run parallel in the approach, the car lane ends in a lane reduction immediately before the roundabout, and the likewise-ending bus lane continues as the normal lane. With this layout, called "KREIFAS" (KReisverkehr mit EIngezogenem FAhrstreifen, roundabout with a merged lane), the cars change lanes while the buses can continue straight ahead.
A further priority option at the entry to the circulatory roadway is the "dormant signal". The priority of the vehicles on the circulatory roadway is suspended by a normally dark traffic signal when public transport vehicles approach the roundabout; vehicles from that approach are given priority until the bus has reached the circulatory roadway. These and other measures reduce the waiting times for buses entering the circulatory roadway, smooth the approach, and thus improve timetable adherence and ride comfort. Overall, a balanced consideration of all road users is also required when planning roundabouts. If public transport lines are to be routed through roundabouts with priority, their requirements must be considered particularly carefully. The roundabout thus also offers priority options that largely compensate for the loss of signal-based priority measures.
This work investigates mechanisms for energy harvesting in ambient intelligence systems. First, an overview of the existing options and their underlying physical effects is given. Then, energy harvesting by means of thermoelectric generators is examined in more detail.
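The basic power budget of a thermoelectric generator follows from the standard maximum-power-transfer result for a source with internal resistance: the open-circuit voltage is the Seebeck coefficient times the temperature difference, and the matched-load power is V_oc^2/(4 R_int). The parameter values below are illustrative assumptions, not figures from the thesis:

```python
# Maximum electrical power of a thermoelectric generator at matched load.
# All parameter values are illustrative assumptions.
def teg_max_power(seebeck_v_per_k, n_couples, delta_t_k, r_internal_ohm):
    v_oc = n_couples * seebeck_v_per_k * delta_t_k   # open-circuit voltage [V]
    return v_oc**2 / (4.0 * r_internal_ohm)          # power at R_load = R_int [W]

# 200 uV/K per couple, 100 couples, 10 K temperature difference, 5 ohm internal:
p = teg_max_power(200e-6, 100, 10.0, 5.0)
print(p)  # about 2 mW
```

Such milliwatt-scale figures are the reason thermoelectric harvesting is attractive for low-power ambient intelligence nodes.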
The symplectic group of homogeneous canonical transformations is represented in the bosonic Fock space by the action of the group on the ultracoherent vectors, which are generalizations of the coherent states. The intertwining relations between this representation and the algebra of Weyl operators are derived. They confirm the identification of this representation with Bogoliubov transformations.
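A minimal numerical illustration of a Bogoliubov transformation is the single-mode case b = u a + v a†, which preserves the canonical commutator exactly when u² - v² = 1 (the symplectic condition). This toy check on a truncated Fock space does not touch the ultracoherent-vector representation of the paper; it only verifies the algebraic structure:

```python
import numpy as np

N = 20                                    # Fock-space truncation (levels 0..N-1)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
ad = a.conj().T                           # creation operator

r = 0.7                                   # squeezing parameter (illustrative)
u, v = np.cosh(r), np.sinh(r)             # symplectic condition: u^2 - v^2 = 1
b = u * a + v * ad                        # single-mode Bogoliubov transform

comm = b @ b.conj().T - b.conj().T @ b    # should equal [a, a^dagger] = identity
# Away from the truncation edge the commutator is exactly the identity:
print(np.allclose(comm[:N-1, :N-1], np.eye(N-1)))
```

The cross terms [a, a] and [a†, a†] vanish identically, so [b, b†] = (u² - v²)[a, a†]; only the last Fock level shows the usual truncation artifact.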
This thesis deals with the friction and wear behavior of polytetrafluoroethylene (PTFE) based composites with respect to their application as tribologically loaded machine elements in the temperature range between room temperature and temperatures in cryogenic media. This temperature range is relevant for a number of new, innovative technologies, above all hydrogen technology as an alternative to fossil energy carriers. The starting point of this work is a material selection based on established experience and the corresponding publications. PTFE was chosen as the matrix material because it has already proven itself in low-temperature applications. To reinforce the PTFE matrix, a polymeric filler, polyetheretherketone (PEEK) or an aromatic polyester, and short carbon fibers were selected. These components were combined into a series of composites with systematically varied fiber and filler content. The experimental part focuses on tribological investigations at room temperature using a self-built pin-on-disc test rig. Ground races made of 100Cr6 steel serve as counterbodies. All composites were tested at standard conditions of 1 m/s, 1 MPa and room temperature. One of the most wear-resistant composites was also tested at various speeds and loads. The friction behavior of these materials turned out differently than expected, so that additional experiments were required to observe the transfer film formation process. To put the tribological results into context, pure PTFE and several merely particle-filled PTFE compounds were also tested with respect to friction and wear. Furthermore, the mechanical and thermal material properties important for tribological applications were investigated. In the discussion, the influences of the fillers and fibers on the resulting mechanical, thermal and tribological material properties are evaluated. The materials procured or produced within this work were in parallel subjected to tribological loading in various cryogenic media, in particular liquid hydrogen, at the Federal Institute for Materials Research and Testing (BAM), Berlin. The results obtained there are also briefly discussed.
In conventional radio communication systems, the system design generally starts from the transmitter (Tx), i.e. the signal processing algorithm in the transmitter is a priori selected, and then the signal processing algorithm in the receiver is a posteriori determined to obtain the corresponding data estimate. Therefore, in these conventional communication systems, the transmitter can be considered the master and the receiver the slave. Consequently, such systems can be termed transmitter (Tx) oriented. In the case of Tx orientation, the a priori selected transmitter algorithm can be chosen with a view to arriving at particularly simple transmitter implementations. This advantage has to be countervailed by a higher implementation complexity of the a posteriori determined receiver algorithm. As opposed to the conventional scheme of Tx orientation, the design of communication systems can alternatively start from the receiver (Rx). Then, the signal processing algorithm in the receiver is a priori determined, and the transmitter algorithm results a posteriori. Such an unconventional approach to system design can be termed receiver (Rx) oriented. In the case of Rx orientation, the receiver algorithm can be a priori selected in such a way that the receiver complexity is minimum, and the a posteriori determined transmitter has to tolerate more implementation complexity. In practical communication systems the implementation complexity corresponds to the weight, volume, cost, etc. of the equipment. Therefore, the complexity is an important aspect which should be taken into account when building practical communication systems. In mobile radio communication systems, the complexity of the mobile terminals (MTs) should be as low as possible, whereas more complicated implementations can be tolerated in the base station (BS).
Having in mind the above mentioned complexity features of the rationales Tx orientation and Rx orientation, in the uplink (UL), i.e. in the radio link from the MT to the BS, the quasi-natural choice would be Tx orientation, which leads to low cost transmitters at the MTs, whereas in the downlink (DL), i.e. in the radio link from the BS to the MTs, the rationale Rx orientation would be the favored alternative, because it results in simple receivers at the MTs. Mobile radio downlinks with the rationale Rx orientation are considered in the thesis. Modern mobile radio communication systems are cellular systems, in which both intracell and intercell interference exist. These interferences are the limiting factors for the performance of mobile radio systems. The intracell interference can be eliminated or at least reduced by joint signal processing which considers all the signals in the cell under consideration. However, such joint signal processing is not feasible for the elimination of intercell interference in practical systems. Knowing that the detrimental effect of intercell interference grows with its average energy, the transmit energy radiated from the transmitter should be as low as possible to keep the intercell interference low. Low transmit energy is also required with respect to the growing electro-phobia of the public. The transmit energy reduction for multi-user mobile radio downlinks by the rationale Rx orientation is dealt with in the thesis. Among the questions still open in this research area, two questions of major importance are considered here. MIMO is an important feature with respect to the transmit power reduction of mobile radio systems. Therefore, the first question concerns linear Rx oriented transmission schemes combined with MIMO antenna structures; the benefit of MIMO for these linear Rx oriented transmission schemes is investigated in the thesis.
The utilization of unconventional multiply connected quantization schemes at the receiver also has great potential to reduce the transmit energy. Therefore, the second question concerns the design of non-linear Rx oriented transmission schemes combined with multiply connected quantization schemes.
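The Rx-oriented rationale can be made concrete with the simplest linear example, transmit zero-forcing: the receivers are fixed a priori to be trivial (each MT just reads its channel output), so the base station must pre-invert the channel a posteriori. This is a generic textbook sketch in a noise-free flat channel, not the specific schemes developed in the thesis; the dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 4, 6                                  # 4 single-antenna MTs, 6 BS antennas
H = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))  # DL channel
d = rng.choice([-1.0, 1.0], size=K) + 1j * rng.choice([-1.0, 1.0], size=K)  # QPSK

# Rx orientation: the receiver algorithm is fixed (identity), so the transmit
# zero-forcing precoder is determined a posteriori from the channel.
P = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # precoder with H @ P = I
x = P @ d                                        # signal radiated by the BS
y = H @ x                                        # received at the MTs (no noise)
print(np.allclose(y, d))                         # trivial receivers recover d
```

The implementation burden (channel inversion) sits entirely at the BS, which is exactly the complexity asymmetry the downlink argument above calls for.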
Territory design may be viewed as the problem of grouping small geographic areas into larger geographic clusters called territories in such a way that the latter are acceptable according to relevant planning criteria. In this paper we review the existing literature for applications of territory design problems and solution approaches for solving these types of problems. After identifying features common to all applications we introduce a basic territory design model and present in detail two approaches for solving this model: a classical location–allocation approach combined with optimal split resolution techniques and a newly developed computational geometry based method. We present computational results indicating the efficiency and suitability of the latter method for solving large–scale practical problems in an interactive environment. Furthermore, we discuss extensions to the basic model and its integration into Geographic Information Systems.
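The classical location–allocation idea named above can be sketched in its simplest squared-Euclidean toy variant (essentially k-means): alternate between allocating basic areas to the nearest center and relocating each center to the centroid of its territory. This is a generic sketch, not the paper's geometry-based method, and it omits balancing constraints and split resolution:

```python
import numpy as np

def location_allocation(points, k, iters=50, seed=0):
    """Alternate allocation (nearest center) and location (centroid) steps."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(1)                      # allocation step
        for j in range(k):                         # location step
            if np.any(assign == j):
                centers[j] = points[assign == j].mean(0)
    return centers, assign

rng = np.random.default_rng(2)
pts = np.vstack([rng.normal([0.0, 0.0], 0.3, (40, 2)),   # two well-separated
                 rng.normal([5.0, 5.0], 0.3, (40, 2))])  # groups of basic areas
centers, assign = location_allocation(pts, 2)
```

For well-separated groups the heuristic recovers the natural two-territory partition; real territory design adds balance and contiguity criteria on top of this skeleton.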
Channel estimation is of great importance in many wireless communication systems, since it significantly influences the overall performance of a system. Especially in multi-user and/or multi-antenna systems, i.e. generally in multi-branch systems, the requirements on channel estimation are very high, since the training signals or so-called pilots that are used for channel estimation suffer from multiple access interference. Recently, in the context of such systems, more and more attention is paid to concepts for joint channel estimation (JCE), which have the capability to eliminate the multiple access interference and also the interference between the channel coefficients. The performance of JCE can be evaluated in noise limited systems by the SNR degradation and in interference limited systems by the variation coefficient. Theoretical analysis carried out in this thesis verifies that both performance criteria are closely related to the patterns of the pilots used for JCE, no matter whether the signals are represented in the time domain or in the frequency domain. Optimum pilots like disjoint pilots, Walsh code based pilots or CAZAC code based pilots, whose constructions are described in this thesis, do not show any SNR degradation when applied to multi-branch systems. It is shown that optimum pilots constructed in the time domain become optimum pilots in the frequency domain after a discrete Fourier transform. Correspondingly, optimum pilots in the frequency domain become optimum pilots in the time domain after an inverse discrete Fourier transform. However, even for optimum pilots different variation coefficients are obtained in interference limited systems. Furthermore, especially for OFDM-based transmission schemes, the peak-to-average power ratio (PAPR) of the transmit signal is an important decision criterion for choosing the most suitable pilots.
CAZAC code based pilots are the only pilots among the regarded pilot constructions that result in a PAPR of 0 dB for the transmit signal originating from the transmitted pilots. Summarizing the analysis regarding the SNR degradation, the variation coefficient and the PAPR with respect to one single service area, and considering the impact of interference from adjacent service areas that occurs due to a certain choice of the pilots, one can conclude that CAZAC codes are the most suitable pilots for the application in JCE of multi-carrier multi-branch systems, especially if CAZAC codes originating from different mother codes are assigned to different adjacent service areas. The theoretical results of the thesis are verified by simulation results. The choice of the parameters for the frequency domain or time domain JCE is guided by the evaluated implementation complexity. For the chosen parameterization of the regarded OFDM-based and FMT-based systems it is shown that a frequency domain JCE is the best choice for OFDM and a time domain JCE is the best choice for FMT when applying CAZAC codes as pilots. The results of this thesis can serve as a basis for further theoretical research and for future JCE implementations in wireless systems.
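The two defining CAZAC properties, constant amplitude (hence 0 dB PAPR of the pilot signal) and zero cyclic autocorrelation, can be verified directly for one standard CAZAC family, the Zadoff-Chu sequences; the thesis's exact pilot constructions may differ, so this is only an illustration of the sequence class:

```python
import numpy as np

def zadoff_chu(N, root):
    """Zadoff-Chu sequence of odd length N with gcd(root, N) = 1:
    a standard CAZAC (constant amplitude zero autocorrelation) construction."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * root * n * (n + 1) / N)

p = zadoff_chu(63, 1)
amp = np.abs(p)                                   # constant amplitude everywhere
ac = np.array([np.vdot(p, np.roll(p, s)) for s in range(1, 63)])
print(np.allclose(amp, 1.0))                      # True: 0 dB PAPR pilot signal
print(np.max(np.abs(ac)) < 1e-9)                  # True: zero cyclic autocorr.
```

The zero cyclic autocorrelation for all nonzero shifts is what lets cyclically shifted copies of one mother code serve as mutually non-interfering pilots in a JCE.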
In this work we introduce a new bandlimited spherical wavelet: the Bernstein wavelet. It possesses several interesting properties; in particular, we are able to construct bandlimited wavelets free of oscillations. The scaling function of this wavelet is investigated with regard to the spherical uncertainty principle, i.e., its localization in the space domain as well as in the momentum domain is calculated and compared to the well-known Shannon scaling function. Surprisingly, they possess the same localization in space although one is highly oscillating whereas the other one shows no oscillatory behavior. Moreover, the Bernstein scaling function turns out to be the first bandlimited scaling function known in the literature whose uncertainty product tends to the minimal value 1.
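The oscillatory behavior of the Shannon scaling function contrasted above is easy to exhibit numerically: its Legendre symbol equals 1 for all degrees up to the bandwidth, so its kernel is a Dirichlet-type Legendre sum that swings below zero. This sketch shows only the Shannon side of the comparison; it does not reproduce the Bernstein construction itself:

```python
import numpy as np
from numpy.polynomial import legendre

# Shannon scaling kernel of bandwidth N: sum_{n=0}^{N} (2n+1) P_n(t).
# (The usual 1/(4 pi) normalization is omitted; it does not affect signs.)
N = 10
t = np.linspace(-1.0, 1.0, 2001)
shannon = legendre.legval(t, 2 * np.arange(N + 1) + 1)
print(shannon[-1])        # value at t = 1 is sum (2n+1) = (N+1)^2 = 121
print(shannon.min() < 0)  # True: the kernel oscillates below zero
```

A bandlimited kernel whose symbol decays smoothly instead of cutting off abruptly can avoid these sign changes, which is the qualitative point of the Bernstein construction.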
The Folgar-Tucker equation (FTE) is the model most frequently used for the prediction of fiber orientation (FO) in simulations of the injection molding process for short-fiber reinforced thermoplastics. In contrast to its widespread use in injection molding simulations, little is known about the mathematical properties of the FTE: an investigation of e.g. its phase space M_FT has been presented only recently. The restriction of the dependent variable of the FTE to the set M_FT turns the FTE into a differential algebraic system (DAS), a fact which is commonly neglected when devising numerical schemes for the integration of the FTE. In this article we present some recent results on the problem of trace stability as well as some introductory material which complements our recent paper.
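The algebraic constraint behind the DAS viewpoint is tr(A) = 1 for the second-order orientation tensor A. A diffusion-only sketch of the FTE (the full equation adds convective terms driven by the velocity gradient) shows that the constraint is preserved exactly, even by a plain explicit Euler scheme; the interaction coefficient and shear rate below are illustrative values:

```python
import numpy as np

# Diffusion-only sketch of the Folgar-Tucker equation:
#   dA/dt = 2 * Ci * gdot * (I - 3 A),
# whose right-hand side is trace-free whenever tr(A) = 1.
Ci, gdot, dt = 0.01, 1.0, 0.01        # interaction coefficient, shear rate, step
A = np.diag([1.0, 0.0, 0.0])          # fully aligned initial orientation tensor
I = np.eye(3)
for _ in range(40000):
    A = A + dt * 2 * Ci * gdot * (I - 3 * A)   # explicit Euler step
print(np.trace(A))                    # stays at 1: the DAS constraint
print(np.allclose(A, I / 3))          # pure rotary diffusion drives A to isotropy
```

With the convective terms included, discrete schemes no longer preserve tr(A) = 1 automatically, which is precisely why the trace-stability question studied in the article matters.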
The use of polymers in various tribological situations has become state of the art. Owing to the advantages of self-lubrication and superior cleanliness, more and more polymer composites are now being used as sliding elements which were formerly composed of metallic materials only. The feature that makes polymer composites so promising in industrial applications is the opportunity to tailor their properties with special fillers. The main aim of this study was to stress the importance of integrating various functional fillers in the design of wear-resistant polymer composites and to understand the role of fillers in modifying the wear behaviour of the materials. Special emphasis was placed on enhancing the wear resistance of thermosetting and thermoplastic matrix composites with nano-TiO2 particles (with a diameter of 300 nm).

In order to optimize the content of various fillers, the tribological performance of a series of epoxy-based composites, filled with short carbon fibre (SCF), graphite, PTFE and nano-TiO2 in different proportions and combinations, was investigated. The patterns of frictional coefficient, wear resistance and contact temperature were examined with a pin-on-disc apparatus under dry sliding conditions at different contact pressures and sliding velocities. The experimental results indicated that the addition of nano-TiO2 effectively reduced the frictional coefficient, and consequently the contact temperature, of short-fibre reinforced epoxy composites. Based on scanning electron microscopy (SEM) and atomic force microscopy (AFM) observations of the worn surfaces, a positive rolling effect of the nanoparticles between the material pairs was proposed, which led to a remarkable reduction of the frictional coefficient. In particular, this rolling effect protected the SCF from more severe wear mechanisms, especially at high sliding pressures and speeds. As a result, the load-carrying capacity of the materials was significantly improved. In addition, the different contributions of two solid lubricants, PTFE powders and graphite flakes, to the tribological performance of epoxy nanocomposites were compared. It seems that graphite contributes to the improved wear resistance in general, whereas PTFE can easily form a transfer film and reduce the wear rate, especially in the running-in period. A combination of SCF and solid lubricants (PTFE and graphite) together with TiO2 nanoparticles can achieve a synergistic effect on the wear behaviour of the materials.

The favourable effect of nanoparticles detected in epoxy composites was also found in the investigations of a thermoplastic matrix, e.g. polyamide (PA) 66. It was found that nanoparticles could remarkably reduce the friction coefficient and wear rate of the PA 66 composite when additionally incorporated with short carbon fibres and graphite flakes. In particular, the addition of nanoparticles contributed to an obvious enhancement of the tribological performance of short-fibre reinforced, high-temperature resistant polymers, e.g. polyetherimide (PEI), especially under extreme sliding conditions.

A procedure was proposed to correlate the contact temperature and the wear rate with the frictionally dissipated energy. Based on this energy consideration, a better interpretation of the different performance of distinct tribo-systems is possible. The validity of the model was illustrated for various sliding tests under different conditions. Although simple quantitative formulations cannot be expected at present, the study may lead to a fundamental understanding of the mechanisms controlling friction and wear from a general system point of view. Moreover, using the energy-based models, an artificial neural network (ANN) approach was applied to the experimental data. The well-trained ANN has the potential to be further used for online monitoring and prediction of wear progress in practical applications.
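The energy-based correlation described above can be sketched as a toy fit: wear volume is modeled as proportional to the frictionally dissipated energy E = mu * F_N * v * t. All numbers below are synthetic and illustrative, not measured data from the study, and the simple through-the-origin least-squares fit stands in for the study's models:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic pin-on-disc runs: normal load [N], sliding speed [m/s],
# duration [s], friction coefficient (all values illustrative).
F = rng.uniform(10, 50, 30)
v = rng.uniform(0.5, 2.0, 30)
t = rng.uniform(600, 3600, 30)
mu = rng.uniform(0.2, 0.4, 30)

E = mu * F * v * t                        # frictionally dissipated energy [J]
k_true = 2.0e-6                           # assumed energy-specific wear rate [mm^3/J]
V = k_true * E * (1 + 0.05 * rng.standard_normal(30))   # "measured" wear volumes

k_fit = float(np.sum(E * V) / np.sum(E * E))   # least squares through the origin
print(k_fit)                              # recovers roughly 2e-6 mm^3/J
```

An energy-specific wear rate of this kind is the scalar feature that an ANN can then generalize across load and speed combinations.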
The scientific and industrial interest devoted to polymer/layered silicate nanocomposites, due to their outstanding properties and novel applications, has resulted in numerous studies in the last decade. They cover mostly thermoplastic- and thermoset-based systems. Recently, studies on rubber/layered silicate nanocomposites have been started as well. It has been shown how complex the nanocomposite formation in the related systems may be. Therefore the rules governing their structure-property relationships have to be clarified. In this thesis, the related aspects were addressed.

For the investigations, several ethylene propylene diene rubbers (EPDM) of polar and non-polar origin were selected, as well as the more polar hydrogenated acrylonitrile butadiene rubber (HNBR). Polarity was found to be beneficial for the nanocomposite formation, as it assisted the intercalation of the polymer chains within the clay galleries. This favored the development of exfoliated structures. By finding an appropriate processing procedure, i.e. compounding in a kneader instead of on an open mill, the mechanical performance of the nanocomposites was significantly improved. The complexity of the nanocomposite formation in the rubber/organoclay system was demonstrated. The observed deintercalation of the organoclay was traced to the vulcanization system used. It was evidenced in an indirect way that during sulfur curing the primary amine clay intercalant leaves the silicate surface and migrates into the rubber matrix. This was explained by its participation in the sulfur-rich Zn-complexes created. Thus, by using quaternary amine clay intercalants (as presented for EPDM or HNBR compounds) the deintercalation was eliminated. The organoclay intercalation/deintercalation detected for the primary amine clay intercalants was controlled by means of peroxide curing (as presented for HNBR compounds), where the vulcanization mechanism differs from that of sulfur curing.

The analysis showed that by selecting the appropriate organoclay type the properties of the nanocomposites can be tailored. This occurs via the generation of different nanostructures (i.e. exfoliated, intercalated or deintercalated). In all cases, the rubber/organoclay nanocomposites exhibited better performance than vulcanizates with traditional fillers, like silica or unmodified (pristine) layered silicates. The mechanical and gas permeation behavior of the respective nanocomposites was modelled. It was shown that models (e.g. Guth's or Nielsen's equations) developed for "traditional" vulcanizates can be used when specific aspects are taken into consideration. These involve characteristics related to the platy structure of the silicates, i.e. their aspect ratio after compounding (appearance of platelet stacks) and their orientation in the rubber matrix (order parameter).
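The Guth and Nielsen models named above have simple closed forms: Guth's equation gives the relative modulus of a composite with anisometric filler of shape factor f, and Nielsen's tortuosity model gives the relative gas permeability for impermeable platelets of aspect ratio alpha. These are the standard textbook forms; the filler parameters below are illustrative assumptions, not fitted values from the thesis:

```python
# Guth's equation for anisometric fillers (shape factor f, volume fraction phi):
#   E/E0 = 1 + 0.67*f*phi + 1.62*(f*phi)^2
def guth_modulus(E0, phi, f):
    return E0 * (1 + 0.67 * f * phi + 1.62 * (f * phi) ** 2)

# Nielsen's tortuosity model for platelet-filled membranes (aspect ratio alpha):
#   P/P0 = (1 - phi) / (1 + (alpha/2)*phi)
def nielsen_permeability(phi, alpha):
    return (1 - phi) / (1 + (alpha / 2) * phi)

print(guth_modulus(1.0, 0.05, 10))      # relative modulus at 5 vol% filler
print(nielsen_permeability(0.05, 100))  # relative permeability, alpha = 100
```

The strong sensitivity of both formulas to f and alpha is why the effective aspect ratio after compounding and the platelet orientation enter the modelling, as noted above.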
Sterically demanding cyclopentadienyl ligands were employed to stabilize new mono(cyclopentadienyl) compounds of the heavy alkaline earth metals, and the possibility of functionalizing these species was demonstrated exemplarily by the synthesis of neutral triple-decker sandwich complexes. The resulting molecular structures can be reliably predicted by DFT calculations. In this context the cyclononatetraenyl ligand, whose coordination properties had so far been only insufficiently investigated, was also employed. Within this work, the synthesis of bis(cyclononatetraenyl)barium, Ba(C9H9)2, and its spectroscopic characterization were achieved. DFT calculations predict for this complex a metallocene structure with nearly parallel rings and a Ba-ring distance of 2.37 Å. Using the tetraisopropylcyclopentadienyl (4Cp) and tri(tert-butyl)cyclopentadienyl (Cp') ligands, bis- and mono(cyclopentadienyl) compounds of the early and late lanthanides were synthesized. Of particular interest in this context is the successful preparation of the azido cluster [Na(dme)3]2[4Cp6Yb6(N3)14] (4Cp = (Me2CH)4C5H), which unites the different coordination modes of the azido ligand in a single complex. Comparable complexes were previously unknown in organolanthanide chemistry. Substitution at the cyclopentadienyl system allows its electronic and steric properties to be altered significantly. The consequences of these effects can be demonstrated very impressively with manganocene complexes, in which the low-spin and high-spin states differ only very little in energy. The electronic ground state of a series of differently substituted manganocene complexes was determined by solid-state magnetism, ESR, X-ray structure analysis, EXAFS and variable-temperature UV-Vis spectroscopy, and correlated with the substitution pattern of the cyclopentadienyl system.
Spin equilibria were detected for [(Me3C)C5H4]2Mn, [(Me3C)2C5H3]2Mn and [(Me3C)(Me3Si)C5H3]2Mn. Theoretical calculations postulate that cerocene, Ce(C8H8)2, is an example of a molecule with a mixed-configuration ground state, which can be described by 80 % [(Ce)f1e2u(cot)e2u3] and 20 % [(Ce)f0e2u(cot)e2u4]. Although this molecule has been known since 1976, its electronic structure remains highly controversial to this day. Within this work, new synthetic routes to this compound were developed and its electronic structure was investigated by solid-state magnetic measurements as well as EXAFS and XANES studies. The data obtained are in very good agreement with the theoretical calculations and demonstrate the importance of a mixed-configuration ground state for the bonding in organometallic complexes of the f-block metals. While cerocene exhibits only temperature-independent paramagnetism (TIP), a strong temperature dependence of the magnetic susceptibility is found in ytterbium systems of the type Cp'2Yb(bipy') [Cp' and bipy' are substituted cyclopentadienyl or 4,4'-substituted 2,2'-bipyridyl ligands]. Temperature-dependent XANES experiments show that these systems likewise possess a mixed-configuration ground state, which can be described by [(Yb)f14(bipy)b1()0] and [(Yb)f13(bipy)b1()1]. The relative contribution of the two wave functions to the ground state is significantly influenced by substitution at the 2,2'-bipyridyl or cyclopentadienyl system. Models that qualitatively describe this behavior were developed within this work. A kinetically stabilized, adduct-free titanocene was prepared using the di(tert-butyl)cyclopentadienyl ligand, and its reactivity towards small molecules, e.g. CO, N2 and H2, was investigated.
In the course of the reactivity studies, 2,2'-bipyridyl adducts of the Cp'2Ti fragment were also synthesized and their magnetic properties explored. By variations at the 2,2'-bipyridyl system, the singlet-triplet splitting in this system can be tuned in a targeted manner.
In the field of gravity determination a special kind of boundary value problem, or rather an ill-posed satellite problem, occurs; the data and hence the side condition of our PDE are oblique second order derivatives of the gravitational potential. In mathematical terms this means that our gravitational potential \(v\) fulfills \(\Delta v = 0\) in the exterior space of the Earth and \(\mathscr D v = f\) on the discrete data locations, which are on the Earth's surface for terrestrial measurements and on a satellite track in the exterior for spaceborne measurement campaigns. \(\mathscr D\) is a first order derivative for methods like geometric-astronomic levelling and satellite-to-satellite tracking (e.g. CHAMP); it is a second order derivative for other methods like terrestrial gradiometry and satellite gravity gradiometry (e.g. GOCE). Classically one can handle first order side conditions which are not tangential to the surface and second derivatives pointing in the radial direction by employing integral and pseudodifferential equation methods. We present a different approach: we classify all first and purely second order operators \(\mathscr D\) which fulfill \(\Delta \mathscr D v = 0\) whenever \(\Delta v = 0\). This allows us to solve the problem with oblique side conditions as if we had ordinary, i.e. non-derived, side conditions. The only additional work which has to be done is an inversion of \(\mathscr D\), i.e. integration.
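The key property, that suitable derivative operators map harmonic functions to harmonic functions, can be checked numerically for the simplest case: the Cartesian x-derivative applied to the harmonic potential v = 1/r. This toy check illustrates why \(\mathscr D v\) can then be treated like ordinary (non-derived) boundary data; it is not the full classification of the paper:

```python
import numpy as np

def Dv(x, y, z):
    """D applied to the harmonic potential v = 1/r, with D = d/dx:
    Dv = -x / r**3 (computed analytically)."""
    r = np.sqrt(x**2 + y**2 + z**2)
    return -x / r**3

def num_laplacian(f, x, y, z, h=1e-3):
    """Central finite-difference Laplacian at a point away from the origin."""
    return ((f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z))
          + (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z))
          + (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h))) / h**2

print(num_laplacian(Dv, 1.0, 2.0, 2.0))   # ~0: Dv is again harmonic
```

Since Dv solves the Laplace equation itself, one can solve the problem for Dv with standard methods and afterwards recover v by inverting D, i.e. by integration, exactly as described above.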
This thesis investigates the constrained forms of the spherical minimax location problem and the spherical Weber location problem. Specifically, we consider the problem of locating a new facility on the surface of the unit sphere in the presence of convex spherical polygonal restricted regions and forbidden regions, such that either the maximum weighted distance or the sum of the weighted distances from the new facility to m existing facilities on the surface of the unit sphere is minimized. It is assumed that a forbidden region is an area on the surface of the unit sphere where travel and facility location are not permitted, and that distance is measured by the great circle arc distance. We present a polynomial time algorithm for the spherical minimax location problem for the special case where all the existing facilities are located on the surface of a hemisphere. Further, we develop algorithms for the spherical Weber location problem with barrier distances on a hemisphere as well as on the unit sphere.
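The unconstrained minimax objective can be illustrated with the great circle arc distance and a brute-force grid search on the sphere; this sketch ignores restricted and forbidden regions and is not the polynomial-time hemisphere algorithm of the thesis:

```python
import numpy as np

def arc_dist(p, q):
    """Great circle arc distance between unit vectors p and q."""
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def minimax_location(facilities, weights, n_grid=200):
    """Minimize the maximum weighted arc distance over a lat/lon grid."""
    best, best_val = None, np.inf
    for theta in np.linspace(0, np.pi, n_grid):
        for phi in np.linspace(0, 2 * np.pi, 2 * n_grid, endpoint=False):
            x = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi), np.cos(theta)])
            val = max(w * arc_dist(x, f) for w, f in zip(weights, facilities))
            if val < best_val:
                best, best_val = x, val
    return best, best_val

ex = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]  # two equator points
x, val = minimax_location(ex, [1.0, 1.0])
print(val)   # close to pi/4, the distance from the geodesic midpoint
```

For two equally weighted facilities a quarter circle apart, the optimum is the geodesic midpoint with objective value pi/4, which the grid search approximates.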
In modern textile manufacturing, the function of the human eye in detecting disturbances of the production process which yield defective products has been taken over by cameras. The camera images are analyzed with various methods to detect these disturbances automatically. There are, however, still problems, in particular with the semi-regular textures that are typical for weaving patterns. We study three parts of this problem of automatic texture analysis: image smoothing, texture synthesis and defect detection. In image smoothing, we develop a two-dimensional kernel smoothing method with locally and directionally adaptive bandwidths that allows for correlation in the errors. Two approaches are used for synthesising texture: the first is based on constructing a generalized Ising energy function in the Markov random field setup, and for the second we use two-dimensional periodic bootstrap methods for semi-regular texture synthesis. We treat defect detection as a multi-hypothesis testing problem, with the null hypothesis representing the absence of defects and the other hypotheses representing various types of defects. We develop a test based on a nonparametric regression setup and use the bootstrap to approximate the distribution of our test statistic.
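The effect of two-dimensional kernel smoothing can be demonstrated with a fixed-bandwidth separable Gaussian smoother on a synthetic noisy texture; the thesis's method is locally and directionally adaptive, so this is only a simplified stand-in on illustrative data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
truth = np.sin(2 * np.pi * xx / n) * np.cos(2 * np.pi * yy / n)  # toy texture
noisy = truth + 0.5 * rng.standard_normal((n, n))

def gauss_smooth(img, h):
    """Separable Gaussian kernel smoother with fixed bandwidth h (pixels)."""
    r = np.arange(-int(3 * h), int(3 * h) + 1)
    k = np.exp(-0.5 * (r / h) ** 2)
    k /= k.sum()
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'same'), 1, tmp)

smooth = gauss_smooth(noisy, 2.0)
mse = lambda a: float(np.mean((a - truth) ** 2))
print(mse(noisy), mse(smooth))   # smoothing reduces the error against the truth
```

A fixed bandwidth blurs edges along with noise; making the bandwidth locally and directionally adaptive, as in the thesis, preserves the oriented structures of weaving patterns.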
The testing of new vehicle axles or axle variants on the basis of load data from road measurements is usually carried out with complex multi-channel test rigs. In such tests, the wheel hub forces and moments measured during driving are generally to be reproduced by the test rig. Because of the complex interactions between the specimen and the testing machine, every new concept raises the question of whether the desired test can be carried out with a given test system setup, or which configuration of the test system appears suitable for the planned test. This thesis describes the modelling of a novel axle test system concept based on two hexapods. In addition to the geometric arrangement of the test system, the model also covers the hydraulics and the internal controller. The test system model was developed as a so-called template within the vehicle simulation program ADAMS/Car and can be coupled with various axle models to form a complete system. On this complete model, all work steps occurring on the real test system, such as controller tuning, drive file iteration and simulation, can be carried out. Geometric or hydraulic parameters can easily be changed in order to optimally adapt the test system to the specimen and the prescribed load data. The model developed within the project supports and accompanies the introduction of the new axle test system concept on the one hand, and can be used for the virtual preparation of test runs on the other. Using a front and a rear axle as examples, the general procedure is explained and the new possibilities opened up by the test system simulation are demonstrated.
This technical report describes three tasks for testing or loading different facets of working memory capacity. The tasks are based in part on material from Oberauer (1993) and Oberauer et al. (2000, 2003). They were programmed in RSVP and run on Apple Macintosh computers. The tasks are suitable for the computer-based assessment or loading of working memory capacity in individual sessions, in some cases also in group sessions, and are mainly used in research contexts. For each task, the concept, the procedure, options for scoring and application, and, where available, comparison data are described.
An autoregressive ARCH model with possible exogenous variables is treated. We estimate the conditional volatility of the model by applying feedforward networks to the residuals and prove consistency and asymptotic normality of the estimates under conditions on the growth rate of the feedforward network complexity. Recurrent neural network estimates of GARCH and value-at-risk are studied. We prove consistency and asymptotic normality of the recurrent neural network ARMA estimator under conditions on the growth rate of the recurrent network complexity. We also overcome the estimation problem in stochastic variance models in discrete time by feedforward networks and the introduction of new distributions for the innovations. We use the method to calculate market risk measures such as expected shortfall and value-at-risk. We tested these distributions, together with other new distributions in the GARCH family of models, against distributions commonly used in financial markets, such as the Normal Inverse Gaussian, normal and Student's t distributions. As an application of the models, some German stocks are studied and the different approaches are compared, together with the most common method of a GARCH(1,1) fit.
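As a rough illustration of the benchmark model class, a GARCH(1,1) path can be simulated directly from its variance recursion. The parameters and the Gaussian innovations below are arbitrary choices for the sketch, not estimates from the thesis, which also considers heavier-tailed innovation distributions such as the Normal Inverse Gaussian and Student's t:

```python
import math
import random

def garch11_path(omega, alpha, beta, n, seed=0):
    """Simulate a GARCH(1,1) return series r_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
    started at the stationary variance omega / (1 - alpha - beta)."""
    rng = random.Random(seed)
    var = omega / (1 - alpha - beta)
    returns, variances = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)          # Gaussian innovation
        r = math.sqrt(var) * z
        returns.append(r)
        variances.append(var)
        var = omega + alpha * r * r + beta * var
    return returns, variances

# illustrative parameters with persistence alpha + beta = 0.9
rets, vars_ = garch11_path(omega=0.1, alpha=0.1, beta=0.8, n=5000)
```

With these parameters the stationary variance is 0.1 / (1 - 0.9) = 1, so the sample second moment of the returns hovers around one.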
There is a well known relationship between alternating automata on finite words and symbolically represented nondeterministic automata on finite words. This relationship is of practical relevance because it makes it possible to combine the advantages of alternating and of symbolically represented nondeterministic automata on finite words. For infinite words, however, the situation is unclear. Therefore, this work investigates the relationship between alternating omega-automata and symbolically represented nondeterministic omega-automata. We identify classes of alternating omega-automata that are as expressive as safety, liveness and deterministic prefix automata, respectively. Moreover, some very simple symbolic nondeterminisation procedures are developed for the classes corresponding to safety and liveness properties.
In this paper we introduce a derivative-free, iterative method for solving nonlinear ill-posed problems \(Fx=y\), where instead of \(y\) noisy data \(y_\delta\) with \(|| y-y_\delta ||\leq \delta\) are given and \(F:D(F)\subseteq X \rightarrow Y\) is a nonlinear operator between Hilbert spaces \(X\) and \(Y\). This method is defined by splitting the operator \(F\) into a linear part \(A\) and a nonlinear part \(G\), such that \(F=A+G\). Then iterations are organized as \(A u_{k+1}=y_\delta-Gu_k\). In the context of ill-posed problems we consider the situation when \(A\) does not have a bounded inverse, thus each iteration needs to be regularized. Under some conditions on the operators \(A\) and \(G\) we study the behavior of the iteration error. We obtain its stability with respect to the iteration number \(k\) as well as the optimal convergence rate with respect to the noise level \(\delta\), provided that the solution satisfies a generalized source condition. As an example, we consider an inverse problem of initial temperature reconstruction for a nonlinear heat equation, where the nonlinearity appears due to radiation effects. The obtained iteration error in the numerical results has the theoretically expected behavior. The theoretical assumptions are illustrated by a computational experiment.
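The splitting iteration \(Au_{k+1}=y_\delta-Gu_k\) with a regularized linear solve can be mimicked in a toy finite-dimensional setting. The diagonal operator, the sine nonlinearity, the noise level and the regularization parameter below are all invented for illustration and do not come from the paper:

```python
import math

def solve_tikhonov_diag(a, b, alpha):
    """Tikhonov-regularized solve of the diagonal system diag(a) u = b:
    componentwise minimizer of |a_i u_i - b_i|^2 + alpha u_i^2."""
    return [ai * bi / (ai * ai + alpha) for ai, bi in zip(a, b)]

# illustrative 2-component problem F(u) = A u + G(u) = y_delta
a = [1.0, 0.1]                                   # diagonal linear part A
G = lambda u: [0.01 * math.sin(u[1]), 0.01 * math.sin(u[0])]  # small nonlinearity
u_true = [1.0, 1.0]
y = [a[i] * u_true[i] + G(u_true)[i] for i in range(2)]
y_delta = [y[0] + 1e-3, y[1] - 1e-3]             # fixed "noisy" data

u = [0.0, 0.0]
for _ in range(50):                              # A u_{k+1} = y_delta - G(u_k)
    g = G(u)
    u = solve_tikhonov_diag(a, [y_delta[i] - g[i] for i in range(2)], alpha=1e-4)
```

Because the nonlinearity is small relative to the regularized inverse of the linear part, the iteration is a contraction and settles near the true solution, up to the bias introduced by the noise and the regularization, which is largest in the weakly conditioned second component.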
This thesis deals with the development of thermoplastic polyolefin elastomers using recycled polyolefins and ground tyre rubber (GTR). The disposal of worn tyres and their economic recycling pose a great challenge nowadays. Material recycling is the preferred route in Europe owing to legislative actions and ecological arguments. A first step in this direction has already been taken, as GTR is available in different fractions of guaranteed quality. As the traditional applications of GTR are saturated, there is a great demand for new, value-added products containing GTR. The objective of this work was therefore to convert GTR by reactive blending with polyolefins into thermoplastic elastomers (TPE) of suitable mechanical and rheological properties. It has been established that bituminous reclamation of GTR prior to extrusion melt compounding with polyolefins is a promising way of TPE production. In this way the sol content (acetone-soluble fraction) of the GTR increases and the GTR particles can be better incorporated in the corresponding polyolefin matrix. The adhesion between GTR and matrix is provided by molecular intermingling in the resulting interphase. GTR particles of various origins and mean particle sizes were involved in this study. Recycled low-density polyethylene (LDPE), recycled high-density polyethylene (HDPE) and polypropylene (PP) were selected as polyolefins. First, the optimum conditions for the GTR reclamation in bitumen were established (160 °C < T < 180 °C; time ca. 4 hours). Polyolefin-based TPEs were produced after GTR reclamation by extrusion compounding. Their mechanical (tensile behaviour, set properties), thermal (dynamic-mechanical thermal analysis, differential scanning calorimetry) and rheological properties (at both low and high shear rates) were determined. The PE-based blends contained an ethylene/propylene/diene (EPDM) rubber as compatibilizer and their composition was as follows: PE/EPDM/GTR:bitumen = 50/25/25:25.
The selected TPEs met the most important criteria, i.e. elongation at break > 100 % and compression set < 50 %. The LDPE-based TPE (TPE(LDPE)) showed better mechanical performance than the TPE(HDPE), which was attributed to the higher crystallinity of the HDPE. The PP-based blends of the compositions PP/(GTR-bitumen) 50/50 and 25/75, in which the ratio of GTR/bitumen was 60/40, outperformed those containing non-reclaimed GTR. The related blends also showed better compatibility with a PP-based commercial thermoplastic dynamic vulcanizate (TDV). Surprisingly, the mean particle size of the GTR, varied between < 0.2 and 0.4-0.7 mm, had only a small effect on the mechanical properties, though a somewhat larger one on the rheological behaviour of the TPEs produced.
In this thesis, ab initio methods were used to investigate the structures, properties and reaction behaviour of the cationic intermediates of a nucleophilic and an electrophilic substitution. Geometry optimizations were carried out using the B3LYP density functional; energies and enthalpies were obtained from coupled-cluster (CCSD(T)) calculations. The 6-31++G(d,p) basis set was used as standard. The first part of the thesis focused on the question of how an amino substituent on the bicyclo[3.1.0]hex-3-ylium cation influences its reaction behaviour compared to the parent compound, and whether the experimentally observed product distributions of the nucleophilic substitution are compatible with the assumption of this intermediate. The reaction and activation energies of the two steps of a nucleophilic S(N)1 reaction at chloro-substituted amino-bicyclo[3.1.0]hexanes in the presence of methanol and methanolate anions, as well as the isomerization pathways of the intermediate amino-bicyclo[3.1.0]hex-3-ylium cation, were determined in CCSD(T) calculations. The influence of the solvent methanol on the energies was determined using the C-PCM model. It could be shown that the presence of an NH2 or morpholine substituent changes little in the electronic properties of the bicyclo[3.1.0]hex-3-ylium cation, but opens up new reaction pathways, in particular possibilities for isomerization. However, the energy barriers of these processes consistently lie above those of a nucleophilic attack, so that the formation of a thermodynamically stable, methoxy-substituted product remains the dominant reaction and no isomeric cations are to be expected.
For steric reasons, and through the increase in the dipole moment it causes in the trishomocyclopropenylium cation, the amino group produces a high regioselectivity in favour of a nucleophilic attack at ring position beta-C(3). This answers the experimentalists' most pressing question about the origin of the regio- and stereochemical course of the substitution. In the second part of this dissertation, the structures of the sigma and pi complexes of the protonated aromatics benzene, toluene and mesitylene were determined, and the energy profiles of the 1,2-hydride shifts occurring in these compounds were calculated. In addition, the structures of arenium-arene complexes and arenium-tetrachloroaluminate compounds were determined for benzene and mesitylene, and the energetics of proton transfers within these species were investigated. The result is a comprehensive database on the energetics of intra- and intermolecular hydrogen shifts. The vibrational spectra simulated for all of the listed species using the SQM method make it possible to identify certain compounds and intermediates more reliably and quickly in the future. Throughout, very good agreement was found with data from IR, NMR, MS and X-ray structure experiments. Some open questions, such as peculiarities in the crystal structures, could be answered. Many facts previously known only qualitatively can be viewed in a new light on the basis of the data obtained. In the cases investigated, the quantum chemical data confirm the experimental observations and, moreover, provide a sound basis for detailed mechanistic models of substitution reactions that proceed via substituted trishomocyclopropenylium cations or protonated alkylbenzenes.
Metallocenes containing diarylethene-type photochromic switches are synthesized, characterized and tested as polyolefin polymerization catalysts. Propylene polymerizations using unbridged bis(2,3-dibenzo[b]thiophen-3-yl)cyclopenta[b]thien-3-yl)zirconium dichloride/MAO (80) treated with 254 nm UV irradiation produced bimodal polymer distributions by GPC. This was due to an increase in the low molecular weight fractions when the closed form of the catalyst/photoswitch was generated. A similarly structured catalyst without photoisomerization properties did not produce bimodal polymer under identical conditions. Propylene polymerizations with dimethylsilyl[(1,5-dimethyl-3-phenylcyclopenta[b]thien-6-yl)][(2,3-dibenzothien-3-yl)cyclopenta[b]thien-6-yl)]zirconium dichloride/MAO (86) under 254 nm UV irradiation showed a threefold increase in the polymer molecular weight. Polymers made with ethylene and ethylene/hexene using (80) after UV irradiation did not show differences in the measured polymer properties. Polymerizations with ethylene/hexene mixtures using (86) showed increased activity and co-monomer (hexene) incorporation under UV irradiation.
A gradient-based algorithm for parameter identification (least squares) is applied to a multiaxial correction method for elastic stresses and strains at notches. The correction scheme, which is numerically cheap, is based on Jiang's model of elastoplasticity. Both mathematical stress-strain computations (a nonlinear PDE with Jiang's constitutive material law) and physical strain measurements have been approximated. The large-scale gradient evaluation with respect to the parameters is realized by the forward mode of automatic differentiation.
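Forward-mode automatic differentiation of the kind used for the gradient evaluation can be sketched with dual numbers. The quadratic toy model and the gradient descent loop below are hypothetical and vastly simpler than the Jiang-model identification of the paper; they only show how exact parameter derivatives propagate through the least-squares objective:

```python
class Dual:
    """Minimal forward-mode AD value v + d*eps: d carries the derivative
    of v with respect to the single parameter being differentiated."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v - o.v, self.d - o.d)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.v * o.d + self.d * o.v)
    __rmul__ = __mul__

def residual_sq(p, data):
    """Least-squares objective sum((model(p, x) - y)^2) for the made-up
    one-parameter model y = p * x^2."""
    s = Dual(0.0)
    for x, y in data:
        r = p * (x * x) - y
        s = s + r * r
    return s

data = [(1.0, 2.0), (2.0, 8.0), (3.0, 18.0)]     # generated with p = 2
p = 0.0
for _ in range(100):                             # plain gradient descent
    g = residual_sq(Dual(p, 1.0), data).d        # d(objective)/dp via AD
    p -= 0.002 * g
```

Seeding the parameter with derivative 1 makes `.d` of the objective the exact gradient, which is the same mechanism forward-mode AD uses for the large-scale gradient in the paper.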
In this work, new monosubstituted, optically active cyclopentadienyl ligands with [2.2.1]-bicyclic substituents were synthesized. Compounds from the chiral pool, such as camphor, borneol and fenchone, served as starting materials. With the new ligands, optically active complexes of zirconium and titanium were prepared. A first catalysis experiment (catalytic hydrogenation) was carried out.
This report compiles the experience and results from the OptCast project. The aim of the project was (a) to adapt the methodology of automatic structural optimization to cast parts and (b) to develop and provide casting-specific optimization tools for foundries and engineering offices. Casting restrictions cannot be fully reduced to geometric restrictions, since the local properties depend not only on the geometric shape of the cast part but also on the material used. They can, however, be adequately captured by a casting simulation (solidification simulation and residual stress analysis). Based on this insight, a novel topology optimization method using the level-set technique was developed, in which no variable material density is introduced. In each iteration, a sharp boundary of the component is computed. This makes it possible to integrate the casting simulation into the iterative optimization process.
We analyze the regular oblique boundary problem for the Poisson equation on a C^1-domain with stochastic inhomogeneities. First we investigate the deterministic problem. Since our assumptions on the inhomogeneities and coefficients are very weak, we have to work out properties of functions from Sobolev spaces on submanifolds merely in order to formulate the problem. A further analysis of Sobolev spaces on submanifolds, together with the Lax-Milgram lemma, enables us to prove an existence and uniqueness result for the weak solution to the oblique boundary problem under very weak assumptions on coefficients and inhomogeneities. We then define spaces of stochastic functions with the help of the tensor product. These spaces enable us to extend the deterministic formulation to the stochastic setting. Under assumptions as weak as in the deterministic case, we are able to prove the existence and uniqueness of a stochastic weak solution to the regular oblique boundary problem for the Poisson equation. Our studies are motivated by problems from geodesy, and through concrete examples we show the applicability of our results. Finally, a Ritz-Galerkin approximation is provided, which can be used to compute the stochastic weak solution numerically.
An analytical model of a defaultable bond portfolio is considered in terms of its face value process. The face value process evolves dynamically in time and incorporates changes caused by recovery payments on default, followed by the purchase of new bonds. The further studies involve the properties, distribution and control of the face value process.
The flow of a non-Newtonian fluid in saturated porous media can be described by the continuity equation and the generalized Darcy law. The efficient solution of the resulting second order nonlinear elliptic equation is discussed here. The equation is discretized by a finite volume method on a cell-centered grid. Local adaptive refinement of the grid is introduced in order to reduce the number of unknowns. A special implementation approach is used which allows us to perform unstructured local refinement in conjunction with the finite volume discretization. Two residual-based error indicators are exploited in the adaptive refinement criterion. Second order accurate discretizations of the fluxes on the interfaces between refined and non-refined subdomains, as well as on boundaries with Dirichlet boundary conditions, are presented here as an essential part of the accurate and efficient algorithm. A nonlinear full approximation storage multigrid algorithm is developed especially for the above described composite (coarse plus locally refined) grid approach. In particular, the second order approximation of the fluxes around the interfaces is a result of a quadratic approximation at the slave nodes in the multigrid adaptive refinement (MG-AR) algorithm. Results from the numerical solution of various academic and practice-induced problems are presented and the performance of the solver is discussed.
The existence of a complete, embedded minimal surface of genus one, with three ends and whose total Gaussian curvature attains equality in the estimate of Jorge and Meeks, was a sensation in the mid-eighties. From that moment on, the surface of Costa, Hoffman and Meeks has become famous all around the world, and not only in the community of mathematicians. With this article, we want to fill a gap in the injectivity proof of Hoffman and Meeks, which lacks a strict mathematical justification. We argue exclusively topologically and do not use additional properties like differentiability or even holomorphy.
The aim of the thesis is the numerical investigation of saturated, stationary, incompressible Newtonian flow in porous media when inertia is not negligible. We focus our attention on the Navier-Stokes system with two pressures derived by two-scale homogenization. The thesis is subdivided into five chapters. After introductory remarks on porous media, filtration laws and upscaling methods, the first chapter closes by stating the basic terminology and mathematical fundamentals. In Chapter 2, we start by formulating the Navier-Stokes equations on a periodic porous medium. By two-scale expansions of the velocity and pressure, we formally derive the Navier-Stokes system with two pressures. For the sake of completeness, known existence and uniqueness results are recalled and a convergence proof is given. Finally, we consider Stokes and Navier-Stokes systems with two pressures with respect to their relation to Darcy's law. Chapters 3 and 4 are devoted to the numerical solution of the nonlinear two pressure system. To this end, we follow two approaches. The first approach, developed in Chapter 3, is based on a splitting of the Navier-Stokes system with two pressures into micro and macro problems. The splitting is achieved by Taylor expanding the permeability function or by computing the permeability function discretely. The problems to be solved are a series of Stokes and Navier-Stokes problems on the periodicity cell. The Stokes problems are solved by an Uzawa conjugate gradient method. The Navier-Stokes equations are linearized by a least-squares conjugate gradient method, which leads to the solution of a sequence of Stokes problems. The macro problem consists of solving a nonlinear uniformly elliptic equation of second order. The least-squares linearization is applied to the macro problem, leading to a sequence of Poisson problems. All equations are discretized by finite elements. Numerical results are presented at the end of Chapter 3.
The second approach presented in Chapter 4 relies on the variational formulation in a certain Hilbert space setting of the Navier-Stokes system with two pressures. The nonlinear problem is again linearized by the least-squares conjugate gradient method. We obtain a sequence of Stokes systems with two pressures. For the latter systems, we propose a fast solution method which relies on pre-computing Stokes systems on the periodicity cell for finite element basis functions acting as right hand sides. Finally, numerical results are discussed. In Chapter 5 we are concerned with modeling and simulation of the pressing section of a paper machine. We state a two-dimensional model of a press nip which takes into account elasticity and flow phenomena. Nonlinear filtration laws are incorporated into the flow model. We present a numerical solution algorithm and the chapter is closed by a numerical investigation of the model with special focus on inertia effects.
The present thesis deals with a novel approach to increasing the resource usage in digital communications. In digital communication systems, each information-bearing data symbol is associated with a waveform which is transmitted over a physical medium. The time or frequency separations among the waveforms associated with the information data have always been chosen to avoid or limit the interference among them. In this way, in the presence of a distortionless ideal channel, a single received waveform is affected as little as possible by the presence of the other waveforms. The conditions necessary to ensure the absence of any interference among the waveforms are well known and consist of a relationship between the minimum time separation among the waveforms and their bandwidth occupation or, equivalently, between the minimum frequency separation and their time occupation. These conditions are referred to as Nyquist assumptions. The key idea of this work is to relax the Nyquist assumptions and to transmit with a time and/or frequency separation between the waveforms smaller than the minimum required to avoid interference. The reduction of the time and/or frequency separation generates not only an increment of the resource usage, but also a degradation in the quality of the received data. Therefore, to maintain a certain quality of the received signal, we have to increase the amount of transmitted power. We investigate the trade-off between the increment of the resource usage and the corresponding performance degradation in three different cases. The first case is the single carrier case, in which all waveforms have the same spectrum but different temporal locations. The second is the multi-carrier case, in which each waveform has its distinct spectrum and occupies all the available time. Finally, there is the hybrid case, in which each waveform has its unique time and frequency location.
These different cases are framed within the general system modelling developed in the thesis so that they can be easily compared. We evaluate the potential of the key idea of the thesis by choosing a set of four possible waveforms with different characteristics. By doing so, we study the influence of the waveform characteristics in the three system configurations. We propose an interpretation of the results by modifying the well-known Shannon capacity formula and by explicitly expressing its dependency on the increment of resource usage and on the performance degradation. The results are very promising. We show that both in the case of a single carrier system with a time limited waveform and in the case of a multi-carrier system with a frequency limited waveform, the reduction of the time or frequency separation, respectively, has a positive effect on the channel capacity. The latter, depending on the actual SNR, can double or increase even more significantly.
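The effect of relaxing the Nyquist time separation can be demonstrated with sinc pulses: at the Nyquist spacing the samples at the symbol instants are interference-free, while tighter packing introduces interference from neighbouring symbols that must be paid for with extra power. The symbol sequence and the packing factor below are arbitrary illustrative choices:

```python
import math

def sinc(t):
    """Normalized sinc pulse sin(pi t) / (pi t)."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def received_sample(symbols, tau, k):
    """Noiseless sample at time k*tau of a superposition of sinc pulses
    packed at spacing tau (tau = 1 is the Nyquist spacing)."""
    return sum(s * sinc((k - n) * tau) for n, s in enumerate(symbols))

symbols = [1.0, -1.0, 1.0, 1.0, -1.0]

# tau = 1: the pulses are orthogonal, each sample returns its own symbol
nyquist = [received_sample(symbols, 1.0, k) for k in range(len(symbols))]

# tau = 0.8 (25% higher symbol rate): each sample picks up interference
# from the neighbouring symbols
packed = [received_sample(symbols, 0.8, k) for k in range(len(symbols))]
```

At tau = 0.8 the sample for the second symbol is pulled noticeably away from its transmitted value of -1 by the surrounding pulses, which is exactly the intersymbol interference the thesis trades against increased resource usage.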
Non-commutative polynomial algebras appear in a wide range of applications, from quantum groups and theoretical physics to linear differential and difference equations. In the thesis, we have developed a framework unifying many important algebras in the classes of \(G\)- and \(GR\)-algebras and studied their ring-theoretic properties. Let \(A\) be a \(G\)-algebra in \(n\) variables. We establish necessary and sufficient conditions for \(A\) to have a Poincaré-Birkhoff-Witt (PBW) basis. Further on, we show that besides the existence of a PBW basis, \(A\) shares some other properties with the commutative polynomial ring \(\mathbb{K}[x_1,\ldots,x_n]\). In particular, \(A\) is a Noetherian integral domain of Gel'fand-Kirillov dimension \(n\). Both the Krull and the global homological dimension of \(A\) are bounded by \(n\); we provide examples of \(G\)-algebras where these inequalities are strict. Finally, we prove that \(A\) is an Auslander-regular and a Cohen-Macaulay algebra. In order to perform symbolic computations with modules over \(GR\)-algebras, we generalize Gröbner bases theory, develop new algorithms and enhance existing ones. We unite the most fundamental algorithms in a suite of applications, called "Gröbner basics" in the literature. Furthermore, we discuss algorithms appearing in the non-commutative case only, among others two-sided Gröbner bases for bimodules, annihilators of left modules and operations with opposite algebras. An important role in representation theory is played by various subalgebras, like the center and the Gel'fand-Zetlin subalgebra. We discuss their properties and their relations to Gröbner bases, and briefly comment on some aspects of their computation. We proceed with these subalgebras in the chapter devoted to the algorithmic study of morphisms between \(GR\)-algebras.
We provide new results and algorithms for computing the preimage of a left ideal under a morphism of \(GR\)-algebras and show both the merits and the limitations of several methods that we propose. We use this technique for the computation of the kernel of a morphism, the decomposition of a module into central characters and the algebraic dependence of pairwise commuting elements. We give an algorithm for computing the set of one-dimensional representations of a \(G\)-algebra \(A\) and prove, moreover, that if the set of finite dimensional representations of \(A\) over a ground field \(K\) is not empty, then the homological dimension of \(A\) equals \(n\). All the algorithms are implemented in Plural, a kernel extension of the computer algebra system Singular. We discuss the efficiency of the computations and provide a comparison with other computer algebra systems. We propose a collection of benchmarks for testing the performance of the algorithms; the comparison of timings shows that our implementation outperforms all modern systems in combining broad functionality with a fast implementation. The thesis contains many new non-trivial examples, as well as solutions to various problems arising in different fields of mathematics. All of them were obtained with the developed theory and the implementation in Plural; most of them are treated computationally in this thesis for the first time.
In this paper, theory and algorithms for solving the multiple objective minimum cost flow problem are reviewed. For both the continuous and the integer case, exact and approximation algorithms are presented. In addition, a section on compromise solutions summarizes corresponding results. The reference list contains all papers known to the authors which deal with the multiple objective minimum cost flow problem.
Inverse treatment planning of intensity modulated radiotherapy (IMRT) is a multicriteria optimization problem: planners have to find optimal compromises between a sufficiently high dose in tumor tissue, which guarantees high tumor control, and dangerous overdosing of critical structures, in order to avoid severe normal tissue complications. The approach presented in this work demonstrates how to state a flexible generic multicriteria model of the IMRT planning problem and how to produce clinically highly relevant Pareto solutions. The model is embedded in a principal concept of reverse engineering, a general optimization paradigm for design problems. Relevant parts of the Pareto set are approximated by using extreme compromises as cornerstone solutions, a concept that is always feasible if box constraints for the objective functions are available. A major practical drawback of generic multicriteria concepts trying to compute or approximate parts of the Pareto set is the high computational effort. This problem can be overcome by exploiting an inherent asymmetry of the IMRT planning problem and by an adaptive approximation scheme for optimal solutions based on an adaptive clustering preprocessing technique. Finally, a coherent approach for calculating and selecting solutions in a real-time interactive decision-making process is presented. The paper concludes with clinical examples and a discussion of ongoing research topics.
The thesis is focused on the modelling and simulation of the Joint Transmission and Detection Integrated Network (JOINT), a novel air interface concept for B3G mobile radio systems. Besides the utilization of the OFDM transmission technique, which is a promising candidate for future mobile radio systems, and of time division duplexing (TDD) as the duplexing scheme, the subdivision of the geographical domain to be supported by mobile radio communications into service areas (SAs) is a highlighted concept of JOINT. A SA consists of neighboring sub-areas, which correspond to the cells of conventional cellular systems. The signals in a SA are jointly processed in a Central Unit (CU) in each SA. The CU performs joint channel estimation (JCE) and joint detection (JD) in the form of the receive zero-forcing (RxZF) filter for the uplink (UL) transmission, and joint transmission (JT) in the form of the transmit zero-forcing (TxZF) filter for the downlink (DL) transmission. By these algorithms, intra-SA multiple access interference (MAI) can be eliminated within the limits of the model used, so that unbiased data estimates are obtained, and most of the computational effort is moved from the mobile terminals (MTs) to the CU, so that the MTs can be kept of low complexity. A simulation chain of JOINT has been established by the author in the software MLDesigner, based on time-discrete equivalent lowpass modelling. In this simulation chain, all key functionalities of JOINT are implemented. The simulation chain is designed for link level investigations. A number of channel models are implemented both for the single-SA scenario and for the multiple-SA scenario, so that the system performance of JOINT can be comprehensively studied. It is shown that in JOINT a duality or symmetry of the MAI elimination in the UL and in the DL exists. Therefore, the typical noise enhancement going along with the MAI elimination by JD and JT, respectively, is the same in both links.
The simulations also study the impact of channel estimation errors on the system performance. In the multiple-SA scenario, the system performance in terms of the average bit error rate (BER) and the BER statistics degrades due to the existence of inter-SA MAI, which cannot be suppressed by the algorithms of JD and JT. A collection of simulation results shows the potential of JOINT with respect to the improvement of the system performance and the enhancement of the spectrum efficiency as compared to conventional cellular systems.
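The MAI elimination by zero-forcing described above can be illustrated on a toy noiseless system: solving the channel equations exactly returns unbiased, interference-free data estimates for both users. The 2x2 real-valued channel matrix and the data below are made up for simplicity; the actual RxZF/TxZF filters of JOINT operate on much larger complex-valued system matrices:

```python
def zero_forcing(H, y):
    """Exact solve of the 2x2 system H x = y by Cramer's rule; in the
    noiseless case this is the unbiased data estimate of a receive
    zero-forcing filter."""
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return [(H[1][1] * y[0] - H[0][1] * y[1]) / det,
            (H[0][0] * y[1] - H[1][0] * y[0]) / det]

H = [[2.0, 1.0], [1.0, 3.0]]                # made-up 2-user channel matrix
x = [1.0, -1.0]                             # transmitted data of both users
# received signal: each sample is a superposition of both users' data (MAI)
y = [H[i][0] * x[0] + H[i][1] * x[1] for i in range(2)]

x_hat = zero_forcing(H, y)                  # MAI-free, unbiased estimates
```

With noise present, the same inversion amplifies the noise by a factor depending on the conditioning of H, which is the noise enhancement the thesis shows to be identical in the UL and DL.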
Since its invention by Sir Alastair Pilkington in 1952, the float glass process has been used to manufacture long, thin, flat sheets of glass. Today, float glass is very popular due to its high quality and relatively low production costs. When producing thinner glass, the main concern is to retain its optical quality, which can deteriorate during the manufacturing process. The most important stage of this process is the floating part, and hence it is considered to be responsible for the loss in optical quality. A series of investigations performed on the finished products showed the existence of many short-wave patterns, which strongly affect the optical quality of the glass. Our work is concerned with finding the mechanism of wave development, taking into account all possible factors. In this thesis, we model the floating part of the process through a theoretical study of the stability of two superposed fluids confined between two infinite plates and subjected to a large horizontal temperature gradient. Our approach takes the mixed convection effects (viscous shear and buoyancy) into account while neglecting the thermo-capillarity effects, which is justified by the length of our domain and the presence of a small stabilizing vertical temperature gradient. Both fluids are treated as Newtonian with constant viscosity. They are immiscible, incompressible, have very different properties, and are separated by a free surface. The lower fluid is a liquid metal with a very small kinematic viscosity, whereas the upper fluid is less dense. The two fluids move with different velocities: the speed of the upper fluid is imposed, whereas the lower fluid moves as a result of buoyancy effects. We examine the problem by means of a small perturbation analysis and obtain a system of two Orr-Sommerfeld equations coupled with two energy equations, together with general interface and boundary conditions.
We solve the system analytically in the long- and short-wave limits by using asymptotic expansions with respect to the wave number. Moreover, we write the system in the form of a general eigenvalue problem and solve it numerically using Chebyshev spectral methods for fluid dynamics. The results (both analytical and numerical) show the existence of small-amplitude travelling waves, which move with constant velocity, for wave numbers in the intermediate range. We show that the stability of the system is ensured in the long-wave limit, a fact which is in agreement with the real float glass process. We analyze the stability for a wide range of wave numbers and of Reynolds, Weber and Grashof numbers, and explain the physical implications for the dynamics of the problem. The consequences of the linear stability results are discussed. In the real float glass process, the temperature strongly influences the viscosity of both the molten metal and the hot glass, which has direct consequences for the stability of the system. We therefore investigate the linear stability of two superposed fluids with temperature-dependent viscosities, considering a different model for the viscosity dependence of each fluid. Although the temperature-viscosity relationships for glass and metal are more complex than those used in our computations, our intention is to emphasize the effects of this dependence on the stability of the system. It is known from the literature that in the case of one fluid, heating, which causes the viscosity to decrease along the domain, usually destabilizes the flow. For the two superposed fluids we investigate this behaviour and discuss the consequences of the linear stability in this new case.
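The numerical route sketched above boils down to a generalized eigenvalue problem A q = c B q for the complex wave speed c, where the sign of Im(c) decides stability. A minimal illustration of that step with SciPy, using small stand-in matrices rather than the actual Chebyshev-discretized Orr-Sommerfeld/energy operators:

```python
import numpy as np
from scipy.linalg import eig

# Toy stand-ins for the discretized operators; in the real problem A and B
# come from the coupled Orr-Sommerfeld and energy equations together with
# the interface and boundary conditions.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.eye(2)

# Solve A q = c B q for the complex wave speeds c and the mode shapes q.
c, modes = eig(A, B)

# A normal-mode perturbation ~ exp(i k (x - c t)) grows when Im(c) > 0.
unstable = np.any(c.imag > 1e-12)
print(np.sort(c.real), unstable)
```

For this triangular toy system the wave speeds are simply 2 and 3 with zero imaginary part, i.e. neutrally travelling modes; in the thesis setting the same eigenvalue scan is repeated over the wave number and the Reynolds, Weber and Grashof numbers.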
Carbon-fibre reinforced plastics have been widely used in the aerospace industry as
materials for structural components. During recent years, the focus has been on
preform/RTM materials with the aim of improving material properties and reducing
costs. Harnessing the full potential of these materials requires a model for assessing
the properties and in particular long-term behaviour. Such a model needs to take into
account the special conditions of these materials. Basic failure mechanisms have to
be analysed in order to develop this kind of model.
Consequently, the aim of the work was to investigate the fatigue phenomenon in
preform-CFRP materials with thermoset matrices on a microstructural level. The influence of dynamic loading and temperature on the emerging fracture phenomena was to be identified. Based on the results, a common fracture mechanism was to be found. The failure was to be described on a mesoscopic level so that it is not restricted to fatigue failure at a single crack front.
To achieve this aim, different preform materials with an EP matrix (some of which had been subjected to impact) were loaded with dynamic compression and high-frequency alternating bending. The fatigue behaviour of the matrix systems was investigated in CT tests.
By means of microfractography, the only method for detecting fatigue failure as such, the failure mechanisms were analysed at the submicroscopic level. The results showed correlations between microstructure and failure.
It became apparent that the explanation given in the technical literature for the appearance of fatigue striations in the scanning electron microscope had to be corrected. Since undercuts are not rendered in the SEM as dark striations, the appearance of the striations must instead be caused by different inclinations of the local fracture surface to the primary electron beam.
On the basis of this result, the shape and the formation of the fatigue striations could be shown in resin pockets and fibre imprints. Fatigue striations have a shape which protrudes from the fracture plane, preferably in the form of steps.
There was no proof of an influence of the high-frequency load on the formation of fatigue striations. However, it was possible to find lamellar fracture phenomena which have not yet been described in the technical literature. Due to their shape and their occurrence, these can be understood as a sign of a dynamic load rather than as a fracture phenomenon of high-frequency cyclic loading.
The examinations of the samples subjected to high-frequency loading, in which temperatures of up to 120°C occurred, as well as the CT tests at elevated temperatures (60% Tg), yielded no proof that the temperature influences the mechanical failure behaviour. However, the formation of the fatigue striations in specimens subjected to high-frequency loading leads to the deduction that adiabatic heating exists at the crack tip, which causes large plastic deformations because the glass transition temperature is exceeded locally.
The microfractographic investigations showed that the fatigue striations appear as separate static fractures. On account of their shape and in relation to the matching fracture surfaces, plastic processes can be held responsible for the formation of the striations. Altogether this leads to a modification of the models for the origin of fatigue striations prevalent in the technical literature. The suggested model associates the real crack growth under fatigue loading with only a small part of the loading cycle. Crack propagation occurs only when the maximum stress intensity is reached in the upper loading region of the cycle. Microplastic processes by molecular rearrangement in the stress field ahead of the crack tip lead to the blunting of the crack tip, which is reflected as fatigue striations on the fracture surface. Simultaneously, the cyclic loading causes damage in the molecular network of the thermoset. This makes fracture formation possible below the static stress at break.
On the basis of the model and of fatigue crack growth diagrams, it is possible to establish thresholds for the stress intensity necessary for crack propagation under cyclic load. The upper threshold of the stress intensity corresponds to KC, because it marks the transition to unstable crack growth. The lower threshold is determined by the value of the cyclic stress intensity factor at which crack growth just ceases to be ascertainable.
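The crack-growth window delimited by these two thresholds can be written compactly as a Paris-type crack growth law; the relation below is a standard textbook formulation added here for illustration, not a formula taken from the thesis (C and m are material parameters not determined in this work):

```latex
% Paris-type fatigue crack growth law (illustrative):
\[
  \frac{\mathrm{d}a}{\mathrm{d}N} \;=\; C\,(\Delta K)^{m},
  \qquad
  \Delta K_{\mathrm{th}} \;\le\; \Delta K,
  \qquad
  K_{\max} \;<\; K_{C}
\]
% below \Delta K_{\mathrm{th}}: no ascertainable crack growth;
% at K_{\max} = K_{C}: transition to unstable fracture.
```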
With the existing model of local crack growth under fatigue loading and the results on the chronological course of failure from the microfractographic investigations of the different materials, it was possible to identify a general failure mechanism for the preform-CFRP materials.
When an external alternating load is applied, an inhomogeneous stress field forms in the composite material. In areas where the stress lies within the range for fatigue crack growth, fatigue growth occurs in the form of secondary fractures within the matrix. The primary crack front runs along these damaged points in the material until global failure occurs. This leads to a discontinuous, stepwise failure progression under fatigue loading. This general mechanism permits assessment of the damage behaviour and the progression of failure in various types of fibre reinforcement.
Sustainable development has been regarded as a global guiding principle at least since the 1992 World Conference in Rio de Janeiro. Ten years later, it was further concretized and differentiated by the conference in Johannesburg. In this context, numerous governments developed sustainability strategies in order to implement the guiding principle of sustainable development. Nevertheless, the discussions on this topic are often still unspecific and conceptually insufficiently grounded. In particular, indicators and fields of action, as well as their interrelations, have largely been discussed in isolation. This discussion paper therefore presents a method for systematizing fields of action and indicators. The starting point is the three pillars of sustainable development (ecology, economy and social affairs), which are brought together in a triangle. The triangle is divided into fields in order to represent the various interrelations between the three pillars. The "Integrating Sustainability Triangle" ("Integrierendes Nachhaltigkeits-Dreieck") is intended to systematically classify fields of action and indicators within the framework of sustainable development. This methodological approach is currently being implemented in the development of the "Nachhaltigkeitsstrategie für Rheinland-Pfalz" (Sustainability Strategy for Rhineland-Palatinate).
Over the last decades, mathematical modeling has reached nearly all fields of natural science. The abstraction and reduction to a mathematical model has proven to be a powerful tool for gaining deeper insight into physical and technical processes. The increasing computing power has made numerical simulations available for many industrial applications. In recent years, mathematicians and engineers have turned their attention to modeling solid materials. New challenges have been found in the simulation of solids and of fluid-structure interactions. In this context, it is indispensable to study the dynamics of elastic solids. Elasticity is a main feature of solid bodies, while demanding a great deal of the numerical treatment. There exists a multitude of commercial tools to simulate the behavior of elastic solids. However, the majority of these software packages consider quasi-stationary problems. In the present work, we are interested in highly dynamical problems, e.g. the rotation of a solid. The applicability to free-boundary problems is a further emphasis of our considerations. In recent years, meshless or particle methods have attracted more and more attention. In many fields of numerical simulation these methods are on a par with classical methods or superior to them. In this work, we present the Finite Pointset Method (FPM), which uses a moving least squares particle approximation operator. The application of this method to various industrial problems at the Fraunhofer ITWM has shown that FPM is particularly suitable for highly dynamical problems with free surfaces and strongly changing geometries. Thereby, FPM offers exactly the features that we require for the analysis of the dynamics of solid bodies. In the present work, we provide a numerical scheme capable of simulating the behavior of elastic solids. We present the system of partial differential equations describing the dynamics of elastic solids and show its hyperbolic character.
In particular, we focus our attention on the constitutive law for the stress tensor and provide evolution equations for the deviatoric part of the stress tensor in order to circumvent limitations of the classical Hooke's law. Furthermore, we present the basic principles of the Finite Pointset Method. In particular, we introduce the concept of upwinding in a given direction as a key ingredient for stabilizing hyperbolic systems. The main part of this work describes the design of a numerical scheme based on FPM and on an operator splitting that takes the different processes within a solid body into account. Each resulting subsystem is treated separately in an adequate way. Hereby, we introduce the notions of system-inherent directions and dimensional upwinding. Finally, a coupling strategy for the subsystems and the results are presented. We close this work with some final conclusions and an outlook on future work.
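The moving least squares operator at the heart of FPM can be sketched as follows: at an evaluation point, a local polynomial is fitted to neighboring particle values, weighted by distance, and then evaluated at that point. This is a hypothetical minimal 1-D version with a linear basis; the actual FPM operator works in 3-D with higher-order bases and also approximates derivatives.

```python
import numpy as np

def mls_1d(x_eval, x_pts, f_pts, h=0.5):
    """Moving least squares with linear basis [1, x - x_eval] and a
    Gaussian weight of smoothing length h (toy 1-D sketch)."""
    w = np.exp(-((x_pts - x_eval) / h) ** 2)          # particle weights
    P = np.column_stack([np.ones_like(x_pts), x_pts - x_eval])
    # Weighted normal equations: (P^T W P) a = P^T W f
    A = P.T @ (w[:, None] * P)
    b = P.T @ (w * f_pts)
    a = np.linalg.solve(A, b)
    return a[0]                                       # fit value at x_eval

x_pts = np.linspace(0.0, 1.0, 11)                     # scattered "particles"
f_pts = 2.0 * x_pts + 1.0                             # samples of a linear field

# A linear basis reproduces linear fields exactly (MLS consistency).
print(abs(mls_1d(0.37, x_pts, f_pts) - 1.74) < 1e-10)  # True
```

The same weighted local fit, with the weight function restricted to an upwind half-space, is the mechanism behind the directional upwinding mentioned above.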
The HMG-CoA reductase inhibitors SIM, LOV, ATV, PRA, FV and NKS were investigated for their effects on human skeletal muscle cells (hSkMCs). We were able to demonstrate that statins can induce oxidative stress (ROS formation, GSH depletion, TBARS), apoptosis (caspase-3 activity, nuclear morphology) and necrosis (LDH leakage) in hSkMCs. After incubation with statins, the sequence of cellular events starts with the increased formation of ROS (30 min), followed by caspase-3 activation (2-4 hours) and then by necrosis (LDH leakage) and the formation of condensed and fragmented nuclei after 24-72 hours. It was shown that antioxidants (NAC, DTT, TPGS, M-2 and M-3) and the HMG-CoA reductase downstream metabolites (MVA, F, FPP, GG and GGPP) protected against statin-induced ROS formation and caspase-3 activation, and partially against necrosis. The caspase-3 inhibitor Ac-DEVD-CHO rescues cells partially from necrosis. These results suggest that the statin-induced necrosis is HMG-CoA dependent and occurs secondary to apoptosis, which is driven into necrosis by the decrease of ATP. The increase of ATP observed at low concentrations and early time points suggests an increased glycolytic activity. This was confirmed by increased PDK-4 gene expression and increased PFK2/F-2,6-BPase expression, both activators of glycolysis. Glycolysis was also confirmed for some statins by increased cellular lactate concentrations. The consequence of the PDK-4-mediated pyruvate dehydrogenase inactivation is the metabolic switch from fatty acids to amino acids from proteins as the energy source. The oxidative stress hypothesis was further supported by the induction of the FOXO3A transcription factor, which is involved in regulating MnSOD-2 expression in the mitochondrion. The mechanism by which statins produce ROS is still not resolved.
There is indirect evidence from our experiments as well as from the literature that, immediately after statin treatment, intracellular Ca2+ is mobilized due to HMG-CoA reductase inhibition, which after mitochondrial uptake could lead to increased ROS formation.
In the first part of this work, called Simple node singularity, we compute matrix factorizations of all isomorphism classes, up to shifts, of rank-one and rank-two graded indecomposable maximal Cohen-Macaulay (MCM) modules over the affine cone of the simple node singularity. Subsection 2.2 contains a description, by their matrix factorizations, of all rank-two graded MCM R-modules whose sheafification on the projective cone of R is stable. A general description of such modules of any rank over a projective curve of arithmetic genus 1 is also given in terms of their matrix factorizations. The non-locally free rank-two MCM modules are computed using an algorithm, presented in the introduction of this work, that yields a matrix factorization of any extension of two MCM modules over a hypersurface. In the second part, called Fermat surface, we classify all graded rank-two MCM modules over the affine cone of the Fermat surface. For the classification of the orientable rank-two graded MCM R-modules, we use a description of orientable modules (over normal rings) by means of codimension-two Gorenstein ideals, due to Herzog and Kühl. It is proven (in Section 4) that they have skew-symmetric matrix factorizations (over any normal hypersurface ring). For the classification of the non-orientable rank-two MCM R-modules, we use an idea similar to that for the orientable ones, except that the ideal is no longer Gorenstein.
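For orientation, recall the central notion: a matrix factorization of a hypersurface equation f in a polynomial ring S is a pair of square matrices (φ, ψ) over S with φψ = ψφ = f·Id; by Eisenbud's theorem, coker φ is then an MCM module over R = S/(f). As a hedged illustration, take the node in the standard form y² = x²(x+1) (which may differ from the normalization used in the thesis):

```latex
% A size-two matrix factorization of f = y^2 - x^2(x+1):
\[
  \varphi = \begin{pmatrix} y & x^2(x+1) \\ 1 & y \end{pmatrix},
  \qquad
  \psi = \begin{pmatrix} y & -x^2(x+1) \\ -1 & y \end{pmatrix},
  \qquad
  \varphi\psi = \psi\varphi = \bigl(y^2 - x^2(x+1)\bigr)\,\mathrm{Id}_2 .
\]
```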
In this thesis we have discussed the problem of decomposing an integer matrix \(A\) into a weighted sum \(A=\sum_{k \in {\mathcal K}} \alpha_k Y^k\) of 0-1 matrices with the strict consecutive ones property. We have developed algorithms to find decompositions which minimize the decomposition time \(\sum_{k \in {\mathcal K}} \alpha_k\) and the decomposition cardinality \(|\{ k \in {\mathcal K}: \alpha_k > 0\}|\). In the absence of additional constraints on the 0-1 matrices \(Y^k\) we have given an algorithm that finds the minimal decomposition time in \({\mathcal O}(NM)\) time. For the case that the matrices \(Y^k\) are restricted to shape matrices -- a restriction which is important in the application of our results in radiotherapy -- we have given an \({\mathcal O}(NM^2)\) algorithm. This is achieved by solving an integer programming formulation of the problem by a very efficient combinatorial algorithm. In addition, we have shown that the problem of minimizing decomposition cardinality is strongly NP-hard, even for matrices with one row (and thus for the unconstrained as well as the shape matrix decomposition). Our greedy heuristics are based on the results for the decomposition time problem and produce better results than previously published algorithms.
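In the unconstrained case, the minimal decomposition time admits a closed form known from the multileaf-collimator literature: it is the maximum over rows of the sum of positive increments along the row, which is exactly what an O(NM) sweep computes. A sketch of this standard result (notation mine, not the thesis's formulation):

```python
def min_decomposition_time(A):
    """Minimal sum of weights alpha_k over all decompositions of the
    nonnegative integer matrix A into 0-1 matrices with the row-wise
    strict consecutive ones property (no further constraints on Y^k).
    Equals max over rows i of sum_j max(0, a[i][j] - a[i][j-1]),
    reading a[i][0] as 0 -- computable in O(NM)."""
    best = 0
    for row in A:
        prev, total = 0, 0
        for a in row:
            if a > prev:
                total += a - prev   # a new interval opening is needed here
            prev = a
        best = max(best, total)
    return best

# Row (2, 3, 1) needs 2 + 1 = 3 time units, row (1, 2, 2) needs 2:
print(min_decomposition_time([[2, 3, 1], [1, 2, 2]]))  # 3
```

For the first row, a decomposition attaining the bound is 1·(1,1,1) + 1·(1,1,0) + 1·(0,1,0); the cardinality-minimization problem, by contrast, is strongly NP-hard as stated above.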
The term risk is on everyone's lips nowadays, driven by regulatory developments such as KonTraG and Basel II as well as by spectacular corporate collapses. In this context it is repeatedly pointed out that companies should install holistic, integrated risk management and risk controlling systems in order to comply with the legal regulations, to avoid a dramatic increase in refinancing costs, and to protect the company from illiquidity in time. Countless proposals have meanwhile been made for the functional and institutional design of such systems. Risk software was initially used primarily in banks, since this was an early requirement of the Federal Banking Supervisory Office (Bundesaufsichtsamt für das Kreditwesen) for conducting trading business. In recent years, driven by the changes mentioned above, a general market (i.e. one also serving non-banks) for software programs supporting the handling of risk has formed. This market is characterized, on the one hand, by a large number of providers and, on the other hand, by very different designs of the individual programs. There is, in effect, a dedicated software product for every risk (e.g. liquidity). This results in a complex decision problem when selecting software, which is examined in more detail in the following discussion. The results summarized in this study were implemented in the Excel-based tool Lynkeus, so that a company-specific selection among the alternatives on the basis of utility value analysis is possible.
Wireless LANs operating within unlicensed frequency bands require random access schemes such as CSMA/CA, so that wireless networks from different administrative domains (for example wireless community networks) may co-exist without central coordination, even when they happen to operate on the same radio channel. Yet, it is evident that this lack of coordination leads to an inevitable loss in efficiency due to contention on the MAC layer. The interesting question is how much efficiency may be gained by adding coordination to existing, unrelated wireless networks, for example by self-organization. In this paper, we present a methodology based on a mathematical programming formulation to determine the parameters (assignment of stations to access points, signal strengths, and channel assignment of both access points and stations) for a scenario of co-existing CSMA/CA-based wireless networks, such that the contention between these networks is minimized. We demonstrate how it is possible to solve this discrete, non-linear optimization problem exactly for small instances. For larger scenarios, we present a genetic algorithm specifically tuned for finding near-optimal solutions, and compare its results to theoretical lower bounds. Overall, we provide a benchmark on the minimum contention problem for coordination mechanisms in CSMA/CA-based wireless networks.