Refine
Year of publication
- 2023 (146)
- 2021 (117)
- 2022 (109)
- 2015 (100)
- 2020 (99)
- 2016 (92)
- 2017 (84)
- 2018 (79)
- 2014 (78)
- 2009 (76)
- 2019 (74)
- 2006 (69)
- 2012 (66)
- 2013 (65)
- 2007 (64)
- 2008 (63)
- 2011 (62)
- 2010 (61)
- 2005 (58)
- 2004 (55)
- 2002 (50)
- 2003 (50)
- 2024 (49)
- 2001 (35)
- 2000 (34)
- 1999 (28)
- 1998 (7)
- 1995 (3)
- 1996 (2)
- 1997 (2)
- 1994 (1)
Document Type
- Doctoral Thesis (1878)
Language
- English (941)
- German (931)
- Multiple languages (6)
Keywords
- Visualisierung (21)
- Simulation (19)
- Katalyse (15)
- Stadtplanung (15)
- Apoptosis (12)
- Finite-Elemente-Methode (12)
- Phasengleichgewicht (12)
- Modellierung (11)
- Infrarotspektroskopie (10)
- Mobilfunk (10)
- Tribologie (10)
- Visualization (10)
- finite element method (10)
- Eisen (9)
- Flüssig-Flüssig-Extraktion (9)
- Optimierung (8)
- Oxidativer Stress (8)
- Querkraft (8)
- Streptococcus pneumoniae (8)
- Deep Learning (7)
- Gasphase (7)
- Numerische Strömungssimulation (7)
- Optimization (7)
- Palladium (7)
- Polyphenole (7)
- Querkrafttragfähigkeit (7)
- Acrylamid (6)
- Algebraische Geometrie (6)
- Evaluation (6)
- Heterogene Katalyse (6)
- Leber (6)
- Machine Learning (6)
- Mischwasserbehandlung (6)
- Portfolio Selection (6)
- Ruthenium (6)
- Supramolekulare Chemie (6)
- Bemessung (5)
- Bewertung (5)
- CFD (5)
- Carbonsäuren (5)
- Clusterion (5)
- Cobalt (5)
- Computergraphik (5)
- Cyclopeptide (5)
- Finanzmathematik (5)
- Fluoreszenz (5)
- Homogene Katalyse (5)
- Leichtbau (5)
- Messtechnik (5)
- Metabolismus (5)
- Model checking (5)
- Monte-Carlo-Simulation (5)
- Phosphaalkin (5)
- Polymere (5)
- Raumordnung (5)
- Regionalentwicklung (5)
- Retentionsbodenfilter (5)
- Robotik (5)
- Stahlbeton (5)
- Stochastische dynamische Optimierung (5)
- Unsicherheit (5)
- Verbundbauweise (5)
- Zeolith (5)
- machine learning (5)
- verification (5)
- Ab-initio-Rechnung (4)
- Apfelsaft (4)
- Artificial Intelligence (4)
- Bauen im Bestand (4)
- Bildverarbeitung (4)
- Cluster (4)
- Computeralgebra (4)
- Elastizität (4)
- Faser-Kunststoff-Verbunde (4)
- Faserkunststoffverbunde (4)
- Filtration (4)
- Genregulation (4)
- Geoinformationssystem (4)
- Glutathion (4)
- Homogenisierung <Mathematik> (4)
- Kontinuumsmechanik (4)
- Massenspektrometrie (4)
- Materialermüdung (4)
- Molekularstrahl (4)
- Navier-Stokes-Gleichung (4)
- Nichtlineare Finite-Elemente-Methode (4)
- Numerische Mathematik (4)
- Phosphor (4)
- Portfolio-Optimierung (4)
- Reibung (4)
- Spincrossover (4)
- Stadtentwicklung (4)
- Stahlbetonbau (4)
- Verifikation (4)
- catalysis (4)
- computational mechanics (4)
- homogene Katalyse (4)
- mobile radio (4)
- palladium (4)
- polyphenols (4)
- portfolio optimization (4)
- Ackerschmalwand (3)
- Algorithmus (3)
- Anthocyane (3)
- Apfel (3)
- Apoptose (3)
- Automation (3)
- Biaryle (3)
- Bruchmechanik (3)
- CAD (3)
- Comet Assay (3)
- Cytochrom P-450 (3)
- Datenanalyse (3)
- Datenbank (3)
- Differenzierung (3)
- Digitalisierung (3)
- Dissertation (3)
- EPR (3)
- ESR (3)
- ESR-Spektroskopie (3)
- Effizienter Algorithmus (3)
- Elektromobilität (3)
- Ermüdung (3)
- Erwarteter Nutzen (3)
- Evolution (3)
- Experiment (3)
- Extrapolation (3)
- FT-ICR-Spektroskopie (3)
- Faserverbundwerkstoff (3)
- Finite-Volumen-Methode (3)
- Flavonoide (3)
- Formal Verification (3)
- Formale Beschreibungstechnik (3)
- Glycidamid (3)
- Gröbner-Basis (3)
- Harnstoff (3)
- Hochleistungsbeton (3)
- Hohlkörper (3)
- Hydrodynamik (3)
- IR-MPD (3)
- IRMPD (3)
- ITC (3)
- Immobilisierung (3)
- Implementation (3)
- In vitro (3)
- In vivo (3)
- Indirubin (3)
- Innenstadt (3)
- Interaktion (3)
- Inverses Problem (3)
- Ionenfalle (3)
- Katalytische Hydrierung (3)
- Kinetik (3)
- Koordinationschemie (3)
- Kupfer (3)
- Kupferkomplexe (3)
- Layout (3)
- Leukämie (3)
- MIMO (3)
- Machine learning (3)
- Maschinelles Lernen (3)
- Mehrskalenmodell (3)
- Mensch-Maschine-Kommunikation (3)
- Metallcluster (3)
- Metallorganische Chemie (3)
- Microarray (3)
- Mikroklima (3)
- Mikrostruktur (3)
- Molekulardynamik (3)
- Mosco convergence (3)
- Mustererkennung (3)
- N-Liganden (3)
- NOx (3)
- NURBS (3)
- Nachhaltigkeit (3)
- Nanopartikel (3)
- Netzwerk (3)
- Neural Networks (3)
- Nickel (3)
- Niederspannungsnetz (3)
- OFDM (3)
- Oberflächenvorbehandlung (3)
- Optionspreistheorie (3)
- Optische Zeichenerkennung (3)
- Organische Chemie (3)
- P (3)
- Partial Differential Equations (3)
- Phylogenie (3)
- Portfolio Optimization (3)
- Portfoliomanagement (3)
- ROS (3)
- Ray casting (3)
- Reaktive Sauerstoffspezies (3)
- Reaktivextraktion (3)
- Recommender Systems (3)
- Regenwasserbehandlung (3)
- Regionalplanung (3)
- Resistenz (3)
- Risikoanalyse (3)
- Risikomanagement (3)
- Räumliche Statistik (3)
- Schnittstelle (3)
- Scientific Visualization (3)
- Semantic Web (3)
- Sensitivitätsanalyse (3)
- Sicherheit (3)
- Signaltransduktion (3)
- Stoffübergang (3)
- System-on-Chip (3)
- Tetrachlordibenzodioxine (3)
- Thermodynamik (3)
- Thermoplast (3)
- Transaction Costs (3)
- Tropische Geometrie (3)
- Verbundträger (3)
- Verbundwerkstoffe (3)
- Verschleiß (3)
- Virtual Reality (3)
- Wasserstoffbrückenbindungen (3)
- Wavelet (3)
- Wein (3)
- anionic receptors (3)
- cobalt (3)
- computer graphics (3)
- deep learning (3)
- document analysis (3)
- isogeometric analysis (3)
- model (3)
- optical character recognition (3)
- optimales Investment (3)
- phase equilibria (3)
- phase equilibrium (3)
- ruthenium (3)
- supramolecular chemistry (3)
- visualization (3)
- Überflutung (3)
- 1,1'-Binaphthyle (2)
- ADAS (2)
- AG-RESY (2)
- ATP-Synthase (2)
- Abgasnachbehandlung (2)
- Absorption (2)
- Abwasserreinigung (2)
- Acetylcholinrezeptor (2)
- Activity recognition (2)
- Additionsreaktion (2)
- Adhäsion (2)
- Adipinsäure (2)
- Adsorption (2)
- Affinitätschromatographie (2)
- Aktivierung <Chemie> (2)
- Algorithm (2)
- Alkin (2)
- Alterung (2)
- Alterungsbeständigkeit (2)
- Aluminium (2)
- Amide (2)
- Aminierung (2)
- Androgene (2)
- Anionenrezeptor (2)
- Anpassung (2)
- Arabidopsis (2)
- Arsen (2)
- Arylhydrocarbon-Rezeptor (2)
- Asymptotic Expansion (2)
- Asymptotik (2)
- Ausschnitt <Öffnung> (2)
- Automatische Differentiation (2)
- B-Spline (2)
- B-splines (2)
- Bauphysik (2)
- Beta-Lactam-Resistenz (2)
- Beteiligung (2)
- Betriebsfestigkeit (2)
- Bildung (2)
- Biogeographie (2)
- Bioinformatik (2)
- Biologie (2)
- Biomarker (2)
- Biomechanik (2)
- Bismut (2)
- Blattschneiderameisen (2)
- Bodenfilter (2)
- Bottom-up (2)
- Bounded Model Checking (2)
- CAFM (2)
- CF-PEEK (2)
- CSO (2)
- CYP1A1 (2)
- Calcium (2)
- Carboxylierung (2)
- Carcinogenese (2)
- Cat-CVD-Verfahren (2)
- Cluster-Analyse (2)
- Cobaltcluster (2)
- Code Generation (2)
- Cognitive Load (2)
- Combined Sewer Overflow (2)
- Computational Fluid Dynamics (2)
- Computersimulation (2)
- Cyanobacteria (2)
- Cyanobakterien (2)
- Cyclometallierung (2)
- Cyclopentadienderivate (2)
- Cytochrome P450 (2)
- Cytotoxizität (2)
- DNA-Schäden (2)
- DNS (2)
- DNS-Schädigung (2)
- Dampf-flüssig-flüssig-Gleichgewicht (2)
- Darm (2)
- Darmkrebs (2)
- Daseinsvorsorge (2)
- Decarboxylierende Kreuzkupplung (2)
- Demographie (2)
- Demographischer Wandel (2)
- Derivat <Wertpapier> (2)
- Dimerisierung (2)
- Diskrete Fourier-Transformation (2)
- Disproportionierung (2)
- Domänenumklappen (2)
- Dreidimensionale Bildverarbeitung (2)
- EAG (2)
- Effizienz (2)
- Elasticity (2)
- Elastomer (2)
- Elastoplasticity (2)
- Elastoplastizität (2)
- Electronic Design Automation (2)
- Elektromagnetische Induktion (2)
- Elektronenspinresonanzspektroskopie (2)
- Emissionsverringerung (2)
- Empfangssignalverarbeitung (2)
- Endliche Geometrie (2)
- Endlicher Automat (2)
- Energieeffizienz (2)
- Entscheidungsunterstützung (2)
- Entwurf (2)
- Epoxidharz (2)
- Epoxydharz (2)
- Erdmagnetismus (2)
- Ermüdung bei hohen Lastspielzahlen (2)
- Ethan (2)
- Ethylen (2)
- Eupoecilia ambiguella (2)
- Experimentelle Psychologie (2)
- Experimentelle Untersuchungen (2)
- Explainable Artificial Intelligence (2)
- FEM (2)
- FFT (2)
- FISH (2)
- Facility Management (2)
- Faserbeton (2)
- Faserverstärkter Kunststoff (2)
- Fassade (2)
- Fermentation (2)
- Festkörper (2)
- Filtergesetz (2)
- Finite Elemente Methode (2)
- Finite Pointset Method (2)
- Flechten (2)
- Flexibilität (2)
- Fließgewässer (2)
- Flugzeitmassenspektrometrie (2)
- Flüssig-Flüssig-Gleichgewicht (2)
- Formale Methode (2)
- Fremdstoffmetabolismus (2)
- Frequenzverdopplung (2)
- Fruchtsaft (2)
- Funktionale Sicherheit (2)
- Fußgängerverkehr (2)
- Füllstoff (2)
- Gebäudeautomation (2)
- Gelonin (2)
- Genetik (2)
- Gentoxizität (2)
- Geometric Ergodicity (2)
- Geometrische Produktspezifikation (2)
- Geotechnik (2)
- Gesundheit (2)
- Glasfaser (2)
- Glasfaserverstärkter Thermoplast (2)
- Governance (2)
- HIF-1 (2)
- HPLC (2)
- Hamilton-Jacobi-Differentialgleichung (2)
- Hardwareverifikation (2)
- Hochdrucktechnik (2)
- Hochschuldidaktik (2)
- Hochskalieren (2)
- Homogenization (2)
- Hydrodynamics (2)
- Hypoxie (2)
- Hysterese (2)
- Hämodialyse (2)
- IMRT (2)
- Implementierung (2)
- Induktionsschweißen (2)
- Industrie 4.0 (2)
- Inhibitor (2)
- Innovation (2)
- Integration (2)
- Interaction (2)
- Interfaces (2)
- Ionencluster (2)
- Isogeometrische Analyse (2)
- Jugendfußball (2)
- KCC2 (2)
- Kalibrierung (2)
- Kleben (2)
- Klebstoff (2)
- Kläranlage (2)
- Knowledge Management (2)
- Kohlendioxid (2)
- Kohlenstoffdioxid (2)
- Kohlenstofffaser (2)
- Kollaboration (2)
- Kombinierte IR/UV-Spektroskopie (2)
- Kommunikation (2)
- Kooperation (2)
- Kopfbolzendübel (2)
- Kopplung (2)
- Korrelationsanalyse (2)
- Kraftfahrzeugindustrie (2)
- Kreditrisiko (2)
- Kreuzkupplung (2)
- Kristallographie (2)
- Kultivierung (2)
- Kunststofftechnik (2)
- Kühlturm (2)
- Künstliche Intelligenz (2)
- Langevin equation (2)
- Laserverdampfung (2)
- Lebensversicherung (2)
- Lebertumor (2)
- Leichtbeton (2)
- Lennard-Jones (2)
- Level-Set-Methode (2)
- Lineare Elastizitätstheorie (2)
- Lineare partielle Differentialgleichung (2)
- Lobesia botrana (2)
- Local smoothing (2)
- Magnetische Eigenschaften (2)
- Malondialdehyd (2)
- Martensit (2)
- Materialcharakterisierung (2)
- Mathematik (2)
- Mathematische Modellierung (2)
- Mehrschichtige Stahlbetonwandtafel (2)
- Mehrskalenanalyse (2)
- Mehrsprachigkeit (2)
- Membranprotein (2)
- Mercaptursäuren (2)
- Merkmalsextraktion (2)
- Merocyanine (2)
- Metallocene (2)
- Metathese (2)
- Metathese <Chemie> (2)
- Metropolregion (2)
- Migration (2)
- Mikrofräsen (2)
- Mikroskopie (2)
- Model Checking (2)
- Model Predictive Control (2)
- Modell (2)
- Modelling (2)
- Modulraum (2)
- Molekularsieb (2)
- Molybdän (2)
- Monte Carlo (2)
- Morphologie (2)
- Morphology (2)
- Motivation (2)
- Multiset Multicover (2)
- Mutagenität (2)
- Myasthenia gravis (2)
- Münzmetall (2)
- N (2)
- Nanocomposites (2)
- Nanodur-Beton (2)
- Natural Language Processing (2)
- Naturstoffverteilung (2)
- Network Calculus (2)
- Netzzustandsschätzung (2)
- Nichtlineare Kontinuumsmechanik (2)
- Nichtlineare Optik (2)
- Niederschlag (2)
- Niederschlagsabfluss (2)
- Niederspannung (2)
- Numerisches Modell (2)
- Oberflächenmorphologie (2)
- Objektorientierung (2)
- Ochratoxin A (2)
- Ontologie (2)
- Optik (2)
- Organische Synthese (2)
- Organometallcluster (2)
- Oxidation (2)
- Ozon (2)
- P10 (2)
- Palladiumkomplexe (2)
- Partielle Differentialgleichung (2)
- Pattern Recognition (2)
- Peng-Robinson-EoS (2)
- Pflanzenschutzmittel (2)
- Photochemie (2)
- Photodissoziation (2)
- Piezoelektrizität (2)
- Planung (2)
- Plastizität (2)
- Plazenta (2)
- Pn-Liganden (2)
- Polychlorierte Biphenyle (2)
- Polyetheretherketon (2)
- Polymermatrix-Verbundwerkstoff (2)
- Polypropylen (2)
- Populationsbilanzen (2)
- Poröser Stoff (2)
- Potenzialhyperfläche (2)
- Produktionssystem (2)
- Projektmanagement (2)
- Propanole (2)
- Proteine (2)
- Protonentransfer (2)
- Pyrazole (2)
- Pyrimidin (2)
- Qualität (2)
- Quantization (2)
- RFID (2)
- RNS-Interferenz (2)
- Ratte (2)
- Raumakustik (2)
- Raumplanung (2)
- Reaktionskinetik (2)
- Redoxchemie (2)
- Reduktion (2)
- Regressionsanalyse (2)
- Regularisierung (2)
- Response Priming (2)
- Riesling (2)
- Rissausbreitung (2)
- Robotics (2)
- Robust Optimization (2)
- Room acoustics (2)
- Rotordynamik (2)
- Rutheniumkomplexe (2)
- SCR (2)
- SDL (2)
- SKY (2)
- SOC (2)
- Sandwichbauweise (2)
- Sauerstoff (2)
- Scattered-Data-Interpolation (2)
- Scheduling (2)
- Schlauchflechten (2)
- Schnitttheorie (2)
- Schrumpfung (2)
- Schwermetalle (2)
- Schädigung (2)
- Self-X (2)
- Sendesignalverarbeitung (2)
- Sensor (2)
- Sensorik (2)
- Sepsis (2)
- Sheet-Molding-Compounds (2)
- Signalverarbeitung (2)
- Smart Grid (2)
- Smart Meter (2)
- Softwareentwicklung (2)
- Solarzelle (2)
- Soziale Ungleichheit (2)
- Sprache (2)
- Stadtmodell (2)
- Stadtumbau (2)
- Starkregen (2)
- Statine (2)
- Statistisches Modell (2)
- Stegöffnung (2)
- Stochastic Control (2)
- Stochastische Differentialgleichung (2)
- Stoffaustausch (2)
- Streptococcus (2)
- Streptomyces (2)
- Strukturationstheorie (2)
- Strömungsmechanik (2)
- Suzuki-Kupplung (2)
- TD-CDMA (2)
- Technische Mechanik (2)
- Teilchen (2)
- Textilbeton (2)
- Thermal Comfort (2)
- Thermische Behaglichkeit (2)
- Thermodynamics (2)
- Thermoformen (2)
- Time Series (2)
- Topology (2)
- Transaktionskosten (2)
- Transformation (2)
- Transkription (2)
- Transport (2)
- Tribology (2)
- Tropenökologie (2)
- Tumorpromotion (2)
- UML (2)
- UPnP (2)
- UV-VIS-Spektroskopie (2)
- Ultraschall (2)
- Uncertain Data (2)
- Uncertainty Visualization (2)
- Unterricht (2)
- Upscaling (2)
- Valenztautomerie (2)
- Vanadium (2)
- Vektorwavelets (2)
- Verbunddecken (2)
- Verkehrsablauf (2)
- Verkehrssicherheit (2)
- Verschleißprüfung (2)
- Virtuelles Messen (2)
- Viskoelastizität (2)
- Visual Analytics (2)
- Voronoi-Diagramm (2)
- Wahrscheinlichkeitsfunktion (2)
- Wearable computing (2)
- White Noise Analysis (2)
- WiFi (2)
- Wissenschaftliches Rechnen (2)
- Wohnen (2)
- Wässrige Lösung (2)
- Zellulares Mobilfunksystem (2)
- Zielgruppe (2)
- Zielgruppen (2)
- ab initio (2)
- acrylamide (2)
- aftertreatment (2)
- air interface (2)
- alkyne (2)
- alpha (2)
- ammonia (2)
- analysis (2)
- anisotropy (2)
- anthocyanins (2)
- apoptosis (2)
- apple (2)
- artificial intelligence (2)
- auditory brainstem (2)
- autoencoder (2)
- benzene (2)
- beyond 3G (2)
- biaryls (2)
- biodiversity (2)
- bottom-up (2)
- calcium channel (2)
- carboxylic acids (2)
- chiral (2)
- classification (2)
- climate change (2)
- cluster (2)
- clustering (2)
- combined sewer overflow treatment (2)
- composite beam (2)
- computational fluid dynamics (2)
- computational homogenization (2)
- computer vision (2)
- configurational forces (2)
- constructed wetland (2)
- continuum mechanics (2)
- cross-coupling (2)
- curvature (2)
- curve singularity (2)
- cyclopeptides (2)
- deuteration (2)
- dipeptide (2)
- direct laser writing (2)
- domain decomposition (2)
- duality (2)
- elastomer (2)
- elektronisch angeregte Zustände (2)
- enamide (2)
- epoxy (2)
- experiment (2)
- experimentelle Untersuchung (2)
- fatigue (2)
- finite deformations (2)
- finite elements (2)
- finite volume method (2)
- fluid interface (2)
- forest fragmentation (2)
- gas phase (2)
- geomagnetism (2)
- glioblastoma (2)
- glycidamide (2)
- high-pressure vapour-liquid-liquid equilibria (2)
- higher education (2)
- homogenization (2)
- hypoxia (2)
- ice shelves (2)
- illiquidity (2)
- image analysis (2)
- image processing (2)
- impedance spectroscopy (2)
- infrared spectroscopy (2)
- interface (2)
- interface problem (2)
- intervention study (2)
- invariant (2)
- iron (2)
- langfaserverstärkte Thermoplaste (2)
- layout analysis (2)
- leukemia (2)
- lichen (2)
- linear kinetics theory (2)
- lineare kinetische Theorie (2)
- liquid-liquid-extraction (2)
- liquid-liquid-extraction of natural products (2)
- mass transfer (2)
- material forces (2)
- mechanische Eigenschaften (2)
- mesh generation (2)
- metabolism (2)
- metal (2)
- metal cluster (2)
- mobile radio systems (2)
- molecular simulation (2)
- nahekritischer Zustandsbereich (2)
- near-critical ethene+water+propanol (2)
- numerics (2)
- numerische Mechanik (2)
- optimal investment (2)
- phase field model (2)
- placenta (2)
- probabilistic approach (2)
- processing (2)
- rate-dependency (2)
- real-time systems (2)
- reduction (2)
- regression analysis (2)
- rolling friction (2)
- semantic web (2)
- sensor fusion (2)
- shear bearing capacity (2)
- single molecule magnet (2)
- social media (2)
- spatial planning (2)
- splines (2)
- thermisches Gebäudeverhalten (2)
- thermodynamics (2)
- thermophysical properties (2)
- thermoplastische Halbzeuge (2)
- tractor (2)
- tribology (2)
- urban shrinkage (2)
- virtual acoustics (2)
- viscoelasticity (2)
- web opening (2)
- wetting (2)
- zerstörungsfreie Prüfung (2)
- Übergangsmetallcluster (2)
- "Slender-Body"-Theorie (1)
- "Stress-Mentor" (1)
- (Joint) chance constraints (1)
- (oxidative) DNA-Schäden (1)
- 150 bar loop (1)
- 17beta-Estradiol (1)
- 19-century architecture (1)
- 1D-CFD (1)
- 2,3,7,8-Tetrachlordibenzo-p-dioxin (1)
- 2,3,7,8-tetrachlordibenzo-p-dioxine (1)
- 2-D-Elektrophorese (1)
- 2D-CFD (1)
- 3D (1)
- 3D City Model (1)
- 3D Druckverfahren (1)
- 3D Gene Expression (1)
- 3D Point Data (1)
- 3D image analysis (1)
- 3D printing (1)
- 3D-City Model (1)
- 3D-Druck (1)
- 3D-Prozesssimulation (1)
- 3D-Stadtmodell (1)
- 3MET (1)
- 4-Aminopiperidin (1)
- 4-Aminopiperidine (1)
- 5-> (1)
- 50CrMo4 (1)
- A-infinity-bimodule (1)
- A-infinity-category (1)
- A-infinity-functor (1)
- A/D conversion (1)
- A431 (1)
- ADAM10 (1)
- ADMM (1)
- AFDX (1)
- AFLP (1)
- AFS (1)
- AFSfein (1)
- ALE-Methode (1)
- AMC225xe (1)
- AMS (1)
- ANC (1)
- ANTR5 (1)
- ASM (1)
- ATP (1)
- ATP-Stoffwechsel (1)
- ATPase-Aktivität (1)
- AUTOSAR (1)
- Aarhus-Konvention (1)
- Abfluss (1)
- Abflussbeiwert (1)
- Abflussbildung (1)
- Abflussmodellierung (1)
- Abgastemperaturmanagement (1)
- Ablation <Medizin> (1)
- Ableitungsfreie Optimierung (1)
- Ableitungsschätzung (1)
- Abrasiver Verschleiß (1)
- Abrechnungsmanagement (1)
- Abscheidung (1)
- Absorptionsspektroskopie (1)
- Abstraction (1)
- Abstraction-Based Controller Design (1)
- Abstraktion (1)
- Abtragen (1)
- Abwasserbeseitigung (1)
- Abwasserentsorgung (1)
- Abwasserinfrastruktur (1)
- Abwassersystem (1)
- Abwasserwärmenutzung (1)
- Abwasserwärmerückgewinnung (1)
- Accelerometer (1)
- Accounting Agent (1)
- Acetylcholin (1)
- Acetylcholinrezetorfragment (1)
- Acetylierung (1)
- Achslage (1)
- Acinetobacter calcoaceticus (1)
- Acrolein (1)
- Acrylglas (1)
- Acrylic Glas (1)
- Active Noice Canceling (1)
- Actor Engagement (1)
- Actor Roles (1)
- Acute toxicity (1)
- Ad-hoc networks (1)
- Ad-hoc-Netz (1)
- Adaptive Antennen (1)
- Adaptive Building Controller (1)
- Adaptive Data Structure (1)
- Adaptive Entzerrung (1)
- Adaptive time step (1)
- Additive 3D-Druckverfahren (1)
- Additive Fertigung (1)
- Addukt (1)
- Addukte (1)
- Adenylat (1)
- Adiabatische Abkühlung (1)
- Adjoint method (1)
- Adjungiertenverfahren (1)
- Adsorbermaterialien (1)
- Adsorptionsisotherme (1)
- Adsorptionskinetik (1)
- Adult identity (1)
- Adult learning (1)
- Advanced Encryption Standard (1)
- Aerodynamik (1)
- Aerosol (1)
- Aerosol Particles (1)
- Aerosol Partikeln (1)
- Affine Arithmetic (1)
- Agglomerat (1)
- Aggregation (1)
- Agile Methoden (1)
- Agriculture Loan (1)
- Ah-Rezeptor (1)
- AhR (1)
- AhR-Knockout Mäuse (1)
- AhR-VDR-Crosstalk (1)
- AhR-deficient mice (1)
- AhR/ER Crosstalk (1)
- AhRR (1)
- Ahr Knockout Model (1)
- Akquisition (1)
- Aktin (1)
- Aktive Stadt- und Ortsteilzentren (1)
- Aktivierung <Physiologie> (1)
- Aktivitätskoeffizient (1)
- Akustik (1)
- Akute lymphatische Leukämie (1)
- Akute myeloische Leukämie (1)
- Akzeptanz (1)
- Algebraic dependence of commuting elements (1)
- Algebraic geometry (1)
- Algebraic groups (1)
- Algebraische Abhängigkeit der kommutierende Elementen (1)
- Algebraischer Funktionenkörper (1)
- Algorithmic Differentiation (1)
- Alkenylbenzene (1)
- Alkine (1)
- Alkoholfreier Wein (1)
- Alkylaromaten (1)
- Alkylcarbonate (1)
- Alkylglucosid (1)
- Allgemeine Mikrobiologie (1)
- Allgemeinheit (1)
- Allokation (1)
- Allylierung (1)
- Altern (1)
- Aluminiumphosphate (1)
- Amazonia (1)
- Ambidextrie (1)
- Ambulatory Assessment (1)
- Ames-Fluktuationstest (1)
- Amharic, Attention, Factored Convolutional Neural Network, OCR (1)
- Amidbindungsknüpfung (1)
- Amination (1)
- Aminosäuren (1)
- Ammoniak (1)
- Ammoniumcarbamat (1)
- Analyse (1)
- Analysis (1)
- Analytical method (1)
- Analytische Modellierung (1)
- Ananasgewächse (1)
- Aneignung (1)
- Aneuploidy, Whole Genome Doubling (1)
- Angebotspreise (1)
- Angeregter Zustand (1)
- Angewandte Mathematik (1)
- Angewandte Toxikologie (1)
- Anion recognition (1)
- Anionenerkennung (1)
- Anionenrezeptoren (1)
- Anisotropie (1)
- Annulus (1)
- Anomaly Detection (1)
- Anorganische Chemie (1)
- Anorganisches Pigment (1)
- Ansäuerung (1)
- Ant Colony Optimization (1)
- Anthocyanidine (1)
- Anthropogener Einfluss (1)
- Anthropotechnik (1)
- Anti-diffusion (1)
- Antiandrogene (1)
- Antidiffusion (1)
- Antiestrogene (1)
- Antigenspezifische Immunsuppression (1)
- Antimon (1)
- Antioxidans (1)
- Antioxidanzien (1)
- Antioxidative Kapazität (1)
- Anämiebehandlung (1)
- Aperiodic Crystal (1)
- Aperiodischer Kristall (1)
- Apfelsaftextrakte (1)
- Apfelsorte (1)
- Apple juice (1)
- Application Framework (1)
- Approximationsalgorithmus (1)
- Arabidopsis thaliana (1)
- Arachidonsäurestoffwechsel (1)
- Arbeitsmarkt (1)
- Arbeitsmaschine (1)
- Arbitrage (1)
- Arc distance (1)
- Archaikum (1)
- Archimedische Kopula (1)
- Architectural History (1)
- Architektur (1)
- Architektur des 19. Jahrhunderts (1)
- Aren-Komplex (1)
- Aristoteles (1)
- Arithmetic data-path (1)
- Arithmetik (1)
- Armierung (1)
- Aromastoffe (1)
- Aromatizität (1)
- Aroniabeere (1)
- Arseniden (1)
- Artefakt (1)
- Aryl hydrocarbon Receptor (1)
- Arylaminierung (1)
- Arzneimittelresistenz (1)
- Asarone (1)
- Ascorbat (1)
- Ascorbinsäure (1)
- Ascorbylradikal (1)
- Asiatische Option (1)
- Asset Administration Shell (1)
- Asset allocation (1)
- Asset-liability management (1)
- Assistenzsystem (1)
- Association (1)
- Asympotic Analysis (1)
- Asymptotic Analysis (1)
- Asymptotische Entwicklung (1)
- Atmungskette (1)
- Atom optics (1)
- Atomoptik (1)
- Attribution (1)
- Aufbau von Steuerungsmechanismen (1)
- Aufladung (1)
- Augmented Reality (1)
- Augustinus (1)
- Ausdrucksfähig (1)
- Ausdrucksfähigkeit (1)
- Ausfallrisiko (1)
- Ausflussfaktor (1)
- Ausfällen (1)
- Aushärtung (1)
- Auslegung (1)
- Ausparken (1)
- Austin (1)
- Aut (1)
- Automat <Automatentheorie> (1)
- Automatic Differentiation (1)
- Automatic Image Captioning (1)
- Automatic risk assessment (1)
- Automatische Gefahrenanalyse (1)
- Automatische Risikobewertung (1)
- Automatisches Beweisverfahren (1)
- Automatisiertes Fahren (1)
- Automatisierungssystem (1)
- Automobil (1)
- Automobilindustrie (1)
- Automorphismengruppe (1)
- Autonomer Agent (1)
- Autonomer Roboter (1)
- Autonomes Fahren (1)
- Autopoiese (1)
- Autoprozessierung (1)
- Autoregressive Hilbertian model (1)
- Autökologie (1)
- Außerschulisches Lernen (1)
- Aven (1)
- Avirulence (1)
- Axiale Chiralität (1)
- Axialschub (1)
- Azaindirubin (1)
- Azidifizierung (1)
- Aziridin (1)
- B-CLL (1)
- B3 (1)
- B3- (1)
- BAD (1)
- BASIL-Verfahren (1)
- BASIL-process (1)
- BDH (1)
- BMP1 MAP-Kinase (1)
- BRCA2 (1)
- Backlog (1)
- Bacteriocin (1)
- Baeocyte (1)
- Bahnplanung (1)
- Baiturrahman Grand Mosque (1)
- Bakteriophage (1)
- Balance sheet (1)
- Baldwin (1)
- Bamipin (1)
- Banda Aceh old city center (1)
- Barbiturate (1)
- Barock (1)
- Baroque (1)
- Barrierefreiheit (1)
- Barriers (1)
- Basaltfaserverstärkte Kunststoffe (1)
- Basic Scheme (1)
- Basis Risk (1)
- Basisband (1)
- Basische Feststoffkatalysatoren (1)
- Basket Option (1)
- Bass-ackwards analysis (1)
- Batch Methode (1)
- Batchkalorimetrie (1)
- Baugeschichte (1)
- Baugrund-Tragwerk-Interaktion (1)
- Bauleitplanung (1)
- Baulicher Brandschutz (1)
- Bauplanung (1)
- Bauteilauslegung (1)
- Bauvorhaben (1)
- Bauwesen (1)
- Bayes method (1)
- Bayes-Entscheidungstheorie (1)
- Bayes-Verfahren (1)
- Beam models (1)
- Beam orientation (1)
- Bearing (1)
- Bearing capacitance (1)
- Bearing current (1)
- Bebauungsplan (1)
- Beere (1)
- Beerenobst (1)
- Befahrbarkeitsanalyse (1)
- Befestigungsmittel (1)
- Befähigung (1)
- Begriff (1)
- Behinderung (1)
- Bekreuzter Traubenwickler (1)
- Belastung nichtruhende (1)
- Belastung zyklisch (1)
- Beleuchtung (1)
- Bemessungskonzept (1)
- Bemessungsmodell (1)
- Benetzbarkeit (1)
- Benetzung (1)
- Benutzer (1)
- Benutzerfreundlichkeit (1)
- Benutzeroberfläche (1)
- Benzinverbrauch (1)
- Benzol (1)
- Bernstein–Gelfand–Gelfand construction (1)
- Berry fruit juice (1)
- Berufliche Entwicklung (1)
- Berufsberatung (1)
- Berufspädagogik (1)
- Berufsrolle (1)
- Berufswahl (1)
- Berührungslose Messung (1)
- Berührungsloser Sensor (1)
- Beschichten (1)
- Beschichtungsprozess (1)
- Beschichtungswerkstoffen (1)
- Beschleunigung (1)
- Beschränkte Arithmetik (1)
- Beschränkte Krümmung (1)
- Besonderheit (1)
- Beton (1)
- Betonbau (1)
- Betonfestigkeit (1)
- Betonstahl (1)
- Betonverzahnung (1)
- Betrachtung des Schlimmstmöglichen Falles (1)
- Betriebliche Gesundheitsförderung (1)
- Betriebsmittel (1)
- Betriebspädagogik (1)
- Bettungsmodul (1)
- Bevolkingsdaling (1)
- Bewegungsanalyse (1)
- Bewertungsverfahren (1)
- Bewältigung (1)
- Bewässerung (1)
- Biaryl (1)
- Bibligraphic References (1)
- Bicyclus (1)
- Biegetragfähigkeit (1)
- Bifunktioneller Katalysator (1)
- Bifurkation (1)
- Bilanzierung (1)
- Bilanzstrukturmanagement (1)
- Bilayer (1)
- Bildsegmentierung (1)
- Bildungsforschung (1)
- Bildungskapital (1)
- Bildungsreform (1)
- Bildungsungleichheit (1)
- Binaphthalin (1)
- Binaphthyle (1)
- Bindiger Boden (1)
- Binomialbaum (1)
- Bio-basierte Materialien (1)
- Bio-inspired (1)
- Biodiesel (1)
- Biodiversität (1)
- Biogeography (1)
- Biokatalyse (1)
- Biomimetic (1)
- Bionas (1)
- Bionik (1)
- Biophysics (1)
- Bioplastic-based blend nanocomposites (1)
- Biorthogonalisation (1)
- Biot Poroelastizitätgleichung (1)
- Biot-Savart Operator (1)
- Biot-Savart operator (1)
- Biotechnologie (1)
- Biotransformation (1)
- Biotrophy (1)
- Bioturbation (1)
- Bipedal Locomotion (1)
- Bis(kronenether) (1)
- Bispyrazole (1)
- Bitvektor (1)
- Blitztemperatur (1)
- Blue Collar Worker (1)
- Blue Ocean (1)
- Bluetooth (1)
- Blumenbach (1)
- Boltzmann Equation (1)
- Bondindizes (1)
- Boosting (1)
- Bootstrap (1)
- Bootstrapping (1)
- Boratom (1)
- Botrytis cinerea (1)
- Botrytis fabae (1)
- Boundary Value Problem / Oblique Derivative (1)
- Brandenburg-Lubuskie (1)
- Brandfall (1)
- Brandschutz (1)
- Brandversuche (1)
- Bremerhaven (1)
- Brinkman (1)
- Brombeere (1)
- Bromierung (1)
- Broutman-Versuch (1)
- Brownian Diffusion (1)
- Brownian motion (1)
- Brownsche Bewegung (1)
- Brückenbau (1)
- Buchstabe (1)
- Buffer (1)
- Buffer Zone Method (1)
- Buffon (1)
- Building Design (1)
- Building Simulation (1)
- Bundle Methods (1)
- Buntsaftkonzentrat (1)
- Busfahrer (1)
- Business Model Innovation (1)
- Business Sustainability (1)
- Butterflykomplex (1)
- Bändchenhalbzeuge (1)
- Büroflächen (1)
- Bürostuhl mit Heiz- und Kühlfunktion (1)
- Büyükçekmece and Mogan Lake (1)
- C-Si-Kupplung (1)
- C3+ (1)
- C5H3(SiMe3)2-Liganden (1)
- CAD-Modell (1)
- CBR (1)
- CCT (1)
- CD11b (1)
- CD18 (1)
- CDK1 (1)
- CDMA (1)
- CDO (1)
- CDS (1)
- CDSwaption (1)
- CFD Simulation (1)
- CFK, Epoxidharzmatrix (1)
- CFRP (1)
- CGH (1)
- CHAMP (1)
- CID (1)
- CMC (1)
- CMOS (1)
- CMOS-Schaltung (1)
- CNS (1)
- COX-2 (1)
- CPDO (1)
- CSI (1)
- CSOs treatment (1)
- CUDA (1)
- CYP1A (1)
- CYP1A2 (1)
- CYP1B1 (1)
- CYP3A4 (1)
- Caching (1)
- Cadmium (1)
- Caedibacter (1)
- Calcitriol (1)
- Calciumkanal (1)
- Calibration (1)
- Capabilities (1)
- Car-Sharing (1)
- CarSharing (1)
- Carben (1)
- Carbene (1)
- Carbon Capture (1)
- Carbon footprint (1)
- Carbonbeton (1)
- Carcinogenesis (1)
- Careless Responding (1)
- Caspase (1)
- Castelnuovo Funktion (1)
- Castelnuovo function (1)
- Catecholat (1)
- Cauchy-Born Regel (1)
- Cauchy-Born Rule (1)
- Cauchy-Born rule (1)
- Cauchy-Navier-Equation (1)
- Cauchy-Navier-Gleichung (1)
- Cefotaxim (1)
- Cell crosstalk (1)
- Cell proliferation (1)
- Cellular Communications (1)
- Cellulose (1)
- Celluloseacetat (1)
- Censoring (1)
- Center Location (1)
- Chalkogen (1)
- Chancengleichheit (1)
- Change (1)
- Change Point Analysis (1)
- Change Point Test (1)
- Change-point Analysis (1)
- Change-point estimator (1)
- Change-point test (1)
- Channel Hopping (1)
- Channel Scheduling (1)
- Channel estimation (1)
- Channel hopping (1)
- Channel sensing (1)
- Chaperone (1)
- Charakter <Gruppentheorie> (1)
- Chemisc (1)
- Chemische Analyse (1)
- Chemische Bindung / Theorie (1)
- Chemische Reaktion (1)
- Chemische Synthese (1)
- Chemolumineszenz (1)
- Chevron-Prozess (1)
- Chi-Quadrat-Test (1)
- Chiral (1)
- Chirale Induktion (1)
- Chiralität <Chemie> (1)
- Chlamydomonas reinhardii (1)
- Chloratom (1)
- Chloride regulation (1)
- Chlorierung (1)
- Cholesky-Verfahren (1)
- Chow Quotient (1)
- Chrom (1)
- Chromatin (1)
- Chromatographiesäule (1)
- Chronische Darmentzündung (1)
- Chroococcales (1)
- Chroococcidiopsis (1)
- Chroococcidiopsis cubana (1)
- Chroococcidiopsis thermalis (1)
- Chroococcidiopsisdaceae (1)
- Circle Location (1)
- City center (1)
- Classification (1)
- Classification of biomedical signals (1)
- Cleaning Efficiency (1)
- Click chemistry (1)
- Click-Chemie (1)
- Clock and Data Recovery Circuits (1)
- Clostridium difficile (1)
- Closure (1)
- Clustering (1)
- Clusterverbindungen (1)
- Co-Curing (1)
- Coaching (1)
- Coarse graining (1)
- Cobalt(II)-Komplexe (1)
- Cobalt(III)-Komplexe (1)
- Cobalt-Halbsandwichkomplexe (1)
- Cobaltkomplexe (1)
- Codierung (1)
- Coenogonium (1)
- Cognitive Amplification (1)
- Cohen-Lenstra heuristic (1)
- Collaboration (1)
- Collision Induced Dissociation (1)
- Combinatorial Optimization (1)
- Combinatorial Testing (1)
- Combined IR/UV spectroscopy (1)
- Combined Mobility (1)
- Commodity Index (1)
- Comparative Genomische Hybridisierung (1)
- Competence (1)
- Complex Structures (1)
- Composite Materials (1)
- Composites (1)
- Computational Homogenization (1)
- Computational Mechanics (1)
- Computer Algebra (1)
- Computer Algebra System (1)
- Computer Graphic (1)
- Computer Supported Cooperative Work (1)
- Computer algebra (1)
- Computer graphics (1)
- Computer-Aided Diagnosis (1)
- Computeralgebra System (1)
- Computerphysik (1)
- Computertomographie (1)
- Computervision (1)
- Concrete experience (1)
- Concurrent data structures (1)
- Conditional Value-at-Risk (1)
- Configurational Forces (1)
- Congitive Radio Networks (1)
- Connectivity (1)
- Conservation laws (1)
- Consistencyanalysis (1)
- Consistent Price Processes (1)
- Consolidation (1)
- Constraint Generation (1)
- Constraint-Coupled Systems (1)
- Constructed Wetland (1)
- Construction of hypersurfaces (1)
- Constructivism (1)
- Containertypen (1)
- Containertypes (1)
- Content Management (1)
- Context Awareness (1)
- Context-sensitive Assistance (1)
- Continuous-Time Neural Networks (1)
- Continuum Damage (1)
- Continuum-Atomistic Multiscale Algorithm (1)
- Continuum-Atomistics (1)
- Control Engineering (1)
- Controller Synthesis (1)
- Convergence Rate (1)
- Convex Optimization (1)
- Cook Wilson (1)
- Coordination (1)
- Coping (1)
- Copper (1)
- Copula (1)
- Correct-by-Design Controller Synthesis (1)
- Corridors (1)
- Coupled PDEs (1)
- Coxeter-Freudenthal-Kuhn triangulation (1)
- Crack resistance (1)
- Crash (1)
- Crash Hedging (1)
- Crash modelling (1)
- Crash-Charakteristiken (1)
- Crashmodellierung (1)
- Crashverhalten (1)
- Cre-loxP-System (1)
- Credit Default Swap (1)
- Credit Risk (1)
- Cross-Cultural Product Development (1)
- Cross-border regions (1)
- Cross-border transport (1)
- Crowdsourcing (1)
- Cryo (1)
- Cryptophyte (1)
- Cryptothecia (1)
- Crystallization fouling (1)
- Curvature (1)
- Curved viscous fibers (1)
- Cusanus (1)
- Cy (1)
- Cyanobakterium (1)
- Cyber-Physical Systems (1)
- Cycle Accuracy (1)
- Cycle Decomposition (1)
- Cyclin-abhängige Kinasen (1)
- Cyclisches Nucleotid Phosphodiesterase <3 (1)
- Cyclisches Pseudopeptid (1)
- Cyclo-AMP (1)
- Cyclo-GMP (1)
- Cycloheximid (1)
- Cyclopentadien (1)
- Cyclopentadienylliganden (1)
- Cyclophilin B (1)
- Cytochrome P450 (1)
- Cyp1a1 (1)
- Cyp1a1 Genexpression (1)
- Cyp24a1 Genexpression (1)
- Cytochromes P450 (1)
- Cytochrome P-450 (1)
- Cytogenetik (1)
- Cytokine (1)
- Cytotoxicity (1)
- Czochralski (1)
- DC/DC Converter (1)
- DCE <Programm> (1)
- DDR-Bildungssystem (1)
- DDR-Leistungssportsystem (1)
- DFG (1)
- DFT (1)
- DFT calculation (1)
- DGR (1)
- DHFR (1)
- DL-PCBs (1)
- DLW (1)
- DNA (1)
- DNA Addukte (1)
- DNA adducts (1)
- DNA damage (1)
- DNA metabarcoding (1)
- DNA-Addukte (1)
- DNA-Schädigung (1)
- DNA-damage (1)
- DNS-Bindung (1)
- DNS-Chip (1)
- DNS-Doppelstrangbruch (1)
- DNS-Reparatur (1)
- DNS-Strangbruch (1)
- DNS-Topoisomerase I (1)
- DOSY (1)
- DPN (1)
- DSM (1)
- DSMC (1)
- Dach (1)
- Damage (1)
- Dampf-Flüssigkeit-Gemisch (1)
- Dark-state Polariton (1)
- Darmflora (1)
- Darmkrankheit (1)
- Darstellungstheorie (1)
- Darwin (1)
- Das Urbild eines Ideals unter einem Morphismus von Algebren (1)
- Data Analysis (1)
- Data Modeling (1)
- Data Spreading (1)
- Data path (1)
- Dataset (1)
- Datenausgabegerät (1)
- Datenbankanbindungen und Datenorganisation (1)
- Datenbanken (1)
- Datenbanksystem (1)
- Datenfusion (1)
- Datenrückgewinnungsschaltungen (1)
- Datenspreizung (1)
- Dauerhaftigkeit (1)
- Dauerstandfestigkeit (1)
- De (1)
- Debt Management (1)
- Decane (1)
- Decarboxylierende Kupplung (1)
- Decarboxylierende Kupplungen (1)
- Decarboxylierung (1)
- Decision Support Systems (1)
- Deckensystem (1)
- Defaultable Options (1)
- Defektinteraktion (1)
- Definition (1)
- Deformationsmessung (1)
- Deformationstheorie (1)
- Degenerate Diffusion Semigroups (1)
- Dehnungsmessung (1)
- Dehydratasen (1)
- Dehydratisierung (1)
- Dehydrierung (1)
- Dekonsolidierung (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Delaunay-Triangulierung (1)
- Demand Side Integration (1)
- Demografischer Wandel (1)
- Demographische Alterung (1)
- Dendritische Zellen (1)
- Derivative Estimation (1)
- DesLaNAS (1)
- Despoblacion (1)
- Detektionsverfahren (1)
- Deterioration (1)
- Determination (1)
- Deuterierung (1)
- Deuteromycetes (1)
- Dexamethason (1)
- Dezentrale Heiz- und Kühlsysteme (1)
- Dialektische Logik (1)
- Dialkali-Halogen (1)
- Dialysemembran (1)
- Diamant (1)
- Diamine (1)
- Diazapyridinophane (1)
- Diazoniumverbindungen (1)
- Dicarbonsäuren (1)
- Dichte (1)
- Dichtefunktionalformalismus (1)
- Dickwandigkeit (1)
- Diclofenac (1)
- Dienstgüte (1)
- Dienstleistungen (1)
- Dienstschnittstellen (1)
- Diesel engine (1)
- Dieselmotor (1)
- Differential forms (1)
- Differentialgleichung der Sandwichtheorie (1)
- Differenz (1)
- Differenzenverfahren (1)
- Differenzkonstruktion (1)
- Differenzmenge (1)
- Diffusion (1)
- Diffusion processes (1)
- Diffusionskoeffizient (1)
- Diffusionsmessung (1)
- Diffusionsmodell (1)
- Diffusionsprozess (1)
- Digital Image Correlation (1)
- Digital Manufacturing System (1)
- Digital technology (1)
- Digitale Arbeitsblätter (1)
- Digitale Medien (1)
- Digitale integrierte Schaltung (1)
- Digitaler Zwilling (1)
- Digitales Produktionssystem (1)
- Digitalmodulation (1)
- Dihydrofolatreduktase (1)
- Dimethyldisulfid (1)
- Dimethylfumarat (1)
- Dinickelocen (1)
- Dioxin (1)
- Dioxin-like Compounds (1)
- Dioxolanligand (1)
- Dipeptide (1)
- Diphenylfulven (1)
- Direct Numerical Simulation (1)
- Direct X (1)
- Direktantrieb (1)
- Discrete Event Simulation (DES) (1)
- Discrimination (1)
- Discriminatory power (1)
- Diskontinuität (1)
- Diskrepanz (1)
- Diskrete Elemente Methode (1)
- Diskrete Simulation (1)
- Diskretisierung (1)
- Diskriminierung (1)
- Diskurs (1)
- Diskursive Haltung (1)
- Dislocations (1)
- Dispergierung (1)
- Dispersionsrelation (1)
- Disproportionierung von Ethylbenzol (1)
- Distickstoff (1)
- Distributed Hash Tables (1)
- Distributed Optimization (1)
- Distributed Rendering (1)
- Distributed system (1)
- Disulfidbrücken-Transfer (1)
- Diversifikation (1)
- Diversity Management (1)
- Diversität (1)
- Diversitätsgenerierende Retroelemente (1)
- Diversitätssensibler Unterricht (1)
- Dokumentation (1)
- Domain switching (1)
- Doppelresonanz (1)
- Doppelschicht (1)
- Dopplerlimitiert (1)
- Dosierung (1)
- Dotieren (1)
- Double Dissociation (1)
- Double Minutes (1)
- Downlink (1)
- Drehen (1)
- Dreidimensionale Modellierung (1)
- Dreidimensionale Rekonstruktion (1)
- Dreidimensionale Software (1)
- Dreidimensionale Strömung (1)
- Dreikerncluster (1)
- Drilltragverhalten (1)
- Drohne (1)
- Drone (1)
- Droplet breakage (1)
- Droplet coalescence (1)
- Drosselspalt (1)
- Druckabfall (1)
- Druckdifferenz (1)
- Druckkorrektur (1)
- Drug delivery systems (1)
- Dry Coating (1)
- Dry Fiber Placement Preforms (1)
- Dry Weight (1)
- Distribution (1)
- Dual Decomposition (1)
- Duftstoffanalyse (1)
- Duftstoffe (1)
- Dunkelzustandspolariton (1)
- Duplicate Identification (1)
- Duplikaterkennung (1)
- Durchfluss (1)
- Durchflußzytometrie (1)
- Durchlaufbecken (1)
- Durchlaufträger (1)
- Durchmessersensor (1)
- Duromere (1)
- Dynamically reconfigurable analog circuits (1)
- Dynamik / Baumechanik (1)
- Dynamische Gebäudesimulation (1)
- Dynamische Massengenerierung (1)
- Dynamischer Test (1)
- Dyslexie (1)
- Dyson-Schwinger Gleichung (1)
- Dübelkennlinie (1)
- Dünnfilmapproximation (1)
- Dünnschichtsolarz (1)
- E (1)
- E-Auto (1)
- E-Learning (1)
- EAMG (1)
- ECC (1)
- EDA (1)
- EDF observation models (1)
- EEG (1)
- EGFR (1)
- ELISA (1)
- EM algorithm (1)
- EMPO (1)
- EMS-Training (1)
- ENT (1)
- EP-Komposite (1)
- EPDM (1)
- EPR Spectroscopy (1)
- EPR Spektroskopie (1)
- EPR-Spectroscopy (1)
- EROD (1)
- EROD Induktion (1)
- ESPI (1)
- ESTELLE (1)
- ETB (1)
- Earthworms (1)
- Eastern Boundary Upwelling Systems (1)
- Ebullition time-series (1)
- Echtzeit (1)
- Ecology (1)
- Eddy (1)
- Education (1)
- Edwards Model (1)
- Effective Conductivity (1)
- Effects of Design Choices (1)
- Efficiency (1)
- Efficient Reliability Estimation (1)
- Effizienzsteigerung (1)
- Eierstockkrebs (1)
- Eigenfrequenz (1)
- Eigenfrequenzbeeinflussung (1)
- Eigenschaftsprüfung (1)
- Eigenspannungen (1)
- Eikonal equation (1)
- Einbindiger Traubenwickler (1)
- Einparken (1)
- Einzelhandel (1)
- Einzelheit <Philosophie> (1)
- Einzelmolekülspektroskopie (1)
- Einzelzell-Analyse (1)
- Eisen(II)-Komplexe (1)
- Eisen-Phosphor-Cluster (1)
- Eisen-Schwefel-Cluster (1)
- Eisencarbonylkomplexe (1)
- Elastase (1)
- Elastische Deformation (1)
- Electrical model (1)
- Electroless Plating (1)
- Electronic Commerce (1)
- Electronic Laboratory Notebook (1)
- Electronically excited states (1)
- Elektrisch (1)
- Elektrischer Durchschlag (1)
- Elektroautomobil (1)
- Elektrochromie (1)
- Elektrohydraulik (1)
- Elektrolytlösung (1)
- Elektrolytlösungen (1)
- Elektromagnetische Streuung (1)
- Elektromuskelstimulationstraining (1)
- Elektromyostimulationstraining (1)
- Elektronenspinresonanz (1)
- Elektronisches Laborjournal (1)
- Elektrooptik (1)
- Elektrophysiologie (1)
- Elektroporation (1)
- Elementarteilchenphysik (1)
- Eliminationsverfahren (1)
- Ellagsäure (1)
- Ellipsometrie (1)
- Elliptische Verteilung (1)
- Elliptisches Randwertproblem (1)
- Emission (1)
- Emissionen (1)
- Emissionsspektroskopie (1)
- Emotion (1)
- Empfehlungssysteme (1)
- Empfängerorientierung (1)
- Empirische Forschung (1)
- Empirische Pädagogik (1)
- Enamide (1)
- Endliche Gruppe (1)
- Endliche Lie-Gruppe (1)
- Endokrin wirksamer Stoff (1)
- Endoplasmatisches Retikulum (1)
- Energie (1)
- Energie auf Kläranlagen (1)
- Energieabsorptionsvermögen (1)
- Energieeinsparung (1)
- Energieversorgungsnetz (1)
- Energiewende (1)
- Energy Efficiency (1)
- Energy markets (1)
- Engagement (1)
- Engagementförderung (1)
- Engineering 4.0 (1)
- Englisches Planungssystem (1)
- English Planning System (1)
- Ensemble Feature Selection (1)
- Ensemble Visualization (1)
- Entalkoholisierter Wein (1)
- Entladung (1)
- Entlastung (1)
- Entlastungsverhalten (1)
- Entscheidung (1)
- Entscheidungsbaum (1)
- Entscheidungsproblem (1)
- Entscheidungstheorie (1)
- Entstehung (1)
- Entwicklung (1)
- Entwicklungsgeschichte (1)
- Entwicklungspsychologie (1)
- Entwicklungsstabilität (1)
- Entwurfsautomation (1)
- Entzündung (1)
- Enumerative Geometrie (1)
- Environmental Psychology (1)
- Environmental inequality (1)
- Environmental stress cracking resistance (1)
- Enzyminhibitor (1)
- Epidermaler Wachstumsfaktor-Rezeptor (1)
- Epigenese (1)
- Epigenetik (1)
- Epiphyten (1)
- Epitaxie (1)
- Epoxidation (1)
- Epoxide (1)
- Epoxidharzklebstoff (1)
- Epoxidharzverbunden (1)
- Epoxidklebstoffe Härtungskinetik Oberflächenvorbehandlung Aluminium Netzwerkstrukturen (1)
- Epoxy Adhesive (1)
- Erdbebeningenieurwesen (1)
- Erdbeere (1)
- Erdschluss (1)
- Erdschlussentfernung (1)
- Erdöl Prospektierung (1)
- Erfüllbarke (1)
- Erfüllbarkeitsproblem (1)
- Ergonomie (1)
- Erhaltungsgleichungen (1)
- Erkenntnistheorie (1)
- Erklärungen (1)
- Ermüdung bei niedrigen Lastspielzahlen (1)
- Ermüdungsrisse (1)
- Erreichbarkeit (1)
- Ersatzteil (1)
- Ersatzteilmarkt (1)
- Ersatzteilversorgung (1)
- Ersatzwertgenerierung (1)
- Erschließungsform (1)
- Erwachsenenbildung (1)
- Erwartungswert-Varianz-Ansatz (1)
- Erweiterte Realität <Informatik> (1)
- Essential m-dissipativity (1)
- Estradiol (1)
- Estradiolrezeptor (1)
- Estragol (1)
- Estrogene (1)
- Ethanol (1)
- Ethernet (1)
- Ethik (1)
- Ethylbenzol (1)
- Ethylbenzene disproportionation (1)
- Europa (1)
- European Pollutant Release and Transfer Register (E-PRTR) (1)
- European Territorial Cooperation (1)
- European Union (1)
- European Union policy-making (1)
- European integration (1)
- Europeanisation (1)
- Europäische Metropolregionen in Deutschland (1)
- Europäische Strukturpolitik (1)
- Europäische Territoriale Zusammenarbeit (1)
- Eurosoils (1)
- Evakuierung (1)
- Event psychology (1)
- Eventpsychologie (1)
- Eventual consistency (1)
- Evolutionary Algorithm (1)
- Evolutionärer Algorithmus (1)
- Exekutive Funktionen (1)
- Expected shortfall (1)
- Experiential learning (1)
- Experimentation (1)
- Experimentauswertung (1)
- Experimentelle Charakterisierung (1)
- Experimentelle Ermittlung (1)
- Experimentelle Untersuchung (1)
- Explainability (1)
- Exponential Utility (1)
- Exponentieller Nutzen (1)
- Exposed Datapath Architectures (1)
- Expressiveness (1)
- Extended Finite-Elemente-Methode (1)
- Extended Kalman Filter (1)
- Extended Mind (1)
- Extreme Events (1)
- Extreme value theory (1)
- Extrudiertes Polystyrol (1)
- Eye-Tracking (1)
- Eyewear Computing (1)
- F1-ATPase (1)
- FERAL (1)
- FICTION (1)
- FKV (1)
- FKV-Rotor (1)
- FOF1-ATP-Synthase (1)
- FPM (1)
- FT-ICR Zelle (1)
- FT-IR-Spektroskopie (1)
- FT-MIR (1)
- FTIR-Spektroskopie (1)
- Facade (1)
- Fachwerkmodell (1)
- Facility-Management (1)
- Faden (1)
- Fahrerassistenzsystem (1)
- Fahrgemeinschaft (1)
- Fahrtkostenmodelle (1)
- Fahrzeugbau (1)
- Fahrzeugcrashberechnung (1)
- Fahrzeughydraulik (1)
- Faltenbildung (1)
- Faltungsaktivität (1)
- Farbe (1)
- Farbstabilität (1)
- Farnesylpyrophosphat-Synthase (1)
- FasLigand (1)
- Faser (1)
- Faser-Kunststoff-Verbund-Laminate (1)
- Faser-Thermoplast-Verbunden (1)
- Faser/Matrix-Haftung (1)
- Faserablage (1)
- Faserfestigkeit (1)
- Faserorientierung (1)
- Faserorientierungen (1)
- Faserschädigung (1)
- Faserverbundmaterialien (1)
- Faserverbundstrukturen (1)
- Faserverstärkte Thermoplaste (1)
- Faserverstärkter Kunststoff, Faserkunststoffbewehrung, Betonbauteile, Verbund, Verankerung, Langzeitverhalten (1)
- Fast Mode-Signaling (1)
- Fatigue (1)
- Fattyacids (1)
- Fault Injection (1)
- Fault Tree Analysis (1)
- Fe(II)-Komplexe (1)
- Feasibility study (1)
- Feature (1)
- Feature Detection (1)
- Feature Extraction (1)
- Feature extraction (1)
- Federated Learning (1)
- Federgelenk (1)
- Feedforward Neural Networks (1)
- Fehlerbaumanalyse (1)
- Fehlerwiderstand (1)
- Feige (1)
- Feinkornbeton (1)
- Femtosecond Laser (1)
- Femtosekundenlaser (1)
- Femtosekundenspektroskopie (1)
- Fernerkundung (1)
- Fernstudium (1)
- Ferrocen (1)
- Fertigteildecken mit Ortbetonergänzung (1)
- Fertigungslogistik (1)
- Fertigungsmesstechnik (1)
- Fertigungsverfahren (1)
- Festkörperchemie (1)
- Festkörpergrenzschichten (1)
- Festkörperlaser (1)
- Feststoff (1)
- Feststoff-Dosiersystem (1)
- Feststoffe (1)
- Fettsäurealkohol (1)
- Fettsäuremethylester (1)
- Fettsäuren (1)
- Feuchtetransport (1)
- Feynman Integrals (1)
- Feynman path integrals (1)
- Fiber suspension flow (1)
- Fifth generation (5G) mobile networks (1)
- Filler (1)
- Filmkühlung (1)
- Filterauslegung (1)
- Filterbecken (1)
- Filterkuchenwiderstand (1)
- Filtermittelwiderstand (1)
- Filtersubstrat Lavasand (1)
- Filterversuche (1)
- Filtrierbarkeit (1)
- Financial Engineering (1)
- Finanzkrise (1)
- Finanznumerik (1)
- Fine Grain Concrete (1)
- Finite Element Method (1)
- Finite Elements (1)
- Finite Elements (1)
- Finite element method (1)
- Finite-Elemente-Simulation (1)
- Finite-Punktmengen-Methode (1)
- Firmware (1)
- Firmwertmodell (1)
- First Order Optimality System (1)
- Fitts Law (1)
- Flachwasser (1)
- Flachwassergleichungen (1)
- Flechtenalgen (1)
- Flexibilisierung (1)
- Fliehkraftinvariant (1)
- Fließanalyse (1)
- Fließgelenk (1)
- Fließpresshalbzeugen (1)
- Floatglas (1)
- Floatingpotential (1)
- Flow Visualization (1)
- Fluid dynamics (1)
- Fluid-Feststoff-Strömung (1)
- Fluid-Struktur-Kopplung (1)
- Fluid-Struktur-Wechselwirkung (1)
- Fluktuationen der kleinen Eigenwerte des Dirac-Operators (1)
- Fluorescence (1)
- Fluoreszenzfarbstoff (1)
- Fluoreszenzspektrometer (1)
- Fluoreszenzspektroskopie (1)
- Fluoreszenzspektrum (1)
- Fluoridkatalysierte Kupplungsreaktionen (1)
- Flächennutzungsplan (1)
- Flächennutzungsplanung (1)
- Flüssig-Flüssig System (1)
- Flüssig-Flüssig-System (1)
- Flüssigkeitsreibung (1)
- Foam decay (1)
- Fokker-Planck-Gleichung (1)
- Folgeprozesse (1)
- Forgetting-enabled Information Systems (1)
- Formale Grammatik (1)
- Formale Ontologie (1)
- Formale Sprache (1)
- Formaler Beweis (1)
- Forward-Backward Stochastic Differential Equation (1)
- Fotochemie (1)
- Fourier-Transformation (1)
- Fracture behavior (1)
- Fragmentierung (1)
- Framework (1)
- Fredholmsche Integralgleichung (1)
- Freiformfläche (1)
- Freiheit (1)
- Freiheitsgrad (1)
- Freiraumentwicklung (1)
- Freiraumplanung (1)
- Freiraumschutz (1)
- Freistrahl (1)
- Fremdspracherwerb (1)
- Frequenzsprungverfahren (1)
- Friction (1)
- Frischbetondruck (1)
- Fulven (1)
- Functional Safety (1)
- Functional autoregression (1)
- Functional time series (1)
- Fungi (1)
- Funkdienst (1)
- Funktioneller Gradientenwerkstoff (1)
- Funktionenkörper (1)
- Funktionsanalyse (1)
- Funktionsmorphologie (1)
- Furan (1)
- Furocumarine (1)
- Fusion (1)
- Future Internet (1)
- Fußball (1)
- Fußgängerzone Kaiserslautern (1)
- Fußverkehr (1)
- Fußverkehrsnetz (1)
- Fähigkeiten (1)
- Fügetechnik (1)
- Fügetechnologie (1)
- Führen im Change (1)
- Führen ist eine Frage der Haltung (1)
- Führung (1)
- Füllkörpersäule (1)
- Füllstandüberwachung (1)
- Fünfgliedrige heterocyclische Betaine (Synthese und Reaktivität) (1)
- G. Bateson (1)
- GABAerge Nervenzelle (1)
- GARCH (1)
- GARCH Modelle (1)
- GFK-Rotorglocke (1)
- GGPP (1)
- GIS (1)
- GIS-Analyse (1)
- GK-EMS (1)
- GPU (1)
- Galerkin Verfahren (1)
- Galerkin methods (1)
- Galerkin-Methode (1)
- Gamification (1)
- Gamma-Konvergenz (1)
- Ganze (1)
- Ganzkörper-Elektromyostimulation (1)
- Garantiezins (1)
- Garbentheorie (1)
- Gasströmung (1)
- Gast-Wirt-Beziehung (1)
- Gateway (1)
- Gauß-Filter (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gebogener viskoser Faden (1)
- Gebäude (1)
- Gebäudegesamtsystem (1)
- Gebäudesimulation (1)
- Gebäudetechnik (1)
- Gedächtnis (1)
- Gefahren- und Risikoanalyse (1)
- Gefährdungsanalyse (1)
- Gelenkige Rohrverbindung (1)
- Gemeinsame Kanalschaetzung (1)
- Gen-Expression (1)
- Genamplifikation (1)
- Gender (1)
- Gender Classification (1)
- Gene Expression (1)
- Gene expression programming (1)
- Genealogie (1)
- Generalisierte Plastizität (1)
- Generierung (1)
- Genetische Algorithmen (1)
- Genexpression (1)
- Genome analysis (1)
- Genotoxizität (1)
- Gentechnisch transfizierte V79-Zellen (1)
- Gentherapie (1)
- Geo-referenced data (1)
- GeoWeb (1)
- Geodesie (1)
- Geographic Information System (GIS) (1)
- Geographie (1)
- Geometrical Nonlinear Thermomechanics (1)
- Geometrical product specification (1)
- Geometrienormale (1)
- Geometrische Ergodizität (1)
- Geomodellierung (1)
- Georg Wilhelm Friedrich (1)
- Georgien (1)
- Georgien / Zivilgesetzbuch (1)
- Georgien <Ost> (1)
- Georgien <West> (1)
- Geostatistik (1)
- Geovisualization (1)
- Geoweb (1)
- German census (1)
- Geräusch (1)
- Geschlecht (1)
- Geschlechterdiskriminierung (1)
- Geschlechtsbestimmung (1)
- Geschwindigkeitsbegrenzung (1)
- Geschwindigkeitsregelung (1)
- Geschwindigkeitswahrnehmung (1)
- Gestaltungslehre (1)
- Gestaltungsplanung (1)
- Gesundheitsberichterstattung (1)
- Gesundheitsförderung / Unternehmen (1)
- Gesundheitsmanagement (1)
- Gesundheitsverhalten (1)
- Gesundheitszustand (1)
- Gewebeverstärkter Thermoplast (1)
- Gewerbeimmobilien (1)
- Gewerbliche Abwärme (1)
- Gewichteter Sobolev-Raum (1)
- Gewässerschutz (1)
- Giga bit per second (1)
- Gitterbaufehler (1)
- Gittererzeugung (1)
- Gitterträger (1)
- Glasbau (1)
- Glasfaserverstärkte Kunststoffe (1)
- Glasfaserverstärkter Kunststoff (1)
- Glasklebung (1)
- Glassy polymers (1)
- Glaziale Refugien (1)
- Gleichgewichtsstrategien (1)
- Gleichspannungswandler (1)
- Gleichzeitigkeit (1)
- Gleitlager (1)
- Gleitverschleiß (1)
- Glioblastom (1)
- Gliome (1)
- GlucDOR (1)
- Glucocorticoidrezeptor (1)
- Glucosedehydrogenase (1)
- Glutamat (1)
- GlyHis (1)
- Glycogen Synthase Kinase 3 (1)
- Glycogen-Synthase-Kinase-3 (1)
- Glycosidasen (1)
- Glykolipid (1)
- Glykosylierung (1)
- Goethe (1)
- Gold nanoparticles (1)
- Goldnanopartikel (1)
- Golgi-Apparat (1)
- Google Earth (1)
- Grabeloser Leitungsbau (1)
- Gradient based optimization (1)
- Gradientenverfahren (1)
- Grand challenge (1)
- Granular (1)
- Granular flow (1)
- Granulat (1)
- Granulozyten (1)
- Grape Quality (1)
- Grapevine Fanleaf Virus (1)
- Graph Theory (1)
- Grauwasser (1)
- Grauwasserbehandlung (1)
- Grauwertkorrelation (1)
- Gravitationsfeld (1)
- Greater Region Saar-Lor-Lux+ (1)
- Green's functions (1)
- Green-Funktion (1)
- Grenzfläche (1)
- Grenzflächen (1)
- Grenzflächenpolarisation (1)
- Grenzflächenspannung (1)
- Grenzraum (1)
- GroEL/GroES-System (1)
- Gromov Witten (1)
- Gromov-Witten-Invariante (1)
- Group and Organizational Learning (1)
- Grouping by similarity (1)
- Große Abweichung (1)
- Großregion Saar-Lor-Lux+ (1)
- Grundlagen der Imprägnierung (1)
- Grundschule (1)
- Grundwassersanierung (1)
- Gruppenoperation (1)
- Gruppentheorie (1)
- Gröbner bases (1)
- Gröbner basis (1)
- Gröbner-basis (1)
- Grüne Chemie (1)
- GtCPES (1)
- Guillardia theta (1)
- Gyroscopic (1)
- H/D exchange (1)
- H4IIE (1)
- HAZOP Automation (1)
- HAZOP Automatisierung (1)
- HAZOP Digitalization (1)
- HAZOP-Verfahren (1)
- HCL (1)
- HCVL (1)
- HDACi (1)
- HETE (1)
- HMG-CoA-Reduktase (1)
- HNE (1)
- HODE (1)
- HPC (1)
- HPRT-Test (1)
- HSF (1)
- HSF1 (1)
- HSP (1)
- HSP70 (1)
- HT29 cells (1)
- HT29-Zellen (1)
- Hadamard manifold (1)
- Hadamard space (1)
- Hadamard-Mannigfaltigkeit (1)
- Hadamard-Raum (1)
- Haeckel (1)
- Haftreibwert (1)
- Halbfrequenzwirbel (1)
- Halbsandwich-Verbindungen (1)
- Halogenasen (1)
- Halogencyclisierung (1)
- Haloperoxidase-Modelle (1)
- Haltung (1)
- Hamburg <Metropolregion> (1)
- Hamilton Rezeptor (1)
- Hamiltonian Path Integrals (1)
- Hamiltonian systems (1)
- Hand (1)
- Hand gestures (1)
- Hand-Arm-System (1)
- Handbuch für die Bemessung von Straßenverkehrsanlagen (1)
- Handelsstrategien (1)
- Handlungsempfehlungen (1)
- Handwerker (1)
- Hardware Security (1)
- Hardware/Software co-verification (1)
- Harmonische Analyse (1)
- Harmonische Spline-Funktion (1)
- Harold Arthur (1)
- Hartschaumstoffe (1)
- Harvey (1)
- Harze (1)
- Harzinjektionstechnik (1)
- Harzinjektionsverfahren (1)
- Haushalt (1)
- Haustoria (1)
- Hazard Analysis (1)
- Hazard Functions (1)
- HeLa-Zelle (1)
- Heat stress response (1)
- Heat transfer (1)
- Heavy-tailed Verteilung (1)
- Hedging (1)
- Hegel (1)
- Heizrate (1)
- Heißes Elektron (1)
- Helicene (1)
- Helmholtz Type Boundary Value Problems (1)
- HepG2 (1)
- Hepatocytes (1)
- Hepatotoxizität (1)
- Herkunftsfläche (1)
- Hertwig (1)
- Herz-Lungen-System (1)
- Heston-Modell (1)
- Heteroaromaten (1)
- Heterogeneous (1)
- Heterosolarzelle (1)
- Heterozyklen (1)
- Heusler (1)
- Hexaphenylcyclohexaarsan (1)
- Hexen <1-> (1)
- Hexenol (1)
- Hidden Markov models for Financial Time Series (1)
- Hierarchische Matrix (1)
- Hierarchische verteilte Architekturen (1)
- High Voltage (1)
- High-Spin-Komplexe (1)
- High-cycle fatigue (1)
- Higher education (1)
- Hilbert complexes (1)
- Himbeere (1)
- Hinweise für barrierefreie Verkehrsanlagen (1)
- Hirnstamm (1)
- HisGly (1)
- Histamin (1)
- Histologie (1)
- Histondeacetylase-inhibitor (1)
- Hitting families (1)
- Hochbau (1)
- Hochleistungsoptik (1)
- Hochleistungsverbundbauteile (1)
- Hochspannung (1)
- Hochspannungsfeld (1)
- Hohlkörperdecke (1)
- Hohlkörperdecken (1)
- Hohlräume (1)
- Holz-Beton-Verbundbau (1)
- Holzbau (1)
- Holzhäcksel (1)
- Holzleichtbaukonstruktion (1)
- Homogeneous deformation (1)
- Homogenisieren (1)
- Homologische Algebra (1)
- Honduras (1)
- Horizontal gene transfer (1)
- Hormon (1)
- Hot-Wire-Verfahren (1)
- Hub Location Problem (1)
- Hufeisenwirbel (1)
- Human Liver Cell Models (1)
- Human Pose (1)
- Human-Computer Interaction (1)
- Human-Robot-Coexistence (1)
- Human-Robot-Cooperation (1)
- Human-centric lighting (1)
- Human-centric virtual lighting (1)
- Humanblut (1)
- Humangenetik (1)
- Humanism (1)
- Humanstudie (1)
- Hunsrück (1)
- Hybrid (1)
- Hybrid CBR (1)
- Hybrid Models (1)
- Hybrid Thermoplastisch Duroplastisch Wickeln (1)
- Hybride Werkstoffsysteme (1)
- Hybridlager (1)
- Hybridmaterialien (1)
- Hybridverbundwerkstoff (1)
- Hybridwerkstoff (1)
- Hydratation (1)
- Hydride (1)
- Hydrierung (1)
- Hydroamidierung (1)
- Hydrogel (1)
- Hydrogen Bonding (1)
- Hydrokracken (1)
- Hydrolyse (1)
- Hydrostatischer Druck (1)
- Hydrovinylierung (1)
- Hydroxymethylierung (1)
- Hyperelastizität (1)
- Hyperelliptische Kurve (1)
- Hyperflächensingularität (1)
- Hypergraph (1)
- Hyperspektraler Sensor (1)
- Hypocoercivity (1)
- Hysteresismessung (1)
- Hämoglobin (1)
- Hämoglobin-Addukt (1)
- Härten (1)
- Hören (1)
- Hüftkraft (1)
- ICT (1)
- IEC 61508 (1)
- IEEE 802.15.4 (1)
- IMU (1)
- IP Address (1)
- IP Traffic Accounting (1)
- IP-XACT (1)
- IRTG 2057 (1)
- ISO 26262 (1)
- ISO26262 (1)
- ITSM (1)
- Idealklassengruppe (1)
- Ileostomy (1)
- Illiquidität (1)
- Image (1)
- Image Processing (1)
- Image restoration (1)
- Imatinib mesilat (1)
- Imidacloprid (1)
- Imidazoliumsalze (1)
- Imido (1)
- Imine (1)
- Immersion (1)
- Immiscible lattice BGK (1)
- Immobilien (1)
- Immobilienaktie (1)
- Immobilienmarkt (1)
- Immobilienpreis (1)
- Immobilisierte Katalysatoren (1)
- Immunity (1)
- Immunität (1)
- Immunoblot (1)
- Immunofluoreszenzmikroskopie (1)
- Immuntoxin (1)
- Imote2 (1)
- Imprägnierung (1)
- Impulsfrequenz (1)
- In-Line Qualitätssicherungsmerkmal (1)
- In-vitro-Translation (1)
- Incommensurate Structure (1)
- Incremental recomputation (1)
- Index Insurance (1)
- Indigo (1)
- Indigorot (1)
- Individual (1)
- Individualisierte Mobilitätsdienstleistungen (1)
- Induction heating (1)
- Induktive logische Programmierung (1)
- Induktives Fügen (1)
- Industrial Robotics (1)
- Industrial air pollution (1)
- Industrieabwasser (1)
- Industrielle Abwärme (1)
- Industrielle Mikrobiologie (1)
- Industrielle Produktion (1)
- Infektionsmechanismus (1)
- Inflation (1)
- Information Extraction (1)
- Information Management (1)
- Information Visualization (1)
- Informationsfunktion (1)
- Informationslogistik (1)
- Informationsmodellierung (1)
- Informationstheorie (1)
- Informationsübertragung (1)
- Infrared Multi Photon Dissociation (1)
- Infrared Multiphoton Dissociation Spectroscopy (IR-MPD) (1)
- Infrarotspek (1)
- Ingenieurmodell (1)
- Innovationsadoption (1)
- Innovationsbewertung (1)
- Innovationsmanagement (1)
- Input-To-State Stability (1)
- Insekten (1)
- Insertionsreaktion (1)
- Instandsetzung (1)
- Insurance (1)
- Integralbauweise (1)
- Integrated Operation (1)
- Integration geometrischer Oberflächenparameter (1)
- Integrative Beleuchtung (1)
- Integrative lighting (1)
- Integrine (1)
- Intensity estimation (1)
- Intensität (1)
- Intentional Forgetting (1)
- Interactive decision support systems (1)
- Interaktionsanalyse (1)
- Interaktionsgerät (1)
- Interaktionsmodell (1)
- Interaktive Planung (1)
- Interaktive Verhandlung (1)
- Interferenz (1)
- Interferenzklassifizierung (1)
- Interferenzreduktion (1)
- Interferometrie (1)
- Intergeschlechtlichkeit (1)
- Intergine (1)
- Interkulturelle Produktentwicklung (1)
- Intermediate Composition (1)
- Internationale Diversifikation (1)
- Interorganisationales (1)
- Interpolation (1)
- Interpolation Algorithm (1)
- Intersexualität (1)
- Intervention (1)
- Interventionsstudie (1)
- Interzellinterferenz (1)
- Intra-Rezeptor Wechselwirkungen (1)
- Invariante (1)
- Invariante Momente (1)
- Inverse Problem (1)
- Inverse spin injection (1)
- Inwertsetzung (1)
- Ion pairs (1)
- Ionensolvatation (1)
- Ionentauscher (1)
- Ionentransport (1)
- Iron (1)
- Irreduzibler Charakter (1)
- Isogeometric Analysis (1)
- Isoindigo (1)
- Isomerisierung (1)
- Isomerisierung von n-Decan (1)
- Isomerisierungsreaktion (1)
- Isopropylacrylamid Natriummethacrylat N-Vinyl-2-pyrrolidon (1)
- Isotopieeffekt (1)
- Isotrope Geometrie (1)
- Isotropes System (1)
- Isotropie (1)
- Ito (1)
- JAK-STAT-Weg (1)
- JAK-STAT-pathway (1)
- Jablonka (1)
- Jacobigruppe (1)
- Jet in crossflow (1)
- Jitter (1)
- Johannisbeere (1)
- John L. (1)
- Joint Transmission (1)
- KANO-Modell (1)
- Kaffee (1)
- Kaffeeextrakte (1)
- Kaffeegetränke (1)
- Kaffeeinhaltsstoffe (1)
- Kalibriernormale (1)
- Kalmus (1)
- Kalziumkanal (1)
- Kanalcodierung (1)
- Kanalisation (1)
- Kanalnetzsteuerung (1)
- Kanalschätzung (1)
- Kant (1)
- Kapazitiver Sensor (1)
- Kapazitätsmessung (1)
- Karbonatisierung (1)
- Kardiotoxizität (1)
- Kardiovaskuläre Krankheit (1)
- Karhunen-Loève expansion (1)
- Kartoffel (1)
- Katalysator (1)
- Katalytische Iso (1)
- Katalytische Isomerisierung (1)
- Kategorientheorie (1)
- Kationenrezeptoren (1)
- Kaukasus (1)
- Kausale Inferenz (1)
- Kausalmodell (1)
- Keilzinkenverbindung (1)
- Keimbildung (1)
- Kellerautomat (1)
- Kelvin Transformation (1)
- Keramik <T (1)
- Keratinozyten (1)
- Kern-Schale-Struktur (1)
- Kerngenom (1)
- Ketone (1)
- Kettengelenk (1)
- Kettentrieb (1)
- Kieselgel (1)
- Kinder mit Migrationshintergrund (1)
- Kinder- und Jugendsportschule der DDR (1)
- Kirchenmanagement (1)
- Kirchenreform (1)
- Kirchhoff-Love shell (1)
- Kiyoshi (1)
- Klassifikation (1)
- Klebverbindungen (1)
- Klein- und Mittelstädte (1)
- Klima (1)
- Klimagerechtes Bauen (1)
- Klimastuhl (1)
- Klimawandel (1)
- Klimawandel Weinbau (1)
- Klimaänderung (1)
- Kniestabilität (1)
- Knochenmetastase (1)
- Knotenpunkt (1)
- Knowledge Work (1)
- Knowledge transfer (1)
- Knuth-Bendix completion (1)
- Knuth-Bendix-Vervollständigung (1)
- Ko-Kontraktion (1)
- Koaleszenz (1)
- Kobalt (1)
- Koexistenz (1)
- Koexpression (1)
- Kognition (1)
- Kognitive Psychologie (1)
- Kogut-Susskind-Fermionen (1)
- Kohlenstoff (1)
- Kohlenstoff-Wasserstoff-Bindung (1)
- Kohlenstofffasern (1)
- Kohlenstofffaserverstärkte Kunststoffe (1)
- Kohlenstofffaserverstärkter Kohlenstoffwer (1)
- Kohlenstofffaserverstärkter Kohlenstoffwerkstoff (1)
- Kohlenstofffaserverstärkter Kunststoff (1)
- Kohäsive Grenzschichten (1)
- Koinzidenz (1)
- Kolonisierung (1)
- Kombinationsschwingungen (1)
- Kombinatorik (1)
- Kombinierte Mobilität (1)
- Kommunale Zusammenarbeit (1)
- Kommunalentwicklung (1)
- Kommutative Algebra (1)
- Kompaktierungs- und Permeabilitätskennwert (1)
- Kompetenz (1)
- Kompetenzentwicklung (1)
- Kompetenzmodellierung (1)
- Komplex-analytische Struktur (1)
- Komplexaufklärung (1)
- Komplexchemie (1)
- Komplexe (1)
- Komplexität (1)
- Komponentenmodell (1)
- Kompression (1)
- Konfiguration <Chemie> (1)
- Konfigurationskräfte (1)
- Konfigurationsmechanik (1)
- Konformation (1)
- Kongruenz (1)
- Konjugierte Dualität (1)
- Konsolidierung (1)
- Konstitutionsform (1)
- Konstruktion von Hyperflächen (1)
- Konstruktivismus (1)
- Kontaktlose Chipkarte (1)
- Kontaktwiderstand (1)
- Kontextbezogenes System (1)
- Kontinuum <Mathematik> (1)
- Kontinuums-Atomistische Kopplung (1)
- Kontinuumsphysik (1)
- Konvektion (1)
- Konvergenz (1)
- Konvergenzrate (1)
- Konvergenzverhalten (1)
- Konvexe Optimierung (1)
- Konzentrationsmessung (1)
- Koordinatengeber (1)
- Koordinationslehre (1)
- Koordinationsverbindungen (1)
- Koordinierte Regionalentwicklung (1)
- Kopolymere (1)
- Kopplungsmethoden (1)
- Kopplungsproblem (1)
- Kopula <Mathematik> (1)
- Korpusanalyse (1)
- Korrelationen (1)
- Korrosion (1)
- Kraft-Verformungs-Verhalten (1)
- Krafteinleitung (1)
- Krafteinleitungen (1)
- Kraftstoffverbrauch (1)
- Kraftwa (1)
- Kraftwelligkeit (1)
- Kraftwelligkeitsausgleich (1)
- Krebs <Medizin> / Prävention (1)
- Kreiselpumpe (1)
- Kreisverkehr (1)
- Kreditderivaten (1)
- Kreuzkupplungen (1)
- Kreuzkupplungsreaktion (1)
- Kriechen (1)
- Kristallcharakterisierung (1)
- Kristallisation (1)
- Kristallisationsfouling (1)
- Kristallstrukturanalyse (1)
- Kryptoanalyse (1)
- Kryptologie (1)
- Krümmung (1)
- Kullback-Leibler divergence (1)
- Kulturelles Kapital (1)
- Kunststoff (1)
- Kunststoff / Verbundwerkstoff (1)
- Kunststoffeinfärbung (1)
- Kunststoffschweißen (1)
- Kupferion (1)
- Kurbelgehäuse (1)
- Kurve (1)
- Kurvenschar (1)
- Kurzkohärente Interferometrie (1)
- Kutaissi (1)
- Körperhaltung (1)
- Körperliche Arbeit (1)
- Körperschall (1)
- Körperschallübertragung (1)
- Kühni (1)
- L1 (1)
- LEADER (1)
- LIBOR (1)
- LIDAR (1)
- LIR-Tree (1)
- LOADBAL (1)
- Lactone (1)
- Lagerstrom (1)
- Lagerung (1)
- Lagrangian relaxation (1)
- Lamarck (1)
- Lambda-cyhalothrin (1)
- Laminare Grenzschicht (1)
- Land Use Planning (1)
- Landesplanung (1)
- Landschaftsplanung (1)
- Landwirtschaft (1)
- Language Management (1)
- Language Policy (1)
- Laplace transform (1)
- Large Data (1)
- Large Eddy Simulation (1)
- Large High-Resolution Displays (1)
- Large Synchronous Networks (1)
- Laser Wakefield Particle Accelerator (1)
- Laser spectroscopy (1)
- Laserdiode (1)
- Laserparameter (1)
- Laserspektroskopie (1)
- Lastkollektive (1)
- Lastprofil (1)
- Lastpunktverschiebung (1)
- Lateinamerika (1)
- Latentwärmespeicher (1)
- Lateral superior olive (1)
- Latin America (1)
- Lattice Boltzmann (1)
- Lattice Boltzmann Method (1)
- Lattice-BGK (1)
- Lattice-Boltzmann (1)
- Lautarium (1)
- Laval-Düse (1)
- Lead (1)
- Leading-Order Optimality (1)
- Lean Automation (1)
- Lean Production (1)
- Learning Analytics (1)
- Least-squares Monte Carlo method (1)
- Lebensdauer (1)
- Lebensdaueranalyse (1)
- Lebensmittel (1)
- Lebensstil (1)
- Lebenszykloskostenanalyse (1)
- Leberepithelzelle (1)
- Lecitin (1)
- Leckage (1)
- Lehrer (1)
- Lehrerverhalten (1)
- Lehrkräftebildung (1)
- Lehrkräfteprofessionalisierung (1)
- Leibniz (1)
- Leichtbaupotenzial (1)
- Leichtbauvarianten (1)
- Leichtbauweise Faser-Kunststoff-Verbundwerkstoffe (FKV) (1)
- Leistungsdiagnostik (1)
- Leistungseffizienz (1)
- Leistungsflussregler (1)
- Leistungsmessung (1)
- Leistungsmotivation (1)
- Leitbilder und Handlungsstrategien für die Raumentwicklung (1)
- Leitbilder und Handlungsstrategien für die Raumentwicklung in Deutschland (1)
- Leitfähigkeit (1)
- Leittechnik (1)
- Leitungen (1)
- Leitungsabschottung (1)
- Leitungsführungen (1)
- Leptonen (1)
- Leptonen Massen (1)
- Lernhilfen (1)
- Lerntheorie (1)
- Lese-Rechtschreiberwerb (1)
- Lese-Rechtschreibschwierigkeiten (1)
- Lese-Rechtschreibstörung (1)
- Lesen (1)
- Lesenlernen (1)
- Lesestörung (1)
- Level set methods (1)
- Lexikalische Sprachgebrauchsmuster (1)
- Lezitin (1)
- LiDAR (1)
- Lichtdurchlässige Kunststoffe (1)
- Lichtflecke (1)
- Lichtforschung (1)
- Lichtsignalanlage (1)
- Lichtspeicherung (1)
- Lie algebras (1)
- Lie-Typ-Gruppe (1)
- Lieferkette (1)
- Ligandenaustauschreaktion (1)
- Ligandenfeldstärke (1)
- Light Storage (1)
- Lighting (1)
- Lighting Design (1)
- Lighting research (1)
- Lightweight Structures (1)
- Limonen-Umsetzung (1)
- Linear-Quadratic-Regulator (1)
- Linearmotor (1)
- Linguistic Landscape (1)
- Linguistic Schoolscape (1)
- Linguistik (1)
- Linienbus (1)
- Link Metric (1)
- Linked Data (1)
- Linking Data Analysis and Visualization (1)
- Linksparken (1)
- Lipid (1)
- Lipidperoxidation (1)
- Lipoxygenasen (1)
- Lippmann-Schwinger Equation (1)
- Lippmann-Schwinger equation (1)
- Liquid Crystal Phases (1)
- Liquid-Liquid Extraction (1)
- Liquid-liquid dispersion (1)
- Liquid-liquid extraction (1)
- Liquid-liquid-equilibrium (1)
- Liquidität (1)
- Literaturdatenbank (1)
- Literature review (1)
- Liver (1)
- Liver Toxicity (1)
- Liver toxicity (1)
- Local Development Framework (1)
- Local continuum (1)
- Locally Supported Zonal Kernels (1)
- Location (1)
- Location Based Service (1)
- Logiksynthese (1)
- Logistik (1)
- Lokale Durchstanztragfähigkeit (1)
- Lokales Durchstanzen (1)
- Lokalisierung (1)
- London-Dispersion (1)
- Low Jitter (1)
- Lubricant film thickness (1)
- Lubrication (1)
- Ludwig von Mises (1)
- Luftfahrt (1)
- Luftfahrtindustrie (1)
- Luftlager (1)
- Luftpermeabilität (1)
- Luftschnittstellen (1)
- Lumineszenzspektroskopie (1)
- Lunge (1)
- Lungenchirurgie (1)
- Lungenemphysem (1)
- Lungenkrebs (1)
- Lyse <Biologie> (1)
- Lysozyme (1)
- Ländliche (1)
- Ländliche Regionen (1)
- Ländliche Räume (1)
- Ländlicher Raum (1)
- Längenmessung (1)
- Längszug (1)
- Lärm (1)
- Lärmbelastung (1)
- Lärmimmission (1)
- Lärmschädigung (1)
- Lärmverteilung (1)
- Löslichkeit (1)
- MAC protocols (1)
- MAP-Kinase (1)
- MAPK signaling (1)
- MBS (1)
- MCF (1)
- MCF-7 (1)
- MCFK (1)
- MCM-41 (1)
- MCMT (1)
- MFC-Knochennagel-Implantat (1)
- MHYT-Domäne (1)
- MIMO Systeme (1)
- MIMO-Antennen (1)
- MIP-Emissionsspektroskopie (1)
- MIP-Massenspektrometrie (1)
- MKS (1)
- ML-estimation (1)
- MO-Theorie (1)
- MS-Klebstoff (1)
- MS-Polymers (1)
- MSR (1)
- MYC (1)
- Macaulay’s inverse system (1)
- Mach-Zehnder-Interferometer (1)
- Macht (1)
- Magnesium (1)
- Magnesiumhydroxid (1)
- Magnetfeldbasierte Lokalisierung (1)
- Magnetfelder (1)
- Magnetischer Röntgenzirkulardichroismus (1)
- Magnetismus (1)
- Magnetit-Partikel (1)
- Magneto-Elastic Coupling (1)
- Magnetoelastic coupling (1)
- Magnetoelasticity (1)
- Magnetometer (1)
- Magnetostriction (1)
- Makrophage (1)
- Makrophagen (1)
- Management (1)
- Manufacturing (1)
- Manufacturing Control (1)
- Manufacturing System (1)
- MapReduce (1)
- Marangoni (1)
- Marangoni-Effekt (1)
- Marine Biotechnologie (1)
- Marke (1)
- Market Equilibrium (1)
- Markierungsgen (1)
- Markov Chain (1)
- Markov Kette (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Markov-Prozess (1)
- Markt (1)
- Marktmanipulation (1)
- Marktplatz (1)
- Marktrisiko (1)
- Marktsegmentierung (1)
- Martensite transformation (1)
- Martingaloptimalitätsprinzip (1)
- Maschennetz (1)
- Maschinelle Übersetzung (1)
- Mass transfer (1)
- Massivbau (1)
- Material Modelling (1)
- Material Properties under Extreme Conditions (1)
- Material-Force-Method (1)
- Materialmanagement (1)
- Materialmodellierung (1)
- Materialsysteme (1)
- Materialverhalten (1)
- Materielle Kräfte (1)
- Mathematical Finance (1)
- Mathematics (1)
- Mathematische Optimierung (1)
- Mathematisches Modell (1)
- Matrix Completion (1)
- Matrixkompression (1)
- Matrizenfaktorisierung (1)
- Matrizenzerlegung (1)
- Maturana (1)
- Mauerwerk (1)
- Maus (1)
- Maus <Datentechnik> (1)
- Mausmodell (1)
- Maximal Cohen-Macaulay modules (1)
- Maximale Cohen-Macaulay Moduln (1)
- Maximum Intensity Projection (1)
- Maximum Likelihood Estimation (1)
- Maximum-Likelihood-Schätzung (1)
- Maxwell's equations (1)
- Maßnahmen (1)
- McKay conjecture (1)
- McKay-Conjecture (1)
- McKay-Vermutung (1)
- Meaningful Work (1)
- Measurement (1)
- Measurement platform (1)
- Measurement standards (1)
- Mechanical (1)
- Mechanics (1)
- Mechanisch (1)
- Medical Image Analysis (1)
- Medienexperimentelles Entwerfen (1)
- Medizinische Physik (1)
- Mehragentensystem (1)
- Mehrdimensionale Bildverarbeitung (1)
- Mehrdimensionales Variationsproblem (1)
- Mehrfruchtsaft (1)
- Mehrkernige Koordinationsverbindungen (1)
- Mehrkriterielle Optimierung (1)
- Mehrkörpersimulation (1)
- Mehrkörpersystem (1)
- Mehrperspektivischer Unterricht (1)
- Mehrphasenströmung (1)
- Mehrschichtsystem (1)
- Mehrskalen (1)
- Mehrträgerübertragungsverfahren (1)
- Membranreinigung (1)
- Memory Architecture (1)
- Memory Consistency (1)
- Memory Theory (1)
- Mendel (1)
- Meningeom (1)
- Mensch (1)
- Mensch-Roboter-Koexistenz (1)
- Mensch-Roboter-Kooperation (1)
- Menschenmenge (1)
- Merkmalsraum (1)
- Mesh-Free (1)
- Messplattform (1)
- Messung des Durchmessers (1)
- Metabolism (1)
- Metabolomics (1)
- Metacontrast Masking (1)
- Metal-Free (1)
- Metallkomplexe (1)
- Metalloproteinasen (1)
- Metalloptik (1)
- Metallorganik (1)
- Metalloxide (1)
- Metallschicht (1)
- Metapopulation (1)
- Metasprache (1)
- Metasprachliche Erklärungen (1)
- Metasprachliche Fähigkeiten (1)
- Metaverse (1)
- Metaversum (1)
- Meter (1)
- Methane emissions (1)
- Methode der finiten Elemente (1)
- Methotrexat (1)
- Methyleugenol (1)
- Methylphenylfulven (1)
- Metric Learning (1)
- Meßverfahren (1)
- Michael-Addition (1)
- Micro Cutting (1)
- Micro Grinding (1)
- Micro Lead (1)
- Microcystins (1)
- Microelectromechanical Systems (1)
- Microstructure (1)
- Microstructure morphology (1)
- Microsystem Technology (1)
- Mie- and Helmholtz-Representation (1)
- Mie- und Helmholtz-Darstellung (1)
- Mikrobewehrter Beton (1)
- Mikrobewehrung (1)
- Mikrobiegeversuch (1)
- Mikrobieller (1)
- Mikrobiologie (1)
- Mikrodrall (1)
- Mikroelektronik (1)
- Mikrofiltration (1)
- Mikromorphe Kontinua (1)
- Mikrosomen (1)
- Mikrosystemtechnik (1)
- Mikrowelle (1)
- Mikrozerspanung (1)
- Milchsäurebakterien (1)
- Milieus (1)
- Mindesthaltbarkeitsdatum (1)
- Mindfulness (1)
- Miner-Regel (1)
- Minimal Cut Set Visualization (1)
- Minimal training (1)
- Miniplant (1)
- Mischcluster (1)
- Mischsystem (1)
- Mischwasser (1)
- Mitarbeiterbefragungen (1)
- Mitarbeitergesundheit (1)
- Mitfahrerparkplatz (1)
- Mitochondria (1)
- Mitochondrien (1)
- Mitochondrium (1)
- Mittelstädte (1)
- Mixed Connectivity (1)
- Mixed Reality (1)
- Mixed integer programming (1)
- Mixed method (1)
- Mixed-integer Programming (1)
- Mobile Communications (1)
- Mobile Computing (1)
- Mobile Machines (1)
- Mobile Robots (1)
- Mobile Telekommunikation (1)
- Mobile system (1)
- Mobiler Roboter (1)
- Mobilfunksysteme (1)
- Mobility (1)
- Mobility as a Service (1)
- Mobility on Demand (1)
- Mobilität (1)
- Mode-Based Scheduling with Fast Mode-Signaling (1)
- Model-Dynamics (1)
- Model-driven Engineering (1)
- Modelica (1)
- Modeling (1)
- Modellbasierte Fehlerdiagnose (1)
- Modellbildung (1)
- Modellgenerierung (1)
- Modellgetriebene Entwicklung (1)
- Modellprädiktive Regelung (1)
- Modellvorhaben (1)
- Modernisierung (1)
- Modes of learning (1)
- Modifizierte Teilsicherheitsbeiwerte (1)
- Modifiziertes Epoxidharz (1)
- Modularisierung (1)
- Modulationsspektroskopie (1)
- Modulationsübertragungsfunktion (1)
- Modusbasierte Signalisierung (1)
- Molecular Dynamics (1)
- Molecular Redistribution (1)
- Molecular beam (1)
- Molecular dynamics (1)
- Moleculardynamics (1)
- Molekularbiologie (1)
- Molekulare Bioinformatik (1)
- Molekulare Erkennung (1)
- Molekulargenetik (1)
- Molekülcluster (1)
- Molekülorbital (1)
- Molkeproteine (1)
- Molybdat (1)
- Molybdenum (1)
- Moment Invariants (1)
- Moment-Generating Functions (1)
- Momentum and Mass Transfer (1)
- Monitoring (1)
- Monomer (1)
- Monte Carlo simulation (1)
- Monte-Carlo Modelling (1)
- Monte-Carlo-Simulation chiraler (1)
- Mood-based Music Recommendations (1)
- Moreau-Yosida regularization (1)
- Morphismus (1)
- Most (1)
- Mostkonzentrierung (1)
- Motorprozeßrechnung (1)
- MucR (1)
- Multi Primary and One Secondary Particle Method (1)
- Multi-Asset Option (1)
- Multi-Edge Graph (1)
- Multi-Field (1)
- Multi-Variate Data (1)
- Multibody (1)
- Multicore Resource Management (1)
- Multicore Scheduling (1)
- Multicriteria optimization (1)
- Multidisciplinary Optimization (1)
- Multidrug-Resistenz (1)
- Multifield Data (1)
- Multifunktionalität (1)
- Multigenanalyse (1)
- Multileaf collimator (1)
- Multipass-Amplifier (1)
- Multipass-Verstärker (1)
- Multiperiod planning (1)
- Multiperspektivität (1)
- Multiphase Flows (1)
- Multiple Jobholding (1)
- Multiresolution Analysis (1)
- Multiscale (1)
- Multiscale modelling (1)
- Multiskalen-Entrauschen (1)
- Multispektralaufnahme (1)
- Multispektralfotografie (1)
- Multivariate Analyse (1)
- Multivariate Wahrscheinlichkeitsverteilung (1)
- Multivariates Verfahren (1)
- Murine Knochenmarkzellen (1)
- Muskeltraining (1)
- Mutation (1)
- Mykotoxine (1)
- Myosin (1)
- Mössbauer-Spektroskopie (1)
- Mößbauer-Spektroskopie (1)
- Mündlicher Sprachgebrauch (1)
- N-Liganden Palladium Allyl Rhodium Ruthenium Cymol (1)
- N-Nitroso-Verbindungen (1)
- N-isopropyl acrylamide (1)
- N-ligands Palladium allyl Rhodium Ruthenium cymene (1)
- N-tridentate Liganden (1)
- NCN-Pinzettenliganden (1)
- NICS (1)
- NIR-Spektroskopie (1)
- NLO-Chromophor (1)
- NMR (1)
- NMR Spectroscopy (1)
- NMR und ITC (1)
- NMR-Spektroskopie (1)
- NNK (1)
- NO (1)
- NO-Synthase (1)
- Nachbarkanalinterferenz (1)
- Nachbarschaft (1)
- Nachbarschaftshilfe (1)
- Nachrechnung (1)
- Nachwachsende Rohstoffe (1)
- Nachweis (1)
- Nachwuchsleistungssport (1)
- Nahmobilität (1)
- Nahrung (1)
- Nahrungsnetz (1)
- Namibia (1)
- Nanocomposite (1)
- Nanofaser (1)
- Nanokomposit (1)
- Nanoverbundwerkstoffe (1)
- Natriummolekül (1)
- Natriumsulfat (1)
- Natural Neighbor (1)
- Natural Neighbor Interpolation (1)
- Naturfasern (1)
- Naturstoffe (1)
- Naturstoffsynthese (1)
- Natürliche Auslese (1)
- Natürliche Nachbarn (1)
- Navigation (1)
- NbdA (1)
- Nd:YAG (1)
- Nekrose (1)
- Nematode (1)
- Nennspannungskonzept (1)
- Neodym (1)
- Nephrotoxizit (1)
- Network (1)
- Network Architecture (1)
- Networked Automation Systems (1)
- Networks (1)
- Netzbasierte Automatisierungssysteme (1)
- Netzkonzept (1)
- Netzplanung (1)
- Netzregler (1)
- Netztopologie (1)
- Netzwerksteuerungsmechanismen (1)
- Netzwerksynthese (1)
- Neuartige Sanitärsysteme (1)
- Neue Institutionenökonomie (1)
- Neue Institutionenökonomik (1)
- Neural ADC (1)
- Neural Architecture Search (1)
- Neurales Zell-Adhäsionsmolekül (1)
- Neuronales Netz (1)
- Neurotransmitterrezeptor (1)
- Neutrinos (1)
- New Venture (1)
- Nexus (1)
- Nexus Analysis (1)
- Nexus of practice (1)
- Nexusanalyse (1)
- NiTi-Formgedächtnislegierung (1)
- Nicht dioxin-artig (1)
- Nicht-Desarguessche Ebene (1)
- Nichtglatte Optimierung (1)
- Nichtinvasiv (1)
- Nichtkommutative Algebra (1)
- Nichtkonvexe Optimierung (1)
- Nichtkonvexes Variationsproblem (1)
- Nichtlineare Approximation (1)
- Nichtlineare Diffusion (1)
- Nichtlineare Dynamik (1)
- Nichtlineare Mechanik (1)
- Nichtlineare Optimierung (1)
- Nichtlineare Schwingung (1)
- Nichtlineare Zeitreihenanalyse (1)
- Nichtlineare partielle Differentialgleichung (1)
- Nichtpositive Krümmung (1)
- Nichtrauchen (1)
- Nickel-Halbsandwichkomplexe (1)
- Nickel-katalysierte Decarboxylierung (1)
- Nickelkomplexe (1)
- Niere (1)
- Nierenversagen (1)
- Nilpotent elements (1)
- Nonequilibrium Electron Kinetics (1)
- Niob (1)
- Nische (1)
- Nitinol (1)
- Nitridierung (1)
- Nitrierung (1)
- Nitrone (1)
- Nitrones (1)
- Nitrosamine (1)
- Nitsche's method (1)
- No-Arbitrage (1)
- NoSQL (1)
- Nocardia (1)
- Node-Link Diagram (1)
- Nodularins (1)
- Noise control (1)
- Non dioxin like polychlorinated biphenyls (1)
- Non-local atomistic (1)
- Non-Newtonian (1)
- Non-commutative Computer Algebra (1)
- Nonlinear Optimization (1)
- Nonlinear time series analysis (1)
- Nonparametric time series (1)
- Nonsmooth Optimization (1)
- Nonspecific Adsorption (1)
- Normalbeton (1)
- Normenvorschlag (1)
- North East Lincolnshire (1)
- North Sea (1)
- Nostocales (1)
- Nrf2 (1)
- Nrf2/ARE-Signalweg (1)
- Nucleoside (1)
- Nucleotid-Bindungsstellen (1)
- Nukleosidtransporter (1)
- Nukleotid (1)
- Nukleotidtransporter (1)
- Null Modell (1)
- Nulldimensionale Schemata (1)
- Numerical Flow Simulation (1)
- Numerical methods (1)
- Numerische Homogenisierung (1)
- Numerische Integration (1)
- Numerische Mathematik / Algorithmus (1)
- Numerische Simulat (1)
- Numerische Untersuchungen (1)
- Numerisches Verfahren (1)
- Nutzerorientierte Produktentwicklung (1)
- Nutzungsbedürfnisse (1)
- Nutzungsintention (1)
- Nährstoffelimination (1)
- Nürnberg <Metropolregion> (1)
- O3 (1)
- OCR (1)
- OFDM mobile radio systems (1)
- OFDM-Mobilfunksysteme (1)
- OME (1)
- OWL (1)
- Oberflächenanalyse (1)
- Oberflächenmaße (1)
- Oberflächenmesstechnik (1)
- Oberflächenphysik (1)
- Oberflächenplasmonresonanz (1)
- Oberflächenprotein (1)
- Oberflächenproteine (1)
- Oberflächenschutzsystem (1)
- Oberflächenspannung (1)
- Oberflächenstruktur (1)
- Oberkörperkontrolle (1)
- Oberpfalz (1)
- Oberschwingung (1)
- Oberton (1)
- Object-orientation (1)
- Objekterkennung (1)
- Octylglucosid (1)
- Oedometer (1)
- Off-road Robotics (1)
- Off-road Robotik (1)
- Offenheit (1)
- Oktylglukosid (1)
- Olefine (1)
- Oligomerisation (1)
- Omics data analysis (1)
- Omics-Technologie (1)
- On-line-Methode (1)
- On-line-Verfahren (1)
- Onkogen (1)
- Online chain partitioning (1)
- Online-Arbeitsmärkte (1)
- Online-Handel (1)
- Ontogenie (1)
- Ontologiebasierte Kausalmodelle (1)
- Ontology (1)
- Ontology-based causation model (1)
- Open Estelle (1)
- Open Strategy (1)
- Operationen (1)
- Operator (1)
- Optiksimulation (1)
- Optimal Control (1)
- Optimale Kontrolle (1)
- Optimale Portfolios (1)
- Optimierender Compiler (1)
- Optimization Algorithms (1)
- Option (1)
- Option Valuation (1)
- Optionsbewertung (1)
- Optische Abbildung (1)
- Optische Anisotropie (1)
- Optische Fernerkundung (1)
- Optische Messtechn (1)
- Optische Spektroskopie (1)
- Optischer Sensor (1)
- Optisches Messen (1)
- Orchideen (1)
- Order (1)
- Organik (1)
- Organisationsentwicklung (1)
- Organische Moleküle (1)
- Organisches Pigment (1)
- Organizational structure (1)
- Organizational value (1)
- Organoblech (1)
- Organobleche (1)
- Organosilica (1)
- Organosiloxane (1)
- Ortspezifische Mutagenese (1)
- Osteoblast (1)
- Osteomimicry (1)
- Ottomotor (1)
- Oversampling (1)
- Ovoid (1)
- Oxidant Evolution (1)
- Oxidationskatalyse (1)
- Oxidative Stress (1)
- P-Glykoprotein (1)
- P2 (1)
- P2C2Ph2 (1)
- P3CtBu (1)
- P4 (1)
- P5 (1)
- PCB (1)
- PCDD/Fs (1)
- PCDD/Fs PCBs (1)
- PCM (1)
- PCS (1)
- PDD (1)
- PDE-Constrained Optimization, Robust Design, Multi-Objective Optimization (1)
- PDF3D (1)
- PFGE (1)
- PM63 (1)
- PMN (1)
- PMO (1)
- PN-Hybridliganden (1)
- POD (1)
- PPARgamma (1)
- PSPICE (1)
- PTA (1)
- PV-Anlage (1)
- PXR (1)
- Paarungsstörung (1)
- Packed Columns (1)
- Palindrom (1)
- Palladium-katalysierte Isomerisierung (1)
- Panama (1)
- Pangenese (1)
- Papiermaschine (1)
- Parabolrinne (1)
- Paradoxien (1)
- Parallel Algorithms (1)
- Paralleler Algorithmus (1)
- Paralleler Hybrid (1)
- Paramecium primaurelia (1)
- Parameter (1)
- Parameter identification (1)
- Parameteridentifikation (1)
- Pareto Optimality (1)
- Parken (1)
- Parkhaus (1)
- Parking Abrasion Test (1)
- Parkmanöver (1)
- Partially ordered sets (1)
- Participant Burden (1)
- Participatory Sensing (1)
- Particle (1)
- Particle-In-Cell (1)
- Partieller Hygrothermischer Abbau (1)
- Partikel Methoden (1)
- Partikel-Schwarm-Optimierung (1)
- Partikelgrößenverteilung (1)
- Partizipation (1)
- Parvovirus (1)
- Passivrauchen (1)
- Patchworking Methode (1)
- Patchworking method (1)
- Pathfinder (1)
- Pathogen (1)
- Pathogenabwehr (1)
- Pathogenese (1)
- Pathwise Optimality (1)
- Pauson-Khand (1)
- Paxillin (1)
- Pedestrian (1)
- Pedestrian Flow (1)
- Peer Feedback (1)
- Peltier (1)
- Pendlerverkehr (1)
- Penicillin-Bindeprotein 2x (1)
- Penicillin-resistance (1)
- Penicilline (1)
- Pentaphosphaferrocen (1)
- Pentenol (1)
- Peptide (1)
- Peptide synthesis (1)
- Peptidsynthese (1)
- Perceptual grouping (1)
- Performance (1)
- Periglazialraum (1)
- Periodic Homogenization (1)
- Permeabilität (1)
- Permutationsäquivalenz (1)
- Personal Comfort (1)
- Personalentwicklung (1)
- Personalisation (1)
- Pervasive health (1)
- Pestizid (1)
- Pestizidbelastung (1)
- Peugeot (1)
- Pfadintegral (1)
- Pflanzenfressende Insekten (1)
- Pflanzenkläranlage (1)
- Pflanzenphysiologie (1)
- Pflanzenschutz (1)
- Pflanzenzellkultivierung (1)
- Pflasterflächen (1)
- Phagozytose (1)
- Phase Transition (1)
- Phase Transition Effect (1)
- Phase Transition Effekt (1)
- Phase equilibria (1)
- Phase field method (1)
- Phasenfeld (1)
- Phasengrenzfläche (1)
- Phasenumwandlung (1)
- Phasenverhalten (1)
- Phasenwechselmaterial (1)
- Phasmatodea (1)
- Phenothiazin (1)
- Phenothiazinderivate (1)
- Phenylalanin (1)
- Phenylessigsäurederivate (1)
- Phenylpropanoide (1)
- Phenylpropene (1)
- Pheromon (1)
- Pheromone (1)
- Philosophy of Technology (1)
- Phonemwahrnehmung (1)
- Phosphaalkine (1)
- Phosphaheterocyclen (1)
- Phosphatasen (1)
- Phosphatidylinositolkinase <3-> (1)
- Phosphit (1)
- Phosphodiesterasehemmer (1)
- Phosphonit (1)
- Phosphor-Metall-Komplexe (1)
- Phosphor-Phosphor-Bindung (1)
- Phosphorkomplexe (1)
- Photochemischer Smog (1)
- Photoelektron (1)
- Photolumineszenz (1)
- Photonische Kristalle (1)
- Photonischer Kristall (1)
- Photoreaktionen (1)
- Photovoltaik (1)
- Photovoltaikanlage (1)
- Phycobiliproteine (1)
- Phycobiliproteinlyase (1)
- Phycoerythrin (1)
- Phyllopsora (1)
- Phylogeny (1)
- Phylogeographie (1)
- Physical activity monitoring (1)
- Physical spaces (1)
- Physiksimulation (1)
- Physiologie (1)
- Physiologische Psychologie (1)
- Piezoelectric Materials (1)
- Piezoelectricity (1)
- Piezokeramik (1)
- Pigment (1)
- Pilotmaßstab (1)
- Pilze (1)
- Planar Pressure (1)
- Planares Polynom (1)
- Planning (1)
- Planning Support Systems (1)
- Planu (1)
- Planungsgrundsätze (1)
- Planungskontrolle (1)
- Planungsprozess (1)
- Plasma / Dimension 2 (1)
- Plasma-Immersions-Implantation (1)
- Plasmamembran (1)
- Plasmaphysik (1)
- Plasmarandschicht (1)
- Plasmaschicht (1)
- Plasmaschwingung (1)
- Plasmatechnik (1)
- Plasmid (1)
- Plasmon (1)
- Plastizitätstheorie (1)
- Plate heat exchanger (1)
- Platform Economy (1)
- Platon (1)
- Plattenextrusion (1)
- Plattenwärmeübertrager (1)
- Plattformökonomie (1)
- Pleurocapsales (1)
- Plug and Play (1)
- Pn (1)
- Pn ligands (1)
- Pn-ligands (1)
- PnAsm Liganden (1)
- Poisson noise (1)
- Poisson-Gleichung (1)
- Polarisierbarkeit zweiter Ordnung (1)
- Polariton (1)
- Policy implementation (1)
- Politikfeldanalyse (1)
- Poly(vinyl pyrrolidone) (1)
- Poly(butylene adipate-co-terephthalate) (PBAT) (1)
- Poly(lactic acid) (PLA) (1)
- PolyBoRi (1)
- Polyamid 66 (1)
- Polyamide (1)
- Polycycloarsan (1)
- Polyelektrolyt (1)
- Polyester- und Epoxidharze (1)
- Polyethylenterephthalat (PET) (1)
- Polyhydroxyalkanoate (PHA) (1)
- Polyme (1)
- Polymer (1)
- Polymer OR Verbund OR Füllstoff OR Klebstoff OR Epoxid OR Polyurethan OR innere Oberfläche (1)
- Polymer nanocomposites (1)
- Polymer-Metall-Verbund (1)
- Polymerisation (1)
- Polymerlösung (1)
- Polymers (1)
- Polymertribologie (1)
- Polymorphismus (1)
- Polynukleare Komplexe (1)
- Polyphosphol (1)
- Polystyrolschaumstoff (1)
- Polyurethan (1)
- Polyvinylpyrrolidon (1)
- Population Balance Equation (1)
- Population balance (1)
- Population balances (1)
- Populationsbilanzmodelle (1)
- Populationsstruktur (1)
- Populationswachstum (1)
- Porenwasserdruck (1)
- Portfolio Optimierung (1)
- Portfoliooptimierung (1)
- Positive Leadership (1)
- Positive Psychologie (1)
- Posttranskriptionelle Regulation (1)
- Potentialabschätzung (1)
- Potenzial (1)
- Potenzial- und Risikoanalyse (1)
- Potenzialanalyse (1)
- Potenzialbestimmung (1)
- Power Efficiency (1)
- Pragmatism (1)
- Praktiken (1)
- Praxeologie (1)
- Preimage of an ideal under a morphism of algebras (1)
- Prepreg-Autoklav-Fertigung (1)
- Prepreg-Technologie (1)
- Prepregtechnologie (1)
- Presstechnik (1)
- Pressure Drop (1)
- Prichard (1)
- Primary human Hepatocytes (1)
- Primärschlamm (1)
- Privacy (1)
- Privatwirtschaftslehre (1)
- Probabilistic (1)
- Probabilistik (1)
- Probust optimization (1)
- Process Data (1)
- Process-Structure-Property relationships (1)
- Processor Architecture (1)
- Processor Architectures (1)
- Produkt-Service Systeme (1)
- Produktentwicklung (1)
- Produktinnovation (1)
- Produktionstechnik (1)
- Professional development (1)
- Prognose (1)
- Programmverifikation (1)
- Projektentwicklung (1)
- Projektionsoperator (1)
- Projektive Fläche (1)
- Projektplanung (1)
- Proliferation (1)
- Promotor (1)
- Propenylfulven (1)
- Property checking (1)
- Property-Driven Design (1)
- Prostaglandinsynthase (1)
- Prostatakrebs (1)
- Protein phosphatases (1)
- Protein-Tyrosin-Kinasen (1)
- Protein/detergent complexes (1)
- Proteinaufreinigung (1)
- Proteincarbonyle (1)
- Proteinfaltung (1)
- Proteinkinase C (1)
- Proteinkinaseinhibitor (1)
- Proteinstruktur (1)
- Proteintransport (1)
- Protistan Plankton (1)
- Protocol Compliance (1)
- Protocol Composition (1)
- Protonentransf (1)
- Prototyp (1)
- Prototype (1)
- Prox-Regularisierung (1)
- Prozessanalyse (1)
- Prozessanalyse des Umformens (1)
- Prozessauslegung (1)
- Prozessvisualisierung (1)
- Präformationstheorie (1)
- Prävention (1)
- Prüfungen (1)
- Pseudomonas aeruginosa (1)
- Pseudomonas syringae (1)
- Psychoakustik (1)
- Psychologie (1)
- Psychology of Perception (1)
- Psychosocial theory (1)
- Psychosoziale Gesundheit (1)
- Pteridine (1)
- Public Transport (1)
- Pultrusion (1)
- Pump Intake Flows (1)
- Pumpe (1)
- Punktdefekte (1)
- Punktprozess (1)
- Push-Out-Test (1)
- Pyrolyse (1)
- Pädagogen an Sportschulen (1)
- QMC (1)
- QVIs (1)
- QoS (1)
- Quadratic Approximation (1)
- Quadratischer Raum (1)
- Qualitative Forschung (1)
- Quality (1)
- Qualitätsbewertung (1)
- Qualitätssicherung (1)
- Quantenchemie (1)
- Quantenchromodynamik (1)
- Quantencomputer (1)
- Quanteninformatik (1)
- Quantenmechanik (1)
- Quantenwell (1)
- Quantifizierung (1)
- Quantile autoregression (1)
- Quantilwertbestimmung (1)
- Quantisierungsfehler (1)
- Quantitative Bildanalyse (1)
- Quartz (1)
- Quasi-Newton Methods (1)
- Quasi-Variational Inequalities (1)
- Quasiplastisches Verformungsverhalten (1)
- Quelldehnung (1)
- Quelldruck (1)
- Quellen (1)
- Quellung (1)
- Quellung in wässrigen Lösungen (1)
- Quellverhalten (1)
- Quenched and tempered steel (1)
- Querschnitt (1)
- Quest3D (1)
- Quicksort (1)
- Quorum Sensing (1)
- Quorum sensing (1)
- R-Beton (1)
- R. Fisher (1)
- REMPI (1)
- RH795 (1)
- RKHS (1)
- RNAi (1)
- RNS-Viren (1)
- RSK-Werte (1)
- RTL (1)
- RTM Prozess (1)
- Radfahrerverkehr (1)
- Radial Basis Functions (1)
- Radialwellendichtring (1)
- Radiative Cooling (1)
- Radikalliganden (1)
- Radio Resource Managements (1)
- Radiofrequenzidentifikation (1)
- Radiotherapy (1)
- Radsport (1)
- Radverkehr (1)
- Raman-Spektroskopie (1)
- Random testing (1)
- Random-Matrix-Theorie (1)
- Randwertproblem (1)
- Randwertproblem / Schiefe Ableitung (1)
- Rank test (1)
- Rapid-Chase Theory (1)
- Rarefied gas (1)
- Rate Gyro (1)
- Ratenabhängigkeit (1)
- Rauchen (1)
- Raucherentwöhnung (1)
- Rauheit auf Konturen (1)
- Rauheitsmessung (1)
- Raum als Ressource (1)
- Raumplanung - Mittelzentren - Kleinstädte (1)
- Rauschen (1)
- Ray Tracing (1)
- Ray tracing (1)
- Reachability (1)
- Reactive Absorption (1)
- Reactive extraction (1)
- Reaktionsdynamik (1)
- Reaktionsquerschnitt (1)
- Reaktive Sauerstoff Spezies (1)
- Reaktivität (1)
- Real-Time (1)
- Real-Time Control (1)
- Real-Time Systems (1)
- Realität (1)
- Realtime (1)
- Receptor design (1)
- Rechtecksgitter (1)
- Recognition (1)
- Rectilinear Grid (1)
- Recyclingverfahren (1)
- Red Sea (1)
- Redundanzvermeidung (1)
- Referenzarchitektur (1)
- Reflexionsspektroskopie (1)
- Regelkennlinie (1)
- Regelung (1)
- Regenwurm (1)
- Regenüberlaufbecken (1)
- Regime Shifts (1)
- Regime-Shift Modell (1)
- Regionale Kooperation (1)
- Regionale Netzwerke (1)
- Regionalpark (1)
- Regularisierung / Stoppkriterium (1)
- Regularität (1)
- Regularization / Stop criterion (1)
- Regularization methods (1)
- Regulatorgen (1)
- Regulatory gene search (1)
- Reifung (1)
- Reinforcement Learning (1)
- Reinigungsleistung (1)
- Reinigungswirkung (1)
- Rekollektion (1)
- Rekonstruktion (1)
- Relative effect potencies (REPs) (1)
- Reliability (1)
- Repeat (1)
- Repeated-Batch (1)
- Repeats (1)
- Representation (1)
- Repression (1)
- Requirements engineering (1)
- Resilienz (1)
- Response-Zeit (1)
- Ressourcenschonung (1)
- Restricted Regions (1)
- Retention Soil Filter (1)
- Retroviren (1)
- Retrovirus (1)
- Revitalisierung (1)
- Revitalisierung/Modernisierung (1)
- Rhabdomyolyse (1)
- Rhein-Main-Gebiet (1)
- Rheinhessen (1)
- Rheinland-Pfalz (1)
- Rhenium (1)
- Rheologie (1)
- Rho (1)
- Rho-Proteine (1)
- Rhodium (1)
- Rhodiumcluster (1)
- Rhodiumkomplexe (1)
- Richtlinien für integrierte Netzgestaltung (1)
- Riemannian manifolds (1)
- Riemannsche Mannigfaltigkeiten (1)
- Rigid Body Motion (1)
- Ringschlussmetathese (1)
- Ringversuch (1)
- Risikobewertung (1)
- Risikomaße (1)
- Risikotheorie (1)
- Risk Assessment (1)
- Risk Management (1)
- Risk Measures (1)
- Risk Sharing (1)
- Risk assessment (1)
- Rissbreite (1)
- Robot Control (1)
- Roboter (1)
- Robotic Manipulators (1)
- Robust smoothing (1)
- Rohrsanierung (1)
- Rohrvortrieb (1)
- Rohstoffhandel (1)
- Rohstoffindex (1)
- Rolling bearing (1)
- Rollreibung (1)
- Rollreibung und -verschleiß (1)
- Rollstuhl (1)
- Rombopak (1)
- Rote Traube (1)
- Routing (1)
- Rust effector (1)
- Ruthenium-Phosphor-Cluster (1)
- Ruthenium-Vinyliden (1)
- Ruß / Anorganisches Pigment (1)
- Rydberg molecule (1)
- Räumliche Differenzierung (1)
- Räumliche Planung (1)
- Röntgenfluoreszenzanalyse (1)
- Röntgenstrukturanalyse (1)
- Röntgenstrukturen (1)
- Rückmeldung (1)
- Rütteltisch (1)
- S. pneumoniae (1)
- S1P (1)
- SAHARA (1)
- SBA-15 (1)
- SBR-Verfahren (1)
- SCAD (1)
- SCR-Verfahren (1)
- SDL extensions (1)
- SDZ IMM125 (1)
- SFK-Materialmodell (1)
- SHCC (1)
- SIMERO (1)
- SM-SQMOM (1)
- SMS (1)
- SOEP (1)
- SPARQL (1)
- SPARQL query learning (1)
- SQMOM (1)
- SQUID-Magnetometer (1)
- SU(2)-Eichfelder (1)
- SWARM (1)
- Safety (1)
- Safety Analysis (1)
- Sagnac-Effekt (1)
- Salzlösung (1)
- Sandmeyer Trifluormethylierung (1)
- Sandwiching algorithm (1)
- Sandwichkomplex (1)
- Sandwichtheorie (1)
- Sandwichverbindung (1)
- Sandwichwand (1)
- Satelliten-DNS (1)
- Satellitenfernerkundung (1)
- Sauerstoffradikal (1)
- Sauerstoffverbrauch (1)
- Saugspannung (1)
- Saugspannungsmessung (1)
- Scalar (1)
- Scale function (1)
- Scanning Electron Microscope (1)
- Scattering Light Sensor (1)
- Scavenger (1)
- Schadensmechanik (1)
- Schadenspotenzialanalyse (1)
- Schadenstoleranz (1)
- Schale (1)
- Schalenringelement (1)
- Schaltwerk (1)
- Schaum (1)
- Schaumzerfall (1)
- Scheduler (1)
- Schelling (1)
- Schema <Informatik> (1)
- Schematisation (1)
- Schematisierung (1)
- Scherer (1)
- Schichtstruktur (1)
- Schiefe Ableitung (1)
- Schlagfrequenz (1)
- Schlauch (1)
- Schlauchleitungen (1)
- Schluss (1)
- Schmierstoff (1)
- Schnelligkeit (1)
- Schnellkraft (1)
- Schnittstellen (1)
- Schrittmotor (1)
- Schrumpfen (1)
- Schub (1)
- Schubspannung (1)
- Schulbau (1)
- Schule (1)
- Schwache Formulierung (1)
- Schwache Konvergenz (1)
- Schwache Lösu (1)
- Schwarze Johannisbeere (1)
- Schwefeldioxid (1)
- Schwermetallakkumulation (1)
- Schwermetallaufnahme (1)
- Schwermetallbelastung (1)
- Schwinden (1)
- Schwingermüdung (1)
- Schwingfestigkeit (1)
- Schwingungsisolierung (1)
- Schüttgutsilo (1)
- Scientific Community Analysis (1)
- Scientific Computing (1)
- Screening (1)
- Second Order Conditions (1)
- Sediment gas storage (1)
- Sedimentation (1)
- Sedimentation Tank (1)
- Sedimentationsbecken (1)
- See (1)
- Sekretomanalyse (1)
- Sekundärstruktur (1)
- Selbstbestimmung (1)
- Selbstfahrende Elektroshuttle (1)
- Selbstorganisation (1)
- Selbstreguliertes Lernen (1)
- Selektive Monoarylierung (1)
- Self-directed learning (1)
- Self-organization (1)
- Self-splitting objects (1)
- Self-supervised Learning (1)
- Semantic Communications (1)
- Semantic Desktop (1)
- Semantic Index (1)
- Semantic Wikis (1)
- Semantische Modellierung (1)
- Semantische Reasoner (1)
- Semantisches Datenmodell (1)
- Semi-Markov-Kette (1)
- Semi-infinite optimization (1)
- Semichinonat (1)
- Sendesignalvorverarbeitung (1)
- Sensing (1)
- Sensor Fusion (1)
- Sensoren (1)
- Sensors (1)
- Sequencing Batch Reactor (1)
- Sequenz (1)
- Sequenzieller Algorithmus (1)
- Serienfertigung (1)
- Serinprotease HtrA (1)
- Serinproteinasen (1)
- Serotyp (1)
- Serotyptransformation (1)
- Serre functor (1)
- Serumalbumine (1)
- Service Access Points (1)
- Service-oriented Architecture (1)
- Services (1)
- Settlement Appropriateness and Thresholds (1)
- Shallow Water Equations (1)
- Shape Memory Alloy Hybrid Composite (1)
- Shape optimization (1)
- Shared Resource Modeling (1)
- Sheet extrusion (1)
- Shrinking smart (1)
- SiO2 (1)
- Sicherheitsanalyse (1)
- Sicherheitskonzept (1)
- Sicherheitskultur (1)
- Sicherheitstechnik (1)
- Siedlungsklima (1)
- Siedlungsplanung (1)
- Signaling (1)
- Signalisierung (1)
- Silanisierung (1)
- Silanization (1)
- Silberkomplexe (1)
- Silicium (1)
- Silicium / Amorpher Zustand (1)
- Siliciumcarbid (1)
- Siliciumdioxid (1)
- Silicon dioxide nanoparticles (1)
- Silicones (1)
- Silikonklebstoff (1)
- Silo (1)
- Similarity Join (1)
- Similarity Joins (1)
- Simplex-Algorithmus (1)
- Simulation acceleration (1)
- Simulationen (1)
- Simulationsdaten (1)
- Simulationsmodelle (1)
- Single Cell Analysis (1)
- Singly Occupied Molecular Orbital (SOMO) (1)
- Singular <Programm> (1)
- Singularity theory (1)
- Singularität (1)
- Singularitätentheorie (1)
- Sinkgeschwindigkeit (1)
- Skalar (1)
- Skelettmuskel (1)
- Skills (1)
- Slender body theory (1)
- Smart City (1)
- Smart Device (1)
- Smart Materials (1)
- Smart Mobile Device (1)
- Smart Production (1)
- Smart Textile (1)
- Smartphone (1)
- Smartphoneanalytik (1)
- Smartphones (1)
- Smartwatch (1)
- Sobolev spaces (1)
- Sobolev-Raum (1)
- Social movement (1)
- Socio-Semantic Web (1)
- Socs-3 (1)
- Sodium methacrylate (1)
- Soft Spaces (1)
- Software (1)
- Software Comprehension (1)
- Software Dependencies (1)
- Software Engineering (1)
- Software Evolution (1)
- Software Maintenance (1)
- Software Measurement (1)
- Software Testing (1)
- Software Visualization (1)
- Software engineering (1)
- Software transactional memory (1)
- Software-Architektur (1)
- Softwarearchitektur (1)
- Softwareergonomie (1)
- Softwaremetrie (1)
- Softwareproduktionsumgebung (1)
- Softwarespezifikation (1)
- Softwarewartung (1)
- Softwarewiederverwendung (1)
- Sol-Gel (1)
- Sol-Gel-Verfahren (1)
- Solvatochromie (1)
- Solvency II (1)
- Solvency-II-Richtlinie (1)
- Sommersmog (1)
- Sonotroden (1)
- Sorption (1)
- Sorptionsisotherme (1)
- Soundness (1)
- Sound Simulation (1)
- Soziale Differenzierung (1)
- Soziale Infrastruktur (1)
- Soziale Stadt (1)
- Sozialer Wohnungsbau (1)
- Sozialraumanalyse (1)
- Soziologie (1)
- Spaltströmung (1)
- Spanende Bearbeitung (1)
- Spanische Kolonialzeit (1)
- Spanish Colonial Period (1)
- Spannbeton (1)
- Spannungs-Dehn (1)
- Spannungsanalyse (1)
- Spannungsfeld (1)
- Spannungsregelung (1)
- Spatial Econometrics (1)
- Spatial Statistics (1)
- Spatial regression models (1)
- Species sensitivity distribution (1)
- Spectral Method (1)
- Spectral theory (1)
- Spectrum Management System (1)
- Spectrum Sharing (1)
- Speech recognition (1)
- Speed (1)
- Speed management (1)
- Speicher (1)
- Spektralanalyse <Stochastik> (1)
- Spektroskopie (1)
- Spektrumnutzungsregeln (1)
- SphK2 (1)
- Spherical Fast Wavelet Transform (1)
- Spherical Location Problem (1)
- Sphärische Approximation (1)
- Sphärische Rohre (1)
- Spiders (1)
- Spiegel (1)
- Spiking Neural ADC (1)
- Spin Crossover (1)
- Spin trapping (1)
- Spinat (1)
- Spinfalle (1)
- Spinnen (1)
- Spiralrillenlager (1)
- Spiritual leadership (1)
- Spirituality (1)
- Spline-Approximation (1)
- Split Operator (1)
- Splitoperator (1)
- Sport (1)
- Sprachbewusstheit (1)
- Sprachdefinition (1)
- Sprache und Kommunikation (1)
- Sprachentwicklung (1)
- Sprachliche Bildung (1)
- Sprachliche Differenz (1)
- Sprachlicher Markt (1)
- Sprachliches Kapital (1)
- Sprachpolitik (1)
- Sprachprofile (1)
- Sprachvergleich (1)
- Spritzgusstechnologie (1)
- Sprung-Diffusions-Prozesse (1)
- Sprungkraft (1)
- Sprödbruch (1)
- Spumaviren (1)
- Stabile Vektorbundle (1)
- Stabilität (1)
- Stable vector bundles (1)
- Stadtbahn (1)
- Stadtblock (1)
- Stadterneuerung (1)
- Stadtfest (1)
- Stadtklima (1)
- Stadtsimulation (1)
- Stahl/TP-FKV Verbindung (1)
- Stahlbau (1)
- Stahlbetondecke (1)
- Stahlbetonkonstruktionen (1)
- Stahlbetonplatten (1)
- Stahlfaserbeton (1)
- Stahlverbundbau (1)
- Stahlverbunddecke (1)
- Stahlverbundkonstruktion (1)
- Stahlversagen (1)
- Stamp forming (1)
- Standard basis (1)
- Standortprobleme (1)
- Stapedektomie (1)
- Stapedotomie (1)
- Stapelfaserorganobleche (1)
- State Estimation (1)
- Static Program Analysis (1)
- Static light scattering (1)
- Stationary Light (1)
- Stationäres Licht (1)
- Statistical Independence (1)
- Statistics (1)
- Statistische Schlussweise (1)
- Steady state (1)
- Stegplatte (1)
- Stent (1)
- Step-Scan FTIR-Technik (1)
- Step-scan-FTIR-Spektroskopie (1)
- Stereotyp (1)
- Stereotype (1)
- Sterische Hinderung (1)
- Sternpunkterdung (1)
- Steuer (1)
- Steuerungstheorie (1)
- Stickoxide (1)
- Stickstoff (1)
- Stickstoffaktivierung (1)
- Stickstoffmonoxid (1)
- Stickstoffmonoxidradikal (1)
- Stilbenderivate (1)
- Stilbene derivatives (1)
- Stimmungsbasierte Musikempfehlungen (1)
- Stochastic Dependence (1)
- Stochastic Impulse Control (1)
- Stochastic Network Calculus (1)
- Stochastic Processes (1)
- Stochastic optimization (1)
- Stochastische Inhomogenitäten (1)
- Stochastische Prozesse (1)
- Stochastische Zinsen (1)
- Stochastische optimale Kontrolle (1)
- Stochastischer Automat (1)
- Stochastischer Prozess (1)
- Stochastisches Modell (1)
- Stoffgesetz (1)
- Stoffwechsel (1)
- Stokes Equations (1)
- Stokes-Gleichung (1)
- Stop- und Spieloperator (1)
- Stormwater Treatment (1)
- Stormwater treatment (1)
- Stornierung (1)
- Stoßdämpfer (1)
- Strafstoß (1)
- Strahlentherapie (1)
- Strahlungstransport (1)
- Strategie (1)
- Strategieentwicklung (1)
- Strategische Umweltprüfung (1)
- Straßenentwässerung (1)
- Streaming (1)
- Streptococcus mitis (1)
- Streptococcus oralis (1)
- Stress (1)
- Stress management (1)
- Stressbewältigung (1)
- Stressmessung (1)
- Streulicht (1)
- Streulichtsensor (1)
- Structural Behaviour (1)
- Structural Reliability (1)
- Structure-property relationships (1)
- Struktur-Wirkungs-Abhängigkeit (1)
- Strukturation (1)
- Strukturgleichungsmodellierung (1)
- Strukturiertes Finanzprodukt (1)
- Strukturiertes Gitter (1)
- Strukturoptimierung (1)
- Strukturwandel (1)
- Strömung (1)
- Strömungsdynamik (1)
- Student types (1)
- Studie (1)
- Studium (1)
- Styrene (1)
- Städtebau (1)
- Städtebauförderung (1)
- Subgradient (1)
- Subjektive Perspektiven (1)
- Sublimation (1)
- Subset Simulationen (1)
- Substitutionsreaktion (1)
- Suburbane Räume (1)
- Success Run (1)
- Sulfaterkennung (1)
- Sulfonaterkennung (1)
- Sulfotransferasen (1)
- Superior olivary complex (1)
- Supply Chain (1)
- Supramolecular chemistry (1)
- Surface Reconstruction (1)
- Surface-Hopping (1)
- Survival Analysis (1)
- Susceptor (1)
- Suspension (1)
- Sustainability (1)
- Sustainable architecture (1)
- Suzuki-Miyaura-Reaktion (1)
- Swakopmund (1)
- Swelling equilibrium in aqueous solution (1)
- Swift Heavy Ion (1)
- Switched Linear System (1)
- Symbolic Methods (1)
- Symbolic execution (1)
- Symmetrie (1)
- Symmetriebrechung (1)
- Symmetry (1)
- Synchronisation zyklischer Prozesse (1)
- Synchronnetze (1)
- Synchronous Control Asynchronous Dataflow (1)
- Synthese (1)
- System (1)
- System Identification (1)
- SystemC (1)
- Systemarchitektur (1)
- Systematics (1)
- Systematik (1)
- Systemdesign (1)
- Systementwurf (1)
- Systemic Constructivist Approach (1)
- Systemidentifikation (1)
- Systemisch-Konstruktivistischer Ansatz (1)
- Systems Engineering (1)
- Systemtheorie (1)
- Systemumstellung (1)
- Sägezahneffekt (1)
- T cells (1)
- T-Lymphozyt (1)
- T-Zellen (1)
- TCDD (1)
- TDN (1)
- TDP1 (1)
- TEAC (1)
- TEOSQ (1)
- TIPARP (1)
- TMS (1)
- TPC Bauteile (1)
- TRPC6 (1)
- TRPV5 (1)
- TRPV6 (1)
- TSA (1)
- TTEthernet (1)
- TVET teachers’ education (1)
- Tablet-PC (1)
- Tagesrhythmus (1)
- Tail Dependence Koeffizient (1)
- Taktrückgewinnungsschaltungen (1)
- Talent (1)
- Talentauswahl (1)
- Talentidentifikation (1)
- Tandemreaktion (1)
- Task and Trajectory Planning (1)
- Task-based (1)
- Tastwahrnehmung (1)
- Taylor-Couette (1)
- Technik (1)
- Technikbegriff (1)
- Technikphilosophie (1)
- Technische-Zeichnung (1)
- Technologieakzeptanz (1)
- Teichonsäure (1)
- Teilgesättigte Böden (1)
- Teilhabe (1)
- Teilsignalisierung (1)
- Temperaturverteilung (1)
- Temporal Decoupling (1)
- Temporal Logic (1)
- Temporal Variational Autoencoders (1)
- Temporal data processing (1)
- Tensiometer (1)
- Tension-Stiffening (1)
- Tensor (1)
- Tensorfeld (1)
- Tessellation (1)
- Test for Changepoint (1)
- Testgüte (1)
- Tethered Machines (1)
- Tetrachlordibenzo-p-dioxin (1)
- Tetraeder (1)
- Tetraedergitter (1)
- Tetrahedral Grid (1)
- Tetrahedral Mesh (1)
- Tetrahedran (1)
- Tetrahydrofuran (1)
- Tetrahydrofuranderivate (1)
- Tetrahydropyran (1)
- Tetrahydropyranderivate (1)
- Tetraphosphabicyclobutan (1)
- Textilbewehrung (1)
- Textual CBR (1)
- Texturanalyse (1)
- Texture Orientation (1)
- Texturrichtung (1)
- Thecla (1)
- Thekla (1)
- Thelotrema (1)
- Thematic analysis (1)
- Themenbasierte Empfehlungen von Ressourcen (1)
- Theoretische Physik (1)
- Thermal conductive polymer composites (1)
- Thermisch leitfähige Polymerkomposite (1)
- Thermische Harnstoffaufbereitung (1)
- Thermische Simulation (1)
- Thermodynamisches Gleichgewicht (1)
- Thermoelektrische Kühlwand (1)
- Thermolyse (1)
- Thermomechanik (1)
- Thermomechanische Behandlung (1)
- Thermophoresis (1)
- Thermoplast Blends (1)
- Thermoplast-Elastomer- und Duromer-Matrizes (1)
- Thermoplast-Tapelegeverfahren (1)
- Thermoplaste (1)
- Thermoplastic (1)
- Thermoplastische Faserverbundwerkstoffe (1)
- Thermoset (1)
- Thiazolidindione (1)
- Thin film approximation (1)
- Thiophen (1)
- Thylakoid (1)
- Tichonov-Regularisierung (1)
- Tiflis (1)
- Tight junction (1)
- Time series classification (1)
- Time-Series (1)
- Time-Triggered (1)
- Time-delay-Netz (1)
- Time-motion-Ultraschallkardiographie (1)
- Time-slotted (1)
- Tire-soil interaction (1)
- Titanium complex (1)
- ToF (1)
- Top-down (1)
- Topic-based Resource Recommendations (1)
- Topografische Gefährdungsanalyse (1)
- Topoisomerasegifte (1)
- Topoisomerasehemmstoffe (1)
- Topoisomerasen (1)
- Topologie (1)
- Topologiefehler (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Topology visualization (1)
- Toxifizierung (1)
- Toxikokinetik (1)
- Toxikologie (1)
- Toxizität (1)
- Tracking (1)
- Traditionelle Chinesische Lebensmittel (1)
- Traffic flow (1)
- Tragfähigkeit (1)
- Tragverhalten (1)
- Trainer im Hochleistungssport (1)
- Trajektorie <Kinematik> (1)
- Trajektorienplanung (1)
- Traktor (1)
- Trans-European Transport Networks (1)
- Transaction Level Modeling (TLM) (1)
- Transaction costs (1)
- Transaktionen (1)
- Transeuropäische Verkehrsnetze (1)
- Transfektion (1)
- Transferhydrierung (1)
- Transferred proteins (1)
- Transformation <Genetik> (1)
- Transformationsländer (1)
- Transient modeling (1)
- Transient state (1)
- Transkriptomanalyse (1)
- Transparent-leitendes Oxid (1)
- Transparente Wärmedämmung (1)
- Transport Protocol (1)
- Transkriptionsaktivität (1)
- Traubenqualität (1)
- Traubensortierung (1)
- Traversability Analysis (1)
- Trendsport (1)
- Trendsports (1)
- Trennkanalisation (1)
- Trennschärfe <Statistik> (1)
- Triaxialgerät (1)
- Tribologie der Kunststoffe (1)
- Trifluormethylierung (1)
- Tripeldeckerartige Strukturen (1)
- Tripeldeckerkomplexe (1)
- Triphosphol (1)
- Triple-decker like structures (1)
- Triplettzustand (1)
- Triterpene (1)
- Trockengewicht (1)
- Trockenharnstoff (1)
- Tropfenkoaleszenz (1)
- Tropfenzerfall (1)
- Tropical Grassmannian (1)
- Tropical Intersection Theory (1)
- Tropischer Regenwald (1)
- Tryptophan-Halogenasen (1)
- Trägerbohlwand (1)
- Trägerfrequenzmessung (1)
- Tube Drawing (1)
- Tumor (1)
- Tumorassoziierter Trypsininhibitor (1)
- Tumorzytogenetik (1)
- Turnover <Ökologie> (1)
- Two component system (1)
- Two-Phase System (1)
- Two-Scale Convergence (1)
- Two-phase flow (1)
- Type building (1)
- Typenbildende Qualitative Inhaltsanalyse (1)
- Typologie (1)
- Tyrosyl-DNA-Phosphodiesterase 1 (TDP1) (1)
- UCP2 (1)
- UCP2-Protein (1)
- UDP-Glucuronosyltransferase (1)
- UDP-Glucuronosyltransferasen (1)
- UHPC (1)
- UML Activity (1)
- UMTS (1)
- UTRA (1)
- UV-Vis Spektroskopie (1)
- UV-Vis-Spektroskopie (1)
- UV/VIS Spektrometrie (1)
- Ubiquitous Computing (1)
- Ubiquitous system (1)
- Ultrafeinkörnig (1)
- Ultrafiltration (1)
- Ultrahochleistungsbeton (1)
- Ultrapräzisionsdrehmaschine (1)
- Ultraschall-Preformen (1)
- Ultraschalldispergierung (1)
- Ultraschallkardiographie (1)
- Ultraschallschweißen (1)
- Ultraviolettspektroskopie (1)
- Umgebungslärm (1)
- Umkehrosmose (1)
- Umweltanalytik (1)
- Umweltgerechtigkeit (1)
- Umweltinformatik (1)
- Umweltpsychologie (1)
- Umweltverträglichkeitsprüfung (1)
- Umweltwirkungen (1)
- Uncertainty Estimation (1)
- Undecaphosphor (1)
- Undersampling (1)
- Unobtrusive instrumentations (1)
- Unorganized Data (1)
- Unreinheitsfunktion (1)
- Unspezifische Adsorption (1)
- Unstrukturiertes Gitter (1)
- Unterabtastung (1)
- Unterdrückung (1)
- Unterholz (1)
- Untermannigfaltigkeit (1)
- Unterrichtsanalyse (1)
- Unterrichtsforschung (1)
- Unterrichtsinteraktion (1)
- Unterrichtsorganisation (1)
- Unterrichtssituation (1)
- Upper bound (1)
- Upwind-Verfahren (1)
- Uracil (1)
- Urban Flooding (1)
- Urban Water Supply (1)
- Urban design (1)
- Urban sprawl (1)
- UrbanSim (1)
- Usability (1)
- Usability (1)
- Usage modeling (1)
- User Model (1)
- User-Centred Product Development (1)
- User-Experience (1)
- Ussing-Kammer (1)
- Ussing-chamber (1)
- Utility (1)
- V-Stoffe (1)
- VALBM (1)
- VELVET (1)
- VIACOBI (1)
- VOF Model (1)
- VOF Modell (1)
- VPA (1)
- VSCPT (1)
- VTK (1)
- Vakuole (1)
- Vakuumpumpe (1)
- Vakuumverdampfung (1)
- Validierung (1)
- Valproinsäure (1)
- Value at Risk (1)
- Value at risk (1)
- Value-at-Risk (1)
- Values in Action (VIA) (1)
- Vancoresmycin (1)
- Vapor-liquid equilibrium (1)
- Variabilität (1)
- Variation (1)
- Variational autoencoders (1)
- Variationsrechnung (1)
- Vector (1)
- Vector Field (1)
- Vectorfield approximation (1)
- Vegetationsentwicklung (1)
- Vektor (1)
- Vektor <Genetik> (1)
- Vektorfeldapproximation (1)
- Vektorfelder (1)
- Vektorkugelfunktionen (1)
- Verankerung (1)
- Verantwortung (1)
- Verantwortungsgemeinschaft (1)
- Verarbeitungstechnik (1)
- Verband der Automobilindustrie (1)
- Verbrennungsmotor (1)
- Verbund (1)
- Verbunddübel (1)
- Verbundguss (1)
- Verbundorganisation (1)
- Verbundtragfähigkeit (1)
- Verbundversagen (1)
- Verdampfung (1)
- Verdeckung (1)
- Verdichter (1)
- Verdunstungsleistung (1)
- Vererbung (1)
- Verfahren 8.3 (1)
- Vergütungsstahl (1)
- Verification (1)
- Verkehrsfläche (1)
- Verkehrsflächen (1)
- Verkehrsnetz (1)
- Verkehrspolitik (1)
- Verkehrsregelung (1)
- Verkehrssignalanlage (1)
- Verkehrsverbund (1)
- Verkehrsverbünde (1)
- Verkehrswissenschaft (1)
- Vermittlung (1)
- Verschleißmodelle (1)
- Verschwindungssatz (1)
- Versicherung (1)
- Versickerung (1)
- Versickerungsrate (1)
- Versorgungsaufgabe (1)
- Verstellleitgitter (1)
- Versuchsanlage (1)
- Verteilnetz (1)
- Verteilungskoeffizient (1)
- Vertical Flow Filter (1)
- Vertrautheit (1)
- Verwaltungsschale (1)
- Verzerrungstensor (1)
- Verzweigung <Mathematik> (1)
- Verzögerte Fluoreszenz (1)
- Vesikel (1)
- Vielfalt in der Schule (1)
- Vierte Nebengruppe (1)
- Vinyl-2-pyrrolidon (1)
- Vinylallene (1)
- Vinylcyclopropan (1)
- Vinylester (1)
- Virotherapie (1)
- Virtual Environments (1)
- Virtual Prototyping (1)
- Virtual measurement (1)
- Virtual spaces (1)
- Virtuelle Realität (1)
- Virulence (1)
- Virulenzfaktor (1)
- Virulenzfaktoren (1)
- Virusübertragung (1)
- Viscosity Adaptive Lattice Boltzmann Method (1)
- Viskoelastische Flüssigkeiten (1)
- Viskose Transportschemata (1)
- Viskosität (1)
- Visual Queries (1)
- Visualization Theory (1)
- Visuelle Wahrnehmung (1)
- Vitamin C (1)
- Vitamin C-Derivate (1)
- Vitamin-D (1)
- Vitamin-D-Rezeptor (1)
- Vliese (1)
- Vocational education and training (1)
- Volatilität (1)
- Volatilitätsarbitrage (1)
- Voltage Control (1)
- Volume rendering (1)
- Volumen-Rendering (1)
- Volumenänderung (1)
- Vorkonditionierer (1)
- Voronoi diagram (1)
- Vorschubantrieb (1)
- Vortex Separator (1)
- Vorverarbeitung (1)
- Vorwärts-Rückwärts-Stochastische-Differentialgleichung (1)
- Völklingen (1)
- W.M.Predtetschenski-A.I.Milinski (1)
- WCET (1)
- WLI (1)
- WMS (1)
- Wabenplatte (1)
- Waddington (1)
- Wahrnehmungsförderung (1)
- Wahrnehmungslernen (1)
- Wahrnehmungspsychologie (1)
- Wahrnehmungsschulung (1)
- Wahrnehmungstraining (1)
- Waldfragmentierung (1)
- Waldökosystem (1)
- Walkability (1)
- Wanderungsmotive (1)
- Wasserstoff-ATPase (1)
- Wasserstoffbrückenbindung (1)
- Wasserstoffionenkonzentration (1)
- Wassertransport (1)
- Wasserzementwert (1)
- Water (1)
- Water reservoir management (1)
- Water resources (1)
- Wave Based Method (1)
- Wavelet-Theorie (1)
- Wavelet-Theory (1)
- Weak Memory Model (1)
- Weakest-link model (1)
- Wearable Computing (1)
- Wedderburn number (1)
- Weinrebe (1)
- Weinwirtschaft (1)
- Weismann (1)
- Weiterentwicklung (1)
- Weißer Phosphor (1)
- Weißes Rauschen (1)
- Weißlichtinterferometer (1)
- Wellenlängenmodulation (1)
- Wellenverschleiß (1)
- Werkstoffermüdung (1)
- Werkstoffprüfung (1)
- Wert (1)
- Wetland Conservation (1)
- Wetting (1)
- White Noise (1)
- Wi-Fi (1)
- Wide-column stores (1)
- Wiederverwertung Altgummipartikel (1)
- Wilhelm Rieger (1)
- Winsor-System (1)
- Winsor-system (1)
- Wirbelabscheider (1)
- Wirbelabtrennung (1)
- Wirbelströmung (1)
- Wirbelsäule (1)
- Wirklichkeit (1)
- Wirkstoffe (1)
- Wirkungsanalyse (1)
- Wirkungsgrad (1)
- Wirtschaftsförderung (1)
- Wirtsspezifität (1)
- Wissen (1)
- Wissensbasiertes System (1)
- Wissenserwerb (1)
- Wissensgesellschaft (1)
- Wissensmanagement (1)
- Wissenstransfer (1)
- Wohngebäudebestandsmodell (1)
- Wohnimmobilien aus den 1970er Jahren (1)
- Wohnstandortverhalten der Bevölkerungsgruppe 60plus (1)
- Wohnungsnachfrage (1)
- Wohnungsunternehmen (1)
- Wolff (1)
- Wolfram (1)
- Worst-Case (1)
- Wortschatz (1)
- Wurzelexsudate (1)
- Wurzelreaktion (1)
- Wälzlager (1)
- Wärmeleitfähigkeit (1)
- Wärmeleitung (1)
- Wärmepumpe (1)
- Wärmeübergang (1)
- Wärmeübertragung (1)
- Wöhlerlinie (1)
- XDBMS (1)
- XEC (1)
- XFEM (1)
- XMCD (1)
- XML (1)
- XML query estimation (1)
- XML summary (1)
- Xerogel (1)
- Xerogele (1)
- Xiphinema index (1)
- Yaglom limits (1)
- Yaroslavskiy-Bentley-Bloch Quicksort (1)
- Yeast-Two-Hybrid-System (1)
- Zahnspulenwicklung (1)
- Zeitfestigkeit (1)
- Zeitintegrale Modelle (1)
- Zeitraffende Alterung (1)
- Zeitreihe (1)
- Zeitsynchronisierung (1)
- Zelle / Physiologie (1)
- Zellmigration (1)
- Zellskelett (1)
- Zellularer Ansatz (1)
- Zellzyklus (1)
- Zentralnervensystem (1)
- Zentrenprobleme (1)
- Zentrieren (1)
- Zentrifugalkraft (1)
- Zeolite MCM-71 (1)
- Zeolite SSZ-53 (1)
- Zeolite UTD-1 (1)
- Zeolith ITQ-21 (1)
- Zeolith M41S (1)
- Zeolith MCM-71 (1)
- Zeolith SSZ-53 (1)
- Zeolith UTD-1 (1)
- Zeolith Y (1)
- Zeolith ZSM12 (1)
- Zero-dimensional schemes (1)
- Zertifikat (1)
- Zertifizierung (1)
- Zielverfolgung (1)
- Zigarettenrauchen (1)
- Zigarrenrauchen (1)
- Zink (1)
- Zinkkomplexe (1)
- Zirconium complex (1)
- Zopfgruppe (1)
- Zuckertransporter (1)
- Zufälliges Feld (1)
- Zug-/Druckversuch (1)
- Zugbeanspruchung (1)
- Zugesicherte Eigenschaft (1)
- Zugfestigkeit (1)
- Zugkraft (1)
- Zugkriechen (1)
- Zugversuch (1)
- Zustandsgleichung (1)
- Zustandsüberwachung (1)
- Zuverlässigkeitskonzept (1)
- Zwang (1)
- Zwei-Komponenten System (1)
- Zwei-Komponenten-System (1)
- Zweikomponentensystem (1)
- Zweiphasenströmung (1)
- Zweiphasensysteme (1)
- Zweiphotonenspektroskopie (1)
- Zweitspracherwerb (1)
- Zwischenmolekulare Kraft (1)
- Zyklische Belastung (1)
- Zyklischer Schwellversuch (1)
- Zyklischer Triaxialversuch (1)
- Zyklischer Wechselversuch (1)
- Zylinderblock (1)
- Zytokine (1)
- Zytotoxizität (1)
- [2.2.1]-bicyclic substituents (1)
- [2.2.1]-bicyclisch (1)
- abc transporter (1)
- abgeleitete Kategorie (1)
- abgeschlossene Population (1)
- absorption (1)
- accessibility (1)
- acetate (1)
- acetylcholine receptor (1)
- acidification (1)
- acoustic modeling (1)
- actively steered implement (1)
- activity coefficient (1)
- acyclische Cucurbiturile (1)
- adaptive Antennen (1)
- adaptive algorithm (1)
- adaptive antennas (1)
- additiv-subtraktive Prozesskette (1)
- adenylate (1)
- adhesion (1)
- adhesive (1)
- adhesive bonding (1)
- adhesive joints in concrete (1)
- adhesives cure-behaviour aluminium (1)
- adjoint method (1)
- adjungierte (1)
- adulthood (1)
- aerobe Oxidation (1)
- affective user interface (1)
- affine arithmetic (1)
- affinity chromatography (1)
- affinity chromatography (1)
- aging (1)
- air-bearing (1)
- aktiver Hybridverbund (1)
- algebraic attack (1)
- algebraic correspondence (1)
- algebraic function fields (1)
- algebraic geometry (1)
- algebraic number fields (1)
- algebraic topology (1)
- algebraische Korrespondenzen (1)
- algebraische Topologie (1)
- algebroid curve (1)
- alkali (1)
- alkin (1)
- alpha shape method (1)
- alternating minimization (1)
- alternating optimization (1)
- aluminium (1)
- aluminum (1)
- amid (1)
- amide (1)
- amorphes Silizium (1)
- analoge Mikroelektronik (1)
- analysis of algorithms (1)
- androgene (1)
- angewandte Mathematik (1)
- angewandte Topologie (1)
- anharmonic CH modes (1)
- anharmonic vibrations (1)
- anion recognition (1)
- anisotropen Viskositätsmodell (1)
- anisotropic viscosity (1)
- anoxia (1)
- anserine (1)
- anthocyanidins (1)
- anthropogenic effects (1)
- anti-inflammatorisch (1)
- anti-inflammatory (1)
- antigenspezifische Immunsuppression (1)
- antioxidant gene expression (1)
- antioxidative Genexpression (1)
- apple polyphenols (1)
- applied mathematics (1)
- apprehension (1)
- aquatic (1)
- aqueous solution (1)
- arachidonic acid (1)
- arbitrary Lagrangian-Eulerian methods (ALE) (1)
- archimedean copula (1)
- artificial neural network (1)
- aryl amination (1)
- aryl hydrocarbon receptor (1)
- aryl-hydrocarbon-receptor (1)
- ascorbate (1)
- ascorbic acid (1)
- ascorbyl radical (1)
- asian option (1)
- aspartam (1)
- aspartame (1)
- assembly tasks (1)
- associations (1)
- asymmetric carboxylate stretch vibrations (1)
- asymptotic-preserving (1)
- auditorischer Hirnstamm (1)
- auto-pruning (1)
- automated theorem proving (1)
- automobile (1)
- automotive (1)
- automotive aftermarket (1)
- autonomous networking (1)
- average-case analysis (1)
- axis orientation (1)
- aziridine (1)
- barrier-free (1)
- basic carboxylates (1)
- basket option (1)
- batch equilibrium (1)
- beam refocusing (1)
- beating rate (1)
- behaviour-based system (1)
- benders decomposition (1)
- bending strip method (1)
- benzo[a]pyrene (1)
- benzol (1)
- berry (1)
- beta-Lactame (1)
- beta-lactam-resistance (1)
- beta-ungesättigte Aldehyde und Ketone (1)
- beta-ungesättigte Carbonylverbindungen (1)
- bi-direktional gekoppelte Abflusssimulation (1)
- biaryl (1)
- bicyclische heterocyclische Verbindungen (1)
- biegeschlaff (1)
- bifurcation (1)
- binary analysis (1)
- binary countdown protocol (1)
- binomial tree (1)
- bio psycho sozial (1)
- bioactive metabolites (1)
- bioavailability (1)
- biochemical characterisation (1)
- biology of knowledge (1)
- biomarker (1)
- biomechanics (1)
- bionische Aspekte (1)
- biosensors (1)
- bitvector (1)
- black bursts (1)
- blackout period (1)
- blind (1)
- bocses (1)
- body-IMU calibration (1)
- boundary value problem (1)
- bounded model checking (1)
- brand (1)
- brittle fracture (1)
- building (1)
- building physics (1)
- building trade (1)
- bursting disk (1)
- butterfly molecule (1)
- c-Abl (1)
- cake filtration (1)
- calving (1)
- canonical ideal (1)
- canonical module (1)
- capacitive measurement (1)
- carbon (1)
- carbon dioxide (1)
- carboxylate bridge (1)
- carboxylates (1)
- carboxylic acid (1)
- carcinogenesis (1)
- carnosine (1)
- carrier-grade point-to-point radio networks (1)
- catalyst (1)
- cdc2 (1)
- cell adhesion molecule (1)
- cell migration (1)
- cells on chips (1)
- centrifugal force (1)
- change detection (1)
- changing market coefficients (1)
- changing urbanisation patterns (1)
- character strengths (1)
- characteristic polynomial (1)
- characterization of Structures (1)
- charakteristische In-situ-Betondruckfestigkeit (1)
- charakteristische Werkstofffestigkeiten (1)
- chemical effect prediction (1)
- chemical reacting systems (1)
- chemically crosslinked hydrogels (1)
- chemiluminescence (1)
- chemisch vernetzte Hydrogele (1)
- chemoprevention (1)
- chemoresistance (1)
- cholesterische Phasen (1)
- chromium (1)
- chōra (1)
- classification of interference (1)
- click-reaction (1)
- climate (1)
- closure approximation (1)
- clustering methods (1)
- co-contraction (1)
- code coverage analysis (1)
- coexistence (1)
- coffee (1)
- cogging (1)
- cognition (1)
- cohesive cracks (1)
- cohesive elements (1)
- cohesive interface (1)
- collaborative information visualization (1)
- collaborative mobile sensing (1)
- collective intelligence (1)
- collision induced dissociation (1)
- colonization (1)
- colour stability (1)
- combination band (1)
- combinatorics (1)
- combustion engine (1)
- community assembly (1)
- complex (1)
- composite (1)
- composite materials (1)
- composite slab (1)
- composites (1)
- compound casting (1)
- computational biology (1)
- computational dynamics (1)
- computational finance (1)
- computational modelling (1)
- computer algebra (1)
- computer-based systems (1)
- computer-supported cooperative work (1)
- computeralgebra (1)
- computerbasiertes Training (1)
- concentration of musts (1)
- conceptual process design (1)
- concurrent (1)
- condition number (1)
- configurational mechanics (1)
- conflict (1)
- conserving time integration (1)
- consistent integration (1)
- constrained mechanical systems (1)
- constraint exploration (1)
- constructed wetlands (1)
- content-and-structure summary (1)
- context awareness (1)
- context management (1)
- context-aware topology control (1)
- continuous master theorem (1)
- continuum damage (1)
- continuum damage mechanics (1)
- continuum fracture mechanics (1)
- controller (1)
- convection (1)
- convergence behaviour (1)
- convex constraints (1)
- convex optimization (1)
- cooling tower (1)
- cooling tower shell (1)
- coordinated backhaul networks in rural areas (1)
- coordinative flexibility (1)
- coordinative stabilisation (1)
- core strengths (1)
- correlated errors (1)
- coupled problems (1)
- coupling methods (1)
- crack path tracking (1)
- crack propagation (1)
- crack-propagation-law (1)
- crank case (1)
- crash (1)
- crash application (1)
- crash hedging (1)
- crashworthiness (1)
- cre-Sequenz (1)
- credit risk (1)
- crop protection (1)
- cross coupling (1)
- cross section (1)
- crossphase modulation (1)
- crowd condition estimation (1)
- crowd density estimation (1)
- crowd scanning (1)
- crowd sensing (1)
- crowdsourcing (1)
- crystallization (1)
- cumulative IRMPD (1)
- curves and surfaces (1)
- cutting edges (1)
- cutting simulation (1)
- cyclic (1)
- cyclic nucleotide phosphodiesterase (1)
- cyclic peptides (1)
- cyclinabhängige Kinasen (1)
- cyclopentadienyl ligands (1)
- cylinder block (1)
- cytokine (1)
- cytotoxicity (1)
- damage tolerance (1)
- data annotation (1)
- data output unit (1)
- data race (1)
- data sets (1)
- data-flow (1)
- dataset (1)
- decarboxylative cross-coupling (1)
- decidability (1)
- decision support (1)
- decision support systems (1)
- decoding (1)
- default time (1)
- defect interaction (1)
- degenerations of an elliptic curve (1)
- degree of freedom (1)
- demografischer Wandel (1)
- dense univariate rational interpolation (1)
- density gradient theory (1)
- dependable systems (1)
- depth sensing (1)
- derived category (1)
- design (1)
- design automation (1)
- deterioration (1)
- determinant (1)
- deterministic arbitration (1)
- deuterierung (1)
- development (1)
- dialkali-halogen (1)
- diameter measurement (1)
- diameter sensor (1)
- diatoms (1)
- dielectric elastomers (1)
- diffusion coefficient (1)
- diffusion measurement (1)
- diffusion model (1)
- diffusion models (1)
- digital design (1)
- digital methodologies (1)
- digitale Methodik (1)
- digitaler Entwurf (1)
- dioxin-like compounds (1)
- directed graphs (1)
- discharge coefficient (1)
- dischargeable mass flow rate (1)
- discontinuous finite elements (1)
- discrepancy (1)
- dispersal (1)
- distributed (1)
- distributed real-time systems (1)
- distributed tasks (1)
- disulfide bond transfer (1)
- diurnal cycle (1)
- diversification (1)
- diversity (1)
- dna adducts (1)
- domain parametrization (1)
- domain switching (1)
- doping (1)
- double exponential distribution (1)
- downward continuation (1)
- driver assistance (1)
- driver status and intention prediction (1)
- drowsiness detection (1)
- drug metabolism (1)
- durability (1)
- dyadisches Coping (1)
- dynamic (1)
- dynamic calibration (1)
- dynamic combinatorial chemistry (1)
- dynamic fracture mechanics (1)
- dynamic model (1)
- dysprosium (1)
- echtzeitsystem (1)
- ecology (1)
- economic development (1)
- ecosystem function (1)
- edge computing (1)
- effective refractive index (1)
- efficiency loss (1)
- elastoplasticity (1)
- electric field (1)
- electrical (1)
- electrical conductivity (1)
- electro-hydraulic systems (1)
- electrolyte solutions (1)
- electronically excited states (1)
- electroporation (1)
- elliptical distribution (1)
- embedded (1)
- embedded mixed-criticality systems (1)
- embedding (1)
- emergent aquatic insects (1)
- emomap (1)
- emotion visualization (1)
- empirical review (1)
- enamid (1)
- enantioselective catalysis P (1)
- end-to-end learning (1)
- endlosfaserverstärkte Thermoplaste (1)
- endocrine (1)
- endocrine disruptors (1)
- endokrine Disruptoren (1)
- endolithic (1)
- endomorphism ring (1)
- energetische Wohngebäudemodernisierung (1)
- engine process simulation (1)
- engineering (1)
- enrichment (1)
- ensemble (1)
- entrepreneurial orientation (1)
- entrepreneurship (1)
- enumerative geometry (1)
- environment perception (1)
- environmental noise (1)
- environmental risk assessment (1)
- epiphytes (1)
- equation of state (1)
- equilibrative nucleoside transporter (1)
- equilibrium strategies (1)
- equisingular families (1)
- erdschlusskompensiertes Netz (1)
- esterases (1)
- estrogene (1)
- evapotranspiration (1)
- event segmentation (1)
- evolutionary algorithm (1)
- exhaust temperature management (1)
- explainability (1)
- expression (1)
- face value (1)
- fallible knowledge (1)
- fatigue loading (1)
- fault-tolerant control (1)
- fehlertolerante Regelung (1)
- fermi resonance (1)
- ferroelectric fatigue (1)
- ferroelektrische Ermüdung (1)
- ferroelektrischer Perowskit (1)
- fiber reinforced silicon carbide (1)
- fibre damage (1)
- fibre lay-down dynamics (1)
- fibre reinforced plastic (1)
- fibre strength (1)
- fictitious configurations (1)
- filter (1)
- filter media resistance (1)
- filtration (1)
- financial mathematics (1)
- finite Elasto-Plastizität (1)
- finite difference schemes (1)
- finite elasto-plasticity (1)
- finite groups of Lie type (1)
- finite spin group (1)
- firewall (1)
- first hitting time (1)
- fish (1)
- flavour (1)
- flexible Fertigung (1)
- flexible multibody dynamics (1)
- float glass (1)
- floating potential (1)
- flood risk (1)
- flow chemistry (1)
- flow cytometry (1)
- flow visualization (1)
- fluid interfaces (1)
- fluid structure (1)
- fluid structure interaction (1)
- fluid-structure interaction (FSI) (1)
- fluorescence (1)
- flüssiges Polymer (1)
- flüssigkristalline Phasen (1)
- flüssigkristalliner Phasen (1)
- foamy virus (1)
- folding rocks (1)
- food (1)
- forest management (1)
- formal (1)
- formal analysis (1)
- formaldehyde (1)
- formale Analyse (1)
- formate (1)
- forward-shooting grid (1)
- foundational translation validation (1)
- fracture mechanics (1)
- fragmentation channel (1)
- free surface (1)
- free-form surface (1)
- free-living (1)
- freie Oberfläche (1)
- freshwater lentic systems (1)
- front end (1)
- front loader (1)
- fruit juice (1)
- fuel consumption (1)
- functional safety (1)
- functionally graded material (1)
- funktionsorientierten Oberflächencharakterisierung (1)
- furocoumarins (1)
- fuzzy Q-learning (1)
- fuzzy logic (1)
- gas bearing, aerostatic, porous, theoretical model (1)
- gas phase reaction (1)
- gas transfer at the water-atmosphere interface (1)
- gasoline engine (1)
- gasphase (1)
- gaussian filter (1)
- gebietszerlegung (1)
- gelonin (1)
- gemeinsame Aushärtung (1)
- gene silencing (1)
- gene therapy (1)
- generalized plasticity (1)
- generic character table (1)
- generic self-x sensor systems (1)
- generic sensor interface (1)
- genetic algorithms (1)
- genomische Diversität (1)
- genotoxicity (1)
- geographic information systems (1)
- geology (1)
- geometrically exact beams (1)
- geometrische Oberflächenbeschreibung (1)
- geomodelling (1)
- gewebeverstärkte Thermoplaste (1)
- gewebeverstärkter Thermoplaste (1)
- gitter (1)
- global tracking (1)
- glucocorticoid (1)
- glycine neurotransmission (1)
- glycolipid (1)
- good semigroup (1)
- governance (1)
- grape berry moth (1)
- grapevine moth (1)
- graph drawing algorithm (1)
- graph embedding (1)
- graph layout (1)
- graph p-Laplacian (1)
- gravitation (1)
- greenhouse gases (1)
- greywater (1)
- groundwater remediation (1)
- group action (1)
- groups of Lie type (1)
- großer Investor (1)
- großflächige Abscheidung (1)
- großräumige regionale Kooperation (1)
- hPRT-Genmutations-Assay (1)
- halogenases (1)
- hand pose, hand shape, depth image, convolutional neural networks (1)
- handover optimization (1)
- haptic perception (1)
- haptotaxis (1)
- hardware (1)
- headed studs (1)
- hedging (1)
- hemodialysis (1)
- hemoglobin adduct (1)
- hetero-substituierte Diazapyridinophane (1)
- heterogene Werkstoffe (1)
- heterogeneous access management (1)
- heterogenous catalysis (1)
- heuristic (1)
- hexadiendiale (1)
- hierarchical matrix (1)
- hierarchical structure (1)
- higher order accurate conserving time integrators (1)
- higher-order continuum (1)
- hip strength (1)
- historical documents (1)
- hochduktil (1)
- homobimetallische Komplexe (1)
- homogeneous catalysis (1)
- homolytische Substitution (1)
- hose lines (1)
- host preference (1)
- host-range (1)
- human body motion tracking (1)
- hybrid lightweight structures (1)
- hybrid material (1)
- hybrid materials (1)
- hybrid materials engineering (1)
- hybrid structure (1)
- hybride Leichtbaustrukturen (1)
- hydrides (1)
- hydrodynamics (1)
- hydrodynamische Injektion (1)
- hydrogen bonds (1)
- hydrogenation (1)
- hyperbolic systems (1)
- hyperelliptic function field (1)
- hyperelliptische Funktionenkörper (1)
- hypergraph (1)
- hyperspectral unmixing (1)
- hypocoercivity (1)
- hypolithic (1)
- hysteresis measurement (1)
- iB2C (1)
- iNOS (1)
- ideal class group (1)
- identity (1)
- image denoising (1)
- imaging (1)
- imidazolium salts (1)
- immobilization (1)
- immunotoxins (1)
- impact resistance (1)
- implement (1)
- implementation (1)
- impulse control (1)
- impurity functions (1)
- in-plane- und out-of-plane Kriechen (1)
- incompressible elasticity (1)
- individual (1)
- induzierbar (1)
- inelastic multibody systems (1)
- inelastische Mehrkörpersysteme (1)
- inertial measurement unit (1)
- inertial sensors (1)
- inference (1)
- infinite-dimensional analysis (1)
- infinite-dimensional manifold (1)
- inflation-linked product (1)
- information systems (1)
- infrarot (1)
- inhibition (1)
- injection molding (1)
- innerstädtische Einzelhandelslagen (1)
- insecticide tolerance (1)
- integer programming (1)
- integral constitutive equations (1)
- intellectual disability (1)
- intensity (1)
- inter-organisationale Netzwerke (1)
- interaction networks (1)
- interference (1)
- interference resistance (1)
- interkulturell (1)
- intermediate stops (1)
- intermediate-spin (1)
- intermetallische Verbindung (1)
- internal seiche (1)
- interorganisationale Netzwerke (1)
- interpolation (1)
- interpretation (1)
- interval arithmetic (1)
- intrusion detection (1)
- inverse coordination (1)
- inverse optimization (1)
- inverse problem (1)
- inverses Pendel (1)
- ion exchanger (1)
- ion-sensitive field-effect transistor (1)
- ioneninduzierte Nukleation (1)
- ionic liquid (1)
- ion-induced nucleation (1)
- ionische Flüssigkeit (1)
- ionization (1)
- irrigation (1)
- isogeometric analysis (IGA) (1)
- isotropic (1)
- jenseits der dritten Generation (1)
- joint channel estimation (1)
- juice (1)
- jump table analysis (1)
- jump-diffusion process (1)
- kalman (1)
- katalytische Isomerisierung (1)
- katalytische Transferhydrierung (1)
- kernel (1)
- kidney (1)
- kinematic (1)
- kinematic model (1)
- kinetic equations (1)
- kinetic isotope effect (1)
- kinetischer Isotopeneffekt (1)
- klimaneutraler Wohngebäudebestand (1)
- knee joint stability (1)
- knock-out (1)
- konditionaler knock-out (1)
- konsistente Integration (1)
- kontinuierliches Verbundmittel (1)
- kontinuumsatomistischer Ansatz (1)
- koordinative Stabilisierung (1)
- lake classification (1)
- lake modeling (1)
- landsat (1)
- langfaserverstärkte Polymere (1)
- language definition (1)
- language modeling (1)
- language profiles (1)
- lanthanide (1)
- large investor (1)
- large neighborhood search (1)
- large scale integer programming (1)
- lattice Boltzmann (1)
- layout (1)
- leaf-cutting ants (1)
- leakage (1)
- lecithinabhängige Dehydrogenase (1)
- lecithine dependent dehydrogenase (1)
- letter (1)
- level K-algebras (1)
- level set method (1)
- lichens (1)
- life insurance (1)
- life-history (1)
- life-strategy (1)
- lifestyle (1)
- light scattering optimization (1)
- lightweight design (1)
- lightweight timber frame construction (1)
- limit theorems (1)
- linear code (1)
- linear motor (1)
- linear systems (1)
- linked data (1)
- lipases (1)
- lipid content (1)
- liquid-liquid equilibrium (1)
- liquid-liquid extraction (1)
- liver (1)
- load-displacement-law (1)
- loader (1)
- loadpoint shift (1)
- local-global conjectures (1)
- localizing basis (1)
- logic (1)
- logic synthesis (1)
- long short-term memory (1)
- long tail (1)
- longevity bonds (1)
- loss analysis (1)
- low-rank approximation (1)
- lung cancer (1)
- ländlich-periphere Räume (1)
- ländliche Regionen (1)
- mHealth (1)
- mRNA-Expression (1)
- machine code analysis (1)
- machine-checkable proof (1)
- macro derivative (1)
- macroinvertebrate community (1)
- macroinvertebrates (1)
- macrophytes (1)
- magnesium (1)
- magnetic field based localization (1)
- magnetism (1)
- magnetometer calibration (1)
- manganese (1)
- marine bacteria (1)
- marine biotechnology (1)
- market crash (1)
- market manipulation (1)
- markov model (1)
- martensite (1)
- martingale optimality principle (1)
- mass spectrometry (1)
- mass transfer kinetics (1)
- material characterisation (1)
- materielle Kräfte (1)
- mathematical modelling (1)
- mathematical morphology (1)
- mating disruption (1)
- matrix problems (1)
- matrix system (1)
- matrix visualization (1)
- matroid flows (1)
- mean-variance approach (1)
- mechanical properties (1)
- mechanism (1)
- mehreren Übertragungszweigen (1)
- mercapturic acid (1)
- mercapturic acids (1)
- mesh deformation (1)
- mesoporous (1)
- mesoporöse Materialien (1)
- message-passing (1)
- meta-analysis (1)
- metadata (1)
- metaheuristics (1)
- metal fibre (1)
- metal organic frameworks (1)
- metalloproteases (1)
- metals (1)
- miRNA (1)
- mice (1)
- micro lead (1)
- micro-bending test (1)
- microclimate (1)
- microelectronics ontology (1)
- micromechanics (1)
- micromorphic continua (1)
- microstructures (1)
- microwave (1)
- migration (1)
- minimal polynomial (1)
- miniplant (1)
- mixed convection (1)
- mixed methods (1)
- mixed multiscale finite element methods (1)
- mixed-signal (1)
- mobile scale (1)
- mobility robustness optimization (1)
- mobilitätseingeschränkt (1)
- modal derivatives (1)
- model generation (1)
- model order reduction (1)
- model uncertainty (1)
- model-based fault diagnosis (1)
- modifizierte Teilsicherheitsbeiwerte (1)
- modularisation (1)
- moduli space (1)
- moisture transfer (1)
- moisture transport (1)
- molecular beam (1)
- molecular capsule (1)
- molecular capsules (1)
- molecular dynamics (1)
- molecular simulations (1)
- molekulare Chaperone (1)
- molekulare Kapsel (1)
- molekulare Simulation (1)
- molekularer Chiralitätswechselwirkungstensor (1)
- molybdate (1)
- moment (1)
- monotone Konvergenz (1)
- monotropic programming (1)
- mouse (1)
- muconaldehyde (1)
- multi scale (1)
- multi-asset option (1)
- multi-carrier (1)
- multi-class image segmentation (1)
- multi-core processors (1)
- multi-domain modeling and evaluation methodology (1)
- multi-gene analysis (1)
- multi-level Monte Carlo (1)
- multi-object tracking (1)
- multi-phase flow (1)
- multi-scale model (1)
- multi-user (1)
- multi-user detection (1)
- multicategory (1)
- multicore (1)
- multidimensional datasets (1)
- multifilament superconductor (1)
- multifunctionality (1)
- multigrid method (1)
- multileaf collimator (1)
- multinomial regression (1)
- multiobjective optimization (1)
- multipatch (1)
- multiplicative decomposition (1)
- multiplicative noise (1)
- multiplikative Zerlegung (1)
- multiscale analysis (1)
- multiscale denoising (1)
- multiscale methods (1)
- multitemporal (1)
- multithreading (1)
- multitype code coupling (1)
- multiuser detection (1)
- multiuser transmission (1)
- multivariate chi-square-test (1)
- multiway partitioning (1)
- murE (1)
- myasthenia gravis (1)
- n-Decane hydroconversion (1)
- nachwachsende Füllstoffe (1)
- nachwachsende Rohstoffe (1)
- naive diversification (1)
- nanocomposites (1)
- nanofiber (1)
- nanoparticle (1)
- nanoskalige Rezeptoren (1)
- native Aufreinigung (1)
- natural products (1)
- naturfaserverstärkte Kunststoffe (1)
- naturnahe Abwasserreinigungsverfahren (1)
- necrosis (1)
- negative refraction (1)
- neonatal rat ventricular cardiomyocytes (1)
- neonatale ventrikuläre Kardiomyozyten der Ratte (1)
- nestable tangibles (1)
- network flows (1)
- network synthesis (1)
- netzgenerierung (1)
- neural networks (1)
- neuromuscular activity (1)
- neuromuskuläre Aktivität (1)
- neurotrophin 3 (1)
- nicht-newtonsche Strömungen (1)
- nichtlineare Druckkorrektor (1)
- nichtlineare Modellreduktion (1)
- nichtlineare Netzwerke (1)
- nickel (1)
- niob (1)
- nitric oxide (1)
- non square linear system solving (1)
- non-conventional (1)
- non-desarguesian plane (1)
- non-equilibrium thermodynamics (1)
- non-newtonian flow (1)
- nonconvex optimization (1)
- nonlinear circuits (1)
- nonlinear diffusion filtering (1)
- nonlinear elasticity (1)
- nonlinear elastodynamics (1)
- nonlinear model reduction (1)
- nonlinear pressure correction (1)
- nonlinear term structure dependence (1)
- nonlinear vibration analysis (1)
- nonlocal filtering (1)
- nonnegative matrix factorization (1)
- nonwovens (1)
- normalization (1)
- nucleofection (1)
- nucleoside (1)
- null model (1)
- number fields (1)
- numerical irreducible decomposition (1)
- numerical methods (1)
- numerical model (1)
- numerical time integration (1)
- numerische Dynamik (1)
- numerische Strömungssimulation (1)
- numerisches Verfahren (1)
- nutrient removal (1)
- obere Eckbewehrung (1)
- object-oriented (1)
- objektimmanente Kriterien (1)
- oblique derivative (1)
- odour mixtures (1)
- oedometer (1)
- office market (1)
- ohne Querkraftbewehrung (1)
- olefin metathesis (1)
- operations (1)
- optical code multiplex (1)
- optical imaging (1)
- optically active (1)
- optimal (1)
- optimal capital structure (1)
- optimal consumption and investment (1)
- optimal stopping (1)
- optimization (1)
- optimization correctness (1)
- option pricing (1)
- option valuation (1)
- optisch aktiv (1)
- optisch aktiver Titankomplex (1)
- optisch aktiver Zirkoniumkomplex (1)
- optische Parameter (1)
- orbit (1)
- organic micropollutants (1)
- organic nanoparticles (1)
- organische Nanopartikel (1)
- organotypische Kokultur (1)
- oscillating magnetic fields (1)
- other-channel interference (1)
- out-of-order (1)
- output feedback approximation (1)
- ovarian carcinoma (1)
- overtone (1)
- oxidative Cyclisierung (1)
- oxidative DNA Damage (1)
- oxidative DNA Schäden (1)
- oxidative DNA-Schäden (1)
- oxidative damage (1)
- oxo centered transition metal complexes (1)
- oxygen consumption (1)
- p300 (1)
- p53 (1)
- p53-Gen (1)
- pKAP298 (1)
- parallel (1)
- parametric design (1)
- parametrisches Design (1)
- partial differential equation (1)
- partial hydrolysis (1)
- partial information (1)
- participatory sensing (1)
- particle dynamics (1)
- particle finite element method (1)
- particle size distribution (1)
- particle-in-cell (1)
- particular (1)
- partition coefficient (1)
- path (1)
- path cost models (1)
- path relinking (1)
- path tracking (1)
- path-dependent options (1)
- pattern (1)
- pattern recognition (1)
- pbp (1)
- penalty methods (1)
- penalty-free formulation (1)
- peptide (1)
- peripheral blood mononuclear cells (1)
- periphere Region (1)
- permeability (1)
- pesticides (1)
- pesticides and wastewater (1)
- petroleum exploration (1)
- phase behavior (1)
- phase field modeling (1)
- phenothiazine (1)
- philosophy of technology (1)
- phonologische Bewusstheit (1)
- phosphorous-metall-complexes (1)
- phosphorus (1)
- photochemistry (1)
- photonic crystals (1)
- photonic crystals filter (1)
- photonic structures (1)
- photonics (1)
- phylogeny (1)
- phylogeography (1)
- piezoelectricity (1)
- pigments (1)
- pivot sampling (1)
- planar polynomial (1)
- planning (1)
- planning systems (1)
- planning theory (1)
- plant-herbivore interactions (1)
- plasma sheaths (1)
- plasma source ion implantation (1)
- plasma-based ion implantation and deposition (1)
- plasticity (1)
- platin (1)
- platinum (1)
- pnc (1)
- pneumatische Abstandsmessung (1)
- point cloud (1)
- point defects (1)
- political ecology (1)
- polycyclische Aromaten (1)
- polyelectrolyte (1)
- polymer (1)
- polymer blends (1)
- polymer compound (1)
- polymer morphology (1)
- polymer nanocomposites (1)
- polymer solution (1)
- polymere Verbundwerkstoffe (1)
- polyphenol (1)
- population balance modelling (1)
- population genetics (1)
- poroelasticity (1)
- porous media (1)
- portfolio (1)
- portfolio decision (1)
- portfolio-optimization (1)
- poröse Medien (1)
- position detection (1)
- posterior collapse (1)
- potential (1)
- precipitation (1)
- preconditioners (1)
- preprocessing (1)
- pressure correction (1)
- pressure difference (1)
- pressure drop (1)
- pressure relief (1)
- preventive maintenance (1)
- primal-dual algorithm (1)
- primäre Rattenhepatozyten (1)
- probabilistic model checking (1)
- probabilistic modeling (1)
- probabilistic timed automata (1)
- probability distribution (1)
- probability of dangerous failure on demand (1)
- probe pruning (1)
- projective surfaces (1)
- proof generating optimizer (1)
- propagating discontinuities (1)
- property checking (1)
- property checking (1)
- protein (1)
- protein adducts (1)
- protein analysis (1)
- protein conjugate (1)
- proximation (1)
- proxy modeling (1)
- prädiktive Regelung (1)
- public health (1)
- pulsed and stirred columns (1)
- pulsierte und gerührte Kolonnen (1)
- pyrrolizidine alkaloids (1)
- quadrinomial tree (1)
- quality assurance (1)
- quantitative analysis (1)
- quantum gas (1)
- quasi-Monte Carlo (1)
- quasi-variational inequalities (1)
- quasihomogeneity (1)
- quasiregular group (1)
- quasireguläre Gruppe (1)
- quorum sensing (1)
- radiation therapy (1)
- radikalische Addition (1)
- radio frequency identification (1)
- radiotherapy (1)
- rainfall rate (1)
- rank-one convexity (1)
- rare disasters (1)
- rat (1)
- rat liver cell systems (1)
- rate of convergence (1)
- raum-zeitliche Analyse (1)
- ray casting (1)
- ray tracing (1)
- reaction coordinate (1)
- reaction cross section (1)
- reaction kinetics (1)
- reactive extraction (1)
- reactive oxygen species (1)
- reactivity (1)
- readout system (1)
- reaktionskinetik (1)
- real quadratic number fields (1)
- real-time (1)
- real-time PCR (1)
- real-time scheduling (1)
- real-time tasks (1)
- reasoning (1)
- received signal processing (1)
- receiver orientation (1)
- receptors for anions (1)
- reconstruction (1)
- reconstructions (1)
- reduktive Decyanierung (1)
- redundant constraint (1)
- reflectionless boundary condition (1)
- reflexionslose Randbedingung (1)
- regelbarer Ortsnetztransformator (1)
- regime-shift model (1)
- regional planning (1)
- regularity (1)
- regularization methods (1)
- reinforced concrete (1)
- reinforced plastics (1)
- reinforced thermoplastics (1)
- reinforcement learning (1)
- relative effect potencies (1)
- relative toxic potencies (1)
- relaxed memory models (1)
- remote sensing (1)
- repeated batch cultivation (1)
- resilience (1)
- resistance (1)
- respiratory chain (1)
- ressourcenorientierte Abwasserbewirtschaftung (1)
- retention soil filter (1)
- retroviral vector (1)
- retroviraler Vektor (1)
- retrovirus (1)
- reverse (1)
- reverse logistics (1)
- reverse osmosis (1)
- rezyklierte Gesteinskörnungen (1)
- rhabdomyolysis (1)
- rheology (1)
- ribosome-inactivating proteins (1)
- ring element (1)
- riparian food web (1)
- risk analysis (1)
- risk management (1)
- risk measures (1)
- risk reduction (1)
- river typology system (1)
- robustness (1)
- roll-over Cyclometallierung (1)
- root-reactions (1)
- round robin test (1)
- runtime monitoring (1)
- rupture disk (1)
- ruthenium-vinylidene (1)
- räumliche Ökonometrie (1)
- safety and security (1)
- safety-related systems (1)
- salt (1)
- sampling (1)
- satisfiability (1)
- sawtooth effect (1)
- scalar and vectorial wavelets (1)
- scalar field (1)
- scaled boundary isogeometric analysis (1)
- scaled boundary parametrizations (1)
- scattered light (1)
- scene flow (1)
- seasonal variability (1)
- second class group (1)
- secondary structure (1)
- secondary structure prediction (1)
- segmentation (1)
- seismic tomography (1)
- selbstgesteuertes Lernen (1)
- selektives Laserschmelzen (1)
- self calibration (1)
- self-optimizing networks (1)
- self-regulation (1)
- semiclassical (1)
- semigroup of values (1)
- semiklassisch (1)
- semiprobabilistische Bemessung (1)
- semisprays (1)
- sensitization effect (1)
- sensors (1)
- sepsis (1)
- sequential circuit (1)
- serum albumin (1)
- service area (1)
- shared production (1)
- sheaf theory (1)
- sho1 (1)
- short scales (1)
- shrinking cities (1)
- siRNA (1)
- signalling cascades (1)
- silica (1)
- silicon nanowire (1)
- similarity measures (1)
- simplex algorithm (1)
- simulation (1)
- singularities (1)
- skeletal muscle cells (1)
- sliding wear (1)
- small-multiples node-link visualization (1)
- smart decline (1)
- social cohesion (1)
- social-ecological systems (1)
- société des transports en commun (1)
- sodium sulfate (1)
- software (1)
- software architecture (1)
- software comprehension (1)
- software engineering (1)
- software engineering task (1)
- software reuse (1)
- solares Bauen (1)
- solid interfaces (1)
- solid urea (1)
- solid-borne noise (1)
- solid-dosing-system (1)
- solubility (1)
- solutions containing electrolytes (1)
- solvation (1)
- sonotrodes (1)
- sorption isotherm (1)
- soziale Infrastruktur (1)
- spanende Bearbeitung (1)
- spare part (1)
- sparse interpolation of multivariate rational functions (1)
- sparse multivariate polynomial interpolation (1)
- sparse-to-dense (1)
- sparsity (1)
- spatial econometrics (1)
- spatial statistics (1)
- spectroscopy (1)
- sperrige Alkylcyclopentadienyl-Liganden (1)
- spherical approximation (1)
- spi (1)
- spin (1)
- spin flip (1)
- spin trapping (1)
- spiral-groove (1)
- sputtering process (1)
- srtm (1)
- stabile Transfektion (1)
- stability (1)
- stabilization (1)
- stable transfection (1)
- star-shaped domain (1)
- static and fatigue tests (1)
- static instrumentation (1)
- static software structure (1)
- statin (1)
- stationary sensing (1)
- stationär (1)
- stationärer Einzelhandel (1)
- statistics (1)
- steel fibre (1)
- steel fibre reinforced concrete (1)
- sternpunktisoliertes Netz (1)
- stochastic arbitrage (1)
- stochastic coefficient (1)
- stochastic optimal control (1)
- stochastic processes (1)
- stochastische Arbitrage (1)
- stop- and play-operator (1)
- storage (1)
- strain localization (1)
- stratifolds (1)
- stream pollution (1)
- streams (1)
- streptococcus (1)
- streptococcus pneumoniae (1)
- structural dynamics (1)
- structural summary (1)
- structural tensors (1)
- strukturelle Verformung (1)
- students (1)
- städtebauliche Projektentwicklung (1)
- städtische Regionen (1)
- subgradient (1)
- subjective evaluation (1)
- subjectivity (1)
- sugar transporter (1)
- sulfonate recognition (1)
- sulfonic (1)
- sulfur dioxide (1)
- summer smog (1)
- superposed fluids (1)
- surface measures (1)
- surface morphology (1)
- surface pre-treatment (1)
- surface tension (1)
- surface-hopping (1)
- surrender options (1)
- surrogate algorithm (1)
- suzuki coupling (1)
- swelling behavior (1)
- swelling pressure (1)
- swelling strain (1)
- symbolic simulation (1)
- symmetric carboxylate stretch vibrations (1)
- symmetry (1)
- synchronization (1)
- synchronization of cyclic processes (1)
- synchronous (1)
- system architecture (1)
- syzygies (1)
- tabletop (1)
- tail dependence coefficient (1)
- taktile und haptische Wahrnehmung (1)
- target group (1)
- target sensitivity (1)
- task sequence (1)
- tax (1)
- technische und berufliche Aus- und Weiterbildung Lehrer lernen (1)
- technology mapping (1)
- teilgesättigt (1)
- tensions (1)
- tensor (1)
- tensorfield (1)
- terrain rendering (1)
- tetrachlorodibenzo-p-dioxin (1)
- tetrachlorodibenzodioxins (1)
- tetragonale Verzerrung (1)
- texture orientation (1)
- therapy (1)
- thermal analysis (1)
- thermisch (1)
- thermodynamic model (1)
- thermoplastische Bandhalbzeuge (1)
- thermoplastische Elastomere (TPE) (1)
- thermoplastische Verbundwerkstoffe (1)
- thiazolidinediones (1)
- thiazolium (1)
- thiol-disulfide exchange (1)
- three membered metall cluster (1)
- tight junction (1)
- time delays (1)
- time of flight mass spectrometry (1)
- time utility functions (1)
- time-dependent (1)
- time-varying flow fields (1)
- timeliness (1)
- tipping points (1)
- tooth-coil (1)
- top-down (1)
- topoisomerases (1)
- topological asymptotic expansion (1)
- topological incongruence (1)
- topological insulator (1)
- topologische Inkongruenz (1)
- toric geometry (1)
- torische Geometrie (1)
- total suspended solids (1)
- total variation (1)
- total variation spatial regularization (1)
- touch surfaces (1)
- toxic equivalency factor (TEF) concept (1)
- toxicity (1)
- tracking (1)
- trade-off (1)
- traffic safety (1)
- transactions (1)
- transfection (1)
- transfer film (1)
- transfer hydrogenation (1)
- transient (1)
- transition metal (1)
- transition metal complexes (1)
- transition metals (1)
- translation contract (1)
- translation invariant spaces (1)
- translinear circuits (1)
- translineare Schaltungen (1)
- transmission conditions (1)
- transmission (1)
- transport (1)
- transport association (1)
- transmit signal processing (1)
- tribology (1)
- tribologische Systeme (1)
- trimethylsilyl (1)
- tropical ecology (1)
- tropical geometry (1)
- tropical mountain reservoirs (1)
- tropical rainforest (1)
- tropischer Regenwald (1)
- trunk control (1)
- truss model (1)
- turbocharging (1)
- turning (1)
- two component system (1)
- ultrasonic welding (1)
- ultrasound signals (1)
- unbeschränktes Potential (1)
- unbounded potential (1)
- ungesättigte Fettsäuren (1)
- unimodular certification (1)
- unimodularity (1)
- universal (1)
- unnatürliche Aminosäuren (1)
- unsaturated soils (1)
- urban design (1)
- urban modelling (1)
- urban planning (1)
- urban policy (1)
- urban simulation (1)
- urban stormwater quality (1)
- urea (1)
- user interface (1)
- user-centered design (1)
- vacuum distillation (1)
- value semigroup (1)
- valuing contracts (1)
- variable neighborhood search (1)
- variable selection (1)
- variational methods (1)
- variational model (1)
- vector bundles (1)
- vector field visualization (1)
- vector spherical harmonics (1)
- vectorfield (1)
- vectorial wavelets (1)
- vehicle routing (1)
- verlaufsorientiert (1)
- vermaschtes Netz (1)
- vertical velocity (1)
- vertikale Elementfugen (1)
- vertikale Geschwindigkeiten (1)
- vesicle (1)
- viral vector (1)
- viraler Vektor (1)
- virtual reality (1)
- virtual training (1)
- virus-transmission (1)
- viscoelastic fluids (1)
- viscoelastic modeling (1)
- viscosity model (1)
- visual analytics (1)
- visual structure (1)
- volatility arbitrage (1)
- voltage sensitive dye (1)
- vortex separation (1)
- w/z-Wert (1)
- wahrscheinlichkeitsbasierte Modellverifikation (1)
- wastewater infrastructure (1)
- wastewater treatment (1)
- water (1)
- waveguides (1)
- wavelength multiplex (1)
- weak localization (1)
- wear (1)
- wearable systems (1)
- weighing (1)
- weighted finite-state transducers (1)
- weißer Phosphor (1)
- weld (1)
- well-posedness (1)
- wheel side-slip estimation (1)
- whole genome microarray analysis (1)
- wine (1)
- wireless communications system (1)
- wireless networks (1)
- wireless sensor network (1)
- wireless signal (1)
- wirklichkeitsnahe numerische Simulation (1)
- wissenschaftliche Weiterbildung (1)
- worker assistance (1)
- worst-case (1)
- worst-case scenario (1)
- woven fabric reinforced polypropylene (1)
- wässrige Lösung (1)
- xai (1)
- zeitabhängige Strömungen (1)
- zementgebundene Feinkornsysteme (1)
- zerstörungsfreie Qualitätssicherung (1)
- zinc (1)
- zoledronic acid (1)
- zytogenetik (1)
- Ähnlichkeit (1)
- Äquisingularität (1)
- ÖPNV (1)
- ÖPNV-Beschleunigung (1)
- Ödometer (1)
- Öffentlicher Personennahverkehr (1)
- Öffentlichkeitsbeteiligung (1)
- Ökodesign (1)
- Ökologie (1)
- Ökonometrie (1)
- Ökosystem (1)
- Ökotoxizität (1)
- Örtliches Konzept (1)
- Östrogene (1)
- Überdeckung (1)
- Überflutungsrisiko (1)
- Überflutungsvorsorge (1)
- Übergangsbedingungen (1)
- Übergangsmetall (1)
- Übergangsmetallcyclopentadienyl-Komplexe (1)
- Übergangsmetallkomplexe (1)
- Übersetzung (1)
- ältere Menschen (1)
- ästhetische Differenz (1)
- überregionale Partnerschaft (1)
Faculty / Organisational entity
- Kaiserslautern - Fachbereich Chemie (389)
- Kaiserslautern - Fachbereich Maschinenbau und Verfahrenstechnik (370)
- Kaiserslautern - Fachbereich Mathematik (292)
- Kaiserslautern - Fachbereich Informatik (235)
- Kaiserslautern - Fachbereich Biologie (134)
- Kaiserslautern - Fachbereich Bauingenieurwesen (94)
- Kaiserslautern - Fachbereich Elektrotechnik und Informationstechnik (92)
- Kaiserslautern - Fachbereich ARUBI (71)
- Kaiserslautern - Fachbereich Sozialwissenschaften (64)
- Kaiserslautern - Fachbereich Raum- und Umweltplanung (37)
Scientific evidence on the "(in)effectiveness of punishment" is the subject of controversial public debate. The present thesis hypothesizes that its reception is subject to motivated bias, specifically that it poses a threat to moral values, as a result of which scientific findings are discredited and rejected. To this end, the moral foundations of personal punishment attitudes were first examined systematically. This showed that an offense-specific analysis (differentiated by crime category) is required. Regression analyses confirmed the qualitative differences in the underlying moral values. The manner of operationalization proved decisive for the "moral content" of individual punishment attitudes.
A threat to moral values posed by a scientific article arguing against the effectiveness of harsh (offense-specific) punishment could not be demonstrated. Further moderation analyses, however, revealed a motivated bias in the reception of the article that was independent of personal values: participants who believed in the effectiveness of punishment rated an article refuting this belief worse than those who read an article confirming it. These biases are not rooted in a moral threat, however, but can instead be understood as a strategy for resolving dissonance when confronted with attitude-inconsistent content. The core of the problem is evidently not the personal values examined here and the threat to them, but the large discrepancy between public and scientific opinion. This finding is significant in that it creates an awareness of the risks (e.g. of a boomerang effect) involved in communicating such content.
Taken together, the results contribute to a deeper understanding of personal punishment attitudes and provide starting points for developing strategies for the persuasive communication of criminological findings on the usefulness of punishment.
Besides the great variability of its aroma profile, which arises among other factors from differences in soil composition, Riesling is valued, especially in Germany, for its cold tolerance and adaptability. It is also considered a wine capable of aging; however, strong sun exposure of the vine and long or warm storage promote the formation of 1,1,6-trimethyl-1,2-dihydronaphthalene (TDN). This carotenoid-derived aroma compound causes the so-called "petrol note", which belongs to the variety-typical bouquet of Riesling above all in warmer growing regions. German Riesling wines, by contrast, are predominantly characterized by an acid-driven, fruity character, in which the occurrence of a petrol note, especially in young wines, is perceived as an inappropriate off-flavor.
The aim of the present work was therefore to evaluate the sensory relevance of TDN and to implement measures capable of reducing the concentrations of free and bound TDN and thus preventing the occurrence of the petrol note.
To this end, Chapter 6.1 first determined the sensitivity of consumers and trained panelists to TDN, as well as the concentration above which the petrol note causes consumers to reject a wine. While trained panelists could distinguish Riesling wines from a TDN content of 2.3 µg/L, the perception threshold of 156 consumers, at 14.7 µg/L, was several times higher and was additionally influenced by the participants' gender. The petrol note led to rejection of the wine at TDN contents of 60 µg/L for one-year-old and 91 µg/L for eight-year-old Riesling. The concentration of free TDN in 261 Riesling wines from three different wine competitions exceeded the perception threshold of trained panelists in about half of the wines, whereas the consumer threshold was exceeded by only 15% of the wines. At the same time, none of the wines exceeded the rejection threshold.
By evaluating the instrumental analysis parameters in Chapter 6.2, a method was developed for the analysis of free TDN and further aroma compounds that captures not only the TDN concentrations but also allows a comprehensive quality assessment of the experimental wines. In parallel, a rapid method for quantifying bound TDN and vitispirane was implemented in order to assess the effectiveness of the viticultural and oenological practices carried out in this work with respect to the TDN potential.
Chapters 6.3 and 6.4 describe viticultural measures investigated in multi-year studies for their suitability for reducing TDN concentrations. While no significant differences across vintages were observed for wines made from berries of different sizes, varying the rootstock reduced the bound TDN content by about 30%. A further series of experiments started from eight different Riesling clones on the same rootstock, which were subsequently analyzed for their TDN content. Clear differences were found in the disposition of some clones toward higher contents of bound TDN. A positive correlation emerged between the looseness of the grape clusters and the amount of bound TDN in the resulting wines: the more compact the cluster, the less bound TDN and bound vitispirane was formed. The higher sun exposure of the berries that causes this effect also influenced the bound TDN and vitispirane contents of wines harvested from vines that were defoliated at different times and with varying intensity. Both maximal defoliation in the cluster zone and leaf removal one month after flowering increased the concentration of bound TDN and vitispirane by about 50%. Defoliation at flowering or at véraison, which serves to regulate yield and grape health, caused no increase compared with the non-defoliated control.
As set out in Chapter 6.5, high pressing pressure during juice extraction and a low nitrogen content of the must result in a 50–100% increase in bound TDN. In several series of experiments, higher acid contents during storage caused not only a higher release rate of TDN but also an accelerated degradation of other aroma compounds such as esters, β-damascenone, and linalool. By contrast, a low pH during fermentation had little influence on yeast metabolism and the aroma compounds it produces. Raising the fermentation temperature from 12 to 24 °C, however, increased honey- and petrol-like notes in the Riesling wines. The use of different yeast strains led to bound TDN concentrations varying between 70 and 147 µg/L, depending on strain and vintage. Two of the nine yeasts investigated produced wines with up to 40% lower bound TDN contents in musts with high nitrogen content, while three other yeasts were better suited for use in nutrient-poor musts. During storage of the wines, the storage temperature played the decisive role for the free TDN content, followed by the closure material and the bottle orientation.
Using suitable filter materials, described in Chapter 7, the content of free TDN was reduced by up to 80% without significantly affecting most of the other aroma compounds.
This work thus provides the wine industry with a versatile catalogue of measures suited to meeting the challenges of progressing climate change and to securing the outstanding position of Riesling in Germany.
Direct entry into phosphorus chemistry is possible by reacting a crude solution of [Cp=Ru(CO)2H] (1) (Cp= = C5H3(SiMe3)2) with P4. This yields the complex [Cp=Ru(CO)2PH2] (6a) with a free phosphanido group, which cannot be isolated itself but whose existence is unambiguously established spectroscopically and through subsequent reactions. The lone pair of the phosphorus atom can be complexed both by [M(CO)5(thf)] (M = Cr, Mo, W) (7, 8, 9) and by [Cp*Re(CO)2(thf)] (14) upon loss of their thf ligands. [{Cp*(OC)2Re}2(µ-CO)] (17), formed as a by-product in the reaction with 14, was characterized by X-ray structure analysis for the first time. Cothermolysis of [{Cp=Ru(CO)2}2] (4) with P4 affords [Cp=Ru(η5-P5)] (18) and [{Cp=Ru}2(µ-η2:2-P2)2] (19), both of which were characterized by X-ray crystallography. 18 proves to be the first pentaphosphametallocene exhibiting an almost ideally eclipsed conformation. In the dinuclear complex 19, a group 8 complex with two separate P2 bridging ligands was unambiguously established. Cothermolysis of 4 with [Cp*Fe(η5-P5)] yields, besides 18 and 19, the homo- and heterotrimetallic clusters [{Cp=Ru}n{Cp*Fe}3-nP5] (n = 1, 2, 3) with distorted triangulated-dodecahedral framework structures. The X-ray structure analyses of [{Cp=Ru}3P5] (24), [{Cp=Ru}2{Cp*Fe}P5] (25), and [{Cp*Fe}2{Cp=Ru}P5] (26) show that the metal fragments {Cp*Fe} and {Cp=Ru} can be exchanged with almost no effect on the framework structure. Comparison with the Cp derivatives shows the near identity of the compounds. In 26 the position of the Cp= ligand at the ruthenium atom is retained after incorporation of an iron fragment, whereas in [{Cp*Fe}2{CpRu}P5] (26a) the position of the Cp ligand changes. Photolysis of [Cp=Ru(CO)2H] (1) in THF yields the spectroscopically identified compounds [{Cp=Ru(CO)}2(µ-H)2] (28), [{Cp=Ru(CO)}2{Cp=Ru(CO)H}] (29), and [{Cp=Ru}4(µ3-CO)4] (30).
Compound 28, obtained in small amounts, was identified as a dimeric structure with two bridging hydrogen atoms. For complex 29, a structure as a triangular metal ring was postulated, very similar to compounds 33 and 34. 29 is a 46 VE complex with a Ru-Ru double bond in the ring. Compound 30 with a tetrahedral Ru4 framework can be prepared in high yield. 30 is air- and water-stable and extremely unreactive. If the photolysis of 1 is carried out in hexane as a non-coordinating solvent, the complexes [{Cp=Ru(µ-CO)}2{Cp=Ru(CO)H}] (33), [{{{Cp=Ru}2(µ-CO)}{Cp=RuH}}(µ3-CO)2] (34), and the unknown compound 35 are obtained in addition to 28 and 29. The X-ray structure analyses of 33 and 34 reveal two electron-deficient, triangular Ru3 clusters with 46 VE and an M-M double bond in the ring, both of which conform to the magic-number rule. 33 reacts readily with P4 to give [{Cp=Ru}(µ-η4:1:1-P4){Ru(CO)Cp=}] (40). The X-ray structure analysis of 40 shows that Ru1 is bent 25.3° out of the plane of the four phosphorus atoms P1-P4, so no planar tetraphospharuthenole is realized. The top view of the P4Ru plane nevertheless clearly reveals the tendency to adopt the constitution of a pentagonal pyramid.
Destructive diseases of the lung such as lung cancer or fibrosis are still often lethal. In the case of fibrosis of the liver as well, the only possible cure is transplantation.
In this thesis, we investigate 3D synchrotron radiation micro computed tomography (SR\( \mu \)CT) images of capillary blood vessels in mouse lungs and livers. The specimens show so-called compensatory lung growth as well as different stages of pulmonary and hepatic fibrosis.
During compensatory lung growth, after resection of part of the lung, the remaining part compensates for this loss by extending into the empty space. This process is accompanied by active vessel growth.
In general, the human lung cannot compensate for such a loss. Thus, understanding this process in mice is important to improve treatment options for diseases like lung cancer.
In case of fibrosis, the formation of scars within the organ's tissue forces the capillary vessels to grow to ensure blood supply.
Thus, the process of fibrosis as well as compensatory lung growth can be accessed by considering the capillary architecture.
As preparation of 2D microscopic images is faster, easier, and cheaper compared to SR\( \mu \)CT images, they currently form the basis of medical investigation. Yet, characteristics like direction and shape of objects can only properly be analyzed using 3D imaging techniques. Hence, analyzing SR\( \mu \)CT data provides valuable additional information.
For the fibrotic specimens, we apply image analysis methods well known from materials science. We measure the vessel diameter using the granulometry distribution function and describe the inter-vessel distance by the spherical contact distribution. Moreover, we estimate the directional distribution of the capillary structure. All features turn out to be useful for characterizing fibrosis based on the deformation of capillary vessels.
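The spherical contact distribution mentioned above can be estimated directly from a binary 3D image via a Euclidean distance transform. The following is a minimal sketch, assuming NumPy and SciPy are available; the function name and the toy volume are illustrative, not the thesis' code:

```python
import numpy as np
from scipy import ndimage

def spherical_contact_distribution(mask, radii):
    """Empirical spherical contact distribution H(r) of a binary structure.

    mask : 3D boolean array, True on the vessel phase.
    H(r) is the fraction of background voxels whose Euclidean distance
    to the nearest vessel voxel is at most r.
    """
    dist = ndimage.distance_transform_edt(~mask)  # distance to the vessels
    background = dist[~mask]
    return np.array([(background <= r).mean() for r in radii])

# Toy example: a single straight "vessel" along one axis of a small volume
vol = np.zeros((20, 20, 20), dtype=bool)
vol[10, 10, :] = True
H = spherical_contact_distribution(vol, radii=[1, 2, 5, 15])
```

By construction H is a nondecreasing function of r; for fibrotic specimens with deformed capillaries, its shape reflects how far the tissue lies from the nearest vessel.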
It is already known that the most efficient mechanism of vessel growth forms small torus-shaped holes within the capillary structure, so-called intussusceptive pillars. Analyzing their location and number strongly contributes to the characterization of vessel growth and is hence of great interest for all three applications. This thesis provides the first algorithm to detect intussusceptive pillars in SR\( \mu \)CT images. After segmentation of the raw image data, our algorithm works automatically and allows for a quantitative evaluation of large amounts of data.
The analysis of SR\( \mu \)CT data using our pillar algorithm as well as the granulometry, spherical contact distribution, and directional analysis extends the current state of the art in medical studies. Although it is not possible to replace certain 3D features by 2D features without losing information, our results could be used to identify 2D features that approximate the 3D findings reasonably well.
The various uses of fiber-reinforced composites, for example in the enclosures of planes, boats, and cars, generate the demand for a detailed analysis of these materials. The final goal is to optimize fibrous materials by means of “virtual material design”. New fibrous materials are virtually created as realizations of a stochastic model and evaluated with physical simulations. In that way, materials can be optimized for specific use cases without constructing expensive prototypes or performing mechanical experiments. In order to design a practically fabricable material, the stochastic model is first adapted to an existing material and then slightly modified. The virtual reconstruction of the existing material requires precise knowledge of the geometry of its microstructure. The first part of this thesis describes a fiber quantification method based on local measurements of the fiber radius and orientation. The combination of a sparse chord length transform and inertia moments leads to an efficient and precise new algorithm. It outperforms existing approaches in that it can treat different fiber radii within one sample, with high precision in continuous space and comparatively fast computing time. This local quantification method can be applied directly to gray value images by adapting the directional distance transforms to gray values. In this work, several approaches of this kind are developed and evaluated. Further characterization of the fiber system requires a segmentation of each single fiber. Using basic morphological operators with specific structuring elements, it is possible to derive, for each pixel, a probability that it belongs to a fiber core in a region without overlapping fibers. Tracking high probabilities leads to a partial reconstruction of the fiber cores in non-crossing regions.
These core parts are then reconnected across critical regions if they fulfill certain conditions ensuring that they belong to the same fiber. In the second part of this work, we develop a new stochastic model for dense systems of non-overlapping fibers with a controllable level of bending. Existing approaches in the literature each have at least one weakness: they cannot achieve high volume fractions, produce non-overlapping fibers, or control the bending or the orientation distribution. This gap is bridged by our stochastic model, which operates in two steps. Firstly, a random walk with the multivariate von Mises-Fisher orientation distribution defines bent fibers. Secondly, a force-biased packing approach arranges them in a non-overlapping configuration. Furthermore, we provide estimators for all parameters needed to fit this model to a real microstructure. Finally, we simulate the macroscopic behavior of different microstructures to derive their mechanical and thermal properties. This part is mostly supported by existing software and serves as a summary of physical simulation applied to random fiber systems. The application to a glass fiber reinforced polymer proves the quality of the reconstruction by our stochastic model, as the effective properties match for both the real microstructure and the realizations of the fitted model. This thesis includes all steps to successfully perform virtual material design on various data sets. With novel and efficient algorithms it contributes to the science of analysis and modeling of fiber-reinforced materials.
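The first step of the model, bent fibers generated as a random walk with von Mises-Fisher distributed segment directions, can be sketched as follows. This is a simplified illustration using the standard vMF inversion sampler on the sphere; the parameter names and the polyline representation are our assumptions, not the thesis' implementation:

```python
import numpy as np

def sample_vmf(mu, kappa, rng):
    """One draw from a von Mises-Fisher distribution on the unit sphere
    with mean direction mu and concentration kappa (inversion sampler)."""
    u = 1.0 - rng.random()  # u in (0, 1]
    # cosine of the angle between the sample and mu
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = 2.0 * np.pi * rng.random()
    v = np.array([np.cos(phi), np.sin(phi), 0.0])  # tangent direction at pole
    x = np.sqrt(max(1.0 - w * w, 0.0)) * v + w * np.array([0.0, 0.0, 1.0])
    pole = np.array([0.0, 0.0, 1.0])
    if np.allclose(mu, pole):
        return x
    if np.allclose(mu, -pole):
        return -x
    # Rodrigues rotation taking the pole onto mu
    axis = np.cross(pole, mu)
    axis /= np.linalg.norm(axis)
    ang = np.arccos(np.clip(mu @ pole, -1.0, 1.0))
    x = (x * np.cos(ang) + np.cross(axis, x) * np.sin(ang)
         + axis * (axis @ x) * (1.0 - np.cos(ang)))
    return x / np.linalg.norm(x)

def random_walk_fiber(start, direction, step, n_steps, kappa, seed=0):
    """Bent fiber as a polyline: each segment direction is a vMF sample
    around the previous one; larger kappa gives a straighter fiber."""
    rng = np.random.default_rng(seed)
    pts = [np.asarray(start, dtype=float)]
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        d = sample_vmf(d, kappa, rng)
        pts.append(pts[-1] + step * d)
    return np.array(pts)

fiber = random_walk_fiber((0, 0, 0), (0, 0, 1), step=1.0, n_steps=50, kappa=100.0)
```

The concentration parameter kappa plays the role of the controllable bending level: a force-biased packing step (not shown) would then push such fibers apart until they no longer overlap.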
The fifth generation (5G) of wireless networks promises to bring new advances, such as a huge increase in mobile data rates, a plunge in communications latency, and an increase in the quality of experience perceived by users, that can cope with the ever-increasing demand in Internet traffic. However, the high capital and operational expenditure (CAPEX/OPEX) of the new 5G network and the lack of a killer application hinder its rapid adoption. In this context, Mobile Network Operators (MNOs) have turned their attention to the following idea: opening up their infrastructure so that vertical businesses can leverage the new 5G network to improve their primary businesses and develop new ones. However, deploying multiple isolated vertical applications on top of the same infrastructure poses unique challenges that must be addressed. In this thesis, we provide critical contributions to developing 5G networks that accommodate different vertical applications in an isolated, flexible, and automated manner. The contributions of this thesis span three main areas: (i) the development of an integrated fronthaul and backhaul network, (ii) the development of a network slicing overbooking algorithm, and (iii) the development of a method to mitigate the noisy neighbor problem in a vRAN deployment.
Photolysis of [{CpRRu(CO)2}2] (CpR = Cp", Cp*) (1a,b) in the presence of white phosphorus yields no isolable phosphorus-containing products. If [{Cp"Ru(CO)2}2] (1a) is reacted with white phosphorus at 190 °C in decalin, the only compounds that can be separated by column chromatography are the pentaphospharuthenocene derivative [Cp"Ru(η5-P5)] (3a, 6% yield) and [Cp"2Ru2P4] (4a, 17% yield). On the basis of its spectroscopic properties, a structure analogous to the crystallographically characterized pseudo-triple-decker complexes [{CpRFe}2(µ-η4:4-P4)] (CpR = Cp" [35], Cp"' [11]) with an s-cis-tetraphosphabutadienediyl "middle deck" is proposed for 4a; two µ-η2:2-P2 ligands are, however, also conceivable. Cothermolysis of [{Cp"Ru(CO)2}2] (1a) and [Cp*Fe(η5-P5)] (2b) gives a broad product spectrum. While [Cp"Ru(η5-P5)] (3a), which could not be separated chromatographically from 2b, and [Cp"2Ru2P4] (4a, 6% yield) are formed in comparatively small amounts, [{Cp"Ru}3P5] (6) can be isolated in 17% yield, [{Cp"Ru}2{Cp*Fe}P5] (7) in 7% yield, [{Cp*Fe}2{Cp"Ru}P5] (8) in 22% yield, and [{Cp"Ru}3{Cp*Fe}(P2)2] (9) in 14% yield. For 9, based on the NMR spectroscopic findings, a structure analogous to [{CpFe}4(P2)2] (10) [24] is proposed, with a distorted triangulated-dodecahedral framework whose four vertices of connectivity five are occupied by three ruthenium atoms and one iron atom and whose four vertices of connectivity four are taken by two µ-η2:2:1:1-P2 units. Like 10, 9 is a cluster of the hypercloso type (n+1 = 8 skeletal electron pairs). [30,31] X-ray structure analyses show that 6, 7, and 8 likewise possess distorted triangulated-dodecahedral framework structures.
This also clarifies the structure of the previously synthesized and spectroscopically characterized complexes [{CpRFe}3P5] (CpR = Cp*, Cp*') (11b,c) [8], whose NMR data indicate a close relationship in particular to 6. Compared with 9 and 10 [24], which have an M4P4 framework (M = transition metal atom), in the clusters 6, 7, 8, and 11b,c [8] with an M3P5 framework a 13 VE metal complex fragment (connectivity five; 1 skeletal electron) is formally replaced by a phosphorus atom (3 skeletal electrons), thereby completing the transition to the closo structure type of the triangulated dodecahedron (n+1 = 9 skeletal electron pairs) [30,31]. The clusters 6, 7, 8, and 11b,c [8] contain a hitherto unknown coordination mode of the P5 unit. Thermal reaction of [{Cp*Ru(CO)2}2] (1b) with [Cp*Ru(η5-P5)] (3b) gives the trinuclear complex [{Cp*Ru}3(P4)(P)] (17b, 62% yield) as the main product. [Cp*2Ru2P4] (4b, 5% yield) can be isolated by column chromatography as the only by-product. By analogy with the crystallographically characterized [{Cp*'Fe}3(P4)(P){Mo(CO)5}] (18c) [8], the NMR data of 17b point to a cubane-like structure in which the five phosphorus atoms are present as an isotetraphosphide unit and a single phosphorus atom. The total of 64 valence electrons is consistent with three metal-metal bonds [31]. For 4b, as for the Cp" derivative 4a discussed above, a pseudo-triple-decker structure with an s-cis-tetraphosphabutadienediyl "middle deck" or with two µ-η2:2-P2 ligands has to be considered. Thermolysis of [{Cp*'Ru(CO)2}2] (1c) with 3b gives the complexes [Cp*'nCp*2-nRu2P4] (n = 0, 1, 2) (4b,d,c) and [{Cp*'Ru}n{Cp*Ru}3-n(P4)(P)] (n = 1, 2, 3) (17e,d,c), each as inseparable mixtures of analogously built compounds that differ only in the ratio of the two cyclopentadienyl ligands Cp*' and Cp*.
To a small extent, cyclo-P5 transfer is also observed in this reaction, forming the literature-known [Cp*'Ru(η5-P5)] [7,10], which is obtained as a mixture with unreacted 3b. Reaction with [W(CO)5(thf)] allows complexation of the triangulated-dodecahedral cluster [{Cp*Fe}2{Cp"Ru}P5] (8) to the monoadduct [{Cp*Fe}2{Cp"Ru}P5{W(CO)5}] (19), whereas the attempted complexation of the likewise triangulated-dodecahedral cluster [{Cp*'Fe}3P5] (11c) with [Mo(CO)5(thf)] [8] results in a framework rearrangement to the cubane-like compound [{Cp*'Fe}3(P4)(P){Mo(CO)5}] (18c). The structure proposed for 19 is based on its 31P NMR spectrum. Attempts to oxidize 8 with yellow sulfur at room temperature lead to unspecific decomposition or secondary reactions of 8. Exploratory experiments with gray selenium as a milder oxidant indicate that, under suitable reaction conditions, simple selenation occurs at the terminal phosphorus atom of the P5 ligand of 8. In the reaction of [{Cp"Ru}3{Cp*Fe}(P2)2] (9) with yellow sulfur, up to three phosphorus atoms can be oxidized, depending on the reaction conditions. Complete sulfurization as in the case of [{CpFe}4(P2S2)2] [24] is not observed. The sulfurization is regioselective: for a given degree of sulfurization, only one product is obtained in each case. The structures proposed for the clusters 21-23 are derived from their 31P NMR spectroscopic data.
[Semi-]dry in the under-vine area?
“Investigations of meteorological-hydrological measurement variables in viticulture as an adaptation strategy to climate change and for sustainable water use of Vitis vinifera [cv. Riesling].”
Christian Ihrig & Sascha Henninger
RPTU Kaiserslautern
Human-induced climate change affects both long-term climate processes and current, short-term weather events in all regions of the world. It manifests itself in a multitude of phenomena that differ between climate zones and bring different consequences. This research deals with the water balance of grapevines in the context of recent climate change. The aim of this project is to use meteorological-hydrological measurement variables to generate an adaptation strategy that can be transferred to all wine-growing regions in Rhineland-Palatinate, giving winegrowers the opportunity to make water available to the vine in a natural way.
Due to the increase in abiotic damage (e.g. precipitation), changes in the growing season, and the spread of invasive pests, the vulnerability of the vineyard ecosystem is increasing noticeably. The growing frequency of extreme weather events (heat waves and droughts) forces winegrowers to irrigate their vineyards in the long term. Large amounts of water are already being pumped into some vineyard regions, which, in view of falling groundwater levels, is a fatal mistake in the long run. Resource-conserving management of the water balance should therefore be placed at the center of viticultural research. Winegrowers are interested in regional and local climatic solutions and adaptation strategies in order to reduce risks to the crop and to respond to the local climatic effects of climate change. To counter this risk and minimize production losses, the adaptive capacity of the vines' water balance must be strengthened. Accordingly, the microclimate in the Rheinhessen wine-growing region is investigated using a Scholander pressure chamber. Determining the water status for precise irrigation control of grapevines via the pre-dawn leaf water potential (Ψpd) and the midday stem water potential (Ψstem) has proven successful. Physiological processes such as the stomatal conductance of the leaf guard cells, vegetative growth, and photosynthesis are directly or indirectly coupled to Ψpd and Ψstem. Moreover, the water balance can be improved considerably by a soil management system adapted to dry sites, for example full-surface soil coverage with wood chips. In addition, the microclimate in the vineyard is co-determined by the canopy structure, which ensures increased photosynthetic performance of the canopy along with optimal aeration and light exposure.
In practical viticulture, this is realized via the height of the canopy. To offer an alternative to herbicides in the under-vine area in view of the impending glyphosate ban, the agricultural machinery industry is already developing alternative implements that provide a way to counteract weed growth in the under-vine area.
It is therefore of particular interest to analyze how soil coverage in the under-vine area differs from full-surface coverage and from moderate drip irrigation on flat terrain. Furthermore, this project tests options for reducing water consumption and delaying ripening (reducing Botrytis infection, extending the ripening period, avoiding excessive alcohol levels) by means of a lower canopy height for Riesling on flat terrain. Four experimental variants serve to obtain distinct and unambiguous results (V1: drip irrigation; V2: under-vine wood-chip coverage; V3: full-surface wood-chip coverage; V4: control).
A building-block model reveals new insights into the biogenesis of yeast mitochondrial ribosomes
(2020)
Most of the mitochondrial proteins in yeast are encoded in the nuclear genome, synthesized by cytosolic ribosomes, and imported via TOM and TIM23 into the matrix or other subcompartments of mitochondria. The mitochondrial DNA in yeast, however, also encodes a small set of 8 proteins, most of which are hydrophobic membrane proteins that build core components of the OXPHOS complexes. They are synthesized by mitochondrial ribosomes, which are descendants of bacterial ribosomes and still share some similarities with them. On the other hand, mitochondrial ribosomes underwent various structural and functional changes during evolution that specialized them for the synthesis of the mitochondrially encoded membrane proteins. The mitoribosome contains mitochondria-specific ribosomal proteins and has replaced the bacterial 5S rRNA by mitochondria-specific proteins and rRNA extensions. Furthermore, the mitoribosome is tethered to the inner mitochondrial membrane to facilitate co-translational insertion of newly synthesized proteins. Consequently, the assembly process of mitoribosomes also differs from that of bacteria and is to date not well understood.
Therefore, this thesis set out to investigate the biogenesis of mitochondrial ribosomes in yeast. To this end, a strain was generated in which the gene of the mitochondrial RNA polymerase RPO41 is under the control of an inducible GAL10 promoter. Since the scaffold of ribosomes is built by ribosomal RNAs, depletion of the RNA polymerase subsequently leads to a loss of mitochondrial ribosomes. Reinduction of Rpo41 initiates the assembly of new mitoribosomes, making this strain an attractive model to study mitoribosome biogenesis.
Initially, the effects of Rpo41 depletion on cellular and mitochondrial physiology were investigated. Upon Rpo41 depletion, growth on respiratory glycerol medium was inhibited. Furthermore, the mitochondrial ribosomal 21S and 15S rRNAs were diminished and mitochondrial translation was almost completely absent. Mitochondrial DNA was also strongly reduced, since mtDNA replication requires RNA primers synthesized by Rpo41.
Next, the effect of reinduction of Rpo41 on mitochondria was tested. Time-course experiments showed that mitochondrial translation can partially recover from 48h of Rpo41 depletion within a timeframe of 4.5h. Sucrose gradient sedimentation experiments further showed that the mitoribosomal constitution was comparable to wild-type control samples during the 4.5h time course of reinduction, suggesting that ribosome assembly is not fundamentally altered in Gal-Rpo41 mitochondria. In addition, the depletion time was found to be critical for the recovery of mitochondrial translation and mitochondrial RNA levels: after 36h of Rpo41 depletion, the rRNA levels and mitochondrial translation recovered to almost 100%, but only within a time course of 10h.
Finally, mitochondria isolated from Gal-Rpo41 cells at different timepoints of reinduction were used for complexome profiling, and the assembly of mitochondrial protein complexes was investigated. First, the steady-state conditions and the assembly process of the mitochondrial respiratory chain complexes were monitored. The individual respiratory chain complexes and the supercomplexes of complex III, complex IV, and complex V were observed, and they recovered from Rpo41 depletion within 4.5h of reinduction. Complexome profiles of the mitoribosomal small and large subunits revealed subcomplexes of mitoribosomal proteins that were assumed to form prior to their incorporation into assembly intermediates. The complexome profiles after reinduction indeed showed the formation of these subcomplexes before formation of the fully assembled subunit. In the mitochondrial LSU, one subcomplex builds the membrane-facing protuberance and a second subcomplex forms the central protuberance. In contrast to the preassembled subcomplexes, proteins involved in early assembly steps were exclusively found in the fully assembled subunit. Proteins that assemble at the periphery of the mitoribosome during intermediate and late assembly steps were found in soluble form, suggesting a pool of unassembled proteins that supplies assembly intermediates.
Taken together, the findings of this thesis suggest a previously unknown building-block model for mitoribosome assembly in which characteristic structures of the yeast mitochondrial ribosome form preassembled subcomplexes prior to their incorporation into the mitoribosome.
A Consistent Large Eddy Approach for Lattice Boltzmann Methods and its Application to Complex Flows
(2015)
Lattice Boltzmann Methods have proven to be promising tools for solving fluid flow problems. This is related to the advantages of these methods, among others the simplicity in handling complex geometries and the high efficiency in calculating transient flows. Lattice Boltzmann Methods are mesoscopic methods based on discrete particle dynamics, in contrast to conventional Computational Fluid Dynamics methods, which are based on the solution of the continuum equations. Calculations of turbulent flows in engineering generally depend on modeling, since resolving all turbulent scales is, and will remain in the near future, far beyond computational possibilities. One of the most promising modeling approaches is the large eddy simulation, in which the large, inhomogeneous turbulence structures are directly computed and the smaller, more homogeneous structures are modeled.
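The mesoscopic update underlying Lattice Boltzmann Methods, a local BGK collision followed by streaming of the discrete particle populations, can be illustrated with a minimal D2Q9 sketch. This is a textbook-style toy on a periodic grid; it is not part of the SAM-Lattice code described later:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and their weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Maxwellian expanded to second order in the flow velocity u."""
    cu = np.einsum('qd,xyd->qxy', C, u)        # c_q . u at every node
    usq = np.einsum('xyd,xyd->xy', u, u)
    return rho * W[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One BGK collision + streaming step on a fully periodic grid."""
    rho = f.sum(axis=0)                                  # density moment
    u = np.einsum('qd,qxy->xyd', C, f) / rho[..., None]  # velocity moment
    f = f + (equilibrium(rho, u) - f) / tau              # BGK relaxation
    for q, (cx, cy) in enumerate(C):                     # stream along c_q
        f[q] = np.roll(np.roll(f[q], cx, axis=0), cy, axis=1)
    return f

# Start from rest with a small density bump and let it relax
rho0 = np.ones((32, 32))
rho0[16, 16] += 0.1
f = equilibrium(rho0, np.zeros((32, 32, 2)))
for _ in range(100):
    f = lbm_step(f, tau=0.8)
```

Both collision and streaming conserve mass and momentum exactly, which is one reason for the method's robustness in transient calculations.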
In this thesis, a consistent large eddy approach for the Lattice Boltzmann Method is introduced. This large eddy model includes, besides a subgrid-scale model, appropriate boundary conditions for wall-resolved and wall-modeled calculations. It also provides conditions for turbulent domain inlets. For wall-modeled simulations, a two-layer wall model is derived in the Lattice Boltzmann context. Turbulent inlet conditions are achieved by means of a synthetic turbulence technique within the Lattice Boltzmann Method.
The proposed approach is implemented in the Lattice Boltzmann based CFD package SAM-Lattice, which has been created in the course of this work. SAM-Lattice is capable of calculating incompressible or weakly compressible, isothermal flows of engineering interest in complex three-dimensional domains. Special design targets of SAM-Lattice are a high degree of automation and high performance.
Validation of the suggested large eddy Lattice Boltzmann scheme is performed for pump intake flows, which have not yet been treated by LBM even though this numerical method is well suited to such vortical flows in complicated domains. In general, applications of LBM to hydrodynamic engineering problems are rare. The results of the pump intake validation cases reveal that the proposed numerical approach represents the very complex flows in the intakes both qualitatively and quantitatively. The findings provided in this thesis can serve as the basis for a broader application of LBM to hydrodynamic engineering problems.
Beamforming performs spatial filtering to preserve the signal from given directions of interest while suppressing interfering signals and noise arriving from other directions.
For example, a microphone array equipped with a beamforming algorithm can preserve the sound coming from a target speaker and suppress sounds coming from other speakers.
Beamformers have been widely used in many applications such as radar, sonar, communication, and acoustic systems.
A data-independent beamformer is one whose coefficients do not depend on the sensor signals; it normally requires less computation since the coefficients are computed only once. Moreover, because its coefficients are derived from well-defined statistical models, it produces fewer artifacts. The major drawback of this beamforming class is its limited interference suppression.
On the other hand, an adaptive beamformer is one whose coefficients depend on, or adapt to, the sensor signals. It is capable of suppressing interference better than a data-independent beamformer, but it suffers from either excessive distortion of the signal of interest or reduced noise suppression when the update rate of the coefficients does not keep up with the rate at which the noise model changes. Besides, it is computationally intensive since the coefficients must be updated frequently.
In acoustic applications, the bandwidth of the signals of interest extends over several octaves, yet we expect the characteristic of the beamformer to be invariant over the bandwidth of interest. This can be achieved by so-called broadband beamforming.
Since the beam pattern of conventional beamformers depends on the frequency of the signal, it is common to use a dense and uniform array for broadband beamforming to jointly guarantee several essential properties, such as frequency independence, low sensitivity to white noise, a high directivity factor, or a high front-to-back ratio. In this dissertation, we mainly focus on sparse arrays, whose aim is to use fewer sensors while simultaneously ensuring several important performance properties of the beamformer.
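The frequency dependence of a conventional beamformer's beam pattern, the root problem that broadband designs address, can be made concrete with a delay-and-sum beamformer on a uniform linear array. The following is a small sketch; the array geometry and parameter values are illustrative:

```python
import numpy as np

def beam_pattern(n_mics, d, freq, look_deg, theta_deg, c=343.0):
    """Far-field beam pattern magnitude of a delay-and-sum beamformer on a
    uniform linear array with n_mics microphones at spacing d (meters),
    steered to look_deg, evaluated at the angles theta_deg."""
    theta = np.deg2rad(np.asarray(theta_deg, dtype=float))
    look = np.deg2rad(look_deg)
    m = np.arange(n_mics)
    # steering vectors for each candidate angle; weights steer to `look`
    a = np.exp(-2j * np.pi * freq * d * m[:, None] * np.cos(theta)[None, :] / c)
    w = np.exp(-2j * np.pi * freq * d * m * np.cos(look) / c) / n_mics
    return np.abs(w.conj() @ a)

angles = np.linspace(0, 180, 181)
# Same 8-microphone array, 4 cm spacing, steered to broadside (90 degrees):
bp_low = beam_pattern(8, 0.04, 500.0, look_deg=90, theta_deg=angles)
bp_high = beam_pattern(8, 0.04, 4000.0, look_deg=90, theta_deg=angles)
```

For the same array, the main lobe at 4 kHz is much narrower than at 500 Hz: the spatial response of the conventional design is not invariant over the bandwidth of interest, which is exactly what broadband beamforming sets out to repair.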
In the past few decades, many design methodologies for sparse arrays have been proposed and applied in a variety of practical applications.
Although good results were presented, some restrictions remain, such as a large number of sensors, a fixed designed beam pattern, limited steering ability, and high computational complexity.
In this work, two novel approaches for sparse array design, taking a hypothesized uniform array as a basis, are proposed: one for data-independent beamformers and the other for adaptive beamformers.
As an underlying component of the proposed methods, the dissertation introduces some new insights into the uniform array with broadband beamforming. In this context, a function formulating the relation between the sensor coefficients and the beam pattern over frequency is proposed. The function mainly consists of a coordinate transform and an inverse Fourier transform.
Furthermore, from the bijectivity of this function and a broadband beamforming perspective, we propose lower and upper bounds for the inter-sensor distance. Within these bounds, the function is bijective and can be utilized to design a uniform array with broadband beamforming.
For data-independent beamforming, many studies have focused on optimization procedures to seek the sparse array deployment. This dissertation presents an alternative approach to determine the location of sensors.
Starting with the weight spectrum of a virtual dense and uniform array, several techniques are used: analyzing the weight spectrum to determine the critical sensors, applying clustering to group the sensors, and selecting representative sensors for each group.
After the sparse array deployment is specified, an optimization technique is applied to find the beamformer coefficients. The proposed method saves computation time in the design phase, and its beamformer performance outperforms other state-of-the-art methods in several respects, such as higher white noise gain, higher directivity factor, or stronger frequency independence.
For adaptive beamforming, the dissertation attempts to design a versatile sparse microphone array that can be used for different beam patterns.
Furthermore, we aim to reduce the number of microphones in the sparse array while ensuring that its performance can continue to compete with a highly dense and uniform array in terms of broadband beamforming.
An irregular microphone array on a planar surface with the maximum number of distinct inter-microphone distances is proposed.
It is demonstrated that the irregular microphone array is well-suited to sparse recovery algorithms, which are used to solve underdetermined systems subject to sparse solutions. Here, the sparse solution is the sound sources' spatial spectrum that needs to be reconstructed from the microphone signals.
From the reconstructed sound sources, a method for array interpolation is presented to obtain an interpolated dense and uniform microphone array that performs well with broadband beamforming.
In addition, two alternative approaches for the generalized sidelobe canceler (GSC) beamformer are proposed: a data-independent beamforming variant and an adaptive beamforming variant. The GSC decomposes beamforming into two paths: the upper path preserves the desired signal, while the lower path suppresses it. From a beam pattern viewpoint, we propose an improvement for the GSC: instead of using the blocking matrix in the lower path to suppress the desired signal, we design a beamformer that contains nulls at the look direction and at some other directions. Both approaches are simple beamforming design methods, and they can be applied to either sparse or uniform arrays.
Lastly, a new technique for direction-of-arrival (DOA) estimation based on the annihilating filter is also presented in this dissertation.
It is based on the idea of finite rate of innovation for reconstructing a stream of Diracs: an annihilating (locator) filter is identified from a few uniform samples, and the positions of the Diracs are then related to the roots of the filter. Here, an annihilating filter is a filter that suppresses the signal: its coefficient vector is orthogonal to every frame of the signal.
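The annihilation idea can be made concrete with a minimal noiseless example (not the dissertation's DOA pipeline): a signal composed of two complex exponentials is annihilated by a length-3 filter, and the filter's roots reveal the two normalized frequencies, which play the role of the arrival directions.

```python
import numpy as np

# Noiseless illustration: x[n] is a sum of two complex exponentials (the
# "Diracs" live in the dual domain); the roots of the annihilating filter
# encode their normalized frequencies.
true_f = np.array([0.10, 0.27])          # assumed normalized frequencies
n = np.arange(16)
x = np.exp(2j * np.pi * true_f[0] * n) + 0.7 * np.exp(2j * np.pi * true_f[1] * n)

K = 2                                     # number of sources (Diracs)
# Each row [x[n], x[n-1], ..., x[n-K]] must be annihilated by the filter h.
M = np.column_stack([x[K - i: len(x) - i] for i in range(K + 1)])
h = np.linalg.svd(M)[2][-1].conj()        # null vector of M (noiseless case)
roots = np.roots(h)                       # polynomial roots encode frequencies
est = np.sort(np.mod(np.angle(roots) / (2 * np.pi), 1.0))
print(est)                                # close to [0.10, 0.27]
```

With noise, the null vector is no longer exact, which is precisely the sensitivity the dissertation addresses by a robust filter design over multiple data frames.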
In the DOA context, we regard an active source as a Dirac associated with the arrival direction, then the directions of active sources can be derived from the roots of the annihilating filter. However,
the DOA obtained by this method is sensitive to noise and the number of DOAs is limited.
To address these issues, the dissertation proposes a robust method to design the annihilating filter and to increase the degree-of-freedom of the measurement system (more active sources can be detected) via observing multiple data frames.
Furthermore, we also analyze the DOA estimation performance under diffuse noise and propose an extended multiple signal classification algorithm that takes diffuse noise into account. Simulations show that, in the case of diffuse noise, only the extended multiple signal classification algorithm estimates the DOAs properly.
The growing computational power enables the establishment of the Population Balance Equation (PBE) to model the steady state and dynamic behavior of multiphase flow unit operations. The two-phase flow behavior inside liquid-liquid extraction equipment is characterized by several factors: interactions among droplets (breakage and coalescence), different time scales due to the size distribution of the dispersed phase, and micro time scales of the interphase diffusional mass transfer process. As a result, the general PBE has no well-known analytical solution, and robust numerical solution methods with low computational cost are therefore highly desirable.
In this work, the Sectional Quadrature Method of Moments (SQMOM) (Attarakih, M. M., Drumm, C.,
Bart, H.-J. (2009). Solution of the population balance equation using the Sectional Quadrature Method of
Moments (SQMOM). Chem. Eng. Sci. 64, 742-752) is extended to take into account the continuous flow
systems in spatial domain. In this regard, the SQMOM is extended to solve the spatially distributed
nonhomogeneous bivariate PBE to model the hydrodynamics and physical/reactive mass transfer
behavior of liquid-liquid extraction equipment. Based on the extended SQMOM, two different steady
state and dynamic simulation algorithms for hydrodynamics and mass transfer behavior of liquid-liquid
extraction equipment are developed and efficiently implemented. At the steady state modeling level, a
Spatially-Mixed SQMOM (SM-SQMOM) algorithm is developed and successfully implemented in a one-dimensional
physical spatial domain. The integral spatial numerical flux is closed using the mean mass
droplet diameter based on the One Primary and One Secondary Particle Method (OPOSPM which is the
simplest case of the SQMOM). On the other hand the hydrodynamics integral source terms are closed
using the analytical Two-Equal Weight Quadrature (TEqWQ). To avoid the numerical solution of the
droplet rise velocity, an analytical solution based on the algebraic velocity model is derived for the
particular case of unit velocity exponent appearing in the droplet swarm model. In addition to this, the
source term due to mass transport is closed using OPOSPM. The resulting system of ordinary differential
equations with respect to space is solved using the MATLAB adaptive Runge–Kutta method (ODE45). At
the dynamic modeling level, the SQMOM is extended to a one-dimensional physical spatial domain and
resolved using the finite volume method. To close the mathematical model, the required quadrature nodes
and weights are calculated using the analytical solution based on the Two Unequal Weights Quadrature
(TUEWQ) formula. By applying the finite volume method to the spatial domain, a semi-discrete ordinary
differential equation system is obtained and solved. Both steady state and dynamic algorithms are
extensively validated at analytical, numerical, and experimental levels. At the numerical level, the
predictions of both algorithms are validated using the extended fixed pivot technique as implemented in
PPBLab software (Attarakih, M., Alzyod, S., Abu-Khader, M., Bart, H.-J. (2012). PPBLAB: A new
multivariate population balance environment for particulate system modeling and simulation. Procedia
Eng. 42, pp. 144-562). At the experimental validation level, the extended SQMOM is successfully used
to model the steady state hydrodynamics and physical and reactive mass transfer behavior of agitated
liquid-liquid extraction columns under different operating conditions. In this regard, both models are
found efficient and able to follow liquid extraction column behavior during column scale-up, where three
column diameters were investigated (DN32, DN80, and DN150). To shed more light on the local
interactions among the contacted phases, a reduced coupled PBE and CFD framework is used to model
the hydrodynamic behavior of pulsed sieve plate columns. In this regard, OPOSPM is utilized and
implemented in FLUENT 18.2 commercial software as a special case of the SQMOM. The droplet-droplet interactions (breakage and coalescence) are taken into account using OPOSPM, while the required information about the velocity field and energy dissipation is calculated by the CFD model. In addition to this,
the proposed coupled OPOSPM-CFD framework is extended to include the mass transfer. The
proposed framework is numerically tested and the results are compared with the published experimental
data. The required breakage and coalescence parameters to perform the 2D-CFD simulation are estimated
using PPBLab software, where a 1D-CFD simulation using a multi-sectional grid is performed. A very
good agreement is obtained at the experimental and the numerical validation levels.
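The finite-volume-in-space / adaptive-Runge-Kutta-in-time strategy described above can be sketched for a generic transported quantity. This is a minimal method-of-lines illustration, not the SQMOM itself: a first-order upwind scheme advects a stand-in for a low-order moment of the droplet distribution, and scipy's RK45 plays the role of MATLAB's ODE45; column height, velocity, and feed value are made-up numbers.

```python
import numpy as np
from scipy.integrate import solve_ivp

nx, L, v = 100, 1.0, 0.5          # cells, column height, rise velocity (assumed)
dx = L / nx

def rhs(t, phi):
    """Semi-discrete system: first-order upwind finite volumes in space."""
    inflow = 1.0                   # constant feed at the column bottom (assumed)
    phi_left = np.concatenate(([inflow], phi[:-1]))
    return -v * (phi - phi_left) / dx

phi0 = np.zeros(nx)                # initially empty column
sol = solve_ivp(rhs, (0.0, 1.0), phi0, method="RK45", rtol=1e-6)
```

After integrating to t = 1 with v = 0.5, the feed front has travelled roughly halfway up the column, so cells near the inlet are close to the feed value while cells near the outlet are still nearly empty.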
The present thesis describes the development and the evaluation of a design procedure of inducer with arbitrary meridional and blade shape. This special type of pump impeller, which is usually mounted upstream of a main pump impeller, is employed in many applications demanding the realization of low NPSH values. An inducer basically increases suction performance by producing mostly a small pressure rise while allowing for a greater degree of cavitation, that is the formation of vapor bubbles, at its inlet than a conventional pump impeller. This is achieved by specially designed blade channels promoting the collapse of the produced vapor bubbles.
The main focus of the present thesis is the description of the design method, which enables the generation of the three-dimensional blade geometry. The method is based on a parametric representation of the geometry considering the particular requirements for inducers and the publicly available design practice. Within this approach the sequence of design steps is adapted from the classical design process of mixed flow and radial impellers. As a consequence, leading and trailing edge blade angles are determined for multiple blade sections based on simplifications and certain empirical assumptions, and are used to design the blade camber curves. Along the camber curves the blade profile is generated following a thickness distribution that has to be prescribed. A special feature of the newly developed method is that arbitrarily shaped, asymmetric thickness distributions can be realized.
Due to the detailed description of the design and calculation steps a fully comprehensible procedure is outlined, which covers the development of inducer bladings from an initial set of duty parameters to the final three-dimensional blade geometry.
The components involved in the design procedure are tested by designing two exemplary inducers and they are assessed by comparison with numerical simulations. Functioning of these inducers in the real application is finally demonstrated with water tests.
The main result of this dissertation is a design software for inducers allowing for the design of three-dimensional, asymmetrically profiled bladings. The developed software is free of commercial third-party libraries. As a consequence a program is available that can be modified and extended as desired. As potential future development goals inducers with splitter and tandem blades as well as an integrated design of inducer and impeller are proposed.
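The camber-plus-thickness construction described above can be sketched for a single two-dimensional blade section. All particulars here are assumptions for illustration, not the thesis's parametrization: a parabolic camber line, a sinusoidal total thickness distribution, and an uneven split of the thickness between the two sides to mimic an asymmetric profile.

```python
import numpy as np

def blade_section(x, m=0.04, t_max=0.03, split=0.65):
    """Sketch: parabolic camber line plus an asymmetric thickness
    distribution applied normal to the camber (all parameters assumed)."""
    yc = 4.0 * m * x * (1.0 - x)                   # camber line
    theta = np.arctan(4.0 * m * (1.0 - 2.0 * x))   # local camber slope angle
    t = t_max * np.sin(np.pi * x)                  # total thickness distribution
    tu, tl = split * t, (1.0 - split) * t          # asymmetric split of thickness
    xu, yu = x - tu * np.sin(theta), yc + tu * np.cos(theta)   # suction side
    xl, yl = x + tl * np.sin(theta), yc - tl * np.cos(theta)   # pressure side
    return (xu, yu), (xl, yl)

x = np.linspace(0.0, 1.0, 101)
(xu, yu), (xl, yl) = blade_section(x)
```

At mid-chord the camber slope vanishes, so the gap between the two sides equals the prescribed total thickness there, regardless of how asymmetrically it is split.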
Medical cyber-physical systems (MCPS) emerged as an evolution of the relations between connected health systems, healthcare providers, and modern medical devices. Such systems combine independent medical devices at runtime in order to render new patient monitoring/control functionalities, such as physiological closed loops for controlling drug infusion or optimization of alarms. Despite the advances regarding alarm precision, healthcare providers still struggle with alarm flooding caused by the limited risk assessment models. Furthermore, these limitations also impose severe barriers on the adoption of automated supervision through autonomous actions, such as safety interlocks for avoiding overdosage. The literature has focused on the verification of safety parameters to assure the safety of treatment at runtime and thus optimize alarms and automated actions. Such solutions have relied on the definition of actuation ranges based on thresholds for a few monitored parameters. Given the very dynamic nature of the relevant context conditions (e.g., the patient’s condition, treatment details, system configurations, etc.), fixed thresholds are a weak means for assessing the current risk. This thesis presents an approach for enabling dynamic risk assessment for cooperative MCPS based on an adaptive Bayesian Networks (BN) model. The main aim of the approach is to support continuous runtime risk assessment of the current situation based on relevant context and system information. The presented approach comprises (i) a dynamic risk analysis constituent, which corresponds to the elicitation of relevant risk parameters, risk metric building, and risk metric management; and (ii) a runtime risk classification constituent, which aims to analyze the current situation risk, establish risk classes, and identify and deploy mitigation measures. 
The proposed approach was evaluated and its feasibility proved by means of simulated experiments guided by an international team of medical experts with a focus on the requirements of efficacy, efficiency, and availability of patient treatment.
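The risk-classification constituent can be illustrated with a deliberately tiny discrete sketch. The evidence variables, probability values, class thresholds, and mitigation actions below are all hypothetical placeholders; a real adaptive Bayesian Network would condition on many more context parameters and update its tables at runtime.

```python
# Minimal sketch: a conditional probability table (CPT) maps context
# evidence to a risk probability, which is then mapped to a risk class
# and a mitigation measure. All numbers are hypothetical.
CPT = {  # P(high_risk | spo2_low, infusion_rate_high)
    (False, False): 0.02,
    (False, True):  0.25,
    (True,  False): 0.40,
    (True,  True):  0.90,
}

def classify(spo2_low, infusion_high):
    p = CPT[(spo2_low, infusion_high)]
    if p >= 0.8:
        return p, "interlock"      # e.g. pause the infusion pump
    if p >= 0.2:
        return p, "alarm"
    return p, "monitor"

print(classify(True, True))        # (0.9, 'interlock')
```

The point of the sketch is the shape of the pipeline (evidence, risk metric, risk class, mitigation), which mirrors the two constituents described above.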
This work deals with the simulation of the micro-cutting process of titanium. For this purpose, a suitable crystal-plastic material model is developed and efficient implementations are investigated to simulate the micro-cutting process. Several challenges arise for the material model. On the one hand, the low-symmetry hexagonal close-packed crystal structure of titanium has to be considered. On the other hand, large deformations and strains occur during the machining process. Another important part is the algorithm for the determination of the active slip systems, which has a significant influence on the stability of the simulation. In order to obtain a robust implementation, different aspects, such as the algorithm for the determination of the active slip systems, the method for mesh separation between chip and workpiece as well as the hardening process, are investigated, and different approaches are compared. The developed crystal-plastic material model and the selected implementations are first validated and investigated using illustrative examples. The presented simulations of the micro-cutting process show the influence of different machining parameters on the process. Finally, the influence of a real microstructure on the plastic deformation and the cutting force during the process is shown.
A prime motivation for using XML to directly represent pieces of information is the ability of supporting ad-hoc or 'schema-later' settings. In such scenarios, modeling data under loose data constraints is essential. Of course, the flexibility of XML comes at a price: the absence of a rigid, regular, and homogeneous structure makes many aspects of data management more challenging. Such malleable data formats can also lead to severe information quality problems, because the risk of storing inconsistent and incorrect data is greatly increased. A prominent example of such problems is the appearance of the so-called fuzzy duplicates, i.e., multiple and non-identical representations of a real-world entity. Similarity joins correlating XML document fragments that are similar can be used as core operators to support the identification of fuzzy duplicates. However, similarity assessment is especially difficult on XML datasets because structure, besides textual information, may exhibit variations in document fragments representing the same real-world entity. Moreover, similarity computation is substantially more expensive for tree-structured objects and, thus, is a serious performance concern. This thesis describes the design and implementation of an effective, flexible, and high-performance XML-based similarity join framework. As main contributions, we present novel structure-conscious similarity functions for XML trees - either considering XML structure in isolation or combined with textual information -, mechanisms to support the selection of relevant information from XML trees and organization of this information into a suitable format for similarity calculation, and efficient algorithms for large-scale identification of similar, set-represented objects. 
Finally, we validate the applicability of our techniques by integrating our framework into a native XML database management system; in this context we address several issues around the integration of similarity operations into traditional database architectures.
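The combination of structural and textual similarity for set-represented XML fragments can be sketched as follows. This is a simplification for illustration only, not the framework's actual similarity functions: each fragment is flattened into a set of (path, token) pairs, and two fragments are compared with the Jaccard coefficient; the sample fragments are made up.

```python
import xml.etree.ElementTree as ET

def to_token_set(xml_text):
    """Flatten an XML fragment into a set of (path, token) pairs so that
    both structure and text contribute to the similarity."""
    root = ET.fromstring(xml_text)
    tokens = set()
    def walk(node, path):
        p = path + "/" + node.tag
        tokens.add(("struct", p))                  # structural token
        for word in (node.text or "").split():
            tokens.add((p, word.lower()))          # textual token, path-qualified
        for child in node:
            walk(child, p)
    walk(root, "")
    return tokens

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

x1 = "<author><name>John Smith</name><city>Berlin</city></author>"
x2 = "<author><name>J. Smith</name><city>Berlin</city></author>"
x3 = "<book><title>Databases</title></book>"
print(jaccard(to_token_set(x1), to_token_set(x2)))   # high: likely fuzzy duplicates
print(jaccard(to_token_set(x1), to_token_set(x3)))   # zero: unrelated fragments
```

A similarity join would evaluate such a set-overlap measure over all candidate pairs, typically with filters (e.g., on set sizes) to prune pairs that cannot reach the threshold.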
This thesis presents a novel, generic framework for information segmentation in document images.
A document image contains different types of information, for instance, text (machine printed/handwritten), graphics, signatures, and stamps.
It is necessary to segment information in documents so that such segmented information can be processed only when required in automatic document processing workflows.
The main contribution of this thesis is the conceptualization and implementation of an information segmentation framework that is based on part-based features.
The generic nature of the presented framework makes it applicable to a variety of documents (technical drawings, magazines, administrative, scientific, and academic documents) digitized using different methods (scanners, RGB cameras, and hyper-spectral imaging (HSI) devices).
A highlight of the presented framework is that it does not require large training sets; rather, a few training samples (for instance, four pages) lead to high performance, i.e., better than that of previously existing methods.
In addition, the presented framework is simple and can be adapted quickly to new problem domains.
This thesis is divided into three major parts on the basis of the document digitization method used (scanning, hyper-spectral imaging, and camera capture).
In the area of scanned document images, three specific contributions have been realized.
The first of them is in the domain of signature segmentation in administrative documents.
In some workflows, it is very important to check the document authenticity before processing the actual content.
This can be done based on the available seal of authenticity, e.g., signatures.
However, signature verification systems expect a pre-segmented signature image, while signatures are usually part of a document.
To use signature verification systems on document images, it is necessary to first segment signatures in documents.
This thesis shows that the presented framework can be used to segment signatures in administrative documents.
A system based on the presented framework is tested on a publicly available dataset, where it outperforms the state-of-the-art methods and successfully segments all signatures, while less than half of the found signatures are false positives.
This shows that it can be applied for practical use.
The second contribution in the area of scanned document images is segmentation of stamps in administrative documents.
A stamp also serves as a seal of document authenticity.
However, the location of a stamp on the document can be more arbitrary than that of a signature, depending on the person sealing the document.
This thesis shows that a system based on our generic framework is able to extract stamps of any arbitrary shape and color.
The evaluation of the presented system on a publicly available dataset shows that it is also able to segment black stamps (that were not addressed in the past) with a recall and precision of 83% and 73%, respectively.
The third contribution in the area of scanned document images is in the domain of information segmentation in technical drawings (architectural floorplans, maps, circuit diagrams, etc.), which usually contain a large amount of graphics and comparatively few textual components. Furthermore, in technical drawings the text often overlaps with the graphics.
Thus, automatic analysis of technical drawings uses text/graphics segmentation as a pre-processing step.
This thesis presents a method based on our generic information segmentation framework that is able to detect the text, which is touching graphical components in architectural floorplans and maps.
Evaluation of the method on a publicly available dataset of architectural floorplans shows that it is able to extract almost all touching text components with precision and recall of 71% and 95%, respectively.
This means that almost all of the touching text components are successfully extracted.
In the area of hyper-spectral document images, two contributions have been realized.
Unlike normal three-channel RGB images, hyper-spectral images usually have multiple channels that range from the ultraviolet to the infrared region, including the visible region.
First, this thesis presents a novel automatic method for signature segmentation from hyper-spectral document images (240 spectral bands between 400 - 900 nm).
The presented method is based on a part-based key point detection technique, which does not use any structural information, but relies only on the spectral response of the document regardless of ink color and intensity.
The presented method is capable of segmenting (overlapping and non-overlapping) signatures from varying backgrounds such as printed text, tables, stamps, and logos.
Importantly, the presented method can extract signature pixels and not just the bounding boxes.
This is crucial when signatures overlap with text and/or other objects in the image. Second, this thesis presents a new dataset comprising 300 documents scanned using a high-resolution hyper-spectral scanner. Evaluation of the presented signature segmentation method on this hyper-spectral dataset shows that it is able to extract signature pixels with a precision and recall of 100% and 79%, respectively.
Further contributions have been made in the area of camera captured document images. A major problem in the development of Optical Character Recognition (OCR) systems for camera captured document images is the lack of labeled datasets. First, this thesis presents a novel, generic method for automatic ground truth generation/labeling of document images. The presented method builds large-scale (i.e., millions of images) datasets of labeled camera captured / scanned documents without any human intervention. The method is generic and can be used for automatic ground truth generation of (scanned and/or camera captured) documents in any language, e.g., English, Russian, Arabic, or Urdu. The evaluation of the presented method on two different datasets, in English and Russian, shows that 99.98% of the images are correctly labeled in every case.
Another important contribution in the area of camera captured document images is the compilation of a large dataset comprising 1 million word images (10 million character images), captured in a real camera-based acquisition environment, along with the word and character level ground truth. The dataset can be used for training as well as testing of character recognition systems for camera-captured documents. Various benchmark tests are performed to analyze the behavior of different open source OCR systems on camera captured document images. Evaluation results show that the existing OCRs, which already get very high accuracies on scanned documents, fail on camera captured document images.
Using the presented camera-captured dataset, a novel character recognition system is developed which is based on a variant of recurrent neural networks, i.e., Long Short Term Memory (LSTM) that outperforms all of the existing OCR engines on camera captured document images with an accuracy of more than 95%.
Finally, this thesis provides details on various tasks that have been performed in the area closely related to information segmentation. This includes automatic analysis and sketch based retrieval of architectural floor plan images, a novel scheme for online signature verification, and a part-based approach for signature verification. With these contributions, it has been shown that part-based methods can be successfully applied to document image analysis.
For many years, real-time task models have focused their timing constraints on execution windows defined by earliest start times and deadlines for feasibility.
However, the utility of some applications may vary among scenarios that all yield correct behavior, and maximizing this utility improves the resource utilization.
For example, target sensitive applications have a target point where execution results in maximized utility, and an execution window for feasibility.
Execution around this point and within the execution window is allowed, albeit at lower utility.
The intensity of the utility decay accounts for the importance of the application.
Examples of such applications include multimedia and control; multimedia applications are very popular nowadays, and control applications are present in every automated system.
In this thesis, we present a novel real-time task model which provides for easy abstractions to express the timing constraints of target sensitive RT applications: the gravitational task model.
This model uses a simple gravity pendulum (or bob pendulum) system as a visualization model for trade-offs among target sensitive RT applications.
We consider jobs as objects in a pendulum system, and the target points as the central point.
Then, the equilibrium state of the physical problem is equivalent to the best compromise among jobs with conflicting targets.
Analogies with well-known systems are helpful to fill in the gap between application requirements and theoretical abstractions used in task models.
For instance, the so-called nature algorithms use key elements of physical processes to form the basis of an optimization algorithm.
Examples include ant colony optimization and simulated annealing, applied to problems such as the knapsack and traveling salesman problems.
We also present a few scheduling algorithms designed for the gravitational task model which fulfill the requirements for on-line adaptivity.
The scheduling of target sensitive RT applications must account for timing constraints, and the trade-off among tasks with conflicting targets.
Our proposed scheduling algorithms use the equilibrium state concept to order the execution sequence of jobs, and compute the deviation of jobs from their target points for increased system utility.
The execution sequence of jobs in the schedule has a significant impact on the equilibrium of jobs, and dominates the complexity of the problem --- the optimum solution is NP-hard.
We show the efficacy of our approach through simulation results and 3 target sensitive RT applications enhanced with the gravitational task model.
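The equilibrium-state idea above can be made concrete for one fixed execution sequence. This sketch assumes, purely for illustration, that jobs run back-to-back and that utility decays quadratically around each target with an importance weight; under these assumptions the best common start time is a weighted mean, mirroring the rest position of the pendulum analogy.

```python
import numpy as np

def equilibrium_start(targets, weights, lengths):
    """Jobs run back-to-back; job i begins at s + offset[i].  With utility
    decaying quadratically (weight w_i) around target t_i, the total loss
    sum_i w_i * (s + o_i - t_i)**2 is minimized by a weighted mean -- the
    'equilibrium state'.  (Quadratic decay is an assumption of this sketch.)"""
    lengths = np.asarray(lengths)
    offsets = np.concatenate(([0.0], np.cumsum(lengths[:-1])))
    return np.average(np.asarray(targets) - offsets, weights=weights)

t = np.array([10.0, 11.0, 15.0])   # target points (hypothetical)
w = np.array([1.0, 3.0, 1.0])      # importance of each job
c = np.array([2.0, 2.0, 2.0])      # execution times
s = equilibrium_start(t, w, c)
print(s)                            # the heavier middle job pulls the block toward its target
```

Changing the execution sequence changes the offsets and hence the equilibrium, which is why the ordering of jobs dominates the complexity of the scheduling problem.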
This thesis is concerned with the modeling of solid-solid phase transformations, such as the martensitic transformation. The allotropes austenite and martensite are important for industrial applications. As a result of its ductility, austenite is desired in the bulk, as opposed to martensite, which is desired in the near-surface region. The phase field method is used to model the phase transformation by minimizing the free energy. The free energy consists of a mechanical part, due to elastic strain, and a chemical part, due to the martensitic transformation. The latter is temperature dependent. Therefore, a temperature dependent separation potential is presented here. To accommodate multiple orientation variants, a multivariant phase field model is employed. Using the Khachaturyan approach, effective material parameters can be used to describe a constitutive model. This, however, renders the nodal residual vector and elemental tangent matrix directly dependent on the phase, making a generalization complicated. An easier approach is the use of Voigt/Taylor homogenization, in which the energies and their derivatives are interpolated, creating an interface for the material laws of the individual phases.
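The energy-interpolation idea can be sketched in one dimension. This is a toy illustration, not the thesis's model: a common smooth interpolation function h blends two made-up single-phase elastic energies evaluated at a shared strain, which is the Voigt/Taylor-style assumption.

```python
import numpy as np

def h(phi):
    """Smooth interpolation: h(0)=0, h(1)=1, h'(0)=h'(1)=0 -- a common
    choice in phase field models (assumed here)."""
    return phi**2 * (3.0 - 2.0 * phi)

def voigt_taylor_energy(phi, eps):
    """Both phases see the same strain eps; their elastic energies
    (hypothetical 1-D stiffnesses) are interpolated via h(phi)."""
    E_aust, E_mart = 200.0, 180.0          # hypothetical phase stiffnesses
    psi_aust = 0.5 * E_aust * eps**2       # elastic energy, austenite
    psi_mart = 0.5 * E_mart * eps**2       # elastic energy, martensite
    return (1.0 - h(phi)) * psi_aust + h(phi) * psi_mart

print(voigt_taylor_energy(0.0, 0.01), voigt_taylor_energy(1.0, 0.01))
```

Because only h(phi) carries the phase dependence, stresses and tangents follow from differentiating the interpolated energy, which is what makes this homogenization convenient as an interface to independent single-phase material laws.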
Numerical Godeaux surfaces are minimal surfaces of general type with the smallest possible numerical invariants. It is known that the torsion group of a numerical Godeaux surface is cyclic of order \(m\leq 5\). A full classification has been given for the cases \(m=3,4,5\) by the work of Reid and Miyaoka. In each case, the corresponding moduli space is 8-dimensional and irreducible.
There exist explicit examples of numerical Godeaux surfaces for the orders \(m=1,2\), but a complete classification for these surfaces is still missing.
In this thesis we present a construction method for numerical Godeaux surfaces which is based on homological algebra and computer algebra and which arises from an experimental approach by Schreyer. The main idea is to consider the canonical ring \(R(X)\) of a numerical Godeaux surface \(X\) as a module over some graded polynomial ring \(S\). The ring \(S\) is chosen so that \(R(X)\) is finitely generated as an \(S\)-module and a Gorenstein \(S\)-algebra of codimension 3. We prove that the canonical ring of any numerical Godeaux surface, considered as an \(S\)-module, admits a minimal free resolution whose middle map is alternating. Moreover, we show that a partial converse of this statement is true under some additional conditions.
Afterwards we use these results to construct (canonical rings of) numerical Godeaux surfaces. Hereby, we restrict our study to surfaces whose bicanonical system has no fixed component but 4 distinct base points, in the following referred to as marked numerical Godeaux surfaces.
The particular interest of this thesis lies on marked numerical Godeaux surfaces whose torsion group is trivial. For these surfaces we study the fibration of genus 4 over \(\mathbb{P}^1\) induced by the bicanonical system. Catanese and Pignatelli showed that the general fibre is non-hyperelliptic and that the number \(\tilde{h}\) of hyperelliptic fibres is bounded by 3. The two explicit constructions of numerical Godeaux surfaces with a trivial torsion group due to Barlow and Craighero-Gattazzo, respectively, satisfy \(\tilde{h} = 2\).
With the method from this thesis, we construct an 8-dimensional family of numerical Godeaux surfaces with a trivial torsion group whose general element satisfies \(\tilde{h}=0\).
Furthermore, we establish a criterion for the existence of hyperelliptic fibres in terms of a minimal free resolution of \(R(X)\). Using this criterion, we verify experimentally the
existence of a numerical Godeaux surface with \(\tilde{h}=1\).
Data is the new gold and serves as a key to answer the five W's (Who, What, Where, When, Why) and How's of any business. Companies are now mining data more than ever, and one of the most important aspects while analyzing this data is to detect anomalous patterns and identify critical points. To tackle the vital aspects of time-series analysis, this thesis presents a novel hybrid framework that stands on three pillars: Anomaly Detection, Uncertainty Estimation,
and Interpretability and Explainability.
The first pillar is comprised of contributions in the area of time-series anomaly detection. Deep Anomaly Detection for Time-series (DeepAnT), a novel deep learning-based anomaly detection method, lies at the foundation of the proposed hybrid framework and addresses the inadequacy of traditional anomaly detection methods. To the best of the author’s knowledge, Convolutional Neural Network (CNN) was used for the first time in Deep Anomaly Detection for Time-series (DeepAnT) to robustly detect multiple types of anomalies in the tricky
and continuously changing time-series data. To further improve the anomaly detection performance, a fusion-based method, Fusion of
Statistical and Deep Learning for Anomaly Detection (FuseAD) is proposed. This method aims to combine the strengths of existing well-founded statistical methods and powerful data-driven methods.
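The forecasting-based detection principle behind these methods can be sketched compactly. To keep the sketch self-contained, a linear autoregressive predictor is substituted for DeepAnT's CNN: predict each value from a history window and flag points whose prediction error exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.05 * rng.standard_normal(1000)
series[700] += 3.0                        # injected point anomaly

w = 10                                    # history window length
X = np.array([series[i:i + w] for i in range(len(series) - w)])
y = series[w:]                            # next-value targets
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear stand-in for the CNN
score = np.abs(X @ coef - y)              # prediction error = anomaly score
threshold = score.mean() + 3.0 * score.std()
anomalies = np.where(score > threshold)[0] + w
```

The injected spike produces a large prediction error and is flagged, while the regular oscillation is predicted well; the fusion idea (FuseAD) would combine such a data-driven score with a statistical forecaster's score.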
In the second pillar of this framework, a hybrid approach that combines the high accuracy of the deterministic models with the posterior distribution approximation of Bayesian neural networks is proposed.
In the third pillar of the proposed framework, mechanisms that address both the HOW and the WHY of model decisions (interpretability and explainability) are presented.
Embedded systems have become ubiquitous in everyday life, and especially in the automotive industry. New applications challenge their design by introducing a new class of problems that are based on a detailed analysis of the environmental situation. Situation analysis systems rely on models and algorithms from the domain of computational geometry. The basic model is usually a Euclidean plane, which contains polygons to represent the objects of the environment. Usual implementations of computational geometry algorithms cannot be directly used for safety-critical systems. First, a strict analysis of their correctness is indispensable, and second, nonfunctional requirements with respect to the limited resources must be considered. This thesis proposes a layered approach to a polygon-processing system. On top of rational numbers, a geometry kernel is formalised first. Subsequently, geometric primitives form a second layer of abstraction that is used for plane sweep and polygon algorithms. These layers not only divide the whole system into manageable parts but also make it possible to model problems and reason about them at the appropriate level of abstraction. This structure is used for the verification as well as the implementation of the developed polygon-processing library.
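The motivation for building the kernel on rational numbers can be illustrated with the classic orientation predicate. This sketch (in Python, using the standard library's exact rationals, not the thesis's verified implementation) shows the kind of sign decision that floating-point arithmetic can get wrong near degeneracy but exact arithmetic always resolves correctly.

```python
from fractions import Fraction

def orient(p, q, r):
    """Exact orientation predicate over rational coordinates:
    > 0 if p->q->r is a left turn, < 0 for a right turn, 0 if collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

a = (Fraction(0), Fraction(0))
b = (Fraction(1), Fraction(10) ** -12)   # nearly collinear, but not quite
c = (Fraction(2), Fraction(0))
print(orient(a, b, c) < 0)               # True: exact arithmetic resolves the sign
```

Plane sweep and polygon algorithms built on such exact primitives inherit their reliability, which is exactly the layering argument made above.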
In the filling process of a car tank, the formation of foam plays an unwanted role, as it may prevent the tank from being completely filled or at least delay the filling. Therefore, it is of interest to optimize the geometry of the tank using numerical simulation in such a way that the influence of the foam is minimized. In this dissertation, we analyze the behaviour of the foam mathematically on the mesoscopic scale, that is, for single lamellae. The most important goals are, on the one hand, to gain a deeper understanding of the interaction of the relevant physical effects and, on the other hand, to obtain a model for the simulation of the decay of a lamella which can be integrated into a global foam model. In the first part of this work, we give a short introduction to the physical properties of foam and find that the Marangoni effect is the main cause of its stability. We then develop a mathematical model for the simulation of the dynamical behaviour of a lamella based on an asymptotic analysis using the special geometry of the lamella. The result is a system of nonlinear partial differential equations (PDE) of third order in two spatial dimensions and one time dimension. In the second part, we analyze this system mathematically and prove an existence and uniqueness result for a simplified case. For some special parameter domains the system can be further simplified, and in some cases explicit solutions can be derived. In the last part of the dissertation, we solve the system using a finite element approach and discuss the results in detail.
The detection and characterisation of undesired lead structures on shaft surfaces is a concern in the production and quality control of rotary shaft lip-type sealing systems. The potential lead structures are generally divided into macro and micro lead based on their characteristics and formation. Macro lead measurement methods exist and are widely applied. This work describes a method to characterise micro lead on ground shaft surfaces. Micro lead is the deviation of the main orientation of the ground micro texture from the circumferential direction. Assessing the orientation of microscopic structures with arc-minute accuracy with regard to the circumferential direction requires exact knowledge of both the shaft's orientation and the direction of the surface texture. The shaft's circumferential direction is found by calibration. Measuring systems and calibration procedures capable of calibrating shaft axis orientation with high accuracy and low uncertainty are described. The measuring systems employ areal-topographic measuring instruments suited for evaluating texture orientation. A dedicated evaluation scheme for texture orientation is based on the Radon transform of these topographies and is parametrised for the application. By combining the calibration of the circumferential direction with the evaluation of texture orientation, the method enables the measurement of micro lead on ground shaft surfaces.
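The thesis bases its texture-orientation evaluation on the Radon transform of the measured topographies. As a simplified stand-in for that evaluation, the sketch below recovers the dominant direction of a synthetic, grinding-like texture from the averaged structure tensor of its gradient field; the grid size, wavelength, and angle are illustrative assumptions.

```python
# Illustration of the underlying task in micro-lead evaluation: recover
# the dominant texture direction from a height map. A structure-tensor
# estimate stands in for the Radon-transform scheme of the thesis; all
# values are synthetic and illustrative.
import math

def dominant_orientation(z):
    """Estimate texture orientation (radians vs. x-axis) of height map z
    from the averaged structure tensor of its gradient field."""
    jxx = jxy = jyy = 0.0
    for y in range(1, len(z) - 1):
        for x in range(1, len(z[0]) - 1):
            gx = (z[y][x + 1] - z[y][x - 1]) / 2.0   # central differences
            gy = (z[y + 1][x] - z[y - 1][x]) / 2.0
            jxx += gx * gx; jxy += gx * gy; jyy += gy * gy
    # Gradients are perpendicular to the texture lay; rotate by 90 degrees.
    return 0.5 * math.atan2(2 * jxy, jxx - jyy) + math.pi / 2

# Sinusoidal ridges at 5 degrees to the x-axis (a micro-lead-like deviation
# from the nominal grinding direction), wavelength 8 pixels.
theta = math.radians(5)
grid = [[math.sin(2 * math.pi * (x * math.sin(theta) - y * math.cos(theta)) / 8)
         for x in range(64)] for y in range(64)]
est = math.degrees(dominant_orientation(grid))
print(est)  # close to 5 degrees
```

A common Radon-based alternative selects the projection angle with extremal projection variance; the calibrated circumferential direction then provides the reference from which the micro-lead angle is measured.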
1,3-Diynes are frequently found as an important structural motif in natural products, pharmaceuticals and bioactive compounds, electronic and optical materials, and supramolecular molecules. Copper and palladium complexes are widely used to prepare 1,3-diynes by homocoupling of terminal alkynes, although the potential of nickel complexes for the same reaction is essentially unexplored. Although a detailed study of the reported nickel-acetylene chemistry has not been carried out, a generalized mechanism featuring a nickel(II)/nickel(0) catalytic cycle has been proposed. In the present work, the detailed mechanistic aspects of the nickel-mediated homocoupling reaction of terminal alkynes are investigated through the isolation and/or characterization of key intermediates from both the stoichiometric and the catalytic reactions. A nickel(II) complex [Ni(L-N4Me2)(MeCN)2](ClO4)2 (1) containing the tetradentate N,N′-dimethyl-2,11-diaza[3.3](2,6)pyridinophane (L-N4Me2) ligand was used as a catalyst for the homocoupling of terminal alkynes, employing oxygen as oxidant at room temperature. A series of dinuclear nickel(I) complexes bridged by a 1,3-diyne ligand have been isolated from the stoichiometric reaction between [Ni(L-N4Me2)(MeCN)2](ClO4)2 (1) and lithium acetylides. The dinuclear nickel(I)-diyne complexes [{Ni(L-N4Me2)}2(RC4R)](ClO4)2 (2) were well characterized by X-ray crystal structures, various spectroscopic methods, SQUID measurements and DFT calculations. The complexes not only represent a key intermediate in the aforesaid catalytic reaction, but also constitute the first structurally characterized dinuclear nickel(I)-diyne complexes. In addition, radical trapping and low-temperature UV-Vis-NIR experiments during the formation of the dinuclear nickel(I)-diyne species confirm that the reactions occurring during the reduction of nickel(II) to nickel(I) and the C-C bond formation of the 1,3-diyne follow a non-radical, concerted mechanism.
Furthermore, spectroscopic investigation of the reactivity of the dinuclear nickel(I)-diyne complex towards molecular oxygen confirmed the formation of a mononuclear nickel(I)-diyne species [Ni(L-N4Me2)(RC4R)]+ (4) and a mononuclear nickel(III)-peroxo species [Ni(L-N4Me2)(O2)]+ (5), which were converted to the free 1,3-diyne and an unstable dinuclear nickel(II) species [{Ni(L-N4Me2)}2(O2)]2+ (6). A mononuclear nickel(I)-alkyne complex [Ni(L-N4Me2)(PhC2Ph)](ClO4).MeOH (3) and the mononuclear nickel(III)-peroxo species [Ni(L-N4Me2)(O2)]+ (5) were isolated/generated and characterized to confirm the formulation of the aforementioned mononuclear nickel(I)-diyne and mononuclear nickel(III)-peroxo species. Spectroscopic experiments on the catalytic reaction mixture also confirm the presence of the aforesaid intermediates. The results of both the stoichiometric and the catalytic reactions suggest an intriguing mechanism involving nickel(II)/nickel(I)/nickel(III) oxidation states, in contrast to the reported nickel(II)/nickel(0) catalytic cycle. These findings are expected to open a new paradigm in nickel-catalyzed organic transformations.
Nowadays, accounting, charging and billing of users' network resource consumption are commonly used to facilitate reasonable network usage, control congestion, allocate cost, gain revenue, etc. In traditional IP traffic accounting systems, IP addresses are used to identify the corresponding consumers of the network resources. However, there are situations in which IP addresses cannot be used to identify users uniquely, for example in multi-user systems. In these cases, network resource consumption can only be ascribed to the owners of these hosts instead of the real users who actually consumed the network resources. Accurate accountability in these systems is therefore practically impossible. This is a flaw of the traditional IP-address-based IP traffic accounting technique. This dissertation proposes a user-based IP traffic accounting model which facilitates collecting network resource usage information on a per-user basis. With user-based IP traffic accounting, IP traffic can be distinguished not only by IP addresses but also by users. In this dissertation, three different schemes that achieve the user-based IP traffic accounting mechanism are discussed in detail. The in-band scheme utilizes the IP header to convey the user information of the corresponding IP packet. The Accounting Agent residing in the measured host intercepts IP packets passing through it, identifies the users of these IP packets and inserts user information into the packets. With this mechanism, a meter located at a key position in the network can intercept the IP packets tagged with user information and extract not only statistical information but also IP addresses and user information from the packets to generate accounting records with user information. The out-of-band scheme is a contrasting scheme to the in-band scheme. It also uses an Accounting Agent to intercept IP packets and identify the users of IP traffic.
However, the user information is transferred through a separate channel, independent of the transmission of the corresponding IP packets. The Multi-IP scheme provides a different solution for identifying users of IP traffic: it assigns each user in a measured host a unique IP address, so that an IP address identifies a user uniquely and without ambiguity. This way, traditional IP-address-based accounting techniques can be applied to achieve the goal of user-based IP traffic accounting. This dissertation also introduces a user-based IP traffic accounting prototype system developed according to the out-of-band scheme and discusses the application of the user-based IP traffic accounting model in distributed computing environments.
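The accounting records described above can be sketched as simple (IP, user, bytes) tuples that a meter aggregates per user rather than per address; the field names and values below are hypothetical.

```python
# Sketch of the aggregation a user-based accounting meter performs:
# instead of keying flow records by IP address alone, records carry the
# user identified by the Accounting Agent. Record fields are illustrative.
from collections import defaultdict

def aggregate(records):
    """Sum transferred bytes per (host IP, user) instead of per IP only."""
    usage = defaultdict(int)
    for ip, user, nbytes in records:
        usage[(ip, user)] += nbytes
    return dict(usage)

# Two users sharing one multi-user host: IP-based accounting would merge
# their consumption, user-based accounting keeps it separate.
records = [
    ("10.0.0.5", "alice", 1200),
    ("10.0.0.5", "bob",    800),
    ("10.0.0.5", "alice",  300),
]
print(aggregate(records))
```

With IP-only keys, all 2300 bytes would be charged to the host's owner; with user keys, alice's 1500 and bob's 800 bytes are billed separately.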
The present situation of control engineering in the context of automated production can be described as a field of tension between its desired outcome and its actual consideration. On the one hand, the share of control engineering compared to the other engineering domains has significantly increased within the last decades due to rising automation degrees of production processes and equipment. On the other hand, the control engineering domain is still underrepresented within the production engineering process. Another limiting factor is the lack of methods and tools to reduce the software engineering effort and to permit the development of innovative automation applications that ideally support the business requirements.
This thesis addresses this challenging situation by means of the development of a new control engineering methodology. The foundation is built by concepts from computer science that promote structuring and abstraction mechanisms for software development. In this context, the key sources for this thesis are the paradigm of Service-oriented Architecture and concepts from Model-driven Engineering. To mold these concepts into an integrated engineering procedure, ideas from Systems Engineering are applied. The overall objective is to develop an engineering methodology that improves the efficiency of control engineering through a higher adaptability of control software and reduced programming effort by reuse.
Cloud Computing, or the Cloud, has become one of the most widely used technologies in today's world. It is a renowned technology that enables ubiquitous access to tasks that need collaboration or remote monitoring, and it is widely used in daily life as well as in industry. The paradigm uses Internet technologies which rely on best-effort communication. Best-effort communication limits the applicability of the technology in domains where timing is critical. Edge Computing is a paradigm that is seen as a complementary technology to the Cloud. It is expected to solve the Quality of Service (QoS) and latency problems that arise from the growing number of connected devices and the physical distance between the infrastructure and the devices. Edge Computing adds a new tier between Information Technology (IT) and Operational Technology (OT) and brings the computing power close to the source of the data. Computing power near the devices reduces the dependency on the Internet; hence, in case of a network failure, the computation can still continue. Close-proximity deployments also enable the application of Edge Computing in areas where real-time behaviour is necessary. Computation and communication in Edge Computing are performed via Edge Servers. This thesis suggests a standardized and hardware-independent software reference architecture for Edge Servers that can be realized as a framework on servers, to be used in domains where timing is critical. The suggested architecture is scalable, extensible, modular, multi-user capable, and decentralized. In decentralized systems, several factors must be taken into consideration, such as latencies, delays, and the available resources of neighbouring servers. The resulting architecture evaluates these factors and enables real-time execution.
It also hides the complexity of low-level communication and automates the collaboration between Edge Servers to enable seamless offloading in case of a lack of resources. The thesis also validates an exemplary instance of the architecture, a framework called Real-Time Execution Framework (RTEF), with multiple scenarios. The tasks used are resource-demanding and are requested to be executed on an Edge Server in an Edge Network comprising multiple Edge Servers. The servers make decisions by evaluating their availability and determine the optimal location to execute each task without causing deadline misses. Even under heavy load, the decisions made by the servers to execute the tasks on time were correct, proving the concept.
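The offloading decision described above (evaluate neighbouring servers' latencies and available resources, then pick a feasible executor) might look as follows in a toy form; the cost model and server attributes are illustrative assumptions, not the RTEF's actual scheduling algorithm.

```python
# Sketch of deadline-aware offloading: among neighbouring Edge Servers,
# pick the one whose estimated completion time (network latency plus
# execution time at the server's available capacity) meets the task
# deadline. Attributes and the cost model are illustrative.

def choose_server(servers, task_cost, deadline):
    """servers: list of (name, latency_s, available_capacity_ops_per_s).
    Return the feasible server with the earliest completion time."""
    best, best_time = None, None
    for name, latency, capacity in servers:
        if capacity <= 0:
            continue  # no free resources on this server
        completion = latency + task_cost / capacity
        if completion <= deadline and (best_time is None or completion < best_time):
            best, best_time = name, completion
    return best

servers = [
    ("local",  0.000, 100.0),   # busy: little remaining capacity
    ("edge-1", 0.005, 1000.0),  # nearby and mostly idle
    ("edge-2", 0.050, 2000.0),  # fast but farther away
]
print(choose_server(servers, task_cost=500.0, deadline=0.6))
```

Here the overloaded local server would miss the deadline (5 s), so the task is offloaded to the neighbour with the earliest feasible completion time; if no server is feasible, `None` signals an unavoidable deadline miss.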
A Multi-Phase Flow Model Incorporated with Population Balance Equation in a Meshfree Framework
(2011)
This study deals with the numerical solution of a meshfree coupled model of Computational Fluid Dynamics (CFD) and the Population Balance Equation (PBE) for liquid-liquid extraction columns. In modeling the coupled hydrodynamics and mass transfer in liquid extraction columns, one encounters a multidimensional population balance equation that cannot be fully resolved numerically within a reasonable time for steady-state or dynamic simulations. For this reason, there is an obvious need for a new liquid extraction model that captures all the essential physical phenomena and still remains tractable from a computational point of view. This thesis discusses a new model which focuses on the discretization of the external (spatial) and internal coordinates such that the computational time is drastically reduced. For the internal coordinates, the concept of the multi-primary particle method, a special case of the Sectional Quadrature Method of Moments (SQMOM), is used to represent the droplet internal properties. This model is capable of conserving the most important integral properties of the distribution, namely the total number, solute and volume concentrations, and reduces the computational time compared to classical finite difference methods, which require many grid points to conserve the desired physical quantities. On the other hand, due to the discrete nature of the dispersed phase, a meshfree Lagrangian particle method is used to discretize the spatial domain (the extraction column height) using the Finite Pointset Method (FPM). This method avoids the extremely difficult discretization of the convective term with classical finite volume methods, which require many grid points to capture the moving fronts propagating along the column height.
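The idea behind the multi-primary particle representation can be illustrated with a toy sketch: each droplet-size section is lumped into one representative (primary) particle whose diameter is chosen so that the section's droplet number and total volume are conserved. This illustrates the conservation idea only and is not the thesis's SQMOM implementation; all values are made up.

```python
# Toy illustration of section-wise primary particles: lump the droplets
# of each size section into one representative particle that conserves
# the section's droplet number and total volume. Diameters and section
# edges are hypothetical.
import math

def primary_particles(diameters, edges):
    """Return (count, representative diameter) per size section."""
    sections = []
    for lo, hi in zip(edges, edges[1:]):
        ds = [d for d in diameters if lo <= d < hi]
        if not ds:
            continue
        n = len(ds)
        v = sum(math.pi / 6 * d ** 3 for d in ds)          # total volume
        d30 = (6 * v / (math.pi * n)) ** (1 / 3)           # volume-equivalent diameter
        sections.append((n, d30))
    return sections

drops = [0.5, 0.7, 1.1, 1.3, 2.2]                          # droplet diameters (mm)
print(primary_particles(drops, edges=[0.0, 1.0, 2.0, 3.0]))
```

By construction, summing `n * pi/6 * d30**3` over the sections reproduces the total droplet volume exactly, which is the kind of integral property the SQMOM-based model conserves.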
A Multi-Sensor Intelligent Assistance System for Driver Status Monitoring and Intention Prediction
(2017)
Advanced sensing systems, sophisticated algorithms, and increasing computational resources continuously enhance advanced driver assistance systems (ADAS). To date, although some vehicle-based approaches to driver fatigue/drowsiness detection have been realized and deployed, objectively and reliably detecting the fatigue/drowsiness state of a driver without compromising the driving experience remains challenging. In general, the choice of input sensory information is limited in the state-of-the-art work. On the other hand, smart and safe driving, as a representative future trend in the automotive industry worldwide, increasingly demands new dimensions of human-vehicle interaction, as well as the associated perception of the driver's behavioral and biometric data. Thus, the goal of this research work is to investigate the employment of general and custom 3D-CMOS sensing concepts for driver status monitoring, and to explore the improvement gained by merging/fusing this information with other salient customized information sources for robustness/reliability. This thesis presents an effective multi-sensor approach with novel features to driver status monitoring and intention prediction, aimed at drowsiness detection, based on a multi-sensor intelligent assistance system -- DeCaDrive, which is implemented on an integrated soft-computing system with multi-sensing interfaces in a simulated driving environment. Utilizing active illumination, the IR depth camera of the realized system provides rich facial and body features in 3D in a non-intrusive manner. In addition, a steering angle sensor, a pulse rate sensor, and an embedded impedance spectroscopy sensor are incorporated to aid in the detection/prediction of the driver's state and intention. A holistic design methodology for ADAS encompassing both driver- and vehicle-based approaches to driver assistance is discussed in the thesis as well.
Multi-sensor data fusion and hierarchical SVM techniques are used in DeCaDrive to facilitate the classification of driver drowsiness levels, based on which a warning can be issued in order to prevent possible traffic accidents. The realized DeCaDrive system achieves up to 99.66% classification accuracy on the defined drowsiness levels and exhibits promising features such as head/eye tracking, blink detection, and gaze estimation that can be utilized in human-vehicle interactions. However, the driver's state of "microsleep" can hardly be reflected in the sensor features of the implemented system. General improvements in the sensitivity of the sensory components and in the system's computation power are required to address this issue. Possible new features and development considerations for DeCaDrive aiming to gain market acceptance in the future are discussed in the thesis as well.
The simulation of the cutting process challenges established methods due to large deformations and topological changes. In this work, a particle finite element method (PFEM) is presented, which combines the benefits of discrete modeling techniques and methods based on continuum mechanics. A crucial part of the PFEM is the detection of the boundary of a set of particles. The impact of this boundary detection method on the structural integrity is examined, and a relation between the key parameter of the method and the eigenvalues of strain tensors is elaborated. The influence of important process parameters on the cutting force is studied and a comparison to an empirical relation is presented.
The dissertation is concerned with the numerical solution of Fokker-Planck equations in high dimensions arising in the study of the dynamics of polymeric liquids. Traditional methods based on a tensor product structure are not applicable in high dimensions, because the number of nodes required to yield a fixed accuracy increases exponentially with the dimension; a phenomenon often referred to as the curse of dimension. Particle methods, or finite point set methods, are known to break the curse of dimension. The Monte Carlo method (MCM) applied to such problems is 1/sqrt(N) accurate, where N is the cardinality of the point set considered, independent of the dimension. Deterministic versions of the Monte Carlo method, called quasi-Monte Carlo (QMC) methods, are quite effective in integration problems, where accuracy of the order of 1/N can be achieved, up to a logarithmic factor. However, such a replacement cannot be carried over to particle simulations due to the correlation among the quasi-random points. The method proposed by Lecot (C. Lecot and F. E. Khettabi, Quasi-Monte Carlo simulation of diffusion, Journal of Complexity, 15 (1999), pp. 342-359) is the only known QMC approach, but it not only leads to large particle numbers; the proven order of convergence is also 1/N^(2s) in dimension s. We modify the method presented there in such a way that the new method works with reasonable particle numbers even in high dimensions and has a better order of convergence. Though the provable order of convergence is 1/sqrt(N), the results show less variance, and thus the proposed method still slightly outperforms standard MCM.
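The MC-versus-QMC contrast in plain integration, where QMC is known to be effective, can be demonstrated with a Halton point set. This illustrates the 1/sqrt(N)-versus-roughly-1/N behaviour mentioned above; it is not the modified particle scheme of the thesis.

```python
# Estimate the integral of f(x, y) = x*y over [0,1]^2 (exact value 1/4)
# with pseudo-random points and with a 2D Halton low-discrepancy set.
# The QMC error is typically far below the MC error at equal N.
import random

def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def estimate(points):
    """Sample-mean estimate of the integral of x*y over the unit square."""
    return sum(x * y for x, y in points) / len(points)

n = 4096
rng = random.Random(0)                       # fixed seed for reproducibility
mc_pts = [(rng.random(), rng.random()) for _ in range(n)]
qmc_pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]

mc_err = abs(estimate(mc_pts) - 0.25)
qmc_err = abs(estimate(qmc_pts) - 0.25)
print(mc_err, qmc_err)
```

As the abstract notes, this gain does not transfer directly to particle simulations of diffusion, because the time-stepping correlates the quasi-random points; that is the problem the thesis's modified method targets.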
This thesis is concerned with a phase field model for martensitic transformations in metastable austenitic steels. Within the phase field approach, an order parameter is introduced to indicate whether the present phase is austenite or martensite. The evolving microstructure is described by the evolution of the order parameter, which is assumed to follow the time-dependent Ginzburg-Landau equation. The elastic phase field model is enhanced in two different ways to take further phenomena into account. First, dislocation movement is considered in a crystal plasticity setting. Second, the elastic model for martensitic transformations is combined with a phase field model for fracture. Finite element simulations are used to study separately the individual effects that contribute to the microstructure formation.
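A minimal 1D sketch of a time-dependent Ginzburg-Landau (Allen-Cahn-type) evolution of the order parameter can make the mechanism concrete: explicit Euler time stepping with a double-well potential whose wells at phi = 0 and phi = 1 stand for the two phases. All parameters are illustrative, and the thesis's coupled elastic, plastic, and fracture terms are omitted.

```python
# 1D sketch of d(phi)/dt = -M * ( f'(phi) - kappa * d2(phi)/dx2 )
# with the double-well potential f(phi) = phi^2 (1 - phi)^2, wells at
# phi = 0 and phi = 1 standing for the two phases. Explicit Euler on a
# periodic grid; all parameters are illustrative.

def evolve(phi, steps, dt=0.01, dx=1.0, mobility=1.0, kappa=1.0):
    n = len(phi)
    for _ in range(steps):
        new = phi[:]
        for i in range(n):
            lap = (phi[(i - 1) % n] - 2 * phi[i] + phi[(i + 1) % n]) / dx ** 2
            dfdphi = 2 * phi[i] * (1 - phi[i]) * (1 - 2 * phi[i])  # f'(phi)
            new[i] = phi[i] - dt * mobility * (dfdphi - kappa * lap)
        phi = new
    return phi

# A sharp interface between the two phases relaxes into a diffuse one,
# while the bulk regions stay in their respective wells.
phi0 = [0.0] * 16 + [1.0] * 16
phi = evolve(phi0, steps=500)
print(round(min(phi), 3), round(max(phi), 3))
```

The gradient term `kappa * lap` penalizes sharp interfaces and sets the diffuse interface width, while the well term drives each point toward one of the two phases; the full model adds elastic driving forces to this balance.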
Human interferences within the Earth system are accelerating, leading to major impacts and feedbacks that we are just beginning to understand. Summarized under the term 'global change', these impacts put human and natural systems under ever-increasing stress and impose a threat to human well-being, particularly in the Global South. Global governance bodies have acknowledged that decisive measures have to be taken to mitigate the causes and to adapt to these new conditions. Nevertheless, neither current international nor national pledges and measures reach the effectiveness needed to sustain global human well-being under accelerating global change. On the contrary, competing interests are not only paralyzing the international debate but also playing an increasingly important role in debates over social fragmentation and societal polarization on national and local scales. This interconnectedness of the natural and the social system, and its impact on social phenomena such as cooperation and conflict, needs to be understood better in order to strengthen social resilience to future disturbances and drive societal transformation towards socially desirable futures, while at the same time avoiding path dependencies along continuing colonial continuities. As a case example, this thesis provides insights into southwestern Amazonia, where the intertwined challenges of the human contribution to global change in all its dimensions, as well as human adaptation and mitigation attempts, become starkly visible. As such, southwestern Amazonia, with its high social, economic, and biological diversity, is a good example for studying the deep interrelations of humans with nature and the consequences these relations have on social cohesion amid an ecological crisis.
Therefore, this thesis takes a social-ecological perspective on conflicts and social cohesion. Social cohesion is in a wider sense understood as the way "how members of a society, group, or organization relate to each other and work together" (Dany and Dijkzeul 2022, p. 12). Particularly in contexts of violence, conflict, and fragility, little research has investigated the role of social cohesion in governing public goods and building resilience for (future) environmental crises. At the same time, governments and international decision-makers increasingly acknowledge the role of social cohesion, comprising both relations between social groups and relations between groups and the state, in building resilience against crises. Facing uncertainty in how natural and social systems react to certain disturbances and shocks, the governance of potential tipping points is an additional challenge for the governance of social-ecological systems (SES). Therefore, this thesis asks: "How does governance shape pathways towards cooperative or conflictive social-ecological tipping points?" The results of this thesis can be distinguished into theoretical/conceptual results and empirical results. An initial systematic literature review on the nexus of climate change, land use, and conflict revealed an extensive body of literature on direct effects, for example drought-related land use conflicts, with diverging opinions on whether global warming increases the risk of conflicts or not. Adding the perspective of indirect implications, we further identified research gaps, as well as a lack of policy recognition, concerning the negative externalities of climate mitigation and adaptation measures on land use and conflict.
On a conceptual note, taking a social cohesion perspective into the analysis is beneficial for shifting the focus from a problem-oriented perspective of vulnerabilities to global change and potentially resulting conflicts to a solution-oriented perspective of enhancing agency and resilience to strengthen collaboration. The developed Social Cohesion Conceptual Model and the related analytical framework facilitate the incorporation of societal dynamics into the analysis of SES dynamics. In addition, the elaborated Tipping Multiverse Framework took up this idea and enhanced it with a more detailed perspective on the soil ecosystem and the household livelihood system to identify entry points to potential social-ecological tipping cascades. As such, the Tipping Multiverse Framework offers two matrices that can advance the understanding of regional SES by identifying core processes, functioning, and links in each tipping element and thus provide entry points to identify potential tipping cascades across SES sub-systems. The exemplified application of these two frameworks to southwestern Amazonia shows the analytical potential of both proposed frameworks in advancing the understanding of social-ecological tipping points and potential tipping cascades in a regional SES.
On an empirical note, zooming in on questions of governance by applying a political ecology lens to human security, we find that 'glocal' resource governance often reproduces, amplifies, or creates power imbalances and divisions on and between different scales. Our results show that the winners of resource extraction are mostly found at the national and international scale, while local communities receive little benefit and are left vulnerable to externalities. Hence, our study contributes to the existing research by stressing the importance of one underlying question: "governance by whom and for whom?" This question raised the demand to understand the underlying dynamics of resource governance and resulting conflicts. Therefore, we analyzed how (environmental) institutions influence the major drivers of social-ecological conflicts over land in and around three protected areas, Tambopata (Peru), the Extractive Reserve Chico Mendes (Brazil), and Manuripi (Bolivia). We found that state institutions, in particular, affect key conflict drivers as follows: overlapping responsibilities of governance institutions and limited enforcement of regulations protecting and empowering rural and disadvantaged populations enable external actors to (illegally) access and control resources in the protected areas. Consequently, the already fragile social contract between the residents of the protected area and its surrounding areas and the central state is further weakened by the expanding influence of criminal organizations that oppose the state's authority. For state institutions to avoid aggravating these conflict drivers and instead better manage them or even contribute to conflict prevention and mitigation, a transformation from reactive to reflexive institutions and the development of new reflexive governance competencies is needed.
This need for reflexive governance becomes particularly visible when sudden disturbances or shocks impact the SES. Our analysis of the impacts of the COVID-19 pandemic on the interconnections of land use change, ecosystem services, human agency, conflict, and cooperation shows that the pandemic has had a severe influence on the human security of marginalized social groups in southwestern Amazonia. Civil society actions have been an essential strategy in the fight against COVID-19, not just in the health sector but also in the economic, political, social, and cultural realms. However, our research also showed that the pandemic has consolidated and partly renewed criminal structures, while the already weak state has fallen further behind due to the additional tasks of managing the pandemic and other disasters such as floods.
In conclusion, it can be said that the reflexivity of governance is crucial for fostering cooperation and preventing conflicts in the realm of social-ecological systems. By not only reacting to changes that have already occurred but also reflecting upon potential future changes, governance can shape transformation pathways away from detrimental and towards life-sustaining pathways. It can do so by exercising agency across scales to avoid the crossing of detrimental social-ecological tipping points and instead trigger life-sustaining tipping points that contribute to global social-ecological well-being.
Ecotoxicology is the science that researches the effects of toxicants on biological entities. Following the famous toxicological principle formulated in 1538 by von Hohenheim, known as Paracelsus, essentially all chemicals are able to act as toxicants. Unlike human toxicology, which focuses on toxic effects on individuals and populations of one species, Homo sapiens, ecotoxicology is not constrained in its scope of biological entities. It is interested in toxic effects on individuals and populations of any species (excluding humans), and on communities and entire ecosystems (Walker et al., 2012; Köhler & Triebskorn, 2013; Newman 2014). One example of where the ecological foundation of ecotoxicology manifests itself are indirect effects, which are effects on biological entities that are not directly caused by chemicals but instead are mediated by ecological interactions and environmental conditions (Walker et al., 2012). With this large scope, ecotoxicology is an inter- and multidisciplinary science that links chemical, biological and environmental knowledge.
With millions of species and at least 100,000 chemicals that potentially interact with them in the environment (Wang et al., 2021), ecotoxicology has a large ground to cover. Among these sheer numbers, there are some groups that are of special importance regarding their potential environmental impact. Pesticides are one group of chemicals that have a large, if not the largest, ecotoxicological relevance: they are toxic for biological entities, sometimes in very low concentrations, and they are used in large amounts and globally (Bernhardt et al., 2017). The high toxicity of pesticides, much higher than that of most other groups of chemicals, is a result of their intended use: they are designed to reduce detrimental effects of, e.g., insects, plants or fungi on agriculture by controlling the respective populations, often, and in the sense of their Latin name, through induced lethality (Walker et al., 2012). However, they do not act specifically enough to be toxic only for the intended species that are considered pests, but also show toxicity towards species living in habitats next to pesticide-treated areas. The widespread agricultural use of pesticides, on the other hand, is a result of their work and cost efficiency in securing yields, but also results in the exposure of ecosystems at a global scale (Sharma et al., 2019). In summary, pesticides can be abstractly seen as toxicity intentionally applied to agricultural areas, unintentionally also exposing organisms in non-agricultural areas to toxicity.
The risks of pesticide use for ecosystems have led major jurisdictions, like the United States of America (US) and the European Union (EU), to enact elaborate regulatory processes that require a registration of pesticides prior to use (EFSA, 2013; EPA, 2011; Stehle & Schulz, 2015b). A by-product of these registration processes are regulatory threshold levels (RTLs), which can be used for scientific risk analysis outside the regulatory process (Stehle & Schulz, 2015a). The RTL for an organism group is essentially derived from the most sensitive effect concentration found in standardized toxicity tests for species representative of the group, adjusted by a safety factor, although the specifics differ among regulatory processes. Conceptually, RTLs mark the threshold that separates environmental concentrations associated with acceptable risk (concentrations below the RTL) from concentrations associated with unacceptable risk (concentrations above the RTL).
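Conceptually, the derivation reduces to taking the most sensitive standardized effect concentration and adjusting it by a safety factor. The sketch below uses a hypothetical factor of 10 and made-up concentrations, since the actual specifics differ among regulatory processes.

```python
# Conceptual sketch of a regulatory threshold level (RTL): the most
# sensitive (lowest) effect concentration among standardized test
# species, divided by an assessment (safety) factor. The factor of 10
# and all concentrations are hypothetical.

def rtl(effect_concentrations, assessment_factor=10):
    """Lowest effect concentration divided by the assessment factor."""
    return min(effect_concentrations) / assessment_factor

fish_ec = [12.0, 4.5, 30.0]   # hypothetical acute effect concentrations (ug/L)
threshold = rtl(fish_ec)
print(threshold)              # 0.45 ug/L
print(5.0 > threshold)        # a measured 5.0 ug/L would exceed the RTL
```

A measured environmental concentration is then simply compared against this threshold: below it, the risk counts as regulatorily acceptable; above it, as unacceptable.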
Due to the high degree of procedural standardization in the derivation of RTLs, they have proven to be a good measure for making the toxicities of different pesticides comparable, and they were employed in a series of studies to characterize environmental pesticide concentrations (e.g., Stehle & Schulz, 2015a; Stehle et al., 2018; Wolfram et al., 2018; Wolfram et al., 2021; Schulz et al., 2021, also in Appendix B; Bub et al., 2023, also in Appendix C). RTLs reflect, for instance, that regulatory unacceptable concentrations of insecticides towards fish range between 3 ng/L (deltamethrin, a pyrethroid) and 110 mg/L (imidacloprid, a neonicotinoid), a range of almost eight orders of magnitude. At the same time, imidacloprid is very toxic to pollinators (RTL of 1.52 ng/organism), while more than 95% of all insecticides are less toxic to pollinators, with regulatory unacceptable concentrations ranging as high as 1.6 mg/organism, a toxicity six orders of magnitude lower than that of imidacloprid.
At large scales, ecotoxicology deals with pesticide impacts on national (e.g., Bub et al., 2023; Douglas & Tooker, 2015; Hallmann et al., 2014; Schulz et al., 2021; Stehle et al., 2019; Wolfram et al., 2018), continental (Wolfram et al., 2021) or global levels (Stehle & Schulz, 2015a; Stehle et al., 2018). This maximization of the considered scale is in line with the general tendency of ecotoxicology towards larger scales, but generally requires new methodological and conceptual approaches. Historically, individual chemicals and groups of chemicals have been identified that mark, owing to their immense release into the environment, main disruptors of processes in the Earth system: greenhouse gases for climate change, chlorofluorocarbons for the depletion of the atmosphere's ozone layer, dichlorodiphenyltrichloroethane and other organochlorides for bioaccumulation in food webs and declines in bird populations, etc. For other phenomena, however, like declines in biodiversity or in the numbers of insect species (Outhwaite et al., 2020; Seibold et al., 2019; Vörösmarty et al., 2010), the active part of chemical pollution is understood to a much lesser extent. There are indications that pesticides may play a major role in these phenomena.
This dissertation contributes to the research on large-scale risks of pesticide use, and to large-scale ecotoxicology in general, in several ways (Figure 1). In Chapter 2, it presents a labeled property graph, the MAGIC graph (Meta-Analysis of the Global Impact of Chemicals graph), as a solution to the methodological issues that arise when increasing amounts of data from more and more sources are combined for analysis (Bub et al., 2019; also in Appendix A). The MAGIC graph is able to link chemical information from different sources, even if these sources use different nomenclatures. This enables analyses that incorporate toxicological data, like thousands of RTLs (for different organism groups and jurisdictions) for hundreds of pesticides, and information on pesticide use and chemical classes. The MAGIC graph is implemented in a way that allows it to be organically extended by additional chemical, biological and environmental data, and eventually scaled to all chemicals of environmental interest.
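The linking idea behind a labeled property graph can be illustrated with a minimal sketch: nodes carry labels and properties, edges carry relation types, and records from differently named sources resolve to a single chemical node. The node IDs, labels and the RTL value below are hypothetical illustrations, not contents of the actual MAGIC graph:

```python
# Toy labeled property graph: two source records with different
# nomenclatures point at one chemical node, so toxicity data attached
# to that node is reachable from either source. Values are illustrative.

nodes = {
    "chem:1912-24-9":    {"label": "Chemical", "name": "Atrazine"},
    "src:EPA:atrazine":  {"label": "SourceRecord", "nomenclature": "EPA"},
    "src:EU:atrazin":    {"label": "SourceRecord", "nomenclature": "EU"},
    "rtl:fish:atrazine": {"label": "RTL", "group": "fish", "value_ug_l": 65.0},
}
edges = [
    ("src:EPA:atrazine", "IDENTIFIES", "chem:1912-24-9"),
    ("src:EU:atrazin",   "IDENTIFIES", "chem:1912-24-9"),
    ("chem:1912-24-9",   "HAS_RTL",    "rtl:fish:atrazine"),
]

def linked(node_id, relation):
    """Follow outgoing edges of one relation type from a node."""
    return [dst for src, rel, dst in edges if src == node_id and rel == relation]

# Resolve an EU-named record to the chemical, then to its RTL node.
chem = linked("src:EU:atrazin", "IDENTIFIES")[0]
print(nodes[linked(chem, "HAS_RTL")[0]]["value_ug_l"])
```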
Chapter 3 shows how the combination of the linked pesticide data with a systemic consideration of pesticide use supports the interpretation of pesticide risks in the US (Schulz et al., 2021; also in Appendix B). This systemic approach includes a new measure, the total applied toxicity (TAT), which integrates applied pesticide amounts and pesticide toxicities, and the consideration of pesticide use as a complex system whose state and evolution can be visualized in phase-space plots. The combination of the described methods and concepts led to a novel view on pesticide risks in the US and can provide a framework for future ecotoxicological research at large scales.
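A TAT-style aggregate can be sketched as applied amounts weighted by the inverse of an organism-group threshold, so one unit corresponds to one threshold-equivalent dose. The exact weighting and all numbers below are illustrative assumptions, not the published TAT definition in detail:

```python
# Hedged sketch of a total-applied-toxicity-style measure:
# sum over pesticides of (applied amount / toxicity threshold).
# Amounts and thresholds are made-up and share the same mass unit.

def total_applied_toxicity(applications):
    """Sum of amount / threshold over all pesticide applications."""
    return sum(amount_kg / threshold_kg for amount_kg, threshold_kg in applications)

# (applied amount in kg, organism-group threshold in kg):
# a large amount of a mildly toxic compound vs. a small amount of a
# highly toxic one -- the latter dominates the aggregate.
apps = [(1000.0, 1e-4), (50.0, 1e-6)]
tat = total_applied_toxicity(apps)   # about 1e7 + 5e7 = 6e7 threshold equivalents
```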
Chapter 4 presents the results of applying the methods and concepts of the US pesticide risk analysis to Germany (Bub et al., 2023; also in Appendix C). A pesticide risk analysis of Germany is of special importance in the context of the EU's goal to drastically reduce pesticide risks (European Commission, 2020) and Germany being one of the most important agricultural producers in the EU. Comparing the results for Germany to those for the US also allowed evaluating the impact of scale and of differing RTLs, information that can help other large-scale ecotoxicological assessments. Chapter 5 adds a conclusion and an outlook.
In this thesis, we investigate a statistical model for precipitation time series recorded at a single site. The sequence of observations consists of rainfall amounts aggregated over time periods of fixed duration. As the properties of this sequence depend strongly on the length of the observation intervals, we follow the approach of Rodriguez-Iturbe et al. [1] and use an underlying model for rainfall intensity in continuous time. In this idealized representation, rainfall occurs in clusters of rectangular cells, and each observation is treated as the sum of cell contributions during a given time period. Unlike the previous work, we use a multivariate lognormal distribution for the temporal structure of the cells and clusters. After formulating the model, we develop a Markov Chain Monte Carlo algorithm for fitting it to a given data set. A particular problem we have to deal with is the need to estimate the unobserved intensity process alongside the parameters of interest. The performance of the algorithm is tested on artificial data sets generated from the model. [1] I. Rodriguez-Iturbe, D. R. Cox, and Valerie Isham. Some models for rainfall based on stochastic point processes. Proc. R. Soc. Lond. A, 410:269-288, 1987.
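The flavor of such an MCMC fit can be conveyed by a toy random-walk Metropolis sampler for a single lognormal parameter on synthetic data. This is a generic sketch only: the algorithm in the thesis additionally updates the latent cluster/cell configuration, which is omitted here, and all numbers are illustrative:

```python
# Toy random-walk Metropolis step: sample the log-scale mean mu of a
# lognormal cell depth from synthetic data (known sigma, flat prior).
import math, random

random.seed(0)
data = [random.lognormvariate(1.0, 0.5) for _ in range(200)]   # synthetic depths
logs = [math.log(x) for x in data]

def log_post(mu, sigma=0.5):
    # Gaussian log-likelihood of the log-depths, flat prior on mu
    return -sum((y - mu) ** 2 for y in logs) / (2 * sigma ** 2)

mu, chain = 0.0, []
for _ in range(2000):
    prop = mu + random.gauss(0.0, 0.1)                  # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(mu):
        mu = prop                                       # Metropolis accept
    chain.append(mu)

est = sum(chain[1000:]) / len(chain[1000:])             # posterior mean, near 1.0
```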
The main goal of this work is to model size effects, as they occur in materials with an intrinsic microstructure when considering specimens that are not orders of magnitude larger than this microstructure. The micromorphic continuum theory, as a generalized continuum theory, is well suited to account for the occurring size effects. Thereby, additional degrees of freedom capture the independent deformations of these microstructures, while providing additional balance equations. In this thesis, the deformational and configurational mechanics of the micromorphic continuum is exploited in a finite-deformation setting. A constitutive and numerical framework is developed, in which the material-force method is also advanced. Furthermore, the multiscale modelling of thin material layers with a heterogeneous substructure is of interest. To this end, a computational homogenization framework is developed, which allows the constitutive relation between traction and separation to be obtained numerically, in a nested solution scheme, from the properties of the underlying micromorphic mesostructure. Within the context of micromorphic continuum mechanics, concepts of both gradient and micromorphic plasticity are developed by systematically varying key ingredients of the respective formulations.
Interest in the exploration of new hydrocarbon fields as well as deep geothermal reservoirs is growing steadily. The analysis of seismic data specific to such exploration projects is very complex and requires deep knowledge in geology, geophysics, petrology, etc., from interpreters, as well as advanced tools able to recover particular properties. Wavelet techniques have been highly successful in signal processing, data compression, noise reduction, etc. They make it possible to break complicated functions into many simple pieces at different scales and positions, which makes the detection and interpretation of local events significantly easier.
In this thesis, mathematical methods and tools are presented that are applicable to seismic data postprocessing in regions with non-smooth boundaries. We provide wavelet techniques related to solutions of the Helmholtz equation. As an application, we are interested in seismic data analysis. A similar idea, constructing wavelet functions from the limit and jump relations of the layer potentials, was first suggested by Freeden and his Geomathematics Group.
The particular difficulty in such approaches is the formulation of limit and jump relations for the surfaces used in seismic data processing, i.e., non-smooth surfaces in various topologies (for example, uniform and quadratic). The essential idea is to replace the concept of parallel surfaces, known for a smooth regular surface, by appropriate substitutes for non-smooth surfaces. Using the jump and limit relations formulated for regular surfaces, Helmholtz wavelets can be introduced that recursively approximate functions on surfaces with edges and corners. A notable feature is that the wavelet construction admits an efficient implementation in the form of a tree algorithm for the fast numerical computation of functions on the boundary.
In order to demonstrate the applicability of the Helmholtz FWT, we study a seismic image obtained by reverse time migration based on a finite-difference implementation. Regarding the filtering and denoising requirements of such migration algorithms, the wavelet decomposition is successfully applied to this image for the attenuation of low-frequency artifacts and noise. An essential feature is the space localization of the Helmholtz wavelets, which makes it possible to examine the velocity field pointwise. Moreover, the multiscale analysis reveals additional geological information from optical features.
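The scale-and-position splitting described above can be illustrated with a toy one-dimensional Haar-style decomposition. This is only a generic analogue of multiscale analysis; the actual Helmholtz wavelets on non-smooth surfaces are far more involved:

```python
# Generic 1D Haar-style multiscale decomposition: split a signal into a
# coarse mean plus detail bands at successively coarser scales.

def haar_decompose(samples):
    """Return (overall mean, detail bands from finest to coarsest)."""
    details = []
    level = list(samples)
    while len(level) > 1:
        avg = [(level[2*i] + level[2*i+1]) / 2 for i in range(len(level) // 2)]
        det = [(level[2*i] - level[2*i+1]) / 2 for i in range(len(level) // 2)]
        details.append(det)
        level = avg
    return level[0], details

mean, bands = haar_decompose([4.0, 2.0, 5.0, 7.0])
# mean = 4.5; bands = [[1.0, -1.0], [-1.5]]
```

Local events show up as isolated large detail coefficients at the corresponding scale and position, which is what makes their detection easy.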
The present thesis describes the development and validation of a viscosity adaption method for the numerical simulation of non-Newtonian fluids on the basis of the Lattice Boltzmann Method (LBM), as well as the development and verification of the related software bundle SAM-Lattice.
By now, Lattice Boltzmann Methods are established as an alternative approach to classical computational fluid dynamics methods. The LBM has been shown to be an accurate and efficient tool for the numerical simulation of weakly compressible or incompressible fluids. Fields of application range from turbulent simulations through thermal problems to acoustic calculations, among others. The transient nature of the method and the need for a regular, grid-based, non-body-conformal discretization make the LBM ideally suitable for simulations involving complex solids. Such geometries are common, for instance, in the food processing industry, where fluids are mixed by static mixers or agitators. These fluid flows are often laminar and non-Newtonian.
This work is motivated by the immense practical use of the Lattice Boltzmann Method, which is limited due to stability issues. The stability of the method is mainly influenced by the discretization and the viscosity of the fluid. Thus, simulations of non-Newtonian fluids, whose kinematic viscosity depends on the shear rate, are problematic. Several authors have shown that the LBM is capable of simulating those fluids. However, the vast majority of the simulations in the literature are carried out for simple geometries and/or moderate shear rates, where the LBM is still stable. Special care has to be taken in practical non-Newtonian Lattice Boltzmann simulations to keep them stable. A straightforward way is to truncate the modeled viscosity range by numerical stability criteria. This is an effective approach, but from the physical point of view the viscosity bounds are chosen arbitrarily. Moreover, these bounds depend on and vary with the grid and time step size and, therefore, with the simulation Mach number, which is freely chosen at the start of the simulation. Consequently, the modeled viscosity range may not fit the actual range of the physical problem, because the correct simulation Mach number is unknown a priori. A way around is to perform precursor simulations on a fixed grid to determine a possible time step size and simulation Mach number, respectively. These precursor simulations can be time consuming and expensive, especially for complex cases and a number of operating points. This makes the LBM unattractive for practical simulations of non-Newtonian fluids.
The essential novelty of the method, developed in the course of this thesis, is that the numerically modeled viscosity range is consistently adapted to the actual physically exhibited viscosity range through change of the simulation time step and the simulation Mach number, respectively, while the simulation is running. The algorithm is robust, independent of the Mach number the simulation was started with, and applicable for stationary flows as well as transient flows. The method for the viscosity adaption will be referred to as the "viscosity adaption method (VAM)" and the combination with LBM leads to the "viscosity adaptive LBM (VALBM)".
Besides the introduction of the VALBM, a goal of this thesis is to offer assistance, in the spirit of a theory guide, to students and research assistants concerning the theory of the Lattice Boltzmann Method and its implementation in SAM-Lattice. In Chapter 2, the mathematical foundation of the LBM is given and the route from the BGK approximation of the Boltzmann equation to the Lattice Boltzmann (BGK) equation is delineated in detail.
The derivation is restricted to isothermal flows. Restrictions of the method, such as the limitation to low-Mach-number flows, are highlighted, and the accuracy of the method is discussed.
SAM-Lattice is a C++ software bundle developed by the author and his colleague Dipl.-Ing. Andreas Schneider. It is a highly automated package for the simulation of isothermal flows of incompressible or weakly compressible fluids in 3D on the basis of the Lattice Boltzmann Method. At the time of writing of this thesis, SAM-Lattice comprises five components. The main components are the highly automated lattice generator SamGenerator and the Lattice Boltzmann solver SamSolver. Postprocessing is done with ParaSam, our extension of the open-source visualization software ParaView. Additionally, domain decomposition for MPI parallelism is done by SamDecomposer, which makes use of the graph partitioning library MeTiS. Finally, all mentioned components can be controlled through a user-friendly GUI (SamLattice), implemented by the author using Qt, including features to visually track output data.
In Chapter 3, some fundamental aspects of the implementation of the main components, including the corresponding flow charts, are discussed. Further details on the implementation are given in the comprehensive programmer's guides to SamGenerator and SamSolver.
In order to ensure the functionality of the implementation of SamSolver, the solver is verified in Chapter 4 for Stokes's First Problem, the suddenly accelerated plate, and for Stokes's Second Problem, the oscillating plate, both for Newtonian fluids. Non-Newtonian fluids are modeled in SamSolver with the power-law model according to Ostwald-de Waele. The implementation for non-Newtonian fluids is verified for the Hagen-Poiseuille channel flow in conjunction with a convergence analysis of the method. At the same time, the local grid refinement as implemented in SamSolver is verified. Finally, the verification of higher-order boundary conditions is done for the 3D Hagen-Poiseuille pipe flow for both Newtonian and non-Newtonian fluids.
In Chapter 5, the theory of the viscosity adaption method is introduced. For the adaption process, a target collision frequency or target simulation Mach number must be chosen and the distributions must be rescaled according to the modified time step size. A convenient choice is one of the stability bounds. The time step size for the adaption step is deduced from the target collision frequency \(\Omega_t\) and the currently minimal or maximal shear rate in the system, while obeying auxiliary conditions for the simulation Mach number. The adaption is done in the collision step of the Lattice Boltzmann algorithm. We use the transformation matrices of the MRT model to map from distribution space to moment space and vice versa. The actual scaling of the distributions is conducted on the back mapping, because we use the transformation matrix on the basis of the new adaption time step size. This is followed by an additional rescaling of the non-equilibrium part of the distributions, owing to the form of the definition of the discrete stress tensor in the LBM context. It is thus clear that the VAM is applicable to the SRT model as well as the MRT model, with virtually no extra cost in the latter case. Also in Chapter 5, the multi-level treatment is discussed.
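The core of the time-step deduction can be sketched under the usual lattice-unit relations (\(c_s^2 = 1/3\), \(\nu_\text{lattice} = c_s^2\,(1/\Omega - 1/2)\), \(\nu_\text{lattice} = \nu_\text{physical}\,\Delta t / \Delta x^2\)). This is a simplified illustration, not the full VAM: the accompanying rescaling of the distributions is omitted, and all numerical values are made up:

```python
# Simplified sketch of the viscosity-adaption time-step choice:
# map the extreme physical viscosity onto a target collision frequency.

CS2 = 1.0 / 3.0   # lattice speed of sound squared

def power_law_viscosity(shear_rate, K, n):
    """Ostwald-de Waele power law: nu = K * gamma^(n-1)."""
    return K * shear_rate ** (n - 1.0)

def adapted_time_step(omega_target, dx, nu_physical):
    """dt such that nu_physical corresponds to omega_target on grid dx."""
    nu_lattice = CS2 * (1.0 / omega_target - 0.5)
    return nu_lattice * dx ** 2 / nu_physical

# Shear-thinning fluid (n < 1): the maximal shear rate yields the minimal
# viscosity, which is mapped onto a stability bound (illustrative values).
nu_min = power_law_viscosity(shear_rate=100.0, K=1e-3, n=0.5)   # ~1e-4 m^2/s
dt = adapted_time_step(omega_target=1.8, dx=1e-3, nu_physical=nu_min)
```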
Depending on the target collision frequency and the target Mach number, the VAM can be used to optimally exploit the viscosity range that can be modeled within the stability bounds, or to drastically accelerate the simulation. This is shown in Chapter 6. The viscosity adaptive LBM is verified in the stationary case for the Hagen-Poiseuille channel flow and in the transient case for the Womersley flow, i.e., the pulsatile 3D Hagen-Poiseuille pipe flow. Although the VAM is used here for fluids that can be modeled with the power-law approach, the implementation of the VALBM is straightforward for other non-Newtonian models, e.g., the Carreau-Yasuda or Cross model. In the same chapter, the VALBM is validated for the case of a propeller viscosimeter developed at the chair SAM. To this end, experimental data of the torque on the impeller for three shear-thinning non-Newtonian liquids serve for the validation. The VALBM shows excellent agreement with the experimental data for all of the investigated fluids and in every operating point. For comparison, a series of standard LBM simulations is carried out with different simulation Mach numbers, which partly show errors of several hundred percent. Moreover, in Chapter 7, a sensitivity analysis of the parameters used within the VAM is conducted for the simulation of the propeller viscosimeter.
Finally, the accuracy of non-Newtonian Lattice Boltzmann simulations with the SRT and the MRT model is analyzed in detail. Previous work for Newtonian fluids indicates that, depending on the numerical value of the collision frequency \(\Omega\), additional artificial viscosity is introduced due to the finite difference scheme, which negatively influences the accuracy. For the non-Newtonian case, an error estimate in the form of a functional is derived on the basis of a series expansion of the Lattice Boltzmann equation. This functional can be solved analytically for the case of the Hagen-Poiseuille channel flow of non-Newtonian fluids. The estimation of the error minimum is excellent in regions where the \(\Omega\) error is the dominant source of error as opposed to the compressibility error.
The result of this dissertation is a verified and validated software bundle based on the viscosity adaptive Lattice Boltzmann Method. The work restricts itself to the simulation of isothermal, laminar flows with small Mach numbers. As further research goals, testing the VALBM with the minimal error estimate and investigating the VALBM for turbulent flows are suggested.
The subject of this thesis is the probabilistic reliability assessment of notched metallic components under periodic constant-amplitude loads with respect to the failure mode of high-cycle fatigue. The latter refers to crack initiation within the considered component caused by a high number, typically millions, of load cycles characterized by their small magnitude in terms of the material's static strength. In order to estimate the probability of failure due to high-cycle fatigue for a specified component under given loads, a new empirical model based on weakest-link theory is developed which describes a probabilistic and component-specific constant-life diagram with respect to the anticipated design life. A conventional, non-probabilistic constant-life diagram reflects a discrete design boundary in terms of mean stress and stress amplitude, typically based on test results for unnotched coupons made from the material of interest. Its application to the design of a notched component is established by identifying the stress conditions at the component's hot spot with those acting in the smooth coupons during the tests, and comparing those hot-spot conditions with the design boundary described by the constant-life diagram. Disregarded influences, such as the notch and statistical size effects, have to be incorporated by respective correction factors. The proposed probabilistic model, on the other hand, describes a continuous field of failure probabilities in the design stress plane, taking into account not only the hot-spot stresses but the entire cyclic stress field acting throughout the component. In this way, the methodology directly accounts for notch and statistical size effects. Responsible for providing this greater scope is the weakest-link concept, which represents a non-local stochastic approach for quantifying the failure probability of loaded solids.
The four model parameters can be calibrated with fatigue test data sets containing entirely unrelated test results on arbitrary specimen geometries, eliminating the constraining need for test data following staircase or probit schemes. This work contains the formulation, analysis, validation and application of the proposed model. After its introduction and a comparison with existing methods, it is analyzed in terms of its numerical properties when applied to finite element models, its efficient calibration and the corresponding model uncertainty. The validation is split into two parts. In a first analysis, the model is fitted to test data containing results on several types of notched specimens, reflecting predominantly elastic material behavior. In a second step, this restriction is lifted and the model is used to predict the failure behavior of notched test specimens experiencing notch root plasticity due to high mean stresses. In both validation studies, the derived model predictions are, for the most part, well in line with the experimentally observed failure behavior of the test specimens. Finally, the applicability of the proposed probabilistic methodology in a design context is demonstrated on the example of a gas turbine compressor blade and the corresponding compressor stage.
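The weakest-link idea, survival probabilities of independent material volume elements multiplying so that the component failure probability follows from a Weibull-type stress integral over the discretized stress field, can be sketched numerically. The parameters \(\sigma_0\), \(m\) and \(V_0\) and all stresses below are illustrative assumptions, not calibrated values from the thesis:

```python
# Hedged sketch of a Weibull-type weakest-link evaluation over a
# discretized stress field: Pf = 1 - exp(-sum((s/sigma0)^m * V/V0)).
import math

def weakest_link_pf(element_stresses, element_volumes, sigma0, m, V0):
    """Failure probability from per-element stresses and volumes."""
    risk = sum((s / sigma0) ** m * v / V0
               for s, v in zip(element_stresses, element_volumes))
    return 1.0 - math.exp(-risk)

# A small, highly stressed region (the notch) dominates the failure
# probability, which is how notch and size effects enter naturally.
stresses = [200.0, 220.0, 400.0]   # illustrative element stresses, MPa
volumes  = [1.0, 1.0, 0.05]        # illustrative element volumes, mm^3
pf = weakest_link_pf(stresses, volumes, sigma0=500.0, m=20.0, V0=1.0)
```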
This dissertation is intended to transport the theory of Serre functors into the context of A-infinity-categories. We begin with an introduction to multicategories and closed multicategories, which form a framework in which the theory of A-infinity-categories is developed. We prove that (unital) A-infinity-categories constitute a closed symmetric multicategory. We define the notion of an A-infinity-bimodule similarly to Tradler and show that it is equivalent to an A-infinity-functor of two arguments which takes values in the differential graded category of complexes of k-modules, where k is a commutative ground ring. Serre A-infinity-functors are defined via A-infinity-bimodules following ideas of Kontsevich and Soibelman. We prove that a unital A-infinity-category closed under shifts over a field admits a Serre A-infinity-functor if and only if its homotopy category admits an ordinary Serre functor. The proof uses categories and Serre functors enriched in the homotopy category of complexes of k-modules. Another important ingredient is an A-infinity-version of the Yoneda Lemma.
In this work, highly accurate potential energy surfaces for one or more electronic states of the molecules B3, B3- and C3+ were computed with the MR-CI method. All three molecules possess electronically degenerate Jahn-Teller states. In contrast to the alkali trimers studied earlier, the conical intersection lies so low here that it must be taken into account in the vibrational analysis, which requires a diabatic treatment. For the X<-1E' transition in B3, the agreement of the computed spectrum with the measured one was improved significantly once more, compared to the previously published results, by using the larger VQZ basis set. For the computed 00 transition, no counterpart is observed in the measured spectrum. Besides the good agreement of the other peaks, this interpretation is also supported by the T00 energy. The simple progression of the experimental X<-2E' transition in B3 was likewise reproduced in good agreement. The simple and short progression results from the fact that there is practically no Jahn-Teller distortion and both component surfaces are almost congruent. For the X<-1E' transition of B3-, a spectrum was also simulated, but no agreement with the measured transitions was found. Since the observed electron detachment energy lies only slightly above the electronic excitation energy, and in view of the strong X<-2E' absorptions of B3 in the same measurement, it remains open which structures are seen in the experiment.
For C3+, a vibrational analysis was carried out for the E' ground state. Experimental reference values are lacking in this case. However, the isomerization energy between the bent and linear geometries, discussed for more than a decade, could now be fixed very accurately at 6.8 +- 0.5 kcal/mol. In a vibronic treatment including the zero-point energies, this value is reduced to 4.8 kcal/mol. Furthermore, the existence of a linear minimum was confirmed. C3+ also provides a very nice example of the mixing of different local and global vibrational states, which leads to an irregular sequence of states. Regarding the reactivity of C3+, it was observed that it is highest below 50 K and decreases markedly above that temperature. The vibrational analysis offers no explanation for this behavior, since no thermal vibrational excitation can take place even up to room temperature.
The investigations focused on bismuth-arene complexes of the series [(C6H6)BiCl3-n]n+ (n = 0 - 3) and [(MemC6H6-m)BiCl3] (m = 0 - 3). In addition, the lighter homologues [(C6H6)SbCl3] and [(C6H6)AsCl3] were studied in order to identify group trends. The lead-arene complexes isoelectronic to the complexes [(C6H6)BiCl3-n]n+ (n = 1 - 3) were also included in the considerations. Of principal interest is the complex [(C6H6)2Pb]2+, which can be regarded as the prototype of a bent-sandwich bis(arene) main-group-element complex. The structures of the neutral complexes and complex cations were optimized at the MP2(fc)/6-31+G(d)(C,H);SBKJC(d)(Bi,Cl) level. The interaction energy in [(C6H6)BiCl3] amounts to -23 kJ/mol (MP4). Considerations of the electron localization function and of the molecular orbitals contributed to the understanding of the bonding in the arene complexes studied. A crystal structure analysis confirmed the trends found in the calculations. The arene complexes with lighter central atoms are less stable. Furthermore, the stability increases slightly with the basicity of the arene ligand and strongly with the charge of the complexes. To investigate the bonding (B3LYP/6-311+G(d)) in the P4 ring of tetrakis(amino)-1λ5,3λ5-tetraphosphete and the P-P bond of its [2+2] cycloreversion product, the electron localization functions, important molecular orbitals, bond orders, calculated chemical shifts of the phosphorus nuclei, and nucleus-independent chemical shifts were considered. Accordingly, a considerable π-bond contribution can be assumed for the P-P bond in the P4 ring of the tetraphosphete. The P-P bond in the [2+2] cycloreversion product can be understood as an electronically unusual double bond shortened by Coulomb forces. In preparative work on functionalized aminoarsanes, a tetrakis(amino)diarsane with an exceptionally long As-As bond (2.673(3) Å) was obtained some time ago.
Quantum chemical calculations (B3LYP/6-31+G(d)) confirm and support this finding and remove any remaining doubts in connection with the crystallographic determination of the bond length.
Mapping structures of job satisfaction in a sociotechnical model of the occupational group of bus drivers
(2023)
Background: Transferring general findings on job satisfaction (JS) to specific occupational groups is not meaningfully possible without taking into account the occupational context and the personality of the individuals involved. Mixed-methods approaches, including expert interviews, are relevant for understanding interrelations from an expert perspective. This is intended to counter the prevailing criticism of the one-dimensionality of existing JS research.
Methods: Guided, semi-structured interviews led to factor identification and hypothesis-model building for the occupational group of German public-transport bus drivers. The following aspects were surveyed by questionnaire: effort-reward imbalance (ERI), resilience (RS-11), job-satisfaction typology (FEAT), work-life balance (TKS-WLB), personality traits (BFI-10), risk assessment of psychological stress at the workplace (COPSOQ), and spillover between work and private life (BAOF). Structural equation models (SEM) were constructed for a partial quantification of the hypothesis models. The work was structured within the sociotechnical model (STM).
Results: The qualitative data analysis was guided by the STM procedure and subsequently represented in the form of causal loop diagrams (CLD). Existing JS factors (inhibiting or promoting), their mutual interactions, and the resulting chains of effects could thus be mapped. The SEM showed high model fit (Chi²/df = 1.29; 1.90), high model explanation (TLI and CFI > .95; > .90) and a good approximation of a perfect measurement model (RMSEA = .03; .06). TKS-WLB and BAOF showed reduced values. The ERI identified a gratification crisis in 68% of the participants (M = 1.32, SD = .56). With an overcommitment (OC) of M = .63, bus drivers tend toward occupational overexertion. Job security (M = 5.03; SD = 1.71) was rated as adequate, appreciation (M = 4.51; SD = 1.66) as neutral, and the possibility of further qualification (M = 5.63; SD = 1.89) as weak. Resilience (M = 64; SD = 3.58) was rated as strongly pronounced. The participants' personality traits (BFI-10) show normal values for agreeableness (M = 3.30; SD = .77) and neuroticism (M = 2.18; SD = .79), comparatively somewhat reduced values for openness (M = 3.31; SD = .92) and extraversion (M = 3.48; SD = .92), and clearly pronounced results for conscientiousness (M = 4.08; SD = .84). In the FEAT, 7 principal types could be identified.
Conclusion: The approach of this work, structured via the STM with the identified factors presented as CLD embedded therein, showed the magnitude, loading and interrelation of the factors acting on JS in the occupational profile of bus drivers, and opened up a systemic understanding of JS in this profession. The inclusion of personality-related aspects extended the understanding of this hitherto little-studied occupational profile.
This thesis belongs to algebraic geometry and representation theory and establishes a relation between the two fields. It deals with the derived categories of flat degenerations of projective lines and elliptic curves, using the technique of matrix problems as the main tool. The main result of this dissertation is the following theorem: THEOREM. Let X be a cycle of projective lines. Then there are three types of indecomposable objects in D^-(Coh_X): shifts of skyscraper sheaves at a regular point; bands B(w,m,lambda); and strings S(w). Quite analogously, one proves the tameness of the derived categories of many associative algebras.
Computer-based simulation and visualization of the acoustics of a virtual scene can aid the design process of concert halls, lecture rooms, theaters, or living rooms, because not only the visual aspect of a room is important, but also its acoustics. On factory floors, noise reduction is important since noise is hazardous to health. Despite the obvious dissimilarity between our aural and visual senses, many techniques required for the visualization of photo-realistic images and for the auralization of acoustic environments are quite similar. Both applications can be served by geometric methods such as particle and ray tracing if a number of less important effects are neglected. By simulating room acoustics, we want to predict the acoustic properties of a virtual model. For auralization, a pulse response filter needs to be assembled for each pair of source and listener positions. The convolution of this filter with an anechoic source signal yields the signal received at the listener position. Hence, the pulse response filter must contain all reverberations (echoes) of a unit pulse, including their frequency decompositions due to absorption at different surface materials. For the room acoustic simulation, a method named phonon tracing, since it is based on particles, is developed. The approach computes the energy or pressure decomposition for each particle (phonon) sent out from a sound source and uses this in a second pass (phonon collection) to construct the response filters for different listeners. This step can be performed at different precision levels. During the tracing step, particle paths and additional information are stored in a so-called phonon map. Using this map, several sound visualization approaches were developed. From the visualization, the effect of different materials on the spectral energy/pressure distribution can be observed. The first few reflections already show whether certain frequency bands are rapidly absorbed.
The absorbing materials can be identified and replaced in the virtual model, improving the overall acoustic quality of the simulated room. Furthermore, insight into the pressure/energy received at the listener position is possible. The phonon tracing algorithm as well as several sound visualization approaches are integrated into a common system utilizing Virtual Reality technologies in order to facilitate immersion into the virtual scene. The system is a prototype developed within a project at the University of Kaiserslautern and is still subject to further improvement. It consists of a stereoscopic back-projection system for visual rendering as well as professional audio equipment for auralization purposes.
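The auralization step described above, convolving the assembled pulse (impulse) response filter with an anechoic source signal, can be sketched as follows. The synthetic exponentially decaying response, sample rate, and variable names are illustrative assumptions, not the system's actual filters:

```python
import numpy as np

def auralize(anechoic: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Convolve a dry (anechoic) signal with a room impulse response."""
    return np.convolve(anechoic, impulse_response)

# Illustrative data: a short dry signal and a decaying synthetic room response.
fs = 8000                                   # sample rate in Hz (assumed)
t = np.arange(fs) / fs
dry = np.sin(2 * np.pi * 440 * t)           # 1 s of a 440 Hz tone
ir = np.exp(-6 * t) * np.random.default_rng(0).standard_normal(fs)  # fake reverb tail

wet = auralize(dry, ir)
print(len(wet))  # length of a full convolution: len(dry) + len(ir) - 1
```

In the actual system, one such filter would be computed per source/listener pair from the phonon map, including per-frequency-band absorption.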
Three-dimensional (3D) point data is used in industry for measurement and reverse engineering. Precise point data is usually acquired with triangulating laser scanners or high-precision structured light scanners. Lower-precision point data is acquired by real-time structured light devices or by stereo matching with multiple cameras. The basic principle of all these methods is the so-called triangulation of 3D coordinates from two-dimensional (2D) camera images.
This dissertation contributes a method for multi-camera stereo matching that uses a system of four synchronized cameras. A GPU-based stereo matching method is presented to achieve a high-quality reconstruction at interactive frame rates. Good depth resolution is achieved by allowing large disparities between the images; a multi-level approach on the GPU allows fast processing of these large disparities. In reverse engineering, hand-held laser scanners are used for scanning complex-shaped objects. The operator of the scanner can scan complex regions more slowly, multiple times, or from multiple angles to achieve a higher point density. Traditionally, computer-aided design (CAD) geometry is reconstructed in a separate step after the scanning, and errors or missing parts in the scan prevent a successful reconstruction. The contribution of this dissertation is an on-line algorithm that allows reconstruction during the scanning of an object. Scanned points are added to the reconstruction and improve it on-line, and the operator can detect the areas in the scan where the reconstruction needs additional data.
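The triangulation principle behind stereo matching reduces, for a rectified stereo pair, to the relation Z = f·B/d between depth Z, focal length f, baseline B, and disparity d. The calibration values and disparities below are invented purely for illustration:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of points from their disparities in a rectified stereo pair.

    disparity_px : horizontal pixel offset between left and right image
    focal_px     : focal length in pixels
    baseline_m   : distance between the two camera centers in meters
    """
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d

# Hypothetical calibration: 800 px focal length, 10 cm baseline.
depths = depth_from_disparity([80.0, 40.0, 8.0], focal_px=800.0, baseline_m=0.10)
print(depths)  # larger disparity -> closer point: [ 1.  2. 10.]
```

Note the inverse relation: allowing large disparities, as the dissertation's multi-level GPU approach does, is what yields good depth resolution for nearby geometry.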
First, the point data is thinned out using an octree-based data structure. Local normals and principal curvatures are estimated for the reduced set of points. These local geometric values are used for segmentation with a region-growing approach. Implicit quadrics are fitted to these segments, and the canonical form of the quadrics provides the parameters of basic geometric primitives.
An improved approach uses so-called accumulated means of local geometric properties to perform segmentation and primitive reconstruction in a single step. Local geometric values can be added to and removed from these means on-line to obtain a stable estimate over a complete segment. By estimating the shape of the segment, it is decided which local areas are added to it. An accumulated score estimates the probability that a segment belongs to a certain type of geometric primitive. A boundary around the segment is reconstructed using a growing algorithm that ensures the boundary is closed and avoids self-intersections.
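The accumulated-means idea, maintaining a running estimate over a segment to which local values can be added and removed on-line, can be sketched with a small accumulator. The class and its interface are an illustrative assumption, not the dissertation's actual implementation:

```python
class RunningMean:
    """Mean of a multiset of values supporting O(1) add and remove."""

    def __init__(self):
        self.n = 0
        self.total = 0.0

    def add(self, x: float) -> None:
        self.n += 1
        self.total += x

    def remove(self, x: float) -> None:
        # Caller must only remove values that were previously added.
        self.n -= 1
        self.total -= x

    @property
    def mean(self) -> float:
        return self.total / self.n if self.n else 0.0

# Grow a segment by accumulating local curvature estimates, then retract one.
m = RunningMean()
for curvature in (0.10, 0.12, 0.11):
    m.add(curvature)
print(round(m.mean, 3))  # 0.11
m.remove(0.12)           # e.g. a point re-assigned to a neighboring segment
print(round(m.mean, 3))  # 0.105
```

Because add and remove are symmetric, the segment estimate stays consistent as the region-growing step reassigns local areas, which is what makes single-pass segmentation and primitive reconstruction feasible.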
Acrylamide and acrolein belong to the alpha,beta-unsaturated carbonyl compounds. Like other alpha,beta-unsaturated carbonyl compounds, they are characterized by high reactivity: on the one hand, they can readily react with proteins and DNA, which can cause cytotoxic and genotoxic effects; on the other hand, they can also be rapidly detoxified by glutathione conjugation.
Acrylamide is an industrial chemical produced on a large scale, used mainly in the manufacture of polyacrylamide products. Polymers and copolymers made from acrylamide are used in the cosmetics industry, as binders in paper production, as flocculants in wastewater treatment, and in biochemical laboratories. After acrylamide-hemoglobin adducts were found in 2002 even in persons without known acrylamide exposure, food was suspected as a possible source of exposure. Subsequent studies confirmed this and showed that acrylamide can be formed when food is heated, especially at high temperatures, in the course of the Maillard reaction. The World Health Organization (WHO) estimates the worldwide average dietary exposure to acrylamide at 1-4 µg acrylamide/kg body weight (bw) per day. In various studies, acrylamide showed neurotoxic, developmental and reproductive toxic, genotoxic, and carcinogenic effects. In 1994, the International Agency for Research on Cancer (IARC) classified acrylamide in Group 2A as a substance that is probably carcinogenic to humans.
In the organism, acrylamide is bioactivated to the genotoxic metabolite glycidamide. Glycidamide forms DNA adducts mainly at the N7 position of guanine. In animal experiments with rodents, glycidamide DNA adducts were found in all organs examined after administration of high doses of acrylamide. The main detoxification route of acrylamide and glycidamide is conjugation with glutathione (GSH) followed by degradation and excretion as mercapturic acids (MA) in urine. Because of the oxidative metabolism of acrylamide, its biological effect depends essentially on the balance between the activating and detoxifying metabolic pathways in the liver.
Acrolein has been produced industrially since 1940 for the manufacture of acrylic acid, the starting material for acrylate polymers. In addition, acrolein can be formed from amino acids, fats, or carbohydrates during the heating of food. During the preparation of carbohydrate-rich foods, acrolein, like acrylamide, can arise in the course of the Maillard reaction. As the simplest alpha,beta-unsaturated aldehyde, acrolein is highly reactive toward nucleophiles such as thiol or amino groups, forming Michael adducts. Owing to acrolein's high reactivity and volatility, only few reliable data on acrolein contents, especially in carbohydrate-rich foods, are currently available; where data exist, they lie in the low µg/kg range. Moreover, it is still unclear to what extent acrolein, alongside acrylamide, contributes to the total human exposure to heat-induced contaminants in food. The current data do not permit an unambiguous risk assessment, although a constant exposure to acrolein is considered certain. Various studies have shown that the toxicological effects of acrolein, in contrast to those of acrylamide, are overall not based on an increased tumor incidence. Acrolein was therefore classified by the IARC in Group 3: it is regarded as possibly carcinogenic to humans, but the data are insufficient for a definitive assessment.
The aim of the present work was to investigate the toxicokinetics and toxicodynamics of the contaminants acrylamide and acrolein, which are formed during the heating of food, in vitro and in vivo. The focus was on measuring the dose-dependent genotoxicity of acrylamide, as well as the mercapturic acids (MA) reflecting the most important detoxification reaction, in animal experiments in the range of current consumer exposure. The results, particularly on toxicokinetics, were to be corroborated by in vitro experiments in primary rat hepatocytes. In addition, the dietary exposure of consumers to acrylamide and acrolein, which so far has hardly been studied by means of biomarkers, was to be determined comparatively. A dose-response study in Sprague Dawley (SD) rats in the dose range from 0.1 to 10,000 µg/kg bw provided, for the first time, quantitative information on DNA adduct formation by the genotoxic acrylamide metabolite glycidamide down to the lowest exposure ranges. In this low-dose range (0.1 to 10 µg/kg bw), the N7-GA-Gua formation measured after a single dose lies in the lower range of human background tissue levels of DNA lesions of various origins. This finding could place the future risk assessment of exposure to such genotoxic carcinogens on a new and measurable basis. With the extremely sensitive instrumental analytics employed in this work, measurements of genotoxic events down to the range of consumer exposure have become possible for the first time. It must be kept in mind, however, that genotoxicity is a necessary but not sufficient condition for mutagenicity and malignant transformation; the biological response following a genotoxic event must therefore be included in the risk assessment.
In primary rat hepatocytes incubated with acrylamide, GSH adducts were detectable considerably earlier and at lower acrylamide concentrations than glycidamide and N7-GA-Gua adducts. The direct comparison of the formation of glycidamide with that of the AA-GSH adducts indicated that the detoxification of acrylamide in primary rat hepatocytes proceeds up to three times faster than its bioactivation. In addition, it was shown for the first time that primary rat hepatocytes, besides conjugating xenobiotics to GSH, are also capable, at least to a small extent, of converting them into the corresponding mercapturic acids.
To investigate the hazard potential of acrolein, its DNA adduct formation was studied in vitro. As a biomarker for the formation of a major DNA adduct, five-fold 15N-labeled hydroxypropanodeoxyguanosine (OH-[15N5]-PdG) was synthesized and characterized. DNA incubation experiments with acrolein showed a concentration- and time-dependent formation of the OH-PdG adducts; acrolein formed these adducts only slightly more slowly than glycidamide.
To investigate the toxicokinetics of acrylamide and acrolein in vivo after consumption of highly contaminated or commercially available potato chips, two human studies were conducted and their results evaluated. The excretion kinetics of acrolein-associated MA in humans correlated clearly with the intake of potato chips. The comparison of the amounts of acrolein- and acrylamide-associated MA excreted in urine indicated a considerably higher dietary exposure to acrolein (4- to 12-fold) compared with acrylamide. Analytical measurements of the acrolein contents in the foods, however, revealed only a contamination that can explain merely a small fraction of the MA amounts recovered in urine due to exposure. Whether acrolein is bound to the food matrix in a way that escapes analytical detection by the available methods, such as headspace GC/MS, and is released only after ingestion into the organism will be the subject of future investigations. In addition, the results of both human studies provide strong evidence of endogenous formation of acrolein, since a relatively high proportion of acrolein-associated MA was also detected during the wash-out phases. Future studies should examine the endogenous exposure and the formation mechanisms of acrolein and other alkenals from various physiological sources in more detail and relate them to the exogenous, dietary exposure. Likewise, the effects of combined exposure to such heat-induced substances should be investigated more intensively in the future.
At present, the standardization of third generation (3G) mobile radio systems is the subject of worldwide research activities. These systems will cope with the market demand for high data rate services and the system requirement of flexibility concerning the offered services and transmission qualities. However, there will be deficiencies with respect to high capacity if 3G mobile radio systems exclusively use single antennas. A very promising technique for increasing the capacity of 3G mobile radio systems is the application of adaptive antennas. In this thesis, the benefits of using adaptive antennas are investigated for 3G mobile radio systems based on Time Division CDMA (TD-CDMA), which forms part of the European 3G mobile radio air interface standard adopted by ETSI and is intensively studied within the standardization activities towards a worldwide 3G air interface standard directed by the 3GPP (3rd Generation Partnership Project). One of the most important issues related to adaptive antennas is the analysis of their benefits compared to single antennas. In this thesis, these benefits are explained theoretically and illustrated by computer simulation results for both data detection, which is performed according to the joint detection principle, and channel estimation, which is applied according to the Steiner estimator, in the TD-CDMA uplink. The theoretical explanations are based on well-known solved mathematical problems. The simulation results illustrating the benefits of adaptive antennas are produced by employing a novel simulation concept, which offers a considerable reduction of simulation time and complexity, as well as increased flexibility concerning the use of different system parameters, compared to the existing simulation concepts for TD-CDMA.
Furthermore, three novel techniques are presented which can be used in systems with adaptive antennas to further improve system performance compared to single antennas. These techniques address the problems of code-channel mismatch, user separation in the spatial domain, and intercell interference, which, as shown in this thesis, play a critical role in the performance of TD-CDMA with adaptive antennas. Finally, a novel approach is presented for illustrating, in a straightforward manner, the performance differences between the uplink and downlink of TD-CDMA based mobile radio systems. Since a cellular mobile radio system with adaptive antennas is considered, the ultimate goal is the investigation of the overall system efficiency rather than the efficiency of a single link. In this thesis, the efficiency of TD-CDMA is evaluated through its spectrum efficiency and capacity, two closely related performance measures for cellular mobile radio systems. Compared to single antennas, the use of adaptive antennas allows impressive improvements of both spectrum efficiency and capacity: depending on the mobile radio channel model and the user velocity, improvement factors range from 6 to 10.7 for the spectrum efficiency and from 6.7 to 12.6 for the spectrum capacity of TD-CDMA. Thus, adaptive antennas constitute a promising technique for increasing the capacity of future mobile communication systems.
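The basic benefit of an antenna array over a single antenna, the desired signal combining coherently across elements while noise adds incoherently, can be illustrated with a small coherent-combining simulation. The array size, modulation, and all names below are illustrative assumptions unrelated to the thesis' TD-CDMA simulations:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8                                        # number of antenna elements (assumed)
n_sym = 10000
s = rng.choice([-1.0, 1.0], n_sym)           # BPSK symbols, unit power

# Identical signal at each element, independent unit-variance noise per element.
noise = rng.standard_normal((M, n_sym))
received = s[None, :] + noise

single = received[0]                 # one antenna
combined = received.mean(axis=0)     # coherent combining over M elements

snr = lambda y: 1.0 / np.var(y - s)  # signal power is 1, so SNR = 1 / noise power
gain = snr(combined) / snr(single)
print(round(gain, 1))                # close to the array gain M = 8
```

In this idealized setup (perfectly aligned phases, spatially white noise) the SNR gain equals the number of elements; real adaptive-antenna gains additionally come from spatial suppression of interfering users, which this sketch does not model.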
Adaptive Extraction and Representation of Geometric Structures from Unorganized 3D Point Sets
(2009)
The primary emphasis of this thesis is the extraction and representation of intrinsic properties of three-dimensional (3D) unorganized point clouds. The points establishing a point cloud, as it mainly emerges from LiDAR (Light Detection and Ranging) scan devices or by reconstruction from two-dimensional (2D) image series, represent discrete samples of real-world objects. Depending on the type of scenery the data is generated from, the resulting point cloud may exhibit a variety of different structures. Especially in the case of environmental LiDAR scans, the complexity of the corresponding point clouds is relatively high. Hence, finding new techniques allowing the efficient extraction and representation of the underlying structural entities has become an important research issue. This thesis introduces new methods for the extraction and visualization of structural features like surfaces and curves (e.g. ridge lines, creases) from 3D (environmental) point clouds. One main part concerns the extraction of curve-like features from environmental point data sets. It provides a new method supporting stable feature extraction by incorporating a probability-based point classification scheme that characterizes individual points regarding their affiliation to surface-, curve- and volume-like structures. Another part is concerned with surface reconstruction from (environmental) point clouds exhibiting objects of varying complexity. A new method providing multi-resolutional surface representations from regular point clouds is discussed. Following the principles of this approach, a volumetric surface reconstruction method based on the proposed classification scheme is introduced, allowing the reconstruction of surfaces from highly unstructured and noisy point data sets. Furthermore, contributions in the field of reconstructing 3D point clouds from 2D image series are provided.
In addition, a discussion concerning the most important properties of (environmental) point clouds with respect to feature extraction is presented.
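A common way to characterize a point's local neighborhood as surface-, curve-, or volume-like, in the spirit of the classification scheme described above, is via the eigenvalues of the local covariance matrix. The thresholds, labels, and function below are illustrative assumptions, not the dissertation's actual probability-based scheme:

```python
import numpy as np

def classify_neighborhood(points: np.ndarray) -> str:
    """Label a local point neighborhood by the shape of its covariance spectrum.

    One dominant eigenvalue -> curve-like, two -> surface-like,
    three of similar size -> volume-like.
    """
    cov = np.cov(points.T)
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending: e0 >= e1 >= e2
    e0, e1, e2 = ev / ev.sum()
    if e1 < 0.1 * e0:          # assumed threshold
        return "curve"
    if e2 < 0.1 * e1:
        return "surface"
    return "volume"

rng = np.random.default_rng(0)
line = np.c_[np.linspace(0, 1, 200), np.zeros(200), np.zeros(200)]
plane = np.c_[rng.random((200, 2)), np.zeros(200)]
blob = rng.random((200, 3))
print(classify_neighborhood(line), classify_neighborhood(plane), classify_neighborhood(blob))
# curve surface volume
```

A probability-based variant, as the thesis proposes, would replace the hard thresholds with soft memberships derived from such eigenvalue ratios, making the subsequent feature extraction robust to noise.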
Real-time systems are systems that have to react correctly to stimuli from the environment within given timing constraints.
Today, real-time systems are employed everywhere in industry, not only in safety-critical systems but also in, e.g., communication, entertainment, and multimedia systems.
With the advent of multicore platforms, new challenges in the efficient exploitation of real-time systems have arisen:
First, there is the need for effective scheduling algorithms that feature low overheads to improve the use of the computational resources of real-time systems.
The goal of these algorithms is to ensure timely execution of tasks, i.e., to provide runtime guarantees.
Additionally, many systems require their scheduling algorithm to flexibly react to unforeseen events.
Second, the inherent parallelism of multicore systems leads to contention for shared hardware resources and complicates system analysis.
At any time, multiple applications run with varying resource requirements and compete for the scarce resources of the system.
As a result, there is a need for an adaptive resource management.
Achieving and implementing an effective and efficient resource management is a challenging task.
The main goal of resource management is to guarantee a minimum resource availability to real-time applications.
A further goal is to fulfill global optimization objectives, e.g., maximization of the global system performance, or the user perceived quality of service.
In this thesis, we derive methods based on the slot shifting algorithm.
Slot shifting provides flexible scheduling of time-constrained applications and can react to unforeseen events in time-triggered systems.
For this reason, we aim at designing slot shifting based algorithms targeted for multicore systems to tackle the aforementioned challenges.
The main contribution of this thesis is to present two global slot shifting algorithms targeted for multicore systems.
Additionally, we extend slot shifting algorithms to improve their runtime behavior, or to handle non-preemptive firm aperiodic tasks.
In a variety of experiments, the effectiveness and efficiency of the algorithms are evaluated and confirmed.
Finally, the thesis presents an implementation of a slot-shifting-based logic into a resource management framework for multicore systems.
Thus, the thesis closes the circle and successfully bridges the gap between real-time scheduling theory and real-world implementations.
We prove applicability of the slot shifting algorithm to effectively and efficiently perform adaptive resource management on multicore systems.
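The core idea of slot shifting, precomputed spare capacities per interval of the time-triggered schedule that are consumed when an aperiodic task is accepted, can be sketched as a simple acceptance test. The interval layout, unit-slot granularity, and all numbers are illustrative assumptions, not the thesis' algorithms (which additionally shift the offline-scheduled tasks):

```python
def can_accept(spare, arrival_slot, deadline_slot, demand):
    """Accept an aperiodic task only if enough spare slots exist before its deadline.

    spare: list of free execution slots per unit interval (index = slot number).
    """
    available = sum(spare[arrival_slot:deadline_slot])
    return available >= demand

def accept(spare, arrival_slot, deadline_slot, demand):
    """Greedily consume spare capacity, earliest intervals first."""
    if not can_accept(spare, arrival_slot, deadline_slot, demand):
        return False
    for i in range(arrival_slot, deadline_slot):
        take = min(spare[i], demand)
        spare[i] -= take
        demand -= take
        if demand == 0:
            break
    return True

# Assumed schedule: spare capacity per slot over a hyperperiod of 6 slots.
spare = [1, 0, 2, 1, 0, 1]
ok = accept(spare, 0, 4, 3)
print(ok, spare)                # True [0, 0, 0, 1, 0, 1]
print(accept(spare, 1, 4, 2))   # False: only 1 spare unit left before slot 4
```

The acceptance test is what gives slot shifting its runtime guarantees: the offline-computed schedule is never violated, yet unforeseen aperiodic arrivals are admitted whenever the spare capacity before their deadline suffices.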
Adaptive Strukturoptimierung von Faserkunststoffverbunden unter Berücksichtigung bionischer Aspekte
(2006)
More and more fibre-reinforced composite materials are being used in structural components, because with conventional materials the target criteria, such as defined strength, stiffness, etc., can no longer be achieved with a sufficiently low component weight, if at all. In view of the high costs, it is understandable that fibre-reinforced plastic composites (FPC) tend to be used in areas where the optimization goals mentioned above have a high priority. The aviation and aerospace industry deserves special mention here, but the use of fibre composite materials is also gaining significance in the automotive and mechanical engineering industries. Thanks to increasing improvements in optimization methods and manufacturing technologies, and the cost reduction this brings with it, complex modules are already being produced today. This in turn calls for load-appropriate, material-specific design. The subject of this work is the development of a topology optimization tool for the material-appropriate design of FPC structures. The objective is to optimize FPC, a class of high-performance materials whose potential can only be exploited with suitable models for the utilization of their anisotropic properties, under consideration of technical feasibility. To this end, natural growth principles are transferred into an iterative process. The goal of this algorithm can be defined either as a targeted stiffness or as a weight-optimal solution with sufficient strength and a stress distribution throughout the component that is as uniform as possible. This is achieved by an effective redistribution of load from highly loaded to less loaded areas, thereby also optimizing the material distribution. In this design proposal, the basic orientation of the base layer, the orientation of the individual laminate layers according to the force flow, and the topology of local reinforcement layers and/or of the entire laminate are optimized. Of particular interest is the adaptive structural optimization of FPC structures with local reinforcements at highly loaded load introduction points or, more generally, in areas of high stress. As is further shown, the developed adaptive topology and fibre angle optimization is beneficial from a technological, material-mechanical, and economic point of view and can be readily applied in practice.
Additive manufacturing processes are characterized by the high achievable complexity of the geometries to be produced, with hardly any increase in manufacturing effort. This is enabled by the layer-wise build-up of additive manufacturing, in which the geometry to be produced is first divided into individual cross-sections and then built up by joining these cross-sections. An established additive manufacturing process is selective laser melting, in which the geometry is generated by melting metal powder in a powder bed by means of a laser. The surfaces generated by selective laser melting must be finish-machined to produce functional surfaces, whereby the characteristic anisotropy of additively produced materials must be taken into account. This work deals with the interaction mechanisms of additive-subtractive process chains in the machining of stainless steel 1.4404, with both parts of the process chain first analyzed separately. Different interrelations between the process parameters of additive manufacturing and the powder properties on the one hand and the produced material on the other are identified. Furthermore, the use of micro-milling as a machining process (tool diameter < 50 µm) makes the interactions between the anisotropic material and the process and process result variables of machining particularly apparent. The investigations showed that cross-process-chain interaction mechanisms exist between the additive and subtractive parts of the process chain in the selective laser melting and micro-milling of stainless steel 1.4404.
3D printing enables automated and flexible production of complex 3D geometries directly from a CAD model without the need for a component-specific tool. A disadvantage, especially in the additive manufacturing (AM) of polymers, is the low mechanical properties, which can be attributed to process-related challenges and to a limited selection of processable materials. One way of improving the mechanical properties of polymers is to combine them with reinforcing fibers. The highest reinforcing effect for fiber-reinforced polymer composites (FRPC) is achieved when the fibers are continuous and aligned in the load direction. In order to develop their full potential, FRPC must therefore be adapted as well as possible to the respective application. This complicates automated and efficient production, especially of more complex structures. The aim of this work was therefore to develop an AM process for continuously fiber-reinforced polymers, in order to increase the range of applications for polymer-based AM processes and at the same time enable efficient and flexible production of complex FRPC structures. The developed process concept is based on 3D printing extrusion processes for thermoplastics. In the so-called Fiber Integrated Fused Deposition Modeling (FIFDM) process, pre-impregnated semi-finished products in the form of continuously fiber-reinforced thermoplastic strands (FTS) are processed. In order to freely adjust the fiber orientation, the strands can be positioned in all spatial directions, not just layer by layer as in conventional AM systems. This is realized by controlling the FTS temperature after extrusion. As part of this work, a quality analysis method was developed for quantifying and comparing semi-finished product quality, and a suitable FTS was thus selected for further process investigations. In addition, a FIFDM prototype unit was developed and set up. With the help of thermal simulation of the extrusion and cooling process, thermal process limits could also be defined for 3D deposition in all spatial directions. A comprehensive experimental process analysis investigated which process parameters influence various target variables of process stability and component quality. Based on the results of this work, an initial assessment of the process potential was made and proposals for process optimization were formulated.
Adjoint-Based Shape Optimization and Optimal Control with Applications to Microchannel Systems
(2021)
Optimization problems constrained by partial differential equations (PDEs) play an important role in many areas of science and engineering. They often arise in the optimization of technological applications, where the underlying physical effects are modeled by PDEs. This thesis investigates such problems in the context of shape optimization and optimal control with microchannel systems as novel applications. Such systems are used, e.g., as cooling systems, heat exchangers, or chemical reactors as their high surface-to-volume ratio, which results in beneficial heat and mass transfer characteristics, allows them to excel in these settings. Additionally, this thesis considers general PDE constrained optimization problems with particular regard to their efficient solution.
As our first application, we study a shape optimization problem for a microchannel cooling system: We rigorously analyze this problem, prove its shape differentiability, and calculate the corresponding shape derivative. Afterwards, we consider the numerical optimization of the cooling system for which we employ a hierarchy of reduced models derived via porous medium modeling and a dimension reduction technique. A comparison of the models in this context shows that the reduced models approximate the original one very accurately while requiring substantially less computational resources.
Our second application is the optimization of a chemical microchannel reactor for the Sabatier process using techniques from PDE constrained optimal control. To treat this problem, we introduce two models for the reactor and solve a parameter identification problem to determine the necessary kinetic reaction parameters for our models. Thereafter, we consider the optimization of the reactor's operating conditions with the objective of improving its product yield, which shows considerable potential for enhancing the design of the reactor.
To provide efficient solution techniques for general shape optimization problems, we introduce novel nonlinear conjugate gradient methods for PDE constrained shape optimization and analyze their performance on several well-established benchmark problems. Our results show that the proposed methods perform very well, making them efficient and appealing gradient-based shape optimization algorithms.
Finally, we continue recent software-based developments for PDE constrained optimization and present our novel open-source software package cashocs. Our software implements and automates the adjoint approach and, thus, facilitates the solution of general PDE constrained shape optimization and optimal control problems. Particularly, we highlight our software's user-friendly interface, straightforward applicability, and mesh independent behavior.
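The adjoint approach that cashocs automates can be illustrated on a toy discretized problem (a generic sketch under assumed data, not cashocs' actual API): for a tracking-type functional constrained by a linear state equation, one state solve plus one adjoint solve yield the full gradient, independently of the number of control variables.

```python
import numpy as np

# Toy discretized "PDE": 1D Poisson-type stiffness matrix, state equation A u = q,
# tracking-type cost J(u, q) = 0.5*||u - u_d||^2 + 0.5*alpha*||q||^2.
n = 20
A = 2.0*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u_d = np.linspace(0.0, 1.0, n)   # desired state (assumed for illustration)
alpha = 1e-2                     # regularization weight (assumed)

def reduced_cost(q):
    u = np.linalg.solve(A, q)            # one state solve
    return 0.5*np.sum((u - u_d)**2) + 0.5*alpha*np.sum(q**2)

def reduced_gradient(q):
    u = np.linalg.solve(A, q)            # state solve
    p = np.linalg.solve(A.T, u - u_d)    # one adjoint solve: A^T p = u - u_d
    return alpha*q + p                   # full gradient, independent of dim(q)
```

The cost of the gradient is two linear solves regardless of how many control variables q has, which is the key efficiency property of the adjoint approach.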
This thesis describes the separation of short-chain alkane/alkene mixtures on nanostructured porous adsorbents. For this purpose, various metal-organic coordination polymers and zeolites were synthesized and characterized. To investigate the adsorption behavior of these adsorbents, adsorption isotherms of C2, C3, and C4 hydrocarbons were measured at different temperatures. The measurements of pure-hydrocarbon adsorption showed that the adsorbed amount correlates with the specific surface area of the adsorbent, depends on the critical temperature of the adsorptive, and increases in the order C2 < C3 < C4. An exception are flexible metal-organic coordination polymers, which show breathing or pore-opening effects. The isotherms of these materials exhibit steps, which however depend on pressure, adsorptive, and temperature. The separation of alkane/alkene mixtures on the prepared adsorbents was investigated in a continuously operated fixed-bed adsorber. Different separation factors were observed depending on the pore opening and framework structure of the adsorbents. The investigation of hydrocarbon desorption from Cu\(_3\)(btc)\(_2\) showed that the desorption process proceeds only very slowly at room temperature. The temperature required for desorption was found to increase with increasing carbon number of the hydrocarbon.
In recent years the field of polymer tribology experienced a tremendous development
leading to an increased demand for highly sophisticated in-situ measurement methods.
Therefore, advanced measurement techniques were developed and established
in this study. Innovative approaches based on dynamic thermocouple, resistive electrical
conductivity, and confocal distance measurement methods were developed in
order to in-situ characterize both the temperature at sliding interfaces and real contact
area, and furthermore the thickness of transfer films. Although dynamic thermocouple
and real contact area measurement techniques were already used in similar
applications for metallic sliding pairs, comprehensive modifications were necessary to
meet the specific demands and characteristics of polymers and composites since
they have significantly different thermal conductivities and contact kinematics. Using tribologically optimized PEEK compounds as a reference, a new measurement and calculation model for the dynamic thermocouple method was set up. This method allows the determination of hot spot temperatures for PEEK compounds, which were found to reach up to 1000 °C when short carbon fibers are present in the polymer. With regard to the anisotropic characteristics of the polymer compound,
the contact situation between short carbon fibers and steel counterbody could be
successfully monitored by applying a resistive measurement method for the real contact
area determination. Temperature compensation approaches were investigated
for the transfer film layer thickness determination, resulting in in-situ measurements
with a resolution of ~0.1 μm. In addition to a successful implementation of the measurement
systems, the failure mechanisms of the PEEK compound used were clarified. For the first time in polymer tribology, the most relevant system parameters could be monitored simultaneously under increasing load conditions. The measurements showed an increasing friction coefficient, wear rate, transfer film thickness, and overall specimen temperature once the frictional energy exceeded the thermal transport capabilities of the specimen. In contrast, the real contact area between short carbon fibers and steel decreased due to the separation effect caused by the transfer film. Since the sliding contact became increasingly matrix-dominated, the hot spot temperatures on the fibers dropped as well. The results of this failure
mechanism investigation already demonstrate the opportunities which the new
measurement techniques provide for a deeper understanding of tribological processes,
enabling improvements in material composition and application design.
If gradient-based algorithms are used to improve industrial products by reducing their target functions, the derivatives need to be exact.
The last percent of possible improvement, like the efficiency of a turbine, can only be gained if the derivatives are consistent with the solution process that is used in the simulation software.
It is problematic that the development of the simulation software is an ongoing process which leads to the use of approximated derivatives.
If a derivative computation is implemented manually, it will be inconsistent after some time if it is not updated.
This thesis presents a generalized approach which differentiates the whole simulation software with Algorithmic Differentiation (AD), and guarantees a correct and consistent derivative computation after each change to the software.
For this purpose, the variable tagging technique is developed.
The technique checks at run-time if all dependencies, which are used by the derivative algorithms, are correct.
Since it is also necessary to check the correctness of the implementation, a theorem is developed which describes how AD derivatives can be compared.
This theorem is used to develop further methods that can detect and correct errors.
All methods are designed such that they can be applied in real world applications and are used within industrial configurations.
The process described above yields consistent and correct derivatives but the efficiency can still be improved.
This is done by deriving new derivative algorithms.
A fixed-point iterator approach, with a consistent derivation, yields all state-of-the-art algorithms and produces two new ones.
These two new algorithms include all implementation details and therefore produce consistent derivative results.
For detecting hot spots in the application, the state-of-the-art techniques are presented and extended.
The data management is changed such that the performance of the software is affected only marginally when quantities, like the number of input and output variables or the memory consumption, are computed for the detection.
The hot spots can be treated with techniques like checkpointing or preaccumulation.
How these techniques change the time and memory consumption is analyzed and it is shown how they need to be used in selected AD tools.
As a last step, the used AD tools are analyzed in more detail.
The major implementation strategies for operator overloading AD tools are presented and implementation improvements for existing AD tools are discussed.
The discussion focuses on a minimal memory consumption and makes it possible to compare AD tools on a theoretical level.
The new AD tool CoDiPack is based on these findings and its design and concepts are presented.
The improvements and findings in this thesis make it possible to generate consistent and correct derivatives automatically and efficiently for industrial applications.
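The core mechanism of an operator-overloading AD tool such as CoDiPack can be sketched in a few lines (a didactic reverse-mode tape, not CoDiPack's actual C++ implementation or API): every overloaded operation records its local partial derivatives on a tape, and a single reverse sweep accumulates the adjoints.

```python
import math

class Tape:
    """Minimal reverse-mode AD tape; records statements in evaluation order."""
    def __init__(self):
        self.ops = []   # entries: (output variable, [(input, partial), ...])

class Var:
    def __init__(self, value, tape, deps=()):
        self.value, self.tape, self.grad = value, tape, 0.0
        tape.ops.append((self, list(deps)))
    def __add__(self, other):
        return Var(self.value + other.value, self.tape,
                   [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return Var(self.value * other.value, self.tape,
                   [(self, other.value), (other, self.value)])

def sin(x):
    return Var(math.sin(x.value), x.tape, [(x, math.cos(x.value))])

def backward(y):
    """Reverse sweep: propagate adjoints from the output to all inputs."""
    y.grad = 1.0
    for out, deps in reversed(y.tape.ops):
        for inp, partial in deps:
            inp.grad += partial * out.grad

tape = Tape()
x = Var(2.0, tape)
y = Var(3.0, tape)
z = x * y + sin(x)      # records the whole statement on the tape
backward(z)             # afterwards: x.grad = y + cos(x), y.grad = x
```

Because the tape stores one entry per elemental operation, memory grows with the recorded computation, which is exactly the motivation for the checkpointing and preaccumulation techniques discussed above.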
Automated theorem proving is a search problem and, by its undecidability, a very difficult one. The challenge in the development of a practically successful prover is the mapping of the extensively developed theory into a program that runs efficiently on a computer. Starting from a level-based system model for automated theorem provers, in this work we present different techniques that are important for the development of powerful equational theorem provers. The contributions can be divided into three areas:
Architecture. We present a novel prover architecture that is based on a set-based compression scheme. With moderate additional computational costs we achieve a substantial reduction of the memory requirements. Further wins are architectural clarity, the easy provision of proof objects, and a new way to parallelize a prover which shows respectable speed-ups in practice. The compact representation paves the way to new applications of automated equational provers in the area of verification systems.
Algorithms. To improve the speed of a prover we need efficient solutions for the most time-consuming sub-tasks. We demonstrate improvements of several orders of magnitude for two of the most widely used term orderings, LPO and KBO. Other important contributions are a novel generic unsatisfiability test for ordering constraints and, based on that, a sufficient ground reducibility criterion with an excellent cost-benefit ratio.
Redundancy avoidance. The notion of redundancy is of central importance to justify simplifying inferences which are used to prune the search space. In our experience with unfailing completion, the usual notion of redundancy is not strong enough. In the presence of associativity and commutativity, the provers often get stuck enumerating equations that are permutations of each other. By extending and refining the proof ordering, many more equations can be shown redundant. Furthermore, our refinement of the unfailing completion approach allows us to use redundant equations for simplification without the need to consider them for generating inferences. We describe the efficient implementation of several redundancy criteria and experimentally investigate their influence on the proof search. The combination of these techniques results in a considerable improvement of the practical performance of a prover, which we demonstrate with extensive experiments for the automated theorem prover Waldmeister. The progress achieved allows the prover to solve problems that were previously out of reach. This considerably enhances the potential of the prover and opens up the way for new applications.
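For readers unfamiliar with the term orderings mentioned above, the recursive textbook definition of LPO can be sketched as follows (a didactic version; the point of the thesis is precisely that efficient implementations look very different from this naive recursion).

```python
def occurs(x, s):
    """Does variable x occur in term s? Variables are strings,
    compound terms are tuples (symbol, arg1, ..., argn)."""
    if isinstance(s, str):
        return s == x
    return any(occurs(x, a) for a in s[1:])

def lpo_gt(s, t, prec):
    """Lexicographic path ordering: s >_lpo t under precedence prec
    (a dict mapping function symbols to integers, higher = greater)."""
    if isinstance(t, str):                  # t is a variable
        return s != t and occurs(t, s)
    if isinstance(s, str):                  # a variable never dominates a compound
        return False
    f, *ss = s
    g, *ts = t
    # (1) some argument of s already dominates or equals t
    if any(si == t or lpo_gt(si, t, prec) for si in ss):
        return True
    # (2) f > g in the precedence and s dominates every argument of t
    if prec[f] > prec[g]:
        return all(lpo_gt(s, tj, prec) for tj in ts)
    # (3) equal head symbols: lexicographic comparison of the argument lists
    if f == g:
        for si, ti in zip(ss, ts):
            if si == ti:
                continue
            return lpo_gt(si, ti, prec) and all(lpo_gt(s, tj, prec) for tj in ts)
    return False
```

Precedence-guided recursion like this is called once per ordering query; in a saturation prover such queries dominate the run time, which motivates the order-of-magnitude optimizations reported above.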
Stochastic Network Calculus (SNC) emerged from two branches in the late 90s: the theory of effective bandwidths and its predecessor, the Deterministic Network Calculus (DNC). As such, SNC's goal is to analyze queueing networks and support their design and control.
In contrast to queueing theory, which strives for similar goals, SNC uses inequalities to circumvent complex situations, such as stochastic dependencies or non-Poisson arrivals. Leaving behind the objective of computing exact distributions, SNC derives stochastic performance bounds. Such a bound would, for example, guarantee a system's maximal queue length that is violated only with a known small probability.
This work includes several contributions towards the theory of SNC. They are sorted into five main contributions:
(1) The first chapters give a self-contained introduction to deterministic network calculus and its two branches of stochastic extensions. The focus lies on the notion of network operations, which allow one to derive the performance bounds and to simplify complex scenarios.
(2) The author created the first open-source tool to automate the steps of calculating and optimizing MGF-based performance bounds. The tool automatically calculates end-to-end performance bounds via a symbolic approach. In a second step, this solution is numerically optimized. A modular design allows the user to implement their own functions, like traffic models or analysis methods.
(3) The problem of the initial modeling step is addressed with the development
of a statistical network calculus. In many applications the properties of included
elements are mostly unknown. To that end, assumptions about the underlying
processes are made and backed by measurement-based statistical methods. This
thesis presents a way to integrate possible modeling errors into the bounds of SNC.
As a byproduct, a dynamic view of the system is obtained that allows SNC to adapt to non-stationarities.
(4) Probabilistic bounds are fundamentally different from deterministic bounds: while deterministic bounds hold for all times of the analyzed system, this is not true for probabilistic bounds. Stochastic bounds, although valid for every time t, only hold for one time instance at once. Sample path bounds are usually only achieved by using Boole's inequality. This thesis presents an alternative method by adapting the theory of extreme values.
(5) A long-standing problem of SNC is the construction of stochastic bounds for a window flow controller. The corresponding problem for DNC was solved over a decade ago but remained open for SNC. This thesis presents two methods for a successful application of SNC to the window flow controller.
Toxicology, the study of the adverse effects of chemicals and physical agents on living organisms, is a critical process in chemical and drug development. The low throughput, high costs, limited predictivity and ethical concerns related to traditional animal-based toxicity studies render them impractical to assess the growing number and complexity of both existing and new compounds and their formulations. These factors together with the increasing implementation of more demanding regulations, evidence the current need to develop innovative, reliable, cost effective and high throughput toxicological methods.
The use of metabolomics in vitro presents the powerful combination of a human-relevant system with a multiparametric approach that allows assessing multiple endpoints in a single biological sample. Applying metabolomics in a cell-based system offers an alternative both to the ethical concerns and limited relevance of animal testing and to the restrictive single-endpoint nature of conventional toxicological in vitro assays. However, there are still challenges that hamper the expansion of metabolomics beyond a research tool to a feasible and implementable technology for toxicology assessment.
The aim of this dissertation is to advance the applications of in vitro metabolomics in toxicology by addressing three major challenges that have limited its widespread implementation in the field. In chapter 2, the high cost and low throughput of in vitro metabolomics were addressed through the development, standardization, and proof of concept of a high-throughput targeted LC-MS/MS in vitro metabolomics platform for the characterization of hepatotoxicity. In chapter 3, the use of the developed in vitro metabolomics system was expanded beyond hazard identification to deriving dose- and time-response metrics that were shown to be useful for point-of-departure (PoD) estimations for human risk assessment. Finally, in chapter 4, in order to increase the reliability of and confidence in using in vitro metabolomics data for risk assessment, an attempt was made to improve the human relevance of the metabolomics in vitro assays by implementing and evaluating in vitro metabolomics in a hiPSC-derived 3D liver organoid system.
The work developed here demonstrates the suitability of in vitro metabolomics for mechanism-based hazard identification and risk assessment. By advancing the applications of metabolomics in toxicology, this work contributes significantly to the aim of 21st-century toxicology of human-relevant, non-animal toxicological testing, supporting the toxicological task of protecting human health and the environment.
The recently established technologies in the areas of distributed measurement and intelligent information processing systems, e.g., Cyber-Physical Systems (CPS), Ambient Intelligence/Ambient Assisted Living systems (AmI/AAL), the Internet of Things (IoT), and Industry 4.0, have increased the demand for the development of intelligent integrated multi-sensory systems to serve rapidly growing markets [1, 2]. These trends increase the significance of complex measurement systems that incorporate numerous advanced methodological implementations, including electronic circuits, signal processing, and multi-sensory information fusion. In particular, in multi-sensory cognition applications, the skill-demanding design tasks, e.g., method selection, parameterization, model analysis, and processing chain construction, require immense effort and conventionally are done manually by an expert designer. Moreover, strong technological competition imposes even more complicated design problems with multiple constraints, e.g., cost, speed, power consumption, flexibility, and reliability. Thus, the conventional human-expert-based design approach may not be able to cope with the increasing demand in numbers, complexity, and diversity. To alleviate the issue,
the design automation approach has been the topic for numerous research works [3-14]
and has been commercialized to several products [15-18]. Additionally, the dynamic
adaptation of intelligent multi-sensor systems is the potential solution for developing
dependable and robust systems. Intrinsic evolution approach and self-x properties [19],
which include self-monitoring, -calibrating/trimming, and -healing/repairing, are among
the best candidates for the issue. Motivated from the ongoing research trends and based
on the background of our research work [12, 13] among the pioneers in this topic, the
research work of the thesis contributes to the design automation of intelligent integrated
multi-sensor systems.
In this research work, the Design Automation for Intelligent COgnitive systems with self-X properties (DAICOX) architecture is presented with the aim of tackling the design effort and providing high-quality and robust solutions for multi-sensor intelligent systems. The DAICOX architecture is therefore conceived with the goals listed below:
- Perform front-to-back complete processing chain design with automated method selection and parameterization,
- Provide a rich choice of pattern recognition methods in the design method pool,
- Associate design information via an interactive user interface and visualization along with intuitive visual programming,
- Deliver high-quality solutions outperforming conventional approaches by using multi-objective optimization,
- Gain adaptability, reliability, and robustness of designed solutions with self-x properties.
Derived from the goals, several scientific methodological developments and implementations,
particularly in the areas of pattern recognition and computational intelligence,
will be pursued as part of the DAICOX architecture in the research work of this thesis.
The method pool is aimed to contain a rich choice of methods and algorithms covering
data acquisition and sensor configuration, signal processing and feature computation,
dimensionality reduction, and classification. These methods will be selected and parameterized
automatically by the DAICOX design optimization to construct a multi-sensory
cognition processing chain. A collection of non-parametric feature quality assessment functions for the Dimensionality Reduction (DR) process will be presented. In addition to standard DR methods, variations of feature selection, in particular feature weighting, will be proposed. Three different classification categories
shall be incorporated in the method pool. Hierarchical classification approach will be
proposed and developed to serve as a multi-sensor fusion architecture at the decision
level. Besides multi-class classification, one-class classification methods, e.g., One-Class SVM and NOVCLASS, will be presented to extend the functionality of the solutions, in particular for anomaly and novelty detection. DAICOX is conceived to effectively handle the
problem of method selection and parameter setting for a particular application yielding
high performance solutions. The processing chain construction tasks will be carried
out by meta-heuristic optimization methods, e.g., Genetic Algorithms (GA) and Particle
Swarm Optimization (PSO), with multi-objective optimization approach and model
analysis for robust solutions. In addition to the automated system design mechanisms, DAICOX will facilitate the design tasks with intuitive visual programming and various options for visualization. The design database concept of DAICOX is aimed at allowing the reusability and extensibility of designed solutions gained from previous knowledge.
Thus, the cooperative design of machine and knowledge from the design expert can also
be utilized for obtaining fully enhanced solutions. In particular, the integration of self-x
properties as well as intrinsic optimization into the system is proposed to gain enduring
reliability and robustness. Hence, DAICOX will allow the inclusion of dynamically
reconfigurable hardware instances to the designed solutions in order to realize intrinsic
optimization and self-x properties.
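The GA-based feature selection mentioned above can be sketched on a toy problem (an illustrative stand-in with assumed synthetic data and a simplified fitness function, not DAICOX's actual implementation):

```python
import random

random.seed(0)

N_FEAT = 8
INFORMATIVE = {0, 1}   # hypothetical: only these features carry class information

def make_data(n=60):
    """Synthetic two-class data; class 1 is shifted in the informative features."""
    data = []
    for label in (0, 1):
        for _ in range(n // 2):
            x = [random.gauss(0.0, 1.0) for _ in range(N_FEAT)]
            for i in INFORMATIVE:
                x[i] += 3.0 * label
            data.append((x, label))
    return data

def fitness(mask, data):
    """Toy separability criterion: squared distance of the class means over the
    selected features, minus a penalty per selected feature (a simple stand-in
    for a multi-objective feature quality assessment)."""
    sel = [i for i in range(N_FEAT) if mask[i]]
    if not sel:
        return -1.0
    sums = {0: [0.0] * len(sel), 1: [0.0] * len(sel)}
    counts = {0: 0, 1: 0}
    for x, label in data:
        counts[label] += 1
        for k, i in enumerate(sel):
            sums[label][k] += x[i]
    dist = sum((sums[1][k] / counts[1] - sums[0][k] / counts[0]) ** 2
               for k in range(len(sel)))
    return dist - 0.5 * len(sel)

def ga_select(data, pop_size=30, gens=40):
    """Elitist GA over binary feature masks with one-point crossover."""
    pop = [[random.randint(0, 1) for _ in range(N_FEAT)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, data), reverse=True)
        parents = pop[:pop_size // 2]            # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, N_FEAT)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if random.random() < 0.2:            # bit-flip mutation
                j = random.randrange(N_FEAT)
                child[j] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda m: fitness(m, data))

data = make_data()
best_mask = ga_select(data)
```

The penalty term plays the role of a second objective (solution compactness); a full multi-objective formulation as in DAICOX would keep a Pareto front instead of scalarizing.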
As a result of the research work in this thesis, a comprehensive intelligent multi-sensor system design architecture with automated method selection, parameterization, and model analysis is developed in compliance with open-source multi-platform software. It is integrated with an intuitive design environment, which includes a visual programming concept and design information visualizations. Thus, the design effort is minimized, as investigated in three case studies with different application backgrounds: food analysis (LoX), driving assistance (DeCaDrive), and magnetic localization. Moreover, DAICOX achieved better solution quality than the manual approach in all cases; the classification rate was increased by 5.4%, 0.06%, and 11.4% in the LoX, DeCaDrive, and magnetic localization cases, respectively. In the LoX case study, the design time was reduced by 81.87% compared to the conventional approach. At the current state of development, a number of novel contributions of the thesis are outlined below.
- Automated processing chain construction and parameterization for the design of signal processing and feature computation.
- Novel dimensionality reduction methods, e.g., GA- and PSO-based feature selection and feature weighting with multi-objective feature quality assessment.
- A modification of the non-parametric compactness measure for feature space quality assessment.
- A decision-level sensor fusion architecture based on the proposed hierarchical classification approach, i.e., H-SVM.
- A collection of one-class classification methods and a novel variation, i.e., NOVCLASS-R.
- Automated design toolboxes supporting front-to-back design with automated model selection and information visualization.
In this research work, due to the complexity of the task, not all of the identified goals have been comprehensively reached yet, nor has the complete architecture definition been fully implemented. Based on the currently implemented tools and frameworks, ongoing development of DAICOX is progressing towards the complete architecture. Potential future improvements are the extension of the method pool with a richer choice of methods and algorithms, processing chain breeding via a graph-based evolution approach, incorporation of intrinsic optimization, and the integration of self-x properties. With these features, DAICOX will improve its aptness for designing advanced systems to serve the growing technologies of distributed intelligent measurement systems, in particular CPS and Industry 4.0.
Advantage of Filtering for Portfolio Optimization in Financial Markets with Partial Information
(2016)
In a financial market we consider three types of investors trading with a finite time horizon with access to a bank account as well as multiple stocks: the fully informed investor, the partially informed investor, whose only source of information are the stock prices, and an investor who does not use this information. The drift is modeled either as following linear Gaussian dynamics or as being a continuous-time Markov chain with finite state space. The optimization problem is to maximize expected utility of terminal wealth. The case of partial information is based on the use of filtering techniques. Conditions to ensure boundedness of the expected value of the filters are developed, in the Markov case also for positivity. For the Markov-modulated drift, boundedness of the expected value of the filter relates strongly to portfolio optimization: effects are studied and quantified. The derivation of an equivalent, lower-dimensional market is presented next; it is a type of Mutual Fund Theorem that is shown here.
Gains and losses emanating from the use of filtering are then discussed in detail for different market parameters: For infrequent trading we find that both filters need to comply with the boundedness conditions to be an advantage for the investor. Losses are minimal in case the filters are advantageous. For an increasing number of stocks, the boundedness conditions again need to be met. Losses in this case depend strongly on the added stocks. The relation of boundedness and portfolio optimization in the Markov model leads here to increasing losses for the investor if the boundedness condition is to hold for all numbers of stocks. In the Markov case, the losses for different numbers of states are negligible in case more states are assumed than were originally present. Assuming fewer states leads to high losses. Again for the Markov model, a simplification of the complex optimal trading strategy for power utility in the partial information setting is shown to cause only minor losses. If the market parameters are such that short-selling and borrowing constraints are in effect, these constraints may lead to big losses depending on how much effect the constraints have. They can, however, also be an advantage for the investor in case the expected value of the filters does not meet the conditions for boundedness.
All results are implemented and illustrated with the corresponding numerical
findings.
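The filtering step for the linear Gaussian drift model can be sketched with a one-dimensional Kalman filter on simulated returns (all parameters below are assumed for illustration, not taken from the thesis): the unobserved drift follows an Ornstein-Uhlenbeck process, and the partially informed investor estimates it from the return observations alone.

```python
import random

random.seed(1)

# Hypothetical monthly market: unobserved drift mu follows an OU process,
# the partially informed investor only sees r_k = mu_k*dt + sig_r*sqrt(dt)*noise.
dt, kappa, mu_bar, sig_mu, sig_r = 1.0/12, 1.0, 0.05, 0.3, 0.1

def simulate(T=2000):
    mu, path = mu_bar, []
    for _ in range(T):
        r = mu*dt + sig_r*(dt**0.5)*random.gauss(0.0, 1.0)   # observed return
        path.append((r, mu))
        mu += kappa*(mu_bar - mu)*dt + sig_mu*(dt**0.5)*random.gauss(0.0, 1.0)
    return path

def kalman_drift(returns):
    """Kalman filter for the conditional mean of the drift given past returns."""
    m, P = mu_bar, sig_mu**2/(2.0*kappa)           # stationary prior
    a, Q, H, R = 1.0 - kappa*dt, sig_mu**2*dt, dt, sig_r**2*dt
    est = []
    for r in returns:
        m, P = mu_bar + a*(m - mu_bar), a*a*P + Q  # predict
        K = P*H/(H*H*P + R)                        # Kalman gain
        m, P = m + K*(r - H*m), (1.0 - K*H)*P      # update with observed return
        est.append(m)
    return est

path = simulate()
estimates = kalman_drift([r for r, _ in path])
```

The filtered drift estimate is what the partially informed investor plugs into the optimal strategy; the fully informed investor uses the true mu, and the uninformed investor simply uses the long-run mean mu_bar.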
Aerodynamische Mehrpunktoptimierung eines Hochdruckverdichters mithilfe des Adjungiertenverfahrens
(2022)
In the present thesis, a process chain is developed with which aerodynamic numerical optimizations of a high-pressure compressor can be carried out efficiently while taking several relevant operating points into account.
To this end, two parametrizations are devised and implemented so that, in addition to the conventional blading parameters, the operating-point-specific setting of the variable stator vanes and their scheduling law can be treated as free parameters.
To determine efficiently the sensitivity information required for optimizing the high-pressure compressor, which comprises 17 blade rows, the adjoint method is employed. It nearly decouples the effort of computing the gradient information from the number of free parameters. The computationally intensive part of the process chain, the flow solution and its evaluation, is carried out with the discrete adjoint flow solver adjointTRACE and the post-processing tool adjointPOST. The fixed-point approach used to solve the adjoint equation yields a consistent adjoint flow solution whose convergence rate matches that of the primal flow solution.
The sensitivity information obtained with the process chain developed in this thesis is successfully validated against the finite-difference approach.
Using the validated process chain, multi-point optimizations are carried out successfully with the goal of significantly increasing the surge margin at one of the considered operating points. The optimization results, improved by the newly implemented variable-stator parametrizations, clearly demonstrate the influence and potential of adjustable stator stagger angles in the design process.
Owing to the developed process chain, the turnaround times of the optimizations are reduced by a factor of four to ten compared with the conventional procedure. The even greater reduction in computational effort measured in CPU time demonstrates the necessity of the adjoint method for aerodynamic optimization tasks with several hundred parameters and few constraints.
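The decoupling of gradient cost from parameter count can be made explicit with standard adjoint calculus (the notation below is generic, not taken from the thesis). With flow state \(u\), design parameters \(p\), discretized flow equations \(R(u,p)=0\) and objective \(J(u,p)\):

```latex
% Total derivative of the objective and the linearized state equation:
\frac{\mathrm{d}J}{\mathrm{d}p}
  = \frac{\partial J}{\partial p}
  + \frac{\partial J}{\partial u}\,\frac{\mathrm{d}u}{\mathrm{d}p},
\qquad
\frac{\partial R}{\partial u}\,\frac{\mathrm{d}u}{\mathrm{d}p}
  = -\frac{\partial R}{\partial p}.
% Introducing the adjoint variable lambda eliminates du/dp:
\left(\frac{\partial R}{\partial u}\right)^{\!\top}\!\lambda
  = \left(\frac{\partial J}{\partial u}\right)^{\!\top}
\quad\Longrightarrow\quad
\frac{\mathrm{d}J}{\mathrm{d}p}
  = \frac{\partial J}{\partial p}
  - \lambda^{\top}\,\frac{\partial R}{\partial p}.
```

One adjoint solve per objective thus yields the complete gradient, regardless of how many hundred parameters are free.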
The present thesis describes the experimental performance determination and numerical
modeling of an aerostatic porous bearing made of an orthotropically layered ceramic
composite material (CMC). The high temperature resistance, low thermal expansion and
high reusability of this material makes it eminently suitable for use in highly stressed
fluid-film bearing applications.
The work involves the development of an aerostatic journal bearing made of porous,
orthotropically layered carbon fiber-reinforced carbon composite (C/C) and the design
of a journal bearing test rig, which contained additional aerostatic support bearings and
six optical laser triangulation sensors. The sensor system enabled the measurement of
lubricant film thickness and shaft misalignment. As a result of the small air lubrication
clearance of 30 μm, the focus was on low runout and the determination of shaft
misalignments.
The preliminary tests included the determination of the permeability of the porous material
and the applicability of Darcy’s law. A scan of the inner surface of the porous bushing
revealed a characteristic grooved structure, which can be attributed to the layered structure
of the material. Bearing tests were conducted up to a rotational speed of 8000 rpm and a
pressure ratio of 5 to 7. No significant effect of rotational speed on load-carrying capacity
and gas consumption was observed in this operating range. The examined operating points
did not indicate any sign of the occurrence of the pneumatic hammer. A temporary load of
below 90 N on the bearing and an eccentricity ratio below 0.8 did not cause any significant
wear on the shaft.
Four numerical models, based on Reynolds’ lubricant film equation and Darcy’s law, were
developed. The models were gradually extended with consideration of shaft misalignment,
the compressibility of the gas, the geometry of the pressure supply chamber and the
embedding of the groove structure. The models were validated with external publications
and the performed tests.
Numerous studies have investigated aerostatic porous bearings made of sintered metal
and graphite. Current computational approaches to determine a fast preliminary design
reached maximum deviations of approximately 20 - 24% compared to experimental tests. One
of the central claims of this research was to extend this area of investigation to porous,
orthotropically layered bearings made of C/C. The developed extended Full-Darcy model
achieved a maximum deviation in the load-carrying capacity of 21.6% and in the gas
consumption of 23.5%.
This study demonstrates the applicability of a resistant material from the aerospace field
(reusable thrust chambers made of CMC) for highly stressed and durable fluid-film bearings.
Furthermore, a numerical model for the computation and design of these bearings was
developed and validated.
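The applicability of Darcy's law noted above can be illustrated with a minimal sketch (all numbers and the function name are hypothetical, not measurements from the thesis):

```python
# Darcy's law for one-dimensional flow through a porous bushing wall:
#   Q = k * A / mu * (dP / L)
# rearranged to estimate the permeability k from a steady-state measurement.

def permeability(flow_rate, area, viscosity, pressure_drop, thickness):
    """Estimate the Darcy permeability k [m^2] of a porous sample."""
    return flow_rate * viscosity * thickness / (area * pressure_drop)

# Hypothetical measurement: 1e-6 m^3/s of air (mu ~ 1.8e-5 Pa*s) through
# a 5 mm thick sample of 1e-3 m^2 area at 2 bar pressure drop.
k = permeability(1e-6, 1e-3, 1.8e-5, 2e5, 5e-3)
print(k)  # 4.5e-16 m^2
```

Rearranging the same relation the other way predicts the gas consumption of a bushing of known permeability, which is the quantity compared against the bearing tests.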
In recent years, the Internet has become a major source of visual information exchange. Popular social platforms have reported an average of 80 million photo uploads a day. These images are often accompanied by a user-provided one-line text, called an image caption. Deep Learning techniques have made significant advances towards the automatic generation of factual image captions. However, captions generated by humans are much more than mere factual image descriptions. This work takes a step towards enhancing a machine's ability to generate image captions with human-like properties. We name this field Affective Image Captioning to differentiate it from other areas of research focused on generating factual descriptions.
To deepen our understanding of human-generated captions, we first perform a large-scale crowd-sourcing study on a subset of the Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100M). Three thousand random image-caption pairs were evaluated by native English speakers with respect to dimensions such as focus, intent, emotion, meaning, and visibility. Our findings indicate three important underlying properties of human captions: subjectivity, sentiment, and variability. Based on these results, we develop Deep Learning models to address each of these dimensions.
To address the subjectivity dimension, we propose the Focus-Aspect-Value (FAV) model (along with a new task of aspect-detection) to structure the process of capturing subjectivity. We also introduce a novel dataset, aspects-DB, following this way of modeling. To implement the model, we propose a novel architecture called Tensor Fusion. Our experiments show that Tensor Fusion outperforms the state-of-the-art cross residual networks (XResNet) in aspect-detection.
Towards the sentiment dimension, we propose two models: Concept & Syntax Transition Network (CAST) and Show & Tell with Emotions (STEM). The CAST model uses a graphical structure to generate sentiment. The STEM model uses a neural network to inject adjectives into a neutral caption. Achieving a high score of 93% in human evaluation, these models were ranked in the top 3 at the ACMMM Grand Challenge 2016.
To address the last dimension, variability, we take a generative approach called Generative Adversarial Networks (GAN) along with multimodal fusion. Our modified GAN, with two discriminators, is trained using Reinforcement Learning. We also show that it is possible to control the properties of the generated caption-variations with an external signal. Using sentiment as the external signal, we show that we can easily outperform state-of-the-art sentiment caption models.
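The GAN-with-Reinforcement-Learning training mentioned above can be illustrated by a toy policy-gradient sketch (hypothetical rewards and a three-caption "policy"; this is not the thesis architecture): a discriminator-like score acts as reward, and the REINFORCE update raises the probability of high-reward captions.

```python
import math, random

random.seed(0)

rewards = [0.1, 0.9, 0.3]   # hypothetical discriminator scores per caption
logits = [0.0, 0.0, 0.0]    # "policy" over three candidate captions
lr = 0.5

def probs(logits):
    z = [math.exp(l) for l in logits]
    s = sum(z)
    return [x / s for x in z]

for _ in range(200):
    p = probs(logits)
    a = random.choices(range(3), weights=p)[0]        # sample a caption
    baseline = sum(pi * r for pi, r in zip(p, rewards))
    advantage = rewards[a] - baseline
    for i in range(3):                                # REINFORCE update
        grad = (1.0 if i == a else 0.0) - p[i]
        logits[i] += lr * advantage * grad

print(probs(logits).index(max(probs(logits))))        # the high-reward caption wins
```

Replacing the fixed reward vector by the output of a trained discriminator (or a sentiment signal) gives the controllable-generation setup described above.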
Aflatoxins, a group of mycotoxins produced by various mold species within the genus Aspergillus, have been extensively investigated for their potential to contaminate food and feed, rendering them unfit for consumption. Nevertheless, the role of aflatoxins as environmental contaminants in soil, which represents their natural habitat, remains a relatively unexplored area in aflatoxin research. This knowledge gap can be attributed, in part, to the methodological challenges associated with detecting aflatoxins in soil. The main objective of this PhD project was to develop and validate an analytical method that allows monitoring of aflatoxins in soil, and to scrutinize the mechanisms and extent of occurrence of aflatoxins in soil, the processes governing their dissipation, and their impact on the soil microbiome and associated soil functions. By utilizing an efficient extraction solvent mixture comprising acetonitrile and water, coupled with an ultrasonication step, recoveries of 78% to 92% were achieved, enabling reliable determination of trace levels in soil ranging from 0.5 to 20 µg kg⁻¹. However, in a field trial conducted in a high-risk model region for aflatoxin contamination in Sub-Saharan Africa, no aflatoxins were detected using this procedure, underscoring the complexities of field monitoring. These challenges encompassed rapid degradation, spatial heterogeneity, and seasonal fluctuations in aflatoxin occurrence. Degradation experiments revealed the importance of microbial and photochemical processes in the dissipation of aflatoxins in soil, with half-lives of 20 - 65 days. The rate of dissipation was found to be influenced by soil properties, most notably soil texture and the initial concentration of aflatoxins in the soil. An exposure study provided evidence that aflatoxins do not pose a substantial threat to the soil microbiome, encompassing microbial biomass, activity, and catabolic functionality.
This was particularly evident in clayey soils, where the toxicity of aflatoxins diminished significantly due to their strong binding to clay minerals. However, several critical questions remain unanswered, emphasizing the necessity for further research to attain a more comprehensive understanding of the ecological importance of aflatoxins. Future research should prioritize the challenges associated with field monitoring of aflatoxins, elucidate the mechanisms responsible for the dissipation of aflatoxins in soil during microbial and photochemical degradation, and investigate the ecological consequences of aflatoxins in regions heavily affected by aflatoxins, taking into account the interactions between aflatoxins and environmental and anthropogenic stressors. Addressing these questions contributes to a comprehensive understanding of the environmental impact of aflatoxins in soil, ultimately contributing to more effective strategies for aflatoxin management in agriculture.
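Assuming first-order kinetics (a common model for such dissipation data; the thesis may have fitted a different one), the reported half-lives of 20 - 65 days translate into rate constants and residual fractions as follows:

```python
import math

def rate_constant(half_life_days):
    """First-order rate constant k [1/day] from a half-life: k = ln 2 / t_half."""
    return math.log(2) / half_life_days

def remaining_fraction(half_life_days, t_days):
    """Fraction of the initial aflatoxin concentration left after t days."""
    return math.exp(-rate_constant(half_life_days) * t_days)

# For the reported half-life range of 20 - 65 days, after 40 days in soil:
print(round(remaining_fraction(20, 40), 3))  # 0.25  (two half-lives elapsed)
print(round(remaining_fraction(65, 40), 3))  # 0.653
```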
This thesis presents the results obtained on merocyanine dimer aggregates using optical molecular spectroscopy and quantum-chemical calculations. UV/Vis spectroscopy made it possible to identify, among the large number of available dyes, those with a pronounced tendency to aggregate. For nine compounds that tested positive, concentration- and temperature-dependent UV/Vis spectra were recorded. The evaluation was accomplished with a purpose-built algorithm that computes not only the aggregation constant but also the pure spectra of monomer and dimer. For a series of eight new merocyanines, a comprehensive characterization of the electro-optical properties was presented and the results were discussed with regard to their application. For two further dyes, an influence of an external electric field on the dimerization was confirmed free of discrepancies. Implications of the observed findings for the design of photonic materials with excitonically coupled dimers were discussed. Steady-state and time-resolved fluorescence measurements allowed the phenomenon of emission from H-type dimers, previously known for only one dye, to be demonstrated for three further merocyanines. A precise spectral separation of the contributions of monomer and dimer in absorption and emission was achieved, providing for the first time full quantitative evidence for the relaxation channel of H-type aggregates predicted by Kasha[1] in 1965. Quantum-chemical calculations at the MP2 level allowed the geometries of six monomers and dimers to be optimized and verified against experimental structural information. On the basis of these geometries, essential properties of the electronic ground and excited states were computed, revealing agreements with and differences from the experiments.
Furthermore, a way to predict the aggregation tendency of a given structure type solely from quantum-chemical results was presented. Regarding the fundamental driving forces of aggregation, the analysis showed that the dimerization rests essentially on electrostatic dipole-dipole and dispersion interactions, with additional contributions from local interactions that depend on the topology of the chromophores.
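The monomer-dimer equilibrium underlying such concentration-dependent evaluations can be sketched as follows (a generic mass-action model with hypothetical numbers, not the algorithm from the thesis): for a dimerization constant K_D = [D]/[M]^2 and total dye concentration c0 = [M] + 2[D], the monomer concentration follows from a quadratic equation.

```python
import math

def monomer_concentration(K_D, c0):
    """Solve 2*K_D*M**2 + M - c0 = 0 for the monomer concentration M."""
    return (-1 + math.sqrt(1 + 8 * K_D * c0)) / (4 * K_D)

def dimer_fraction(K_D, c0):
    """Fraction of dye molecules bound in dimers: 2[D]/c0 = (c0 - M)/c0."""
    M = monomer_concentration(K_D, c0)
    return (c0 - M) / c0

# Hypothetical dimerization constant K_D = 1e4 L/mol at two concentrations:
print(round(dimer_fraction(1e4, 1e-5), 3))  # 0.146  (dilute: mostly monomer)
print(round(dimer_fraction(1e4, 1e-3), 3))  # 0.8    (concentrated: mostly dimer)
```

Fitting this fraction to the concentration-dependent absorption spectra is what yields the aggregation constant together with the pure monomer and dimer spectra.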
Providers of product-service systems (PSS) in the capital goods sector are increasingly exposed to a VUCA world. New technologies and business models, rapidly changing customer needs, and the involvement of many internal and external actors result in complexity, uncertainty, and short-term changes to which PSS providers must react in order to offer suitable solutions early on. PSS planning is concerned with identifying and specifying new PSS ideas. Most PSS planning approaches focus on planning physical products, service products, supporting infrastructure, and networks of actors, and involve the customer in the planning. Because of the aforementioned VUCA influences on PSS planning, the planning itself must be adapted to rapidly changing conditions. Agile methods from software development offer potential here. In particular the agile method Design Thinking, which places the customer at the center of development activities within a short-cycle procedure and relies on prototypes for rapidly realizing solutions, lends itself to use in PSS planning. This thesis addresses the question of how Design Thinking can be integrated into PSS planning so that changes can be responded to adequately in the future while the advantages of existing approaches, such as customer orientation, are not neglected. Using a modeling approach, a method is developed that, with roles, activities, techniques, and results, enables the use of Design Thinking for agile PSS planning, involves the customer at various points, and permits iteration loops when changes occur. Notably, the method can be initiated both technology-driven and market-driven.
The method was validated within a joint project at GRIMME Landmaschinenfabrik GmbH & Co. KG.
Organizational routines constitute how work is accomplished in organizations. This dissertation thesis draws on recent routine research and is anchored in the field of organization theory. The thesis consists of four separate manuscripts that contribute to related research fields such as agility or coordination research from a routine perspective while also extending routine dynamics research. Recent routine dynamics research offers a wide perspective on how situated actions within and across routines unfold as emergent accomplishments. This allows us to analyze other organization research phenomena, such as agility and coordination. Accordingly, the first and second manuscripts argue for the adoption of a very dynamic perspective on routines and the incorporation of these insights into agility and coordination research. This is followed by two empirical manuscripts that expand the routine literature based on qualitative research within agile software development. The third manuscript of this dissertation analyzes how situated actions address different temporal orientations (i.e., past, present, and future). Last, the fourth manuscript addresses the performing of roles within and through routines. In general, this dissertation contributes to overall organization research in two ways: (1) by outlining and examining how agility is enacted; (2) by highlighting that actions are performed flexibly to consider the situation at hand.
This thesis contains the mathematical treatment of a special class of analog microelectronic circuits called translinear circuits. The goal is to provide foundations of a new coherent synthesis approach for this class of circuits. The mathematical methods of the suggested synthesis approach come from graph theory, combinatorics, and from algebraic geometry, in particular symbolic methods from computer algebra. Translinear circuits form a very special class of analog circuits, because they rely on nonlinear device models, but still allow a very structured approach to network analysis and synthesis. Thus, translinear circuits play the role of a bridge between the "unknown space" of nonlinear circuit theory and the very well exploited domain of linear circuit theory. The nonlinear equations describing the behavior of translinear circuits possess a strong algebraic structure that is nonetheless flexible enough for a wide range of nonlinear functionality. Furthermore, translinear circuits offer several technical advantages like high functional density, low supply voltage and insensitivity to temperature. This unique profile is the reason that several authors consider translinear networks as the key to systematic synthesis methods for nonlinear circuits. The thesis proposes the usage of a computer-generated catalog of translinear network topologies as a synthesis tool. The idea to compile such a catalog has grown from the observation that on the one hand, the topology of a translinear network must satisfy strong constraints which severely limit the number of "admissible" topologies, in particular for networks with few transistors, and on the other hand, the topology of a translinear network already fixes its essential behavior, at least for static networks, because the so-called translinear principle requires the continuous parameters of all transistors to be the same. 
Even though the admissible topologies are heavily restricted, it is a highly nontrivial task to compile such a catalog. Combinatorial techniques have been adapted to undertake this task. In a catalog of translinear network topologies, prototype network equations can be stored along with each topology. When a circuit with a specified behavior is to be designed, one can search the catalog for a network whose equations can be matched with the desired behavior. In this context, two algebraic problems arise: To set up a meaningful equation for a network in the catalog, an elimination of variables must be performed, and to test whether a prototype equation from the catalog and a specified equation of desired behavior can be "matched", a complex system of polynomial equations must be solved, where the solutions are restricted to a finite set of integers. Sophisticated algorithms from computer algebra are applied in both cases to perform the symbolic computations. All mentioned algorithms have been implemented using C++, Singular, and Mathematica, and are successfully applied to actual design problems of humidity sensor circuitry at Analog Microelectronics GmbH, Mainz. As result of the research conducted, an exhaustive catalog of all static formal translinear networks with at most eight transistors is available. The application for the humidity sensor system proves the applicability of the developed synthesis approach. The details and implementations of the algorithms are worked out only for static networks, but can easily be adopted for dynamic networks as well. While the implementation of the combinatorial algorithms is stand-alone software written "from scratch" in C++, the implementation of the algebraic algorithms, namely the symbolic treatment of the network equations and the match finding, heavily rely on the sophisticated Gröbner basis engine of Singular and thus on more than a decade of experience contained in a special-purpose computer algebra system. 
It should be pointed out that the thesis contains the new observation that the translinear loop equations of a translinear network are precisely represented by the toric ideal of the network's translinear digraph. Altogether, this thesis confirms and strengthens the key role of translinear circuits as systematically designable nonlinear circuits.
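The translinear principle invoked above states that, around a closed loop of base-emitter junctions, the product of the clockwise collector currents equals that of the counterclockwise ones. A minimal sketch on the textbook geometric-mean cell (not a circuit from the thesis):

```python
import math

def translinear_output(i1, i2):
    """Geometric-mean cell: the loop equation i1 * i2 = i_out**2 gives i_out."""
    return math.sqrt(i1 * i2)

def loop_balanced(cw_currents, ccw_currents, tol=1e-12):
    """Check the translinear principle: product(CW) == product(CCW)."""
    cw = math.prod(cw_currents)
    ccw = math.prod(ccw_currents)
    return abs(cw - ccw) <= tol * max(cw, ccw)

i_out = translinear_output(1e-6, 4e-6)                # 2e-6 A
print(i_out)
print(loop_balanced([1e-6, 4e-6], [i_out, i_out]))    # True
```

Because such loop equations are monomial identities in the currents, the topology alone fixes the essential static behavior, which is what makes a precomputed catalog of admissible topologies feasible.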
In the first part of this thesis we study algorithmic aspects of tropical intersection theory. We analyse how divisors and intersection products on tropical cycles can actually be computed using polyhedral geometry. The main focus is the study of moduli spaces, where the underlying combinatorics of the varieties involved allow a much more efficient way of computing certain tropical cycles. The algorithms discussed here have been implemented in an extension for polymake, a software for polyhedral computations.
In the second part we apply the algorithmic toolkit developed in the first part to the study of tropical double Hurwitz cycles. Hurwitz cycles are a higher-dimensional generalization of Hurwitz numbers, which count covers of \(\mathbb{P}^1\) by smooth curves of a given genus with a certain fixed ramification behaviour. Double Hurwitz numbers provide a strong connection between various mathematical disciplines, including algebraic geometry, representation theory and combinatorics. The tropical cycles have a rather complex combinatorial nature, so it is very difficult to study them purely "by hand". Being able to compute examples has been very helpful
in coming up with theoretical results. Our main result states that all marked and unmarked Hurwitz cycles are connected in codimension one and that for a generic choice of simple ramification points the marked cycle is a multiple of an irreducible cycle. In addition we provide computational examples to show that this is the strongest possible statement.
This thesis builds a bridge between singularity theory and computer algebra. To an isolated hypersurface singularity one can associate a regular meromorphic connection, the Gauß-Manin connection, containing a lattice, the Brieskorn lattice. The leading terms of the Brieskorn lattice with respect to the weight and V-filtration of the Gauß-Manin connection define the spectral pairs. They correspond to the Hodge numbers of the mixed Hodge structure on the cohomology of the Milnor fibre and belong to the finest known invariants of isolated hypersurface singularities. The differential structure of the Brieskorn lattice can be described by two complex endomorphisms A0 and A1 containing even more information than the spectral pairs. In this thesis, an algorithmic approach to the Brieskorn lattice in the Gauß-Manin connection is presented. It leads to algorithms to compute the complex monodromy, the spectral pairs, and the differential structure of the Brieskorn lattice. These algorithms are implemented in the computer algebra system Singular.
In modern algebraic geometry solutions of polynomial equations are studied from a qualitative point of view using highly sophisticated tools such as cohomology, \(D\)-modules and Hodge structures. The latter have been unified in Saito’s far-reaching theory of mixed Hodge modules, that has shown striking applications including vanishing theorems for cohomology. A mixed Hodge module can be seen as a special type of filtered \(D\)-module, which is an algebraic counterpart of a system of linear differential equations. We present the first algorithmic approach to Saito’s theory. To this end, we develop a Gröbner basis theory for a new class of algebras generalizing PBW-algebras.
The category of mixed Hodge modules satisfies Grothendieck’s six-functor formalism. In part these functors rely on an additional natural filtration, the so-called \(V\)-filtration. A key result of this thesis is an algorithm to compute the \(V\)-filtration in the filtered setting. We derive from this algorithm methods for the computation of (extraordinary) direct image functors under open embeddings of complements of pure codimension one subvarieties. As side results we show
how to compute vanishing and nearby cycle functors and a quasi-inverse of Kashiwara’s equivalence for mixed Hodge modules.
Describing these functors in terms of local coordinates and taking local sections, we reduce the corresponding computations to algorithms over certain bifiltered algebras. It leads us to introduce the class of so-called PBW-reduction-algebras, a generalization of the class of PBW-algebras. We establish a comprehensive Gröbner basis framework for this generalization representing the involved filtrations by weight vectors.
Ultrasound is one of the most frequently used imaging modalities in cardiology, owing to its low acquisition cost, its non-invasiveness, and its harmlessness to patients. A drawback of existing devices is that only two-dimensional images can be generated. In addition, anatomical constraints prevent these images from being acquired from arbitrary positions, which complicates the analysis of the data and hence the diagnosis. This thesis addresses new algorithmic aspects of four-dimensional cardiac ultrasound, from the acquisition of the raw data, through their synchronization and reconstruction, to their visualization. An additional chapter develops a new technique for further enhancing the visualization as well as for visually editing the ultrasound data. The methods developed here make it possible to remove, or at least mitigate, certain limitations of cardiac ultrasound, above all the restriction to two-dimensional slice images and the limited choice of views.
Software is becoming increasingly concurrent: parallelization, decentralization, and reactivity necessitate asynchronous programming in which processes communicate by posting messages/tasks to others’ message/task buffers. Asynchronous programming has been widely used to build fast servers and routers, embedded systems and sensor networks, and is the basis of Web programming using Javascript. Languages such as Erlang and Scala have adopted asynchronous programming as a fundamental concept with which highly scalable and highly reliable distributed systems are built.
Asynchronous programs are challenging to implement correctly: the loose coupling between asynchronously executed tasks makes control and data dependencies difficult to follow. Even subtle design and programming mistakes can introduce erroneous or divergent behaviors. As asynchronous programs are typically written to provide a reliable, high-performance infrastructure, there is a critical need for analysis techniques that guarantee their correctness.
In this dissertation, I provide scalable verification and testing tools to make asynchronous programs more reliable. I show that the combination of counter abstraction and partial order reduction is an effective approach to the verification of asynchronous systems by presenting PROVKEEPER and KUAI, two scalable verifiers for two types of asynchronous systems. I also provide a theoretical result proving that a counter-abstraction-based algorithm called expand-enlarge-check is an asymptotically optimal algorithm for the coverability problem of branching vector addition systems, as which many asynchronous programs can be modeled. In addition, I present BBS and LLSPLAT, two testing tools for asynchronous programs that efficiently uncover many subtle memory violation bugs.
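Counter abstraction, as used in the verifiers above, replaces an unordered task buffer by a multiset of pending-task counts, so that states differing only in posting order collapse into one abstract state. A minimal reachability sketch over such counter states (the two-task system and its handler table are hypothetical):

```python
from collections import Counter

# Hypothetical handlers: dispatching task 'a' posts a 'b'; 'b' posts nothing.
HANDLERS = {"a": ["b"], "b": []}

def successors(state):
    """All abstract states reachable by dispatching one pending task."""
    for task, count in state:
        if count > 0:
            s = Counter(dict(state))
            s[task] -= 1                 # consume the dispatched task
            for t in HANDLERS[task]:     # a handler may post new tasks
                s[t] += 1
            yield frozenset((k, v) for k, v in s.items() if v > 0)

def reachable(initial_tasks):
    start = frozenset(Counter(initial_tasks).items())
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

states = reachable(["a", "a"])
print(len(states))  # 6 distinct counter states, including the empty buffer
```

Because only counts matter, the abstract state space stays small even when the concrete buffer orderings explode combinatorially.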
In the first part of the thesis we develop the theory of standard bases in free modules over (localized) polynomial rings. Given that linear equations are solvable in the coefficients of the polynomials, we introduce an algorithm to compute standard bases with respect to arbitrary (module) monomial orderings. Moreover, we take special care to principal ideal rings, allowing zero divisors. For these rings we design modified algorithms which are new and much faster than the general ones. These algorithms were motivated by current limitations in formal verification of microelectronic System-on-Chip designs. We show that our novel approach using computational algebra is able to overcome these limitations in important classes of applications coming from industrial challenges.
The second part is based on research in collaboration with Jason Morton, Bernd Sturmfels and Anne Shiu. We devise a general method to describe and compute a certain class of rank tests motivated by statistics. The class of rank tests may loosely be described as being based on computing the number of linear extensions to given partial orders. In order to apply these tests to actual data we developed two algorithms and used our implementations to apply the methodology to gene expression data created at the Stowers Institute for Medical Research. The dataset is concerned with the development of the vertebra. Our rankings proved valuable to the biologists.
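The rank tests described above are based on counting linear extensions of partial orders. A minimal recursive count (a generic algorithm with hypothetical data, not the thesis implementation or the Stowers dataset):

```python
from functools import lru_cache

def count_linear_extensions(elements, less_than):
    """Count linear orders of `elements` compatible with the strict partial
    order given as a set of pairs (a, b) meaning a < b."""
    elements = frozenset(elements)

    @lru_cache(maxsize=None)
    def count(remaining):
        if not remaining:
            return 1
        total = 0
        for x in remaining:
            # x may come first only if no remaining element must precede it
            if not any((a, x) in less_than for a in remaining):
                total += count(remaining - {x})
        return total

    return count(elements)

# A 3-element chain has one linear extension; an antichain has 3! = 6.
print(count_linear_extensions({1, 2, 3}, {(1, 2), (2, 3), (1, 3)}))  # 1
print(count_linear_extensions({1, 2, 3}, set()))                     # 6
```

Counting linear extensions is #P-hard in general, which is why dedicated algorithms were needed to apply such tests to real gene-expression data.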
This thesis, whose subject is located in the field of algorithmic commutative algebra and algebraic geometry, consists of three parts.
The first part is devoted to parallelization, a technique which allows us to take advantage of the computational power of modern multicore processors. First, we present parallel algorithms for the normalization of a reduced affine algebra A over a perfect field. Starting from the algorithm of Greuel, Laplagne, and Seelisch, we propose two approaches. For the local-to-global approach, we stratify the singular locus Sing(A) of A, compute the normalization locally at each stratum and finally reconstruct the normalization of A from the local results. For the second approach, we apply modular methods to both the global and the local-to-global normalization algorithm.
Second, we propose a parallel version of the algorithm of Gianni, Trager, and Zacharias for primary decomposition. For the parallelization of this algorithm, we use modular methods for the computationally hardest steps, such as for the computation of the associated prime ideals in the zero-dimensional case and for the standard bases computations. We then apply an innovative fast method to verify that the result is indeed a primary decomposition of the input ideal. This allows us to skip the verification step at each of the intermediate modular computations.
The proposed parallel algorithms are implemented in the open-source computer algebra system SINGULAR. The implementation is based on SINGULAR's new parallel framework which has been developed as part of this thesis and which is specifically designed for applications in mathematical research.
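The modular methods used above follow a common pattern: compute over several prime fields in parallel and lift the result back to characteristic zero, e.g. via the Chinese remainder theorem. A minimal integer-lifting sketch (generic CRT, not SINGULAR's implementation):

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the inverse mod m
    return x % M

# A coefficient computed modulo several primes is lifted to the integers:
primes = [101, 103, 107]
value = 123456
residues = [value % p for p in primes]
print(crt(residues, primes))  # 123456, since value < 101 * 103 * 107
```

The independent modular computations are what the parallel framework distributes; the cheap verification step mentioned above then certifies the lifted result.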
In the second part, we propose new algorithms for the computation of syzygies, based on an in-depth analysis of Schreyer's algorithm. Here, the main ideas are that we may leave out so-called "lower order terms" which do not contribute to the result of the algorithm, that we do not need to order the terms of certain module elements which occur at intermediate steps, and that some partial results can be cached and reused.
Finally, the third part deals with the algorithmic classification of singularities over the real numbers. First, we present a real version of the Splitting Lemma and, based on the classification theorems of Arnold, algorithms for the classification of the simple real singularities. In addition to the algorithms, we also provide insights into how real and complex singularities are related geometrically. Second, we explicitly describe the structure of the equivalence classes of the unimodal real singularities of corank 2. We prove that the equivalences are given by automorphisms of a certain shape. Based on this theorem, we explain in detail how the structure of the equivalence classes can be computed using SINGULAR and present the results in concise form. Probably the most surprising outcome is that the real singularity type \(J_{10}^-\) is actually redundant.
Alkylcyclopentadienylchromium(II) Compounds and Nitrogen Complexes of Molybdenum and Tungsten
(2016)
The use of chromium(II) acetate as starting material led, in a reaction with Na\( ^4 \)Cp, to the dimeric, acetato-bridged tetraisopropylcyclopentadienylchromium(II) half-sandwich complex [\( ^4Cp \)Cr(OAc)]\( _2 \) 13.
The readily accessible compound 13 was examined for its reactivity and used as starting material for the preparation of further chromium compounds. Upon reduction with potassium under a nitrogen atmosphere, the tetraisopropylcyclopentadienylchromium(II) half-sandwich complex 13 gave the dinuclear nitrido complex [\( ^4Cp \)Cr(N)]\( _2 \), and upon substitution with cyanide the square tetramer [\( ^4Cp \)Cr(CN)]\( _4 \).
With other reaction partners, such as the pseudohalides azide and cyanide, only incomplete conversions were observed, prompting the search for a more suitable starting compound. This was achieved with trimethylhalosilanes, which exchange the acetato ligands of 13 for chloride, bromide and iodide, and, in the case of the trimethylsilyl ester of trifluoromethanesulfonic acid, for trifluoromethanesulfonate.
The reduction of half-sandwich complexes of the type [\( ^RCp \)MoCl\( _4 \)] with potassium in the presence of unsaturated ring systems (toluene, cycloheptatriene or cyclooctatetraene) built on as yet unpublished results with cyclopentadienylnickel and -iron compounds and yielded the following result: while the formulas of the reaction products [\( ^RCp \)Mo\( _2 \)(ring)] invited interpretation as triple-decker sandwich complexes with a ring system as middle deck between the two metal atoms, the mass spectra pointed to a reactivity inconsistent with this.
The compound, prepared under argon, had to be handled under nitrogen at the spectrometer for instrumental reasons, and the spectra indicated incorporation of nitrogen.
Lithium phenylacetylide led to a half-sandwich complex in which the two bridging bromides were exchanged for phenylacetylides. This new half-sandwich compound was investigated and characterized by X-ray absorption spectroscopy.
In order to characterize substituted cyclopentadienyl ligands by their bulkiness, the two cone angles Θ and Ω were introduced as a measure of steric demand, following Tolman. As a model system, cycloheptatrienyl zirconium complexes of the type \([(η^7-C_7H_7)Zr(C_5R_5)]\) were chosen in a collaboration with the group of Tamm (TU Braunschweig), since they fulfill the requirements placed on a model system. In this way, about 20 new cycloheptatrienyl cyclopentadienyl zirconium complexes were obtained, for which the steric demand of the substituted cyclopentadienyl ligands was determined from solid-state structures or DFT calculations.
Starting from highly reactive cyclopentadienyl iron σ-aryl complexes, two different reactions could be investigated. First, the oxidation of \(Cp'''Fe(II)(σ-Mes)\) \((Cp''' = C_5H_2tBu_3; Mes = C_6H_3-2,4,6-Me_3)\) with \(PdCl_2\) to the novel complex \(Cp'''Fe(III)(σ-Mes)Cl\), observed by Wallasch, was investigated further in order to become familiar with this class of compounds. Hexachloroethane proved to be a considerably more suitable oxidant, with whose help the compounds \(^4CpFe(III)(σ-Mes)Cl,\) \(^4CpFe(III)(σ-C_6H_3-2,6-iPr_2)Cl\) and \(^5CpFe(III)(σ-C_6H_3-2,6-iPr_2)Cl\) could be synthesized and characterized.
The σ/π rearrangement of \(^5CpFe(σ-C_6H_3-2,6-iPr_2)\) with trimethylaluminum to \(^5CpFe(π-C_6H_3-2,6-iPr_2-AlMe_3)\) observed by Weismann was also examined more closely. To this end, \(^4CpFe(σ-C_6H_3-2,6-iPr_2)\) was treated with \(AlR_3\) (R = Me, Et, Pr) in order to prepare the corresponding rearrangement products. These were indeed obtained in the reaction with \(AlMe_3\); the reactions with \(AlEt_3\) and \(AlPr_3\), however, gave unexpected products. Through a bromide-alkyl exchange, the complexes \(^4CpFe(π-C_6H_3-2,6-iPr_2-AlEt_2Br)\) and \(^4CpFe(π-C_6H_3-2,6-iPr_2-AlPr_2Br)\) were formed. In addition, four further side or follow-up products were observed in the rearrangement with \(AlEt_3\). One of these is the butyne complex \((^4CpFe)_2(μ,η^2:η^2-but-2-yne)\), presumably formed by coupling of two ethyl groups and dehydrogenation. Reactions of \([^4CpFe(μ-Br)]_2\) or \(^4CpFe(σ-C_6H_3-2,6-iPr_2)\) with ethylmagnesium bromide gave the same complex. A mechanism for the formation of the different products in the reaction of \(^4CpFe(σ-C_6H_3-2,6-iPr_2)\) with \(AlEt_3\) was developed and supported by the insights gained.
Due to its performance, the field of deep learning has gained a lot of attention, with neural networks succeeding in areas like \( \textit{Computer Vision} \) (CV), \( \textit{Natural Language Processing} \) (NLP), and \( \textit{Reinforcement Learning} \) (RL). However, high accuracy comes at a computational cost, as larger networks require longer training times and no longer fit onto a single GPU. To reduce training costs, researchers are looking into the dynamics of different optimizers in order to find ways to make training more efficient. Resource requirements can be limited by reducing model size during training or by designing more efficient models that improve accuracy without increasing network size.
This thesis combines eigenvalue computation and high-dimensional loss surface visualization to study different optimizers and deep neural network models. Eigenvectors of different eigenvalues are computed, and the loss landscape and optimizer trajectory are projected onto the plane spanned by those eigenvectors. A new parallelization method for the stochastic Lanczos method is introduced, resulting in faster computation and thus enabling high-resolution videos of the trajectory and second-order information during neural network training. Additionally, the thesis presents the loss landscape between two minima along with the eigenvalue density spectrum at intermediate points for the first time.
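As a minimal sketch of the eigenvalue machinery involved: the plain, single-vector Lanczos iteration (the non-stochastic, non-parallel core of the method used above) needs only matrix-vector products, so it applies directly to Hessians accessible via Hessian-vector products. Function names and defaults here are illustrative, not those of the thesis code.

```python
import numpy as np

def lanczos(matvec, dim, k, rng=None):
    """k-step Lanczos iteration for a symmetric operator given only as a
    matvec. Returns the Ritz values (eigenvalues of the tridiagonal matrix),
    which approximate the extreme eigenvalues of the operator."""
    rng = rng or np.random.default_rng(0)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    v_prev = np.zeros(dim)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = matvec(v) - beta * v_prev        # three-term recurrence
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-12:                     # invariant subspace found
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    T = (np.diag(alphas)
         + np.diag(betas[:len(alphas) - 1], 1)
         + np.diag(betas[:len(alphas) - 1], -1))
    return np.linalg.eigvalsh(T)
```

For a neural network, `matvec` would be a Hessian-vector product computed by automatic differentiation; projecting the loss surface onto the plane spanned by two Ritz vectors then yields the visualizations described above.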
Secondly, this thesis presents a regularization method for \( \textit{Generative Adversarial Networks} \) (GANs) that uses second-order information. The gradient during training is modified by subtracting the direction of the eigenvector belonging to the largest eigenvalue, preventing the network from falling into the steepest minima and avoiding mode collapse. The thesis also shows the full eigenvalue density spectra of GANs during training.
Thirdly, this thesis introduces ProxSGD, a proximal algorithm for neural network training that guarantees convergence to a stationary point and unifies multiple popular optimizers. Proximal gradients are used to find a closed-form solution to the problem of training neural networks with smooth and non-smooth regularizations, resulting in better sparsity and more efficient optimization. Experiments show that ProxSGD can find sparser networks while reaching the same accuracy as popular optimizers.
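To illustrate the proximal-gradient idea behind such optimizers (a sketch under simplifying assumptions, not the exact ProxSGD update of the thesis): for an \( \ell_1 \) regularizer, the proximal step has a closed-form soft-thresholding solution, which is what produces exact zeros and hence sparsity.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrinks each coordinate toward
    zero and sets coordinates with |x_i| <= t exactly to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_gradient_step(w, grad, lr, lam):
    """One proximal gradient step for loss(w) + lam * ||w||_1:
    a plain gradient step followed by the closed-form prox.
    (Hypothetical signature, used for illustration only.)"""
    return soft_threshold(w - lr * grad, lr * lam)
```

A momentum- or Adam-style direction can be substituted for `grad`, which is the sense in which a proximal scheme can unify several popular optimizers.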
Lastly, this thesis unifies sparsity and \( \textit{neural architecture search} \) (NAS) through the framework of group sparsity. Group sparsity is achieved through \( \ell_{2,1} \)-regularization during training, allowing for filter and operation pruning to reduce model size with minimal sacrifice in accuracy. By grouping multiple operations together, group sparsity can be used for NAS as well. This approach is shown to be more robust while still achieving competitive accuracies compared to state-of-the-art methods.
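The \( \ell_{2,1} \) penalty also admits a closed-form proximal step, which zeroes entire groups (e.g. all weights of one filter) at once; the following is an illustrative sketch with an arbitrary grouping, not the thesis's training code.

```python
import numpy as np

def group_soft_threshold(w, groups, t):
    """Proximal operator of t * sum_g ||w_g||_2 (the l2,1 norm):
    each group is either scaled toward zero or removed entirely,
    which is what enables filter/operation pruning."""
    out = w.copy()
    for g in groups:                     # g: list of indices forming a group
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= t else w[g] * (1.0 - t / norm)
    return out
```

Grouping all weights of one candidate operation into a single group is how the same mechanism extends from pruning to architecture search: a zeroed group removes the operation.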
The use of solid-liquid extraction to recover active compounds from plant material is as old as humanity. Enriched extracts and purified active compounds are therefore commonplace in food technology, biotechnology and pharmacology. The diffusion and mass transfer of the active compounds are influenced by the biological, chemical and physical characteristics of the plant material and by the operating mode of the extraction process. This dissertation analyzes the diffusion behavior of various active compounds from different plant materials when the extraction process is assisted by microwaves, ultrasound or pulsed electric fields. At a fixed temperature, the mass transfer is described with single-stage and multi-stage process engineering models.
In contrast to previous studies, a wide variety of plant structures is selected in combination with alternative process concepts based on the different mechanisms of action. These models are motivated by the diverse plant materials, such as leaves, blossoms, needles, seeds, bark, roots and herbs, under the influence of microwaves, ultrasound and pulsed electric fields, which contribute to increasing the yield of active compounds in the extract. First, properties of the selected plant materials were investigated, such as the bulk density, the content of volatile components, the particle size distribution, and the influence of solvent and phase ratio. As expected, each plant and its active compounds exhibit different characteristics that affect the extraction process with alternative process concepts.
Contrary to expectations, microwave-assisted extraction by dielectric heating achieves, after optimization of the power, the highest yield for all selected plant materials when they are dried. With ultrasound-assisted extraction at a fixed extraction temperature, a larger quantity of active compounds is measured in the extract compared with a stirred batch. Pulsed-electric-field-assisted extraction with a simple pulse protocol and a moderate electric field strength shows, for freshly harvested or undried plant materials and an aqueous extraction medium with at most 20 vol% ethanol, a high yield of active compounds and only mild heating of the solvent.
The calculated rate constants, the activation energies derived from them, and the effective diffusion coefficients, based on the analytical solution of Fick's second law, correlate with the observed macro- and micro-properties of the plant materials. Finally, three-stage cross-current extractions carried out with an automated high-throughput system are modeled on the basis of mass balances, and the solvent quantities actually used are compared with the calculated ones for different plant materials. Owing to their swelling behavior, the active compounds in woody structures and in herbs show attenuated diffusion compared with the release of active compounds from leafy raw material or spice seeds.
In this dissertation, we discuss how to price American-style options. Our aim is to study and improve the regression-based Monte Carlo methods. In order to have good benchmarks to compare with them, we also study the tree methods.
In the second chapter, we investigate the tree methods specifically, first within the Black-Scholes model and then within the Heston model. In the Black-Scholes model, based on Müller's work, we illustrate how to price one-dimensional and multidimensional American options, American Asian options, American lookback options, American barrier options and so on. In the Heston model, based on Sayer's research, we implement his algorithm to price one-dimensional American options. In this way, we obtain good benchmarks for various American-style options and collect them all in the appendix.
In the third chapter, we focus on the regression-based Monte Carlo methods, both theoretically and numerically. Firstly, we introduce two variations, the so-called "Tsitsiklis-Roy method" and "Longstaff-Schwartz method". Secondly, we illustrate the approximation of an American option by its Bermudan counterpart. Thirdly, we explain the sources of low bias and high bias. Fourthly, we compare these two methods using in-the-money paths and all paths. Fifthly, we examine the effect of using different numbers and forms of basis functions. Finally, we study the Andersen-Broadie method and present the lower and upper bounds.
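A compact sketch of the Longstaff-Schwartz method for an American put under Black-Scholes (parameters, path counts and the polynomial basis are illustrative choices, not those of the thesis): continuation values are estimated by regression over in-the-money paths, working backwards from maturity.

```python
import numpy as np

def lsm_american_put(S0, K, r, sigma, T, n_steps, n_paths, seed=0, degree=2):
    """Longstaff-Schwartz pricing of an American put: regress discounted
    future cash flows on a polynomial in the stock price over in-the-money
    paths, and exercise where the immediate payoff beats the regression."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # simulate geometric Brownian motion paths at times dt, 2*dt, ..., T
    z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(increments, axis=1))
    payoff = np.maximum(K - S, 0.0)
    cash = payoff[:, -1]            # cash flow if held until maturity
    disc = np.exp(-r * dt)
    for t in range(n_steps - 2, -1, -1):
        cash *= disc                # discount future cash flow one step back
        itm = payoff[:, t] > 0      # regression on in-the-money paths only
        if itm.sum() > degree:
            coeffs = np.polyfit(S[itm, t], cash[itm], degree)
            continuation = np.polyval(coeffs, S[itm, t])
            exercise = payoff[itm, t] > continuation
            idx = np.where(itm)[0][exercise]
            cash[idx] = payoff[idx, t]   # exercise now: replace cash flow
    return disc * cash.mean()       # discount the first exercise date to 0
```

For the classic test case S0 = 36, K = 40, r = 0.06, sigma = 0.2, T = 1, this sketch reproduces a price near the well-known benchmark value of about 4.47.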
In the fourth chapter, we study two machine learning techniques to improve the regression part of the Monte Carlo methods: the Gaussian kernel method and the kernel-based support vector machine. In order to choose a proper smoothing parameter, we compare a fixed bandwidth, the global optimum and a suboptimum from a finite set. We also point out that scaling the training data to [0,1] can avoid numerical difficulties. When out-of-sample paths of stock prices are simulated, the kernel method is robust and in several cases even performs better than the Tsitsiklis-Roy and Longstaff-Schwartz methods. The support vector machine can further improve the kernel method and needs fewer representations of old stock prices when predicting the option continuation value for a new stock price.
In the fifth chapter, we switch to the hardware (FPGA) implementation of the Longstaff-Schwartz method and propose novel reversion formulas for the stock price and volatility within the Black-Scholes and Heston models. Tests of these formulas within the Black-Scholes model show that both the data storage and the corresponding energy consumption are reduced.
More than four out of five chemical products already pass through a catalytic cycle during their manufacture. In addition to synthetic chemistry, catalytic applications are increasingly found in the life sciences, in climate and environmental protection, and in energy supply. Through targeted ligand design, known catalyst systems are continuously being optimized and their range of applications extended. For bidentate, pyrimidine-containing ligand systems, earlier work in the Thiel group established an intramolecular C-H activation in the pyrimidine ring that leads to coordination of a carbanion at the transition metal center. In the present work, this reactivity was combined with the stabilizing effect of an N-heterocyclic carbene (NHC) ligand to give a new ligand system. Various imidazolium precursors of new NHC ligands, bearing as N-substituent a pyrimidine ring amino-substituted in the 2-position, were prepared via two newly developed synthetic routes and treated with various transition metal precursors. In palladium(II) complexes of pyrimidinyl- and mesityl-substituted NHC ligands, different coordination modes were observed depending on the synthetic method used. Via silver carbene complexes as carbene transfer reagents, the non-C-H-activated, i.e. C,N-coordinated, palladium complexes could be prepared for various tertiary amino- and mesityl-substituted ligands. Direct reaction of the ionic imidazolium compounds with palladium precursors such as PdCl2 in pyridine or pyridine derivatives as solvent led, at the reaction temperatures employed, directly to C-H activation in the pyrimidine ring of the ligand. The weakly basic pyridine ligand stabilizes the highly reactive C-H-activated species during complex formation and thus prevents side and decomposition reactions.
Removal of the labile pyridine ligand by heating in weakly coordinating solvents gave the dinuclear, insoluble, pyridine-free palladium complexes, which were characterized by solid-state NMR spectroscopy. This reaction is fully reversible and was used to introduce various pyridine derivatives as labile ligands. In weakly coordinating solvents with a boiling point below 80 °C, such as THF, direct reaction of the ionic ligand precursors with PdCl2 gave a further type of Pd(II) complex, for which the structural formula of an N-coordinated palladate was postulated. NMR spectroscopic experiments demonstrated the reversibility of the C-H activation in the pyrimidine ring of the Pd(II) complexes as a function of pH and temperature. Here, too, the stabilizing pyridine ligand proved necessary for C-H activation and HCl elimination. The reverse reaction was observed under weakly acidic conditions at room temperature via an NHC-bound, pyridine-containing species structurally analogous to the PEPPSI complexes known from the literature.
For the strongly Lewis acidic transition metal centers iridium(III) and ruthenium(II), only the C-H-activated, C,N-coordinated half-sandwich complexes of the new 2-amino-4-(imidazolylidenyl)pyrimidine ligands were obtained from the corresponding ionic ligand precursors via silver carbene complexes prepared in situ, despite varied reaction conditions. For these metal centers, C-H activation with subsequent HCl elimination occurred irreversibly already at room temperature.
This work also showed that a sterically demanding, stabilizing mesityl group on the NHC ligand is necessary for stable, isolable C-H-activated complexes. With other, less bulky groups at this position of the ligand, only decomposition products were obtained under the reaction conditions for potential C-H activation. For each type of the new C-H-activated transition metal complexes, crystals suitable for X-ray structure analysis were obtained, providing deeper insight into the bonding situation of the new ligands.
The C-H-activated transition metal complexes of the new ligands show very good activities in various catalytic applications. Besides the stabilizing effect of the strongly σ-donating NHC, the high electron density at the transition metal center is further increased by coordination of the carbanion. Under optimized conditions, a broad range of sterically and electronically hindered aryl chlorides was successfully coupled with various boronic acid derivatives to biaryls in the Suzuki-Miyaura coupling at low concentrations of the C-H-activated Pd(II) complexes. With the C-H-activated Ru(II) and Ir(III) half-sandwich complexes of the new ligands, very high yields were obtained in the catalytic transfer hydrogenation of acetophenone already at low catalyst concentrations of 0.15 mol%. The highly active complexes were moreover distinguished by high stability under the optimized reaction conditions. While the C-H activation shows no dependence on the steric demand of the varied tertiary amino substituents, it was not observed for the other groups in the 2-position of the pyrimidine ring.
Nowadays one of the major objectives in geosciences is the determination of the gravitational field of our planet, the Earth. A precise knowledge of this quantity is not just interesting on its own; it is a key point for a vast number of applications. The important question is how to obtain a good model for the gravitational field on a global scale. The only applicable solution - both in costs and data coverage - is the use of satellite data. We concentrate on the highly precise measurements to be obtained by GOCE (Gravity Field and Steady-State Ocean Circulation Explorer, launch expected 2006). This satellite carries a gradiometer which returns the second derivatives of the gravitational potential. Mathematically, we have to deal with several obstacles. The first is that the noise in the different components of these second derivatives differs over several orders of magnitude, i.e. a straightforward solution of this outer boundary value problem will not work properly. Furthermore, we are not interested in the data at satellite height but want to know the field at the Earth's surface; thus we need a regularization (downward continuation) of the data. These two problems are tackled in the thesis and are now described briefly.
Split operators: We have to solve an outer boundary value problem at the height of the satellite track. Classically, one can handle first-order side conditions which are not tangential to the surface, and second derivatives pointing in the radial direction, employing integral and pseudodifferential equation methods. We present a different approach: we classify all first-order and purely second-order operators under whose application a harmonic function stays harmonic. This task is done by using modern algebraic methods for solving systems of partial differential equations symbolically. Now we can look at the problem with oblique side conditions as if we had ordinary, i.e. non-derived, side conditions. The only additional work to be done is an inversion of the differential operator, i.e. integration. In particular, we are capable of dealing with derivatives which are tangential to the boundary.
Auto-regularization: The second obstacle is finding a proper regularization procedure. This is complicated by the fact that we are facing stochastic rather than deterministic noise. The main question is how to find an optimal regularization parameter, which is impossible without any additional knowledge. However, we could show that with a very limited amount of additional information, which is obtainable also in practice, we can regularize in an asymptotically optimal way. In particular, we showed that the knowledge of two input data sets allows an order-optimal regularization procedure even under the hard conditions of Gaussian white noise and an exponentially ill-posed problem. A last but rather simple task is combining data from different derivatives, which can be done by a weighted least squares approach using the information obtained from the regularization procedure. A practical application to the downward-continuation problem for simulated gravitational data is shown.
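As a toy illustration of why the choice of a regularization parameter matters for ill-posed problems such as downward continuation (plain Tikhonov regularization on a diagonal operator with decaying singular values; the thesis's asymptotically optimal parameter-choice procedure is not reproduced here):

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized least squares:
    argmin_x ||A x - y||^2 + alpha ||x||^2 = (A^T A + alpha I)^{-1} A^T y.
    Small singular values of A are damped instead of amplifying the noise."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
```

On a diagonal operator with singular values spanning several orders of magnitude, the naive inverse amplifies noise in the smallest singular directions, while a moderate `alpha` trades a small bias for a much smaller variance.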
Wireless Sensor Networks (WSNs) are dynamically arranged networks typically composed of a large number of arbitrarily distributed sensor nodes with computing capabilities contributing to at least one common application. The main characteristic of these networks is that of being functionally constrained due to a scarce availability of resources and a strong dependence on uncontrollable environmental factors. These conditions introduce severe restrictions on the applicability of classic real-time methods aiming at guaranteeing time-bounded communications. Existing real-time solutions tend to apply concepts that were originally not conceived for sensor networks, idealizing realistic application scenarios and overlooking important design limitations. This results in a number of misleading practices contributing to approaches of restricted validity in real-world scenarios. Amending the confrontation between WSNs and real-time objectives starts with a review of the basic fundamentals of existing approaches. In doing so, this thesis presents an alternative approach based on a generalized timeliness notion suited to the particularities of WSNs. The new conceptual notion allows the definition of feasible real-time objectives, opening a new scope of possibilities not constrained to idealized systems. The core of this thesis is based on the definition and application of Quality of Service (QoS) trade-offs between timeliness and other significant QoS metrics. The analysis of local and global trade-offs provides a step-by-step methodology identifying the correlations between these quality metrics. This association enables the definition of alternative trade-off configurations (set points) influencing the quality performance of the network at selected instants of time. With the basic grounds established, the above concepts are embedded in a simple routing protocol constituting a proof of concept for the validity of the presented analysis.
Extensive evaluations under realistic scenarios are driven on simulation environments as well as real testbeds, validating the consistency of this approach.
Open distributed systems are a class of distributed systems where (i) only partial information about the environment, in which they are running, is present, (ii) new resources may become available at runtime, and (iii) a subsystem may become aware of other subsystems after some interaction. Modeling and implementing such systems correctly is a complex task due to the openness and the dynamicity aspects. One way to ensure that the resulting systems behave correctly is to utilize formal verification.
Formal verification requires an adequate semantic model of the implementation, a specification of the desired behavior, and a reasoning technique. The actor model is a semantic model that captures the challenging aspects of open distributed systems by utilizing actors as universal primitives to represent system entities, allowing them to create new actors and to communicate by sending directed messages in reply to received messages. To enable compositional reasoning, where the reasoning task is reduced to independent verification of the system parts, semantic entities at a higher level of abstraction than actors are needed.
This thesis proposes an automaton model and combines sound reasoning techniques to compositionally verify implementations of open actor systems. Based on I/O automata, the model allows automata to be created dynamically and captures dynamic changes in communication patterns. Each automaton represents either an actor or a group of actors. The specification of the desired behavior is given constructively as an automaton. As the basis for compositionality, we formalize a component notion based on the static structure of the implementation instead of the dynamic entities (the actors) occurring in the system execution. The reasoning proceeds in two stages. The first stage establishes the connection between the automata representing single actors and their implementation description by means of weakest liberal preconditions. The second stage employs this result as the basis for verifying whether a component specification is satisfied. The verification is done by building a simulation relation from the automaton representing the implementation to the component's automaton. Finally, we validate the compositional verification approach through a number of examples by proving correctness of their actor implementations with respect to system specifications.
An Efficient Automated Machine Learning Framework for Genomics and Proteomics Sequence Analysis
(2023)
Genomics and Proteomics sequence analyses are the scientific studies of understanding the language of Deoxyribonucleic Acid (DNA), Ribonucleic Acid (RNA) and protein biomolecules, with the objective of controlling the production of proteins and understanding their core functionalities. They help to detect chronic diseases in early stages, root causes of clinical changes, key genetic targets for pharmaceutical development, and optimizations of therapeutics for various age groups. Most Genomics and Proteomics sequence analysis work is performed using typical wet lab experimental approaches that make use of different genetic diagnostic technologies. However, these approaches are costly, time consuming, and skill and labor intensive. Hence, they slow down the development of an efficient and economical sequence analysis landscape essential to demystify a variety of cellular processes and the functioning of biomolecules in living organisms. To empower manual wet-lab-driven research, many machine learning based approaches have been developed in recent years. However, these approaches cannot be used in practical environments due to their limited performance. Considering the sensitive and inherently demanding nature of Genomics and Proteomics sequence analysis, where misdiagnosis can have far-reaching and serious repercussions, the main objective of this research is to develop an efficient automated computational framework for Genomics and Proteomics sequence analysis using the predictive and prescriptive analytical powers of Artificial Intelligence (AI) to significantly improve healthcare operations.
The proposed framework comprises three main components, namely sequence encoding, feature engineering, and a discrete or continuous value predictor. The sequence encoding module is equipped with a variety of existing and newly developed sequence encoding algorithms that are capable of generating a rich statistical representation of raw DNA, RNA and protein sequences. The feature engineering module offers diverse types of feature selection and dimensionality reduction approaches which can be used to generate the most effective feature space. Furthermore, the discrete and/or continuous value predictor module contains a wide range of existing machine learning and newly developed deep learning regressors and classifiers. To evaluate the integrity and generalizability of the proposed framework, we have performed large-scale experimentation over diverse types of Genomics and Proteomics sequence analysis tasks (i.e., DNA, RNA and proteins).
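A common example of such a statistical representation of a raw sequence (shown here as an illustrative sketch, not necessarily one of the framework's own encoders) is the normalized k-mer frequency vector:

```python
from itertools import product

def kmer_frequencies(seq, k=2, alphabet="ACGT"):
    """Encode a sequence as normalized k-mer counts: a fixed-length
    vector of size |alphabet|**k, usable as input for ML models."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = [0.0] * len(kmers)
    windows = max(len(seq) - k + 1, 1)
    for i in range(len(seq) - k + 1):
        vec[index[seq[i:i + k]]] += 1.0 / windows   # sliding window count
    return vec
```

The same scheme works for protein sequences by swapping in the 20-letter amino acid alphabet, at the cost of a larger feature space.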
In Genomics analysis, epigenetic modification detection is one of the key components. It helps clinical researchers and practitioners to distinguish normal cellular activities from malfunctioning ones, which can lead to diverse genetic disorders such as metabolic disorders, cancers, etc. To support this analysis, the proposed framework is used to solve the problem of DNA and histone modification prediction, where it has achieved state-of-the-art performance on 27 publicly available benchmark datasets of 17 different species with a best accuracy of 97%. RNA sequence analysis is another vital component of Genomics sequence analysis, where the identification of different coding and non-coding RNAs as well as their subcellular localization patterns helps to demystify the functions of diverse RNAs and the root causes of clinical changes, develop precision medicine, and optimize therapeutics. To support this analysis, the proposed framework is utilized for non-coding RNA classification and multi-compartment RNA subcellular localization prediction, where it achieved state-of-the-art performance on 10 publicly available benchmark datasets of Homo sapiens and Mus musculus species with a best accuracy of 98%.
Proteomics sequence analysis is essential to demystify virus pathogenesis, host immunity responses, the ways proteins affect or are affected by cell processes, their structure, and their core functionalities. To support this analysis, the proposed framework is used for host protein-protein and virus-host protein-protein interaction prediction. It has achieved state-of-the-art performance on 2 publicly available protein-protein interaction datasets of Homo sapiens and Mus musculus species with a best accuracy of 96%, and on 7 viral-host protein-protein interaction datasets of multiple hosts and viruses with a best accuracy of 94%. Considering the performance and practical significance of the proposed framework, we believe it will help researchers in developing cutting-edge practical applications for diverse Genomics and Proteomics sequence analysis tasks (i.e., DNA, RNA and proteins).
Multidisciplinary optimizations (MDOs) encompass optimization problems that combine different disciplines into a single optimization with the aim of converging towards a design that simultaneously fulfills multiple criteria. For example, one may consider both fluid and structural disciplines to obtain a shape that is not only aerodynamically efficient but also respects structural constraints. Combined with CAD-based parametrizations, the optimization produces an improved, manufacturable shape. For turbomachinery applications, this method has been successfully applied using gradient-free optimization methods such as genetic algorithms, surrogate modeling, and others. While such algorithms can be easily applied without access to the source code, the number of iterations to converge depends on the number of design parameters. This results in high computational costs and limited design spaces. A competitive alternative is offered by gradient-based optimization algorithms combined with adjoint methods, where the computational complexity of the gradient calculation no longer depends on the number of design parameters, but rather on the number of outputs. Such methods have been extensively used in single-disciplinary aerodynamic optimizations using adjoint fluid solvers and CAD parametrizations. However, CAD-based MDOs leveraging adjoint methods are just beginning to emerge.
This thesis contributes to this field of research by setting up a CAD-based adjoint MDO framework for turbomachinery design using both fluid and structural disciplines. To achieve this, the von Kármán Institute’s existing CAD-based optimization framework cado is augmented by the development of a FEM-based structural solver, which has been differentiated using the algorithmic differentiation tool CoDiPack of TU Kaiserslautern. While most of the code could be differentiated in a black-box fashion, special treatment is required for the iterative linear and eigenvalue solvers to ensure accuracy and reduce memory consumption. As a result, the solver is capable of computing both stress and vibration gradients at a cost independent of the number of design parameters. For the presented application case of a radial turbine optimization, the full gradient calculation has a computational cost of approximately 3.14 times that of a primal run and a peak memory usage of approximately 2.76 times that of a primal run.
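The cost argument above (gradients at a cost tied to the number of outputs rather than the number of design parameters) is exactly what reverse-mode algorithmic differentiation delivers. As an illustrative sketch only, not the CoDiPack implementation, a minimal tape-based reverse mode can be written in a few lines of Python; the names `Var` and `backward` are made up for the example:

```python
import math

class Var:
    """Minimal reverse-mode AD variable: records operations as a DAG."""
    def __init__(self, value, parents=()):
        self.value = value      # primal value
        self.parents = parents  # pairs (parent, local partial derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def sin(self):
        return Var(math.sin(self.value), ((self, math.cos(self.value)),))

def backward(output):
    """One reverse sweep: accumulates d(output)/d(node) for every node."""
    order, seen = [], set()
    def visit(v):                      # post-order: parents before consumers
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                visit(p)
            order.append(v)
    visit(output)
    output.grad = 1.0
    for v in reversed(order):          # consumers processed before producers
        for p, local in v.parents:
            p.grad += local * v.grad
```

For f(x, y) = x*y + sin(x), a single call to `backward` yields both df/dx = y + cos(x) and df/dy = x; adding more inputs does not add more sweeps, which is the property exploited by adjoint solvers.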
The FEM code leverages object-oriented design such that the same base structure can be reused for different purposes with minimal re-differentiation. This is demonstrated by considering a composite material test case where the gradients could be easily calculated with respect to an extended design space that includes material properties. Additionally, the structural solver is reused within a CAD-based mesh deformation, which propagates the structural FEM mesh gradients through to the CAD parameters. This closes the link between the CAD shape and FEM mesh. Finally, the MDO framework is applied by optimizing the aerodynamic efficiency of a radial turbine while respecting structural constraints.
An efficient multiscale approach is established to compute the macroscopic response of nonlinear composites. The micro problem is rewritten in an integral form of the Lippmann-Schwinger type and solved efficiently by Fast Fourier Transforms. Using realistic microstructure models, complex nonlinear effects are reproduced and validated against measured data of fiber-reinforced plastics. The micro problem is integrated into a Finite Element framework which is used to solve the macroscale. The scale-coupling technique and a consistent numerical algorithm are established. The method provides an efficient way to determine the macroscopic response for arbitrary microstructures, constitutive behaviors, and loading conditions.
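The fixed-point solution of the Lippmann-Schwinger equation by FFTs can be illustrated in one space dimension, where the effective stiffness of a laminate is known exactly (the harmonic mean of the phase stiffnesses). The sketch below follows the basic scheme commonly attributed to Moulinec and Suquet; it is an illustration under simplified linear-elastic assumptions, not the solver developed in the thesis:

```python
import numpy as np

def effective_stiffness_1d(C, E=1.0, tol=1e-10, max_iter=1000):
    """Basic FFT fixed-point scheme in 1D (illustrative sketch).

    C : stiffness field on a periodic 1D grid, E : prescribed mean strain.
    Iterates eps = E - Gamma0 * ((C - C0) * eps) and returns <C*eps>/E.
    """
    n = len(C)
    C0 = 0.5 * (C.min() + C.max())       # reference medium stiffness
    eps = np.full(n, E)                  # initial guess: uniform strain
    for _ in range(max_iter):
        tau = (C - C0) * eps             # polarization stress
        tau_hat = np.fft.fft(tau)
        eps_hat = -tau_hat / C0          # Green operator: Gamma0(xi) = 1/C0 for xi != 0
        eps_hat[0] = n * E               # zero frequency carries the mean strain
        eps_new = np.fft.ifft(eps_hat).real
        if np.max(np.abs(eps_new - eps)) < tol:
            eps = eps_new
            break
        eps = eps_new
    return np.mean(C * eps) / E
```

At convergence the stress is uniform, so the result reproduces the harmonic mean, which serves as a built-in check for the scheme.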
Optical Character Recognition (OCR) systems play an important role in the digitization of data acquired as images from a variety of sources. Although the area is well explored for Latin scripts, some of the languages based on the cursive Arabic script are not yet explored, owing to several factors: most importantly, the unavailability of proper datasets and the complexities posed by cursive scripts. The Pashto language is one such language that still needs considerable exploration towards OCR. To develop such an OCR system, this thesis provides a pioneering study that explores deep learning for the Pashto language in the field of OCR.
The Pashto language is spoken by more than 50 million people across the world, and it is an active medium for both oral and written communication. It is associated with a rich literary heritage and a huge collection of written material. These written materials present content of simple to complex nature and layouts ranging from hand-scribed to printed text. The Pashto language presents two main types of complexities: (i) generic complexities of cursive script and (ii) complexities specific to Pashto. The generic complexities are cursiveness, context dependency, breaker-character anomalies, and space anomalies. The Pashto-specific complexities are shape variations of a single character and shape similarity among some of the additional Pashto characters. Existing research on Arabic OCR has not led to an end-to-end solution for these complexities and therefore cannot be generalized to build a sophisticated OCR system for Pashto.
The contribution of this thesis spans three levels: the conceptual level, the data level, and the practical level. At the conceptual level, we have deeply explored the Pashto language and identified the characters responsible for the challenges mentioned above. At the data level, a comprehensive dataset is introduced containing real images of hand-scribed content. The dataset is manually transcribed and covers the most frequent layout patterns associated with the Pashto language. The practical-level contribution provides a bridge, in the form of a complete Pashto OCR system, connecting the outcomes of the conceptual- and data-level contributions. The practical contribution comprises skew detection, text-line segmentation, feature extraction, classification, and post-processing. The OCR module is further strengthened by a deep learning approach that recognizes Pashto cursive script within the framework of Recurrent Neural Networks (RNNs). The proposed Pashto text recognition is based on Long Short-Term Memory (LSTM) networks and achieves a character recognition rate of 90.78% on real hand-scribed Pashto images. All these contributions are integrated into an application that provides a flexible and generic end-to-end Pashto OCR system.
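The thesis’s segmentation pipeline is more elaborate, but the text-line segmentation step can be illustrated with a common baseline: a horizontal projection profile over a binarized page. This is a plain-Python sketch with an illustrative function name, not the system described above:

```python
def segment_text_lines(image, ink_threshold=0):
    """Split a binary page into text lines via a horizontal projection profile.

    image : list of rows, each a list of 0/1 ints (1 = ink pixel).
    Returns (start_row, end_row) pairs, end exclusive, one per text line.
    """
    profile = [sum(row) for row in image]   # ink pixels per row
    lines, start = [], None
    for y, ink in enumerate(profile):
        if ink > ink_threshold and start is None:
            start = y                       # entering a text band
        elif ink <= ink_threshold and start is not None:
            lines.append((start, y))        # leaving a text band
            start = None
    if start is not None:                   # text touching the bottom edge
        lines.append((start, len(profile)))
    return lines
```

Rows whose ink count stays at or below the threshold act as separators; raising the threshold makes the segmentation robust to light noise between lines.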
The impact of this thesis is not specific to the Pashto language alone; it is also beneficial to other cursive languages such as Arabic, Urdu, and Persian. The main reason is the Pashto character set, which is a superset of the Arabic, Persian, and Urdu character sets. Therefore, the conceptual contribution of this thesis provides insight and proposes solutions to almost all generic complexities associated with the Arabic, Persian, and Urdu languages. For example, the anomaly caused by breaker characters, which is shared among 70 languages that mainly use the Arabic script, is analyzed in depth. This thesis presents a solution to this issue that is equally beneficial to almost all Arabic-like languages.
The scope of this thesis has two important aspects. The first is its social impact, i.e., how society may benefit from it. The main advantages are bringing historical and almost vanished documents back to life and creating opportunities to explore, analyze, translate, share, and understand the contents of the Pashto language globally. The second is the advancement and exploration of the technical aspects, since this thesis empirically explores the recognition challenges that are specific to the Pashto language, both regarding its character set and the materials that present such complexities. Furthermore, the conceptual and practical background of this thesis regarding the complexities of the Pashto language is very beneficial for OCR of other cursive languages.
The main theme of this thesis is the interplay between algebraic and tropical intersection
theory, especially in the context of enumerative geometry. We begin by exploiting
well-known results about tropicalizations of subvarieties of algebraic tori to give a
simple proof of Nishinou and Siebert’s correspondence theorem for rational curves
through given points in toric varieties. Afterwards, we extend this correspondence
by additionally allowing intersections with psi-classes. We do this by constructing
a tropicalization map for cycle classes on toroidal embeddings. It maps algebraic
cycle classes to elements of the Chow group of the cone complex of the toroidal
embedding, that is, to weighted polyhedral complexes which are balanced with respect
to an appropriate map to a vector space, modulo a naturally defined equivalence relation.
We then show that tropicalization respects basic intersection-theoretic operations like
intersections with boundary divisors and apply this to the appropriate moduli spaces
to obtain our correspondence theorem.
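For orientation, the balancing condition mentioned above is the standard one from tropical intersection theory; the notation here is generic and not taken from the thesis:

```latex
% A weighted polyhedral complex $X$ of pure dimension $d$ with weights
% $m_\sigma$ on its facets is balanced if, for every cell $\tau$ of
% dimension $d-1$,
\sum_{\sigma \supset \tau} m_\sigma \, u_{\sigma/\tau} \;\in\; V_\tau ,
% where $u_{\sigma/\tau}$ is a primitive lattice generator of $\sigma$
% modulo $\tau$ and $V_\tau$ is the linear span of $\tau$.
```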
Trying to apply similar methods in higher genera inevitably confronts us with moduli
spaces which are not toroidal. This motivates the last part of this thesis, where we
construct tropicalizations of cycles on fine logarithmic schemes. The logarithmic point of
view also motivates our interpretation of tropical intersection theory as the dualization
of the intersection theory of Kato fans. This duality gives a new perspective on the
tropicalization map; namely, as the dualization of a pull-back via the characteristic
morphism of a logarithmic scheme.
A popular model for the locations of fibres or grains in composite materials
is the inhomogeneous Poisson process in dimension 3. Its local intensity function
may be estimated non-parametrically by local smoothing, e.g. by kernel
estimates. They crucially depend on the choice of bandwidths as tuning parameters
controlling the smoothness of the resulting function estimate. In this
thesis, we propose a fast algorithm for learning suitable global and local bandwidths
from the data. It is well known that intensity estimation is closely
related to probability density estimation. As a by-product of our study, we
show that the difference is asymptotically negligible with regard to the choice of
good bandwidths; hence, we focus on density estimation.
There are quite a number of data-driven bandwidth selection methods for
kernel density estimates. Cross-validation is a popular one, frequently proposed
for estimating the optimal bandwidth. However, if the sample size is very
large, it becomes computationally expensive. In materials science, in particular,
it is very common to have several thousand up to several million points.
Another type of bandwidth selection is the solve-the-equation plug-in approach,
in which the unknown quantities in the asymptotically optimal
bandwidth formula are replaced by their estimates.
In this thesis, we develop such an iterative fast plug-in algorithm for estimating
the optimal global and local bandwidths for density and intensity estimation, with a focus on 2- and 3-dimensional data. It is based on a detailed
asymptotic analysis of the estimators of the intensity function and of its second
derivatives, as well as of the integrals of second derivatives which appear in the formulae
for asymptotically optimal bandwidths. These asymptotics are used to determine
the exact number of iteration steps and some tuning parameters. In
both the global and the local case, fewer than 10 iterations suffice. Simulation studies
show that intensity estimates with local bandwidths indicate
the variation of the local intensity better than those with a global bandwidth. Finally, the
algorithm is applied to two real data sets from test bodies of fibre-reinforced
high-performance concrete, clearly showing some inhomogeneity of the fibre
intensity.
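As a point of reference for the plug-in idea, the classical non-iterative alternative is a rule-of-thumb bandwidth such as Silverman's rule, which the iterative plug-in refines by re-estimating the unknown terms of the asymptotically optimal bandwidth formula. The sketch below (plain Python, function names illustrative, not the thesis algorithm) shows a 1D Gaussian kernel density estimate with that rule-of-thumb bandwidth:

```python
import math

def silverman_bandwidth(xs):
    """Rule-of-thumb global bandwidth h = 1.06 * sd * n^(-1/5)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return 1.06 * sd * n ** (-1 / 5)

def kde(xs, x, h):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    n = len(xs)
    norm = n * h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs) / norm
```

The bandwidth h controls the smoothness of the estimate exactly as described above: small h tracks local variation (and noise), large h smooths it away; the plug-in approach chooses h from the data instead of from this fixed rule.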
Analysis of the potential of phase change materials in multilayered building components
(2023)
The last decade has dramatically changed human perception of climate change and energy consumption. It is evident that humans play a dominant role in climate change through the CO2 emissions from fossil energy consumption. There is now broad consensus that we must reduce the use of fossil fuels, both to decrease dependency on supplier countries and to cut CO2 emissions. In this respect, the best alternative is the use of renewable energies.
Energy from solar radiation is neither continuous nor fully controllable, but it is available essentially free of charge. To compensate for this non-continuous, only partially controllable supply, the energy must be stored as efficiently as possible for later use. Building components are already used passively as heat stores and cover part of the heating demand of buildings. To cover a larger share of this demand, components can be actively charged by means of thermal activation. In addition, materials can be used to increase the thermal storage capacity. Phase change materials (PCMs), which store heat without a rise in temperature, are particularly interesting.
PCMs exhibit nonlinear behavior, which poses new challenges for simulation and control. It is therefore essential to model their behavior with a numerical method. In this dissertation, the finite difference method (FDM) is used to model and simulate the thermal behavior of thermally activated building components equipped with PCM, and the Crank-Nicolson scheme is employed to discretize the differential equations. The enthalpy method is applied to model the latent behavior of the PCM, and the hysteresis of the PCM is modeled as well. For the simulation of the activated components, the star-delta transformation is used. Based on these methods and the associated equations, a code is implemented in MATLAB with which the thermal behavior of thermally activated components equipped with PCM can be investigated. Finally, to validate the numerical model, the complex behavior of PCM is examined, and the developed code is additionally verified by thermal simulations of a standardized wall in “ANSYS Workbench”. After validation of the code and its development as a TRNSYS component, a parameter study is carried out. Its purpose was to design an exterior wall that, taking the various parameters into account, covers the space heating demand of the reference building (SFH30 according to IEA SHC Task 32) as fully and evenly as possible over the three consecutive coldest days of each month of the heating period. Finally, an optimization code based on the simulated annealing method is implemented in MATLAB. With this code, the wall construction is determined that can cover the entire space heating demand of the reference building evenly over the three consecutive coldest days of each month of the heating period.
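The dissertation's model is implemented in MATLAB and includes the enthalpy method, hysteresis, and the star-delta network; as a minimal illustration of the Crank-Nicolson discretization it builds on, here is one time step for plain 1D heat conduction with fixed boundary temperatures (Python sketch, all names illustrative):

```python
def crank_nicolson_step(T, r, T_left, T_right):
    """One Crank-Nicolson step for 1D heat conduction (illustrative sketch).

    T : interior temperatures, T_left/T_right : fixed boundary temperatures,
    r : alpha * dt / dx**2 (diffusion number).
    Solves (I - r/2 L) T_new = (I + r/2 L) T with a tridiagonal (Thomas) solve.
    """
    n = len(T)
    # Right-hand side: explicit half step plus boundary contributions.
    b = []
    for i in range(n):
        Tl = T[i - 1] if i > 0 else T_left
        Tr = T[i + 1] if i < n - 1 else T_right
        b.append((1.0 - r) * T[i] + 0.5 * r * (Tl + Tr))
    b[0] += 0.5 * r * T_left        # implicit boundary terms
    b[-1] += 0.5 * r * T_right
    # Thomas algorithm for the constant tridiagonal system.
    off = -0.5 * r
    diag = [1.0 + r] * n
    for i in range(1, n):           # forward elimination
        m = off / diag[i - 1]
        diag[i] -= m * off
        b[i] -= m * b[i - 1]
    T_new = [0.0] * n
    T_new[-1] = b[-1] / diag[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        T_new[i] = (b[i] - off * T_new[i + 1]) / diag[i]
    return T_new
```

With boundary temperatures 0 and 1, repeated steps relax the interior field to the linear steady-state profile; the enthalpy method would enter this scheme by replacing the constant diffusion number with a temperature-dependent one.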
This work dealt with the analysis of genetic changes in the P006 family of piperacillin-resistant mutants of S. pneumoniae R6. Each of the five mutants P106 to P506 of this family was isolated from its respective parental strain on increasing concentrations of the lytic ß-lactam piperacillin and was characterized by a successively higher minimal inhibitory concentration (MIC) for piperacillin (Laible et al., 1987). In mutant P106, CpoA had already been identified as a piperacillin resistance determinant that does not belong to the classical targets of ß-lactam antibiotics, the penicillin-binding proteins (Pbp) (Grebe et al., 1997). Mutants P206 and P306 showed higher resistance to piperacillin due to mutations in Pbp2b and Pbp2x (Hakenbeck et al., 1994; Grebe & Hakenbeck, 1996). This work focused on the identification and characterization of the previously unknown piperacillin resistance determinant in mutant P406 and on the characterization of the hitherto insufficiently studied non-Pbp resistance determinant CpoA in mutant P106. The resistance determinant already identified in mutant P106 is the gene cpoA, which encodes a glycosyltransferase. The construction of a cpoA deletion mutant and its characterization were intended to help elucidate the underlying resistance mechanism in P106 and the function of CpoA. By constructing the cpoA deletion mutant and determining the MIC for piperacillin, it could be shown that loss of the reaction catalyzed by CpoA increases the MIC for piperacillin in S. pneumoniae R6. Detailed phenotypic characterization showed that the cpoA deletion mutant additionally exhibits reduced genetic competence, reduced acid tolerance, a higher requirement for divalent Mg ions, a longer generation time, and slowed autolysis compared with S. pneumoniae R6.
These observations, together with the results of a microarray-based global transcriptome analysis and taking into account the biochemical characterization of CpoA (Edman et al., 2003), make it likely that CpoA is involved in the synthesis of α-galactosyl-glucosyl-diacylglycerol, one of the main glycolipids of the cytoplasmic membrane of S. pneumoniae. The deletion of cpoA could therefore also affect the amount of lipoteichoic acids in the cell wall of S. pneumoniae, since the precursor of α-galactosyl-glucosyl-diacylglycerol, α-monoglucosyl-diacylglycerol, presumably constitutes the lipid anchor of the lipoteichoic acids. Based on this assumption, a model of the function of CpoA was developed that could explain the piperacillin resistance mechanism in mutant P106 and in the cpoA deletion mutant. In mutant P406, further changes of the Pbps had already been excluded (Hakenbeck et al., 1994; Grebe & Hakenbeck, 1996). A microarray-based global transcriptome analysis of all five mutants of the P006 family identified genes whose transcript amounts were significantly altered compared with S. pneumoniae R6 only in P406. Among them were six genes that, owing to their clustered arrangement in the genome of S. pneumoniae, were regarded as a putative functional unit (the TCS11 cluster). These genes showed up to 22-fold increased transcript amounts in mutant P406. In addition, only in mutant P406 were up to 7.9-fold higher transcript amounts observed for genes of the lic1 operon, which is involved in teichoic acid biosynthesis. None of the gene products of the TCS11 cluster had been characterized before. Based on BLAST and domain analyses, putative functions could be assigned to the gene products. The genes smp11A and smp11B encode two putative membrane proteins of 63 and 64 amino acids.
The genes nbp11 and msp11 encode the ATPase and permease components of a putative ABC transporter. kin11 and reg11 encode the histidine kinase and the response regulator of the previously uncharacterized two-component system 11 (TCS11) in S. pneumoniae. The genes are located on the minus strand of the S. pneumoniae genome in the order smp11A, smp11B, nbp11, msp11, kin11, and reg11. Deletion of the kin11 and reg11 genes of TCS11 in P406 lowered the MIC for piperacillin to that of the parental strain P306, demonstrating the involvement of TCS11 in an unknown piperacillin resistance mechanism in S. pneumoniae. Deletion of nbp11 in P406 likewise lowered the MIC for piperacillin, showing, on the one hand, that Nbp11 also contributes to the unknown resistance mechanism and suggesting, on the other hand, transcriptional regulation of the genes smp11A, smp11B, nbp11, and msp11 by TCS11. Constitutive joint overexpression of smp11A, smp11B, nbp11, and msp11 in S. pneumoniae R6 showed that overexpression of these genes is sufficient to increase piperacillin resistance. By 5′-RACE analyses, the two transcription start sites P11.1 and P11.2 within the TCS11 cluster were mapped. P11.1 lies 20 bp upstream of smp11A, and P11.2 lies 441 bp upstream of kin11, within msp11. Northern analysis and PCR on cDNA showed that the genes of the TCS11 cluster are transcribed in two overlapping transcription units. The genes kin11 and reg11, together with a copy of the repetitive element rupA located downstream of reg11, are organized in the 11-2 operon and are transcribed from promoter P11.2, including the unusually long 441 bp leader.
smp11A, smp11B, nbp11, and msp11 are organized in the 11-1 operon and are transcribed from promoter P11.1. Under the growth conditions used, an assignment of kin11 and reg11 to the 11-1 operon could not be demonstrated. It had already been shown that phosphorylated Reg11 (Reg11-P) binds to the promoter region of P11.1 (Marciszewski, diploma thesis 2007). Determination of the activity of P11.1 in S. pneumoniae R6 as well as in kin11, reg11, and kin11reg11 deletion mutants showed that P11.1 is subject to direct positive regulation by TCS11. Sequence comparisons of the P11.1 promoter region with the DNA regions of putative promoters of similarly organized clusters of homologous proteins in the genomes of other Gram-positive bacteria identified three highly conserved sequence segments, which were shown to be essential for the binding of Reg11-P in S. pneumoniae. The consensus sequence ATGACA(2)TGTCAT(8-9)GTGACA presumably represents the DNA binding site of Reg11-P. No further 100% conserved sequences of this kind were found in the genome of S. pneumoniae, and in EMSA assays with less conserved sequences of this kind no binding of Reg11-P was observed. The binding site at P11.1 is therefore presumably the only binding site of Reg11-P in the genome of S. pneumoniae. For P11.2, determination of the promoter activity in deletion mutants of individual genes of the TCS11 cluster showed that this promoter is also subject to regulation, which, however, is not mediated by binding of Reg11-P or of unphosphorylated Reg11. The activity of P11.2 depends, on the one hand, on the presence of Kin11 and, on the other hand, either on the function of the membrane proteins Smp11A and Smp11B or on the unknown substance transported by Nbp11/Msp11.
Determination of the promoter activity of P11.2 in deletion mutants of individual genes of the TCS11 cluster, together with the different phenotypic effects of kin11, reg11, and kin11reg11 deletion mutants, showed that unphosphorylated Reg11 must also be able to regulate the transcription of still unknown target genes by binding to a further, unknown DNA binding site. Deletion of a large part of the 441 nt leader of the 11-2 operon transcript, as well as deletion of two different segments of the 3′ untranslated region of this transcript, showed that the 5′ and 3′ untranslated regions are involved in as yet unknown regulatory mechanisms. Deletion of individual genes of the TCS11 cluster, as well as joint overexpression of Smp11A, Smp11B, Nbp11, and Msp11, caused the same phenotypic effects as the characterized cpoA deletion mutant: growth experiments showed the same influence on generation time, maximum cell density, and autolysis as in the cpoA deletion mutant. Microarray-based transcriptome analysis of two deletion mutants of TCS11 cluster genes further showed that these deletions mostly change the transcript amounts of genes that also respond to a deletion of cpoA. Besides numerous genes encoding proteins of unknown function, these include the genes of the competence regulon, of the blp cluster, of the choline-binding protein PcpA, and of the subtilisin-like proteinase PrtA. The higher transcript amounts of lic1 operon genes involved in teichoic acid biosynthesis observed in mutant P406 were reversed by deletion of kin11reg11.
Constitutive joint overexpression of Smp11A, Smp11B, Nbp11, and Msp11, together with determination of the activity of the promoter P1spr1149 of the lic1 operon, showed that transcription of the lic1 operon genes depends indirectly on the amounts of Smp11A, Smp11B, Nbp11, and Msp11. These results led to the hypothesis that the TCS11 cluster and the glycosyltransferase CpoA are involved in the same, or at least mutually influencing, membrane-associated processes. Thus, the molecular genetic characterization of the TCS11 cluster, which is widespread in similar genetic organization among Gram-positive bacteria, provided a first indication of its putative physiological function in S. pneumoniae.
Diversity-generating retroelements (DGRs) were discovered in Bordetella phages in 2002 and constitute a unique class among the retroelements. Through a special “copy-and-replace” mechanism, they are able to hypermutate a specific target gene. In this mutagenic homing process, the RNA of the template region (TR) is transcribed by the element's own reverse transcriptase (RT). The resulting mutated cDNA is then incorporated into the variable region (VR) of the target gene, thereby diversifying it. Experimental proof that the hypermutation is caused by the RT is still lacking. In addition, the accessory protein (Avd) plays another important role in the mutagenic homing process, although its actual function has not yet been characterized. To date, analyses have mainly concerned the Bordetella phage DGR, raising the question of other systems and general applicability. The analysis of the Nostoc sp. PCC 7120 DGR was therefore the main subject of this work, with a focus on the investigation of the reverse transcriptase (nRT) and the characterization of the accessory protein (nAvd) from the Nostoc sp. PCC 7120 DGR.
The nRT could be overexpressed, although it was only partially soluble. Effective purification of the nRT could not be achieved with the methods tested here, so other purification methods will have to be explored. Moreover, the nRT could not be stored, making regular fresh protein preparations necessary. Activity studies provided first indications of nRT activity: the nucleic acids produced could not only be detected but also identified as DNA by analytical digestion. Furthermore, the synthesized cDNA could be amplified by PCR and the PCR products subsequently sequenced. However, no adenine-specific or other mutations were observed, so no proof of hypermutation by the RT could be obtained. In investigations of a possible interaction between nRT and nAvd, no increase of nRT activity by nAvd was detected.
The investigations of nAvd showed that it binds nucleic acids, with clear preferences among different types: RNA/DNA hybrids showed the highest affinity for nAvd, while dsDNA had a higher affinity for nAvd than ssRNA. In addition, nAvd is able to hybridize nucleic acids, doing so most efficiently for AT-rich DNA molecules of medium length (48 bp). A strand exchange catalyzed by nAvd was not observed. It could further be shown that nAvd itself is heat-stable up to 95 °C and can still hybridize nucleic acids after heat stress; moreover, it is able to stabilize nucleic acids under heat stress. These results suggest a role for nAvd as a guide or in the stabilization of nucleic acids.
Sphärische keramische Nanopartikel können die Eigenschaften von Thermoplasten
signifikant positiv verändern. Eine gute Dispersität von Nanopartikeln in einer polymeren
Matrix ermöglicht z.B. eine außergewöhnliche Steigerung der Zähigkeit. Allerdings
neigen die Nanoadditive wegen ihrer großen spezifischen Oberfläche zur Agglomeration,
was der Verbesserung der Eigenschaften entgegenwirkt. Dies stellt eine der
größten Herausforderungen der Nanokompositforschung dar. Da industriell hergestellte
Nanokomposite von steigendem Interesse für vielerlei Anwendungen sind, ist es
ingenieurwissenschaftlich relevant, Prozess-Struktur-Eigenschaftsbeziehungen von Nanokompositen
mit kommerziell erhältlichen Nanopartikeln genauer zu verstehen. Dies
erlaubt eine gezielte Steuerung bzw. Einstellung der Materialeigenschaften.
In den bisherigen wissenschaftlichen Arbeiten zu thermoplastischen Nanokompositen
mit sphärischen keramischen Nanofüllstoffen ist die Dispersität der Nanokomposite
nicht hinreichend gut quantifiziert worden, was zur Folge hat, dass verschiedene Herstellungsmethoden
nicht miteinander verglichen werden können. Diese Arbeit zielt darauf
ab, thermoplastische Polyamid 6-Nanoverbundwerkstoffe mit guter Dispersität mittels
Extrusion herzustellen und zu untersuchen. Dabei werden drei Herstellungsmethoden
und die dabei erreichten Dispersionsqualitäten und Eigenschaftsprofile betrachtet. Dafür
werden Verbundwerkstoffe aus einer PA6-Matrix und keramischen Nanofüllstoffen (TiO2,
SiO2, BaSO4) - als Pulver oder als Nanopartikeldispersion - generiert. Die erzeugten
Komposite werden mit TEM-, REM- und μ-CT-Analysen morphologisch analysiert. Die
Materialeigenschaften werden durch DSC-, DMTA-, GPC-, Viskositätsuntersuchungen
erfasst. Weiterhin werden Kerbschlagbiege- und Zugversuche durchgeführt.
In einem ersten Schritt wird eine häufig angewendete Herstellungsmethode untersucht,
bei der Nanopartikelpulver zum Extrusionsprozess zugegeben werden. Es ist
nicht möglich alle Agglomerate durch die Bearbeitung im Extruder aufzubrechen. Die
Agglomeratfestigkeit für die verwendeten Partikel wird aus den Verläufen der Dispersität
bei mehrfacher Extrusion erfolgreich bestimmt. Die Untersuchung der Vorgänge
bei der Deagglomeration anhand eines Modells zeigt, dass das Verhältnis zwischen
Agglomeratbruch und Erosion von einzelnen Partikeln von der Oberfläche des Agglomerates
für die Materialeigenschaften von maßgeblicher Bedeutung ist. Trotz sehr guter
Dispersionsqualität der TiO2-Komposite und einer guten Partikel-Matrix-Anbindung lassen
sich nur die Festigkeit und Steifigkeit steigern, während die Kerbschlagzähigkeit nicht erhöht ist. Die TiO2-Nanopartikel weisen eine relativ geringe Agglomeratfestigkeit
(0,1 MPa) auf, und die Erosion spielt neben Bruch eine wichtige Rolle im Deagglomerationsmechanismus,
weshalb für diese Partikel die Zugabe als Pulver zu empfehlen
ist. Restagglomerate führen jedoch zu Spannungskonzentrationen im Material, was
eine Zähigkeitssteigerung verhindert. SiO2-Nanopartikel dagegen können bei den in
dieser Arbeit gewonnenen Erkenntnissen nicht als Füllstoffe empfohlen werden. Ihre
Agglomerate weisen eine so hohe Festigkeit auf, dass diese überwiegend zerbrechen.
BaSO4 sollte als Pulver nicht verwendet werden, denn es kann mittels Extrusion kaum
dispergiert werden.
In der zweiten Bearbeitungsphase werden die Materialeigenschaften bei der Zugabe
der Nanopartikel als wässrige Dispersion untersucht. Dabei wird die Partikeldispersion
drucklos zugegeben; das Dispersionsmedium kann an der Zugabestelle direkt verdampfen.
Zusammengefasst ist festzustellen, dass Agglomeration an der Zugabestelle zu
verschlechterten mechanischen Eigenschaften führt.
Im dritten Schritt werden wässrige Nanopartikeldispersionen unter Druck in den Extruder
gepumpt, um zu erreichen, dass sich eine Mischung aus flüssiger Dispersion und Polymerschmelze
bildet. Dabei tritt zum einen Diffusion der Partikel in die Polymerschmelze
auf, zum anderen kommt es zu Tropfenverkleinerung durch die Scherspannung im
Extruder. Bei der theoretischen Untersuchung der Zerkleinerung der Dispersionstropfen
wird festgestellt, dass das Verhältnis der Viskositäten der zu mischenden Medien, deren
Oberflächenspannungen und die Scherspannung im Extruder den Vorgang bestimmen.
Die so ermittelte Größe der kleinsten Agglomerate liegt nicht im Nanometerbereich.
Owing to the short mixing time after evaporation of the dispersion medium, the agglomerates are poorly bonded to the matrix; in addition, very compact agglomerates form. As a result, the Young's modulus of the nanocomposite barely increases, while the toughness is reduced at the same time. Added as a dispersion, the SiO2 particles hardly diffuse, and relatively large agglomerates form. Since exceptionally small agglomerates (<100 nm), or even primary particles, are found in particular for TiO2 and BaSO4, it can be assumed that diffusion is also significant for these two nanoadditives. Nanocomposites with these fillers should therefore be produced by the pressurized addition of aqueous dispersions.
With its systematic investigation of industrially relevant processes for producing nanocomposites, of the mechanisms at work in them, and of the achievable material morphologies and properties, this work lays the foundation for tailor-made nanocomposites.
Stormwater overflow tanks (Regenüberlaufbecken, RÜB) are important structures in combined sewer networks. Through their retention effect they help to reduce the pollutant discharge into receiving waters and to relieve the downstream wastewater treatment plant during rainfall events. Although stormwater overflow tanks designed according to uniform guidelines are in operation in large numbers in Germany, very little is yet known about the performance of these rather expensive structures. The same applies to alternative facilities such as hydrodynamic separators and combined structures that have been built in recent years. This is where the present work comes in: its aim was to investigate, and to reproduce in a model, the performance of a combined structure in Bexbach/Rothmühle, consisting of an off-line flow-through tank and two hydrodynamic separators fed in parallel that serve as diversion structures.
The work begins with an overview of stormwater treatment facilities in combined sewer systems and of the factors that govern their treatment performance. The fundamentals of sedimentation and approaches to balancing removal efficiency and effectiveness are discussed, and the importance of the balancing period is highlighted. Subsequently, the boundary conditions for the investigations of the RÜB Bexbach/Rothmühle as well as the design and operation of the structure are explained. To determine the structure's performance, extensive measurements of flow quantity and quality were carried out at various points of the structure. The measured data then served as the basis for determining the removal efficiencies and effectivenesses of the individual components and of the overall facility for several overflow events. The results on treatment performance were compared with those of similar investigations at other facilities, and a qualitative assessment of the investigated structure was carried out.
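The event-based load balance underlying these efficiency figures can be sketched as follows: the pollutant load over a balancing period is the time integral of flow times concentration, and the removal efficiency compares the loads entering and leaving the structure. The time series below are invented for illustration.

```python
# Minimal sketch of the event-based load balance used to rate a
# stormwater overflow tank: load = sum(Q * c * dt) over the balancing
# period, efficiency = 1 - load_out / load_in. All series are invented.

def load(flows, concentrations, dt):
    """Pollutant load [g] from flow [L/s] and concentration [mg/L] series."""
    return sum(q * c * dt for q, c in zip(flows, concentrations)) / 1000.0

def removal_efficiency(load_in, load_out):
    """Fraction of the inflow load retained by the structure."""
    return 1.0 - load_out / load_in

dt = 60.0  # s, measurement interval
q_in  = [100.0, 200.0, 150.0]   # L/s at the inlet
c_in  = [250.0, 180.0, 120.0]   # mg/L TSS at the inlet
q_out = [100.0, 200.0, 150.0]   # L/s at the overflow
c_out = [120.0,  90.0,  60.0]   # mg/L TSS at the overflow

l_in, l_out = load(q_in, c_in, dt), load(q_out, c_out, dt)
print(f"efficiency = {removal_efficiency(l_in, l_out):.2f}")  # 0.51
```

As the text notes, the choice of balancing period matters: the same structure yields different efficiencies depending on whether single events or long-term series are balanced.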
In the next step, the results of the investigations, together with those of a tracer test, were used to develop a MATLAB/SIMULINK model reproducing the treatment processes. By coupling this model with the pollution load model WKosmoCOM, the long-term behavior of the combined structure could be studied.
In a comparison of the long-term overflow activity of the investigated facility with that of a fictitious off-line flow-through tank of conventional design, it was estimated whether storage volume can be saved by using the combination of vortex separator and flow-through tank. The assumed treatment performance of the fictitious tank was based on that determined for the flow-through-tank stage of the investigated structure, plus a margin for improved retention. In several simulation runs based on the MATLAB/SIMULINK model, coupled with the pollution load model WKosmoCOM, the volume of the fictitious tank was varied until its retention performance matched that of the real tank. The resulting volume difference is a measure of the potential savings. The simulations showed that the treatment performance of the investigated combined structure is only matched by a conventional flow-through tank whose storage volume is about 17% larger.
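The volume-variation procedure in this final comparison amounts to a root search: a simulator maps tank volume to retained load, and the volume is adjusted until the retention of the real structure is matched. The sketch below illustrates that loop with a purely hypothetical `retained_load` placeholder standing in for the coupled MATLAB/SIMULINK + WKosmoCOM simulation; neither the model nor the target value comes from the thesis.

```python
# Sketch of the volume-variation loop: vary the fictitious tank's volume
# until its simulated retention matches the target value of the real
# structure. The retention model is a pure placeholder standing in for
# the MATLAB/SIMULINK + WKosmoCOM simulation, not the actual model.

def retained_load(volume):
    """Placeholder: retention grows with volume, with diminishing returns."""
    return 1.0 - 1.0 / (1.0 + 0.002 * volume)  # volume in m^3

def match_volume(target_retention, lo=0.0, hi=5000.0, tol=1e-6):
    """Bisect the volume until the simulated retention hits the target
    (retained_load is monotonically increasing in the volume)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if retained_load(mid) < target_retention:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Retention achieved by the real combined structure (invented value):
v = match_volume(0.80)
print(f"equivalent conventional tank volume = {v:.0f} m^3")  # 2000 m^3
```

Bisection is appropriate here because each "function evaluation" is a full long-term simulation run, so the number of runs should stay small and the monotonic volume-retention relationship guarantees convergence.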