Doctoral Thesis
Refine
Document Type
- Doctoral Thesis (939)
Language
- English (939)
Has Fulltext
- yes (939)
Keywords
- Visualisierung (16)
- Visualization (9)
- finite element method (9)
- Infrarotspektroskopie (8)
- Deep Learning (7)
- Finite-Elemente-Methode (7)
- Optimization (7)
- Algebraische Geometrie (6)
- Numerische Strömungssimulation (6)
- Simulation (6)
- Computergraphik (5)
- Finanzmathematik (5)
- Machine Learning (5)
- Mobilfunk (5)
- Portfolio Selection (5)
- machine learning (5)
- verification (5)
- Artificial Intelligence (4)
- Computeralgebra (4)
- Elastizität (4)
- Evaluation (4)
- Gasphase (4)
- Homogenisierung <Mathematik> (4)
- Kontinuumsmechanik (4)
- Navier-Stokes-Gleichung (4)
- Nichtlineare Finite-Elemente-Methode (4)
- Optimierung (4)
- Portfolio-Optimierung (4)
- Robotik (4)
- Stochastische dynamische Optimierung (4)
- computational mechanics (4)
- portfolio optimization (4)
- Bewertung (3)
- Datenanalyse (3)
- Finite-Volumen-Methode (3)
- Flüssig-Flüssig-Extraktion (3)
- Formal Verification (3)
- Geoinformationssystem (3)
- Inverses Problem (3)
- Ionenfalle (3)
- Layout (3)
- MIMO (3)
- Machine learning (3)
- Mehrskalenmodell (3)
- Mikrostruktur (3)
- Model checking (3)
- Mosco convergence (3)
- Mustererkennung (3)
- NURBS (3)
- Neural Networks (3)
- Numerische Mathematik (3)
- OFDM (3)
- Optische Zeichenerkennung (3)
- Partial Differential Equations (3)
- Portfolio Optimization (3)
- Ray casting (3)
- Recommender Systems (3)
- Risikomanagement (3)
- Scientific Visualization (3)
- Semantic Web (3)
- Transaction Costs (3)
- Tropische Geometrie (3)
- Unsicherheit (3)
- Verifikation (3)
- Virtual Reality (3)
- Wavelet (3)
- cobalt (3)
- computer graphics (3)
- deep learning (3)
- document analysis (3)
- isogeometric analysis (3)
- optical character recognition (3)
- optimales Investment (3)
- visualization (3)
- ADAS (2)
- Activity recognition (2)
- Algorithmus (2)
- Apoptosis (2)
- Asymptotic Expansion (2)
- Asymptotik (2)
- Automation (2)
- Automatische Differentiation (2)
- B-Spline (2)
- B-splines (2)
- Bildverarbeitung (2)
- Blattschneiderameisen (2)
- Bottom-up (2)
- Bruchmechanik (2)
- CFD (2)
- CYP1A1 (2)
- Cluster-Analyse (2)
- Clusterion (2)
- Code Generation (2)
- Computational Fluid Dynamics (2)
- Cyanobakterien (2)
- Datenbank (2)
- Diskrete Fourier-Transformation (2)
- Domänenumklappen (2)
- Effizienter Algorithmus (2)
- Elasticity (2)
- Elastomer (2)
- Elastoplasticity (2)
- Elastoplastizität (2)
- Electronic Design Automation (2)
- Endlicher Automat (2)
- Erdmagnetismus (2)
- Erwarteter Nutzen (2)
- Evolution (2)
- Experimentelle Psychologie (2)
- Extrapolation (2)
- FEM (2)
- FFT (2)
- Faserkunststoffverbunde (2)
- Festkörper (2)
- Filtergesetz (2)
- Filtration (2)
- Finite Pointset Method (2)
- Fließgewässer (2)
- Funktionale Sicherheit (2)
- Geometric Ergodicity (2)
- Gröbner-Basis (2)
- Hochskalieren (2)
- Homogenization (2)
- Hydrodynamics (2)
- Hydrodynamik (2)
- IMRT (2)
- IR-MPD (2)
- IRMPD (2)
- Interaction (2)
- Interaktion (2)
- Isogeometrische Analyse (2)
- Knowledge Management (2)
- Kreditrisiko (2)
- Langevin equation (2)
- Lebensversicherung (2)
- Lennard-Jones (2)
- Level-Set-Methode (2)
- Lineare Elastizitätstheorie (2)
- Lineare partielle Differentialgleichung (2)
- Local smoothing (2)
- Mathematik (2)
- Mehrskalenanalyse (2)
- Merkmalsextraktion (2)
- Microarray (2)
- Model Checking (2)
- Model Predictive Control (2)
- Modellierung (2)
- Modulraum (2)
- Molekulardynamik (2)
- Molekularstrahl (2)
- Monte-Carlo-Simulation (2)
- Morphologie (2)
- Morphology (2)
- Multiset Multicover (2)
- Münzmetall (2)
- Nanocomposites (2)
- Natural Language Processing (2)
- Network Calculus (2)
- Nichtlineare Kontinuumsmechanik (2)
- Optionspreistheorie (2)
- Pattern Recognition (2)
- Pflanzenschutzmittel (2)
- Phasengleichgewicht (2)
- Photodissoziation (2)
- Piezoelektrizität (2)
- Plastizität (2)
- Polymere (2)
- Populationsbilanzen (2)
- Portfoliomanagement (2)
- Poröser Stoff (2)
- Protonentransfer (2)
- Quantization (2)
- Raumakustik (2)
- Regressionsanalyse (2)
- Regularisierung (2)
- Response Priming (2)
- Risikoanalyse (2)
- Robotics (2)
- Robust Optimization (2)
- Room acoustics (2)
- Räumliche Statistik (2)
- SOC (2)
- Scattered-Data-Interpolation (2)
- Schnitttheorie (2)
- Statistisches Modell (2)
- Stochastic Control (2)
- Strömungsmechanik (2)
- System-on-Chip (2)
- Technische Mechanik (2)
- Teilchen (2)
- Time Series (2)
- Topology (2)
- Tribology (2)
- Tropenökologie (2)
- Uncertain Data (2)
- Uncertainty Visualization (2)
- Upscaling (2)
- Vektorwavelets (2)
- Verbundwerkstoffe (2)
- Viskoelastizität (2)
- Visual Analytics (2)
- Wahrscheinlichkeitsfunktion (2)
- Wearable computing (2)
- WiFi (2)
- Wissenschaftliches Rechnen (2)
- ab initio (2)
- air interface (2)
- anisotropy (2)
- artificial intelligence (2)
- auditory brainstem (2)
- autoencoder (2)
- benzene (2)
- beyond 3G (2)
- biodiversity (2)
- bottom-up (2)
- classification (2)
- cluster (2)
- clustering (2)
- computational fluid dynamics (2)
- computational homogenization (2)
- computer vision (2)
- configurational forces (2)
- continuum mechanics (2)
- curve singularity (2)
- deuteration (2)
- dipeptide (2)
- direct laser writing (2)
- domain decomposition (2)
- duality (2)
- elastomer (2)
- enamide (2)
- finite deformations (2)
- finite elements (2)
- finite volume method (2)
- fluid interface (2)
- forest fragmentation (2)
- gas phase (2)
- geomagnetism (2)
- higher education (2)
- homogenization (2)
- ice shelves (2)
- illiquidity (2)
- image analysis (2)
- image processing (2)
- impedance spectroscopy (2)
- interface problem (2)
- iron (2)
- langfaserverstärkte Thermoplaste (2)
- layout analysis (2)
- lichen (2)
- linear kinetics theory (2)
- lineare kinetische Theorie (2)
- material forces (2)
- mesh generation (2)
- metal (2)
- metal cluster (2)
- molecular simulation (2)
- numerics (2)
- numerische Mechanik (2)
- optimal investment (2)
- phase field model (2)
- probabilistic approach (2)
- rate-dependency (2)
- real-time systems (2)
- regression analysis (2)
- rolling friction (2)
- ruthenium (2)
- semantic web (2)
- sensor fusion (2)
- single molecule magnet (2)
- social media (2)
- spatial planning (2)
- splines (2)
- thermodynamics (2)
- thermophysical properties (2)
- tractor (2)
- tribology (2)
- urban shrinkage (2)
- virtual acoustics (2)
- viscoelasticity (2)
- wetting (2)
- "Slender-Body"-Theorie (1)
- "Stress-Mentor" (1)
- (Joint) chance constraints (1)
- 150 bar loop (1)
- 17beta-Estradiol (1)
- 19-century architecture (1)
- 1D-CFD (1)
- 2D-CFD (1)
- 3D Gene Expression (1)
- 3D Point Data (1)
- 3D image analysis (1)
- 3D printing (1)
- 50CrMo4 (1)
- A-infinity-bimodule (1)
- A-infinity-category (1)
- A-infinity-functor (1)
- A/D conversion (1)
- ADMM (1)
- AFDX (1)
- ALE-Methode (1)
- AMC225xe (1)
- ANC (1)
- ASM (1)
- AUTOSAR (1)
- Ab-initio-Rechnung (1)
- Ableitungsfreie Optimierung (1)
- Ableitungsschätzung (1)
- Abrechnungsmanagement (1)
- Abstraction (1)
- Abstraction-Based Controller Design (1)
- Abstraktion (1)
- Accelerometer (1)
- Accounting Agent (1)
- Achslage (1)
- Active Noise Canceling (1)
- Actor Engagement (1)
- Actor Roles (1)
- Acute toxicity (1)
- Ad-hoc networks (1)
- Ad-hoc-Netz (1)
- Adaptive Antennen (1)
- Adaptive Data Structure (1)
- Adaptive Entzerrung (1)
- Adaptive time step (1)
- Additionsreaktion (1)
- Additive Fertigung (1)
- Addukt (1)
- Adhäsion (1)
- Adjoint method (1)
- Adsorption (1)
- Adsorptionskinetik (1)
- Adult identity (1)
- Adult learning (1)
- Advanced Encryption Standard (1)
- Aerosol (1)
- Aerosol Particles (1)
- Aerosol Partikeln (1)
- Affine Arithmetic (1)
- Agriculture Loan (1)
- Ah-Rezeptor (1)
- AhR/ER Crosstalk (1)
- AhRR (1)
- Ahr Knockout Model (1)
- Algebraic dependence of commuting elements (1)
- Algebraic geometry (1)
- Algebraic groups (1)
- Algebraische Abhängigkeit der kommutierende Elementen (1)
- Algebraischer Funktionenkörper (1)
- Algorithm (1)
- Algorithmic Differentiation (1)
- Amazonia (1)
- Ambulatory Assessment (1)
- Amharic, Attention, Factored Convolutional Neural Network, OCR (1)
- Amination (1)
- Analysis (1)
- Analytical method (1)
- Ananasgewächse (1)
- Aneuploidy, Whole Genome Doubling (1)
- Angewandte Mathematik (1)
- Anion recognition (1)
- Anisotropie (1)
- Annulus (1)
- Anomaly Detection (1)
- Ansäuerung (1)
- Ant Colony Optimization (1)
- Anthropogener Einfluss (1)
- Anti-diffusion (1)
- Antidiffusion (1)
- Application Framework (1)
- Approximationsalgorithmus (1)
- Arbeitsmaschine (1)
- Arc distance (1)
- Archaikum (1)
- Archimedische Kopula (1)
- Architektur des 19. Jahrhunderts (1)
- Arithmetic data-path (1)
- Aryl hydrocarbon Receptor (1)
- Ascorbat (1)
- Ascorbinsäure (1)
- Ascorbylradikal (1)
- Asiatische Option (1)
- Asset allocation (1)
- Asset-liability management (1)
- Association (1)
- Asymptotic Analysis (1)
- Asymptotic Analysis (1)
- Asymptotische Entwicklung (1)
- Atmungskette (1)
- Atom optics (1)
- Atomoptik (1)
- Ausfallrisiko (1)
- Austin (1)
- Automat <Automatentheorie> (1)
- Automatic Differentiation (1)
- Automatic Image Captioning (1)
- Automatic risk assessment (1)
- Automatische Gefahrenanalyse (1)
- Automatische Risikobewertung (1)
- Automatisches Beweisverfahren (1)
- Automorphismengruppe (1)
- Autonomer Roboter (1)
- Autoregressive Hilbertian model (1)
- Avirulence (1)
- Backlog (1)
- Baeocyte (1)
- Baiturrahman Grand Mosque (1)
- Balance sheet (1)
- Banda Aceh old city center (1)
- Barriers (1)
- Basic Scheme (1)
- Basis Risk (1)
- Basisband (1)
- Basket Option (1)
- Bass-ackwards analysis (1)
- Bayes-Entscheidungstheorie (1)
- Beam models (1)
- Beam orientation (1)
- Bearing (1)
- Bearing capacitance (1)
- Bearing current (1)
- Befahrbarkeitsanalyse (1)
- Beleuchtung (1)
- Benetzung (1)
- Benutzer (1)
- Benutzerfreundlichkeit (1)
- Benzol (1)
- Bernstein–Gelfand–Gelfand construction (1)
- Berufliche Entwicklung (1)
- Beschichtungsprozess (1)
- Beschränkte Arithmetik (1)
- Beschränkte Krümmung (1)
- Betrachtung des Schlimmstmöglichen Falles (1)
- Bevolkingsdaling (1)
- Bibliographic References (1)
- Bifurkation (1)
- Bilanzstrukturmanagement (1)
- Bildsegmentierung (1)
- Bildung (1)
- Bildungsreform (1)
- Bildungsungleichheit (1)
- Binomialbaum (1)
- Bio-inspired (1)
- Biodiversität (1)
- Biogeographie (1)
- Biogeography (1)
- Bioinformatik (1)
- Biomarker (1)
- Biomechanik (1)
- Biomimetic (1)
- Bionas (1)
- Bionik (1)
- Biophysics (1)
- Bioplastic-based blend nanocomposites (1)
- Biorthogonalisation (1)
- Biot Poroelastizitätgleichung (1)
- Biot-Savart Operator (1)
- Biot-Savart operator (1)
- Biotrophy (1)
- Bioturbation (1)
- Bipedal Locomotion (1)
- Bitvektor (1)
- Bluetooth (1)
- Boltzmann Equation (1)
- Boosting (1)
- Bootstrap (1)
- Boundary Value Problem / Oblique Derivative (1)
- Bounded Model Checking (1)
- Brandenburg-Lubuskie (1)
- Bremerhaven (1)
- Brinkman (1)
- Brownian Diffusion (1)
- Brownian motion (1)
- Brownsche Bewegung (1)
- Buchstabe (1)
- Buffer (1)
- Buffer Zone Method (1)
- Bundle Methods (1)
- Business Model Innovation (1)
- Business Sustainability (1)
- Büyükçekmece and Mogan Lake (1)
- CAD (1)
- CBR (1)
- CCT (1)
- CDS (1)
- CDSwaption (1)
- CFD Simulation (1)
- CFK, Epoxidharzmatrix (1)
- CFRP (1)
- CHAMP (1)
- CID (1)
- CMOS (1)
- CMOS-Schaltung (1)
- CPDO (1)
- CYP1B1 (1)
- Caching (1)
- Carbon Capture (1)
- Carbon footprint (1)
- Careless Responding (1)
- Castelnuovo Funktion (1)
- Castelnuovo function (1)
- Cauchy-Born Regel (1)
- Cauchy-Born Rule (1)
- Cauchy-Born rule (1)
- Cauchy-Navier-Equation (1)
- Cauchy-Navier-Gleichung (1)
- Cell crosstalk (1)
- Cellular Communications (1)
- Cellulose (1)
- Celluloseacetat (1)
- Censoring (1)
- Center Location (1)
- Change Point Analysis (1)
- Change Point Test (1)
- Change-point Analysis (1)
- Change-point estimator (1)
- Change-point test (1)
- Channel Hopping (1)
- Channel Scheduling (1)
- Channel estimation (1)
- Channel hopping (1)
- Channel sensing (1)
- Chi-Quadrat-Test (1)
- Chlamydomonas reinhardii (1)
- Chloride regulation (1)
- Cholesky-Verfahren (1)
- Chow Quotient (1)
- Chromatin (1)
- Chromatographiesäule (1)
- Chroococcales (1)
- Chroococcidiopsis (1)
- Chroococcidiopsis cubana (1)
- Chroococcidiopsis thermalis (1)
- Chroococcidiopsisdaceae (1)
- Circle Location (1)
- City center (1)
- Classification (1)
- Classification of biomedical signals (1)
- Click chemistry (1)
- Clock and Data Recovery Circuits (1)
- Closure (1)
- Cluster (1)
- Clusterverbindungen (1)
- Coarse graining (1)
- Codierung (1)
- Cognitive Amplification (1)
- Cognitive Load (1)
- Cohen-Lenstra heuristic (1)
- Collaboration (1)
- Collision Induced Dissociation (1)
- Combinatorial Optimization (1)
- Combinatorial Testing (1)
- Combined IR/UV spectroscopy (1)
- Commodity Index (1)
- Competence (1)
- Complex Structures (1)
- Composite Materials (1)
- Composites (1)
- Computational Homogenization (1)
- Computational Mechanics (1)
- Computer Algebra (1)
- Computer Algebra System (1)
- Computer Graphic (1)
- Computer Supported Cooperative Work (1)
- Computer algebra (1)
- Computer graphics (1)
- Computer-Aided Diagnosis (1)
- Computeralgebra System (1)
- Computerphysik (1)
- Computersimulation (1)
- Computertomographie (1)
- Computervision (1)
- Concrete experience (1)
- Concurrent data structures (1)
- Conditional Value-at-Risk (1)
- Configurational Forces (1)
- Cognitive Radio Networks (1)
- Connectivity (1)
- Conservation laws (1)
- Consistency analysis (1)
- Consistent Price Processes (1)
- Constraint Generation (1)
- Constraint-Coupled Systems (1)
- Construction of hypersurfaces (1)
- Constructivism (1)
- Context Awareness (1)
- Context-sensitive Assistance (1)
- Continuous-Time Neural Networks (1)
- Continuum Damage (1)
- Continuum-Atomistic Multiscale Algorithm (1)
- Continuum-Atomistics (1)
- Control Engineering (1)
- Convergence Rate (1)
- Convex Optimization (1)
- Cook Wilson (1)
- Coordination (1)
- Copper (1)
- Copula (1)
- Correct-by-Design Controller Synthesis (1)
- Corridors (1)
- Coupled PDEs (1)
- Coxeter-Freudenthal-Kuhn triangulation (1)
- Crack resistance (1)
- Crash (1)
- Crash Hedging (1)
- Crash modelling (1)
- Crash-Charakteristiken (1)
- Crashmodellierung (1)
- Credit Default Swap (1)
- Credit Risk (1)
- Cross-Cultural Product Development (1)
- Cross-border regions (1)
- Cross-border transport (1)
- Crystallization fouling (1)
- Curvature (1)
- Curved viscous fibers (1)
- Cyanobacteria (1)
- Cyanobakterium (1)
- Cyber-Physical Systems (1)
- Cycle Accuracy (1)
- Cycle Decomposition (1)
- Cytochromes P450 (1)
- Cytochrom P-450 (1)
- DC/DC Converter (1)
- DCE <Programm> (1)
- DFG (1)
- DFT (1)
- DFT calculation (1)
- DL-PCBs (1)
- DLW (1)
- DNA adducts (1)
- DNA metabarcoding (1)
- DNS-Schädigung (1)
- DOSY (1)
- DPN (1)
- DSM (1)
- DSMC (1)
- Damage (1)
- Dark-state Polariton (1)
- Darstellungstheorie (1)
- Das Urbild eines Ideals unter einem Morphismus der Algebren (1)
- Data Modeling (1)
- Data Spreading (1)
- Data path (1)
- Dataset (1)
- Datenfusion (1)
- Datenrückgewinnungsschaltungen (1)
- Datenspreizung (1)
- Decision Support Systems (1)
- Defaultable Options (1)
- Defektinteraktion (1)
- Deformationstheorie (1)
- Degenerate Diffusion Semigroups (1)
- Dekonsolidierung (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Delaunay-Triangulierung (1)
- Derivat <Wertpapier> (1)
- Derivative Estimation (1)
- Despoblación (1)
- Deuterierung (1)
- Dicarbonsäuren (1)
- Differential forms (1)
- Differenzmenge (1)
- Diffusion (1)
- Diffusion processes (1)
- Diffusionskoeffizient (1)
- Diffusionsmessung (1)
- Diffusionsmodell (1)
- Diffusionsprozess (1)
- Digital technology (1)
- Digitalmodulation (1)
- Dioxin (1)
- Dioxin-like Compounds (1)
- Dipeptide (1)
- Direct Numerical Simulation (1)
- Discrete Event Simulation (DES) (1)
- Discriminatory power (1)
- Diskontinuität (1)
- Diskrete Simulation (1)
- Dislocations (1)
- Dispersionsrelation (1)
- Disproportionierung von Ethylbenzol (1)
- Dissertation (1)
- Distributed Optimization (1)
- Distributed Rendering (1)
- Distributed system (1)
- Disulfidbrücken-Transfer (1)
- Diversifikation (1)
- Domain switching (1)
- Doppelresonanz (1)
- Double Dissociation (1)
- Downlink (1)
- Dreidimensionale Bildverarbeitung (1)
- Dreidimensionale Rekonstruktion (1)
- Dreidimensionale Strömung (1)
- Drohne (1)
- Drone (1)
- Droplet breakage (1)
- Droplet coalescence (1)
- Druckabfall (1)
- Druckkorrektur (1)
- Drug delivery systems (1)
- Distribution (1)
- Dual Decomposition (1)
- Dunkelzustandspolariton (1)
- Duplicate Identification (1)
- Duplikaterkennung (1)
- Dynamically reconfigurable analog circuits (1)
- Dyslexie (1)
- Dünnfilmapproximation (1)
- EDA (1)
- EDF observation models (1)
- EEG (1)
- EM algorithm (1)
- EPDM (1)
- EPR (1)
- EPR Spectroscopy (1)
- EPR Spektroskopie (1)
- EROD (1)
- ESR (1)
- Earthworms (1)
- Eastern Boundary Upwelling Systems (1)
- Ebullition time-series (1)
- Ecology (1)
- Eddy (1)
- Edwards Model (1)
- Effective Conductivity (1)
- Effects of Design Choices (1)
- Efficiency (1)
- Efficient Reliability Estimation (1)
- Effizienz (1)
- Eikonal equation (1)
- Einzelmolekülspektroskopie (1)
- Einzelzell-Analyse (1)
- Elastische Deformation (1)
- Electrical model (1)
- Electroless Plating (1)
- Electronically excited states (1)
- Elektrisch (1)
- Elektrohydraulik (1)
- Elektromagnetische Streuung (1)
- Elektronenspinresonanz (1)
- Elektrophysiologie (1)
- Elektroporation (1)
- Eliminationsverfahren (1)
- Elliptische Verteilung (1)
- Elliptisches Randwertproblem (1)
- Emissionsspektroskopie (1)
- Empfangssignalverarbeitung (1)
- Empfehlungssysteme (1)
- Empfängerorientierung (1)
- Enamide (1)
- Endliche Geometrie (1)
- Endliche Gruppe (1)
- Energieeffizienz (1)
- Energy Efficiency (1)
- Energy markets (1)
- Engineering 4.0 (1)
- Ensemble Visualization (1)
- Entscheidungsbaum (1)
- Entscheidungsproblem (1)
- Entscheidungsunterstützung (1)
- Entwicklungspsychologie (1)
- Entwurf (1)
- Entwurfsautomation (1)
- Enumerative Geometrie (1)
- Environmental Psychology (1)
- Environmental inequality (1)
- Environmental stress cracking resistance (1)
- Epiphyten (1)
- Epoxydharz (1)
- Erdöl Prospektierung (1)
- Erfüllbarke (1)
- Erhaltungsgleichungen (1)
- Erkenntnistheorie (1)
- Ermüdungsrisse (1)
- Erreichbarkeit (1)
- Erwachsenenbildung (1)
- Essential m-dissipativity (1)
- Estradiol (1)
- Estradiolrezeptor (1)
- Ethernet (1)
- Ethylbenzene disproportionation (1)
- Eupoecilia ambiguella (1)
- European Pollutant Release and Transfer Register (E-PRTR) (1)
- European Territorial Cooperation (1)
- European Union (1)
- European Union policy-making (1)
- European integration (1)
- Europeanisation (1)
- Europäische Territoriale Zusammenarbeit (1)
- Event psychology (1)
- Eventpsychologie (1)
- Eventual consistency (1)
- Evolutionary Algorithm (1)
- Expected shortfall (1)
- Experiential learning (1)
- Experiment (1)
- Experimentation (1)
- Explainability (1)
- Explainable Artificial Intelligence (1)
- Exponential Utility (1)
- Exponentieller Nutzen (1)
- Exposed Datapath Architectures (1)
- Extended Finite-Elemente-Methode (1)
- Extended Kalman Filter (1)
- Extended Mind (1)
- Extreme Events (1)
- Extreme value theory (1)
- Eye-Tracking (1)
- Eyewear Computing (1)
- FPM (1)
- Faden (1)
- Fahrerassistenzsystem (1)
- Fahrtkostenmodelle (1)
- Fatty acids (1)
- Fault Injection (1)
- Fault Tree Analysis (1)
- Feasibility study (1)
- Feature (1)
- Feature Detection (1)
- Feature Extraction (1)
- Feature extraction (1)
- Federated Learning (1)
- Feedforward Neural Networks (1)
- Fehlerbaumanalyse (1)
- Femtosecond Laser (1)
- Femtosekundenspektroskopie (1)
- Fernerkundung (1)
- Festkörpergrenzschichten (1)
- Fettsäuren (1)
- Feynman Integrals (1)
- Fiber suspension flow (1)
- Fifth generation (5G) mobile networks (1)
- Financial Engineering (1)
- Finanzkrise (1)
- Finanznumerik (1)
- Finite Element Method (1)
- Finite Elemente Methode (1)
- Finite Elements (1)
- Finite Elements (1)
- Finite element method (1)
- Finite-Elemente-Simulation (1)
- Finite-Punktmengen-Methode (1)
- Firmware (1)
- Firmwertmodell (1)
- First Order Optimality System (1)
- Flachwasser (1)
- Flachwassergleichungen (1)
- Flechten (1)
- Fließanalyse (1)
- Flow Visualization (1)
- Flugzeitmassenspektrometrie (1)
- Fluid dynamics (1)
- Fluid-Feststoff-Strömung (1)
- Fluid-Struktur-Kopplung (1)
- Fluid-Struktur-Wechselwirkung (1)
- Foam decay (1)
- Fokker-Planck-Gleichung (1)
- Forgetting-enabled Information Systems (1)
- Formale Beschreibungstechnik (1)
- Formale Grammatik (1)
- Formale Methode (1)
- Formale Ontologie (1)
- Formale Sprache (1)
- Formaler Beweis (1)
- Fourier-Transformation (1)
- Fracture behavior (1)
- Fragmentierung (1)
- Framework (1)
- Fredholmsche Integralgleichung (1)
- Frequenzsprungverfahren (1)
- Friction (1)
- Functional Safety (1)
- Functional autoregression (1)
- Functional time series (1)
- Funktionenkörper (1)
- Fusion (1)
- Future Internet (1)
- Fußgängerzone Kaiserslautern (1)
- Füllkörpersäule (1)
- GARCH (1)
- GARCH Modelle (1)
- GPU (1)
- Galerkin Verfahren (1)
- Galerkin methods (1)
- Galerkin-Methode (1)
- Gamification (1)
- Gamma-Konvergenz (1)
- Garantiezins (1)
- Garbentheorie (1)
- Gateway (1)
- Gauß-Filter (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gebogener viskoser Faden (1)
- Gedächtnis (1)
- Gefahren- und Risikoanalyse (1)
- Gemeinsame Kanalschätzung (1)
- Gen-Expression (1)
- Gene Expression (1)
- Gene expression programming (1)
- Generalisierte Plastizität (1)
- Generierung (1)
- Genome analysis (1)
- Genregulation (1)
- Geo-referenced data (1)
- Geodesie (1)
- Geographic Information System (GIS) (1)
- Geometrical Nonlinear Thermomechanics (1)
- Geometrische Ergodizität (1)
- Geovisualization (1)
- German census (1)
- Geschwindigkeitsbegrenzung (1)
- Geschwindigkeitsregelung (1)
- Geschwindigkeitswahrnehmung (1)
- Gewichteter Sobolev-Raum (1)
- Giga bit per second (1)
- Gitterbaufehler (1)
- Gittererzeugung (1)
- Glassy polymers (1)
- Gleichgewichtsstrategien (1)
- Gleichspannungswandler (1)
- Gleitverschleiß (1)
- GlyHis (1)
- Gold nanoparticles (1)
- Google Earth (1)
- Gradient based optimization (1)
- Grand challenge (1)
- Granular (1)
- Granular flow (1)
- Granulat (1)
- Graph Theory (1)
- Gravitationsfeld (1)
- Greater Region Saar-Lor-Lux+ (1)
- Green's functions (1)
- Green-Funktion (1)
- Grenzfläche (1)
- Grenzflächenspannung (1)
- Gromov Witten (1)
- Gromov-Witten-Invariante (1)
- Group and Organizational Learning (1)
- Grouping by similarity (1)
- Große Abweichung (1)
- Großregion Saar-Lor-Lux+ (1)
- Gruppenoperation (1)
- Gruppentheorie (1)
- Gröbner bases (1)
- Gröbner basis (1)
- Grüne Chemie (1)
- Gyroscopic (1)
- H/D exchange (1)
- HAZOP Automation (1)
- HAZOP Automatisierung (1)
- HAZOP Digitalization (1)
- HAZOP-Verfahren (1)
- HCL (1)
- HCVL (1)
- HIF-1 (1)
- HPC (1)
- HSF (1)
- HSF1 (1)
- HSP (1)
- HSP70 (1)
- Hadamard manifold (1)
- Hadamard space (1)
- Hadamard-Mannigfaltigkeit (1)
- Hadamard-Raum (1)
- Hamilton-Jacobi-Differentialgleichung (1)
- Hamiltonian Path Integrals (1)
- Hamiltonian systems (1)
- Hand gestures (1)
- Handelsstrategien (1)
- Hardware Security (1)
- Hardware/Software co-verification (1)
- Hardwareverifikation (1)
- Harmonische Analyse (1)
- Harmonische Spline-Funktion (1)
- Harold Arthur (1)
- Haustoria (1)
- Hazard Analysis (1)
- Hazard Functions (1)
- Heat stress response (1)
- Heat transfer (1)
- Heavy-tailed Verteilung (1)
- Hedging (1)
- Heißes Elektron (1)
- Helmholtz Type Boundary Value Problems (1)
- Heterogene Katalyse (1)
- Heterogeneous (1)
- Hidden Markov models for Financial Time Series (1)
- Hierarchische Matrix (1)
- High Voltage (1)
- High-Spin-Komplexe (1)
- High-cycle fatigue (1)
- Higher education (1)
- Hilbert complexes (1)
- HisGly (1)
- Hitting families (1)
- Hochspannung (1)
- Homogeneous deformation (1)
- Homogenisieren (1)
- Homologische Algebra (1)
- Honduras (1)
- Horizontal gene transfer (1)
- Hub Location Problem (1)
- Human Liver Cell Models (1)
- Human Pose (1)
- Human-Computer Interaction (1)
- Human-centric lighting (1)
- Human-centric virtual lighting (1)
- Humanism (1)
- Hybrid CBR (1)
- Hybrid Models (1)
- Hydratation (1)
- Hydrostatischer Druck (1)
- Hyperelastizität (1)
- Hyperelliptische Kurve (1)
- Hyperflächensingularität (1)
- Hypergraph (1)
- Hyperspektraler Sensor (1)
- Hypocoercivity (1)
- Hysterese (1)
- Härten (1)
- ICT (1)
- IEC 61508 (1)
- IEEE 802.15.4 (1)
- IMU (1)
- IP Address (1)
- IP Traffic Accounting (1)
- IP-XACT (1)
- ISO 26262 (1)
- ISO26262 (1)
- ITSM (1)
- Idealklassengruppe (1)
- Ileostomy (1)
- Illiquidität (1)
- Image Processing (1)
- Image restoration (1)
- Imatinib mesilat (1)
- Imidacloprid (1)
- Immiscible lattice BGK (1)
- Immobilienaktie (1)
- Immobilisierung (1)
- Immunoblot (1)
- Implementierung (1)
- Incremental recomputation (1)
- Index Insurance (1)
- Individual (1)
- Induction heating (1)
- Induktive logische Programmierung (1)
- Industrial Robotics (1)
- Industrial air pollution (1)
- Inflation (1)
- Information Extraction (1)
- Information Management (1)
- Information Visualization (1)
- Informationsübertragung (1)
- Infrared Multi Photon Dissociation (1)
- Infrared Multiphoton Dissociation Spectroscopy (IR-MPD) (1)
- Infrarotspek (1)
- Innovation (1)
- Input-To-State Stability (1)
- Insekten (1)
- Insurance (1)
- Integrative Beleuchtung (1)
- Integrative lighting (1)
- Intensity estimation (1)
- Intensität (1)
- Intentional Forgetting (1)
- Interactive decision support systems (1)
- Interfaces (1)
- Interkulturelle Produktentwicklung (1)
- Intermediate Composition (1)
- Interpolation (1)
- Interpolation Algorithm (1)
- Invariante (1)
- Invariante Momente (1)
- Inverse Problem (1)
- Inverse spin injection (1)
- Ion pairs (1)
- Ionensolvatation (1)
- Irreduzibler Charakter (1)
- Isogeometric Analysis (1)
- Isomerisierung von n-Decan (1)
- Isotopieeffekt (1)
- Jacobigruppe (1)
- Jitter (1)
- John L. (1)
- KCC2 (1)
- Kanalcodierung (1)
- Kanalschätzung (1)
- Kardiotoxizität (1)
- Karhunen-Loève expansion (1)
- Katalyse (1)
- Katalytische Hydrierung (1)
- Kategorientheorie (1)
- Kausale Inferenz (1)
- Kausalmodell (1)
- Kellerautomat (1)
- Kelvin Transformation (1)
- Kirchhoff-Love shell (1)
- Klassifikation (1)
- Klima (1)
- Knochenmetastase (1)
- Knowledge Work (1)
- Knowledge transfer (1)
- Knuth-Bendix completion (1)
- Knuth-Bendix-Vervollständigung (1)
- Kognition (1)
- Kognitive Psychologie (1)
- Kohäsive Grenzschichten (1)
- Kolonisierung (1)
- Kombinatorik (1)
- Kombinierte IR/UV-Spektroskopie (1)
- Kommutative Algebra (1)
- Kompetenz (1)
- Kompression (1)
- Konfigurationskräfte (1)
- Konfigurationsmechanik (1)
- Konjugierte Dualität (1)
- Konstruktion von Hyperflächen (1)
- Kontinuum <Mathematik> (1)
- Kontinuums-Atomistische Kopplung (1)
- Kontinuumsphysik (1)
- Konvergenz (1)
- Konvergenzrate (1)
- Konvergenzverhalten (1)
- Konvexe Optimierung (1)
- Kopplungsmethoden (1)
- Kopplungsproblem (1)
- Kopula <Mathematik> (1)
- Korrelationsanalyse (1)
- Kreditderivaten (1)
- Kristallisationsfouling (1)
- Kryptoanalyse (1)
- Kryptologie (1)
- Krümmung (1)
- Kullback-Leibler divergence (1)
- Kurve (1)
- Kurvenschar (1)
- Künstliche Intelligenz (1)
- LIBOR (1)
- LIDAR (1)
- LIR-Tree (1)
- Lagrangian relaxation (1)
- Lambda-cyhalothrin (1)
- Laminare Grenzschicht (1)
- Land Use Planning (1)
- Landwirtschaft (1)
- Laplace transform (1)
- Large Data (1)
- Large Eddy Simulation (1)
- Large High-Resolution Displays (1)
- Large Synchronous Networks (1)
- Laser Wakefield Particle Accelerator (1)
- Laser spectroscopy (1)
- Lateral superior olive (1)
- Lattice Boltzmann (1)
- Lattice Boltzmann Method (1)
- Lattice-BGK (1)
- Lattice-Boltzmann (1)
- Lead (1)
- Leading-Order Optimality (1)
- Learning Analytics (1)
- Least-squares Monte Carlo method (1)
- Leber (1)
- Leichtbau (1)
- Leichtbaupotenzial (1)
- Leistungseffizienz (1)
- Leistungsmessung (1)
- Leitfähigkeit (1)
- Lesen (1)
- Lesenlernen (1)
- Lesestörung (1)
- Leukämie (1)
- Level set methods (1)
- LiDAR (1)
- Lichtforschung (1)
- Lichtspeicherung (1)
- Lie algebras (1)
- Lie-Typ-Gruppe (1)
- Light Storage (1)
- Lighting (1)
- Lighting Design (1)
- Lighting research (1)
- Linear-Quadratic-Regulator (1)
- Link Metric (1)
- Linked Data (1)
- Linking Data Analysis and Visualization (1)
- Lippmann-Schwinger Equation (1)
- Lippmann-Schwinger equation (1)
- Liquid-Liquid Extraction (1)
- Liquid-liquid dispersion (1)
- Liquid-liquid extraction (1)
- Liquidität (1)
- Literature review (1)
- Liver Toxicity (1)
- Lobesia botrana (1)
- Local continuum (1)
- Locally Supported Zonal Kernels (1)
- Location (1)
- Logiksynthese (1)
- Lokalisierung (1)
- London-Dispersion (1)
- Low Jitter (1)
- Lubricant film thickness (1)
- Lubrication (1)
- Luftschnittstellen (1)
- Lungenkrebs (1)
- Lärmbelastung (1)
- Lärmimmission (1)
- MAC protocols (1)
- MBS (1)
- MIMO Systeme (1)
- MIMO-Antennen (1)
- MIP-Emissionsspektroskopie (1)
- MIP-Massenspektrometrie (1)
- MKS (1)
- ML-estimation (1)
- MO-Theorie (1)
- Macaulay’s inverse system (1)
- Mach-Zehnder-Interferometer (1)
- Magnetfeldbasierter Lokalisierung (1)
- Magnetfelder (1)
- Magneto-Elastic Coupling (1)
- Magnetoelastic coupling (1)
- Magnetoelasticity (1)
- Magnetometer (1)
- Magnetostriction (1)
- Manufacturing (1)
- Manufacturing Control (1)
- Manufacturing System (1)
- MapReduce (1)
- Marangoni-Effekt (1)
- Market Equilibrium (1)
- Markov Chain (1)
- Markov Kette (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Marktmanipulation (1)
- Marktrisiko (1)
- Martensit (1)
- Martensite transformation (1)
- Martingaloptimalitätsprinzip (1)
- Maschinelle Übersetzung (1)
- Maschinelles Lernen (1)
- Mass transfer (1)
- Massenspektrometrie (1)
- Material Modelling (1)
- Material Properties under Extreme Conditions (1)
- Material-Force-Method (1)
- Materialermüdung (1)
- Materialmodellierung (1)
- Materialsysteme (1)
- Materielle Kräfte (1)
- Mathematical Finance (1)
- Mathematics (1)
- Mathematische Modellierung (1)
- Mathematisches Modell (1)
- Matrix Completion (1)
- Matrixkompression (1)
- Matrizenfaktorisierung (1)
- Matrizenzerlegung (1)
- Maximal Cohen-Macaulay modules (1)
- Maximale Cohen-Macaulay Moduln (1)
- Maximum Intensity Projection (1)
- Maximum Likelihood Estimation (1)
- Maximum-Likelihood-Schätzung (1)
- Maxwell's equations (1)
- McKay conjecture (1)
- Meaningful Work (1)
- Measurement (1)
- Measurement plattform (1)
- Mechanical (1)
- Mechanics (1)
- Mechanisch (1)
- Medical Image Analysis (1)
- Mehrdimensionale Bildverarbeitung (1)
- Mehrdimensionales Variationsproblem (1)
- Mehrkriterielle Optimierung (1)
- Mehrkörpersimulation (1)
- Mehrkörpersystem (1)
- Mehrphasenströmung (1)
- Mehrskalen (1)
- Mehrträgerübertragungsverfahren (1)
- Memory Architecture (1)
- Memory Consistency (1)
- Memory Theory (1)
- Mensch-Maschine-Kommunikation (1)
- Menschenmenge (1)
- Merkmalsraum (1)
- Mesh-Free (1)
- Messplatform (1)
- Metabolismus (1)
- Metabolomics (1)
- Metacontrast Masking (1)
- Metal-Free (1)
- Metallcluster (1)
- Metallschicht (1)
- Metapopulation (1)
- Metaverse (1)
- Metaversum (1)
- Meter (1)
- Methane emissions (1)
- Methode der finiten Elemente (1)
- Micro Cutting (1)
- Micro Grinding (1)
- Micro Lead (1)
- Microelectromechanical Systems (1)
- Microstructure (1)
- Microstructure morphology (1)
- Microsystem Technology (1)
- Mie- and Helmholtz-Representation (1)
- Mie- und Helmholtz-Darstellung (1)
- Mikrodrall (1)
- Mikroelektronik (1)
- Mikroklima (1)
- Mikromorphe Kontinua (1)
- Mikroskopie (1)
- Mikrosystemtechnik (1)
- Mindfulness (1)
- Minimal Cut Set Visualization (1)
- Minimal training (1)
- Mischcluster (1)
- Mitochondria (1)
- Mitochondrien (1)
- Mitochondrium (1)
- Mixed Connectivity (1)
- Mixed integer programming (1)
- Mixed method (1)
- Mixed-integer Programming (1)
- Mobile Communications (1)
- Mobile Machines (1)
- Mobile Robots (1)
- Mobile Telekommunikation (1)
- Mobile system (1)
- Mobiler Roboter (1)
- Mobilfunksysteme (1)
- Mobility (1)
- Model-Dynamics (1)
- Model-driven Engineering (1)
- Modellbasierte Fehlerdiagnose (1)
- Modellbildung (1)
- Modellprädiktive Regelung (1)
- Modes of learning (1)
- Modifiziertes Epoxidharz (1)
- Modularisierung (1)
- Modulationsübertragungsfunktion (1)
- Molecular Dynamics (1)
- Molecular beam (1)
- Molecular dynamics (1)
- Molekularbiologie (1)
- Molekulare Bioinformatik (1)
- Molekülcluster (1)
- Molekülorbital (1)
- Moment Invariants (1)
- Moment-Generating Functions (1)
- Momentum and Mass Transfer (1)
- Monte Carlo (1)
- Monte-Carlo Modelling (1)
- Mood-based Music Recommendations (1)
- Moreau-Yosida regularization (1)
- Morphismus (1)
- Multi Primary and One Secondary Particle Method (1)
- Multi-Asset Option (1)
- Multi-Edge Graph (1)
- Multi-Field (1)
- Multi-Variate Data (1)
- Multibody (1)
- Multicore Resource Management (1)
- Multicore Scheduling (1)
- Multicriteria optimization (1)
- Multidisciplinary Optimization (1)
- Multifield Data (1)
- Multileaf collimator (1)
- Multiperiod planning (1)
- Multiphase Flows (1)
- Multiple Jobholding (1)
- Multiresolution Analysis (1)
- Multiscale (1)
- Multiscale modelling (1)
- Multiskalen-Entrauschen (1)
- Multispektralaufnahme (1)
- Multispektralfotografie (1)
- Multivariate Analyse (1)
- Multivariate Wahrscheinlichkeitsverteilung (1)
- Multivariates Verfahren (1)
- Mutagenität (1)
- Mutation (1)
- NMR Spectroscopy (1)
- NNK (1)
- Nachhaltigkeit (1)
- Nahrungsnetz (1)
- Namibia (1)
- Nanocomposite (1)
- Nanofaser (1)
- Nanopartikel (1)
- Natural Neighbor (1)
- Natural Neighbor Interpolation (1)
- Natürliche Nachbarn (1)
- Navigation (1)
- Nekrose (1)
- Network (1)
- Network Architecture (1)
- Networks (1)
- Netzwerk (1)
- Netzwerksynthese (1)
- Neural ADC (1)
- Neural Architecture Search (1)
- Neuronales Netz (1)
- New Venture (1)
- Nicht-Desarguessche Ebene (1)
- Nichtglatte Optimierung (1)
- Nichtkommutative Algebra (1)
- Nichtkonvexe Optimierung (1)
- Nichtkonvexes Variationsproblem (1)
- Nichtlineare Approximation (1)
- Nichtlineare Diffusion (1)
- Nichtlineare Dynamik (1)
- Nichtlineare Mechanik (1)
- Nichtlineare Optimierung (1)
- Nichtlineare Zeitreihenanalyse (1)
- Nichtlineare partielle Differentialgleichung (1)
- Nichtpositive Krümmung (1)
- Nichtrauchen (1)
- Niederschlag (1)
- Nilpotent elements (1)
- Nonequilibrium Electron Kinetics (1)
- Nische (1)
- Nitrosamine (1)
- Nitsche's method (1)
- No-Arbitrage (1)
- Node-Link Diagram (1)
- Noise control (1)
- Non--local atomistic (1)
- Non-Newtonian (1)
- Non-commutative Computer Algebra (1)
- Nonlinear Optimization (1)
- Nonlinear time series analysis (1)
- Nonparametric time series (1)
- Nonsmooth Optimization (1)
- Nonspecific Adsorption (1)
- North East Lincolnshire (1)
- North Sea (1)
- Nostocales (1)
- Null Modell (1)
- Nulldimensionale Schemata (1)
- Numerical Flow Simulation (1)
- Numerical methods (1)
- Numerische Homogenisierung (1)
- Numerische Integration (1)
- Numerische Mathematik / Algorithmus (1)
- Numerische Simulation (1)
- Numerisches Modell (1)
- Numerisches Verfahren (1)
- Nutzerorientierte Produktentwicklung (1)
- OCR (1)
- OFDM mobile radio systems (1)
- OFDM-Mobilfunksysteme (1)
- OME (1)
- OWL (1)
- Oberflächenmaße (1)
- Oberflächenphysik (1)
- Oberflächenplasmonresonanz (1)
- Oberflächenspannung (1)
- Objekterkennung (1)
- Off-road Robotics (1)
- Off-road Robotik (1)
- Omics data analysis (1)
- Omics-Technologie (1)
- Online chain partitioning (1)
- Ontologie (1)
- Ontologiebasierte Kausalmodelle (1)
- Ontology (1)
- Ontology-based causation model (1)
- Optimal Control (1)
- Optimale Kontrolle (1)
- Optimierender Compiler (1)
- Optimization Algorithms (1)
- Option (1)
- Option Valuation (1)
- Optionsbewertung (1)
- Optische Abbildung (1)
- Optische Fernerkundung (1)
- Optische Spektroskopie (1)
- Orchideen (1)
- Order (1)
- Organizational structure (1)
- Organizational value (1)
- Osteoblast (1)
- Osteomimicry (1)
- Oxidant Evolution (1)
- PCDD/Fs (1)
- PCDD/Fs PCBs (1)
- PCM (1)
- PCS (1)
- PDD (1)
- PDE-Constrained Optimization, Robust Design, Multi-Objective Optimization (1)
- POD (1)
- PSPICE (1)
- Packed Columns (1)
- Panama (1)
- Papiermaschine (1)
- Parallel Algorithms (1)
- Paralleler Algorithmus (1)
- Paralleler Hybrid (1)
- Parameter identification (1)
- Parameteridentifikation (1)
- Pareto Optimality (1)
- Partially ordered sets (1)
- Participant Burden (1)
- Participatory Sensing (1)
- Particle (1)
- Partielle Differentialgleichung (1)
- Partikel Methoden (1)
- Passivrauchen (1)
- Patchworking Methode (1)
- Patchworking method (1)
- Pathogen (1)
- Pathwise Optimality (1)
- Pedestrian (1)
- Pedestrian Flow (1)
- Peltier (1)
- Penicillin-resistance (1)
- Peptide synthesis (1)
- Perceptual grouping (1)
- Performance (1)
- Periodic Homogenization (1)
- Permutationsäquivalenz (1)
- Personal Comfort (1)
- Personalisation (1)
- Pervasive health (1)
- Pestizid (1)
- Pestizidbelastung (1)
- Pflanzenfressende Insekten (1)
- Phase Transition Effect (1)
- Phase Transition Effekt (1)
- Phase equilibria (1)
- Phase field method (1)
- Phasenfeld (1)
- Phasmatodea (1)
- Philosophy of Technology (1)
- Photoelektron (1)
- Photonische Kristalle (1)
- Photonischer Kristall (1)
- Phylogenie (1)
- Phylogeny (1)
- Physical activity monitoring (1)
- Physical spaces (1)
- Physiologische Psychologie (1)
- Piezoelectric Materials (1)
- Piezoelectricity (1)
- Piezokeramik (1)
- Planar Pressure (1)
- Planares Polynom (1)
- Planning Support Systems (1)
- Plasmon (1)
- Plastizitätstheorie (1)
- Plate heat exchanger (1)
- Plattenextrusion (1)
- Plattenwärmeübertrager (1)
- Pleurocapsales (1)
- Poisson noise (1)
- Poisson-Gleichung (1)
- Polariton (1)
- Policy implementation (1)
- Poly(butylene adipate-co-terephthalate) (PBAT) (1)
- Poly(lactic acid) (PLA) (1)
- PolyBoRi (1)
- Polymer (1)
- Polymer nanocomposites (1)
- Polymers (1)
- Polypropylen (1)
- Population Balance Equation (1)
- Population balance (1)
- Population balances (1)
- Populationsbilanzmodelle (1)
- Populationswachstum (1)
- Portfolio Optimierung (1)
- Portfoliooptimierung (1)
- Power Efficiency (1)
- Pragmatism (1)
- Preimage of an ideal under a morphism of algebras (1)
- Pressure Drop (1)
- Prichard (1)
- Primary human Hepatocytes (1)
- Privacy (1)
- Probabilistic (1)
- Probust optimization (1)
- Process Data (1)
- Process-Structure-Property relationships (1)
- Processor Architecture (1)
- Processor Architectures (1)
- Produktentwicklung (1)
- Professional development (1)
- Programmverifikation (1)
- Projektionsoperator (1)
- Projektive Fläche (1)
- Property checking (1)
- Property-Driven Design (1)
- Prostatakrebs (1)
- Protein-Tyrosin-Kinasen (1)
- Protein/detergent complexes (1)
- Proteine (1)
- Proteintransport (1)
- Protistan Plankton (1)
- Protocol Compliance (1)
- Protocol Composition (1)
- Protonentransfer (1)
- Prototyp (1)
- Prototype (1)
- Prox-Regularisierung (1)
- Prozessvisualisierung (1)
- Psychologie (1)
- Psychology of Perception (1)
- Psychosocial theory (1)
- Pump Intake Flows (1)
- Punktdefekte (1)
- Punktprozess (1)
- QMC (1)
- QVIs (1)
- QoS (1)
- Quadratic Approximation (1)
- Quantenchemie (1)
- Quantencomputer (1)
- Quanteninformatik (1)
- Quantenwell (1)
- Quantile autoregression (1)
- Quantitative Bildanalyse (1)
- Quartz (1)
- Quasi-Newton Methods (1)
- Quasi-Variational Inequalities (1)
- Quenched and tempered steel (1)
- Quicksort (1)
- REMPI (1)
- RH795 (1)
- RKHS (1)
- RNS-Interferenz (1)
- ROS (1)
- RTL (1)
- Radial Basis Functions (1)
- Radiative Cooling (1)
- Radio Resource Management (1)
- Radiotherapy (1)
- Raman-Spektroskopie (1)
- Random testing (1)
- Randwertproblem (1)
- Randwertproblem / Schiefe Ableitung (1)
- Rank test (1)
- Rapid-Chase Theory (1)
- Rarefied gas (1)
- Rate Gyro (1)
- Ratenabhängigkeit (1)
- Rauchen (1)
- Raucherentwöhnung (1)
- Raumplanung (1)
- Ray tracing (1)
- Reachability (1)
- Reactive Absorption (1)
- Reactive extraction (1)
- Reaktive Sauerstoff Spezies (1)
- Reaktive Sauerstoffspezies (1)
- Reaktivextraktion (1)
- Real-Time (1)
- Real-Time Systems (1)
- Receptor design (1)
- Rechtecksgitter (1)
- Recognition (1)
- Rectilinear Grid (1)
- Red Sea (1)
- Redundanzvermeidung (1)
- Reflexionsspektroskopie (1)
- Regenwurm (1)
- Regime Shifts (1)
- Regime-Shift Modell (1)
- Regionalplanung (1)
- Regularisierung / Stoppkriterium (1)
- Regularität (1)
- Regularization / Stop criterion (1)
- Regularization methods (1)
- Regulatorgen (1)
- Regulatory gene search (1)
- Reibung (1)
- Reinforcement Learning (1)
- Rekollektion (1)
- Relative effect potencies (REPs) (1)
- Representation (1)
- Requirements engineering (1)
- Restricted Regions (1)
- Rhabdomyolyse (1)
- Riemannian manifolds (1)
- Riemannsche Mannigfaltigkeiten (1)
- Rigid Body Motion (1)
- Risikobewertung (1)
- Risikomaße (1)
- Risikotheorie (1)
- Risk Assessment (1)
- Risk Management (1)
- Risk Measures (1)
- Risk Sharing (1)
- Risk assessment (1)
- Rissausbreitung (1)
- Robot Control (1)
- Roboter (1)
- Robotic Manipulators (1)
- Robust smoothing (1)
- Rohstoffhandel (1)
- Rohstoffindex (1)
- Rolling bearing (1)
- Rollreibung (1)
- Rollreibung und -verschleiß (1)
- Rombopak (1)
- Routing (1)
- Rust effector (1)
- Ruthenium (1)
- Ruthenium-Vinyliden (1)
- Rydberg molecule (1)
- SAHARA (1)
- SCAD (1)
- SDL (1)
- SDL extensions (1)
- SM-SQMOM (1)
- SOEP (1)
- SPARQL (1)
- SPARQL query learning (1)
- SQMOM (1)
- SWARM (1)
- Safety (1)
- Safety Analysis (1)
- Sagnac-Effekt (1)
- Sandwiching algorithm (1)
- Satellitenfernerkundung (1)
- Sauerstoff (1)
- Sauerstoffverbrauch (1)
- Scalar (1)
- Scale function (1)
- Scanning Electron Microscope (1)
- Schadensmechanik (1)
- Schaltwerk (1)
- Schaum (1)
- Schaumzerfall (1)
- Scheduler (1)
- Scheduling (1)
- Schema <Informatik> (1)
- Schematisation (1)
- Schematisierung (1)
- Schiefe Ableitung (1)
- Schlagfrequenz (1)
- Schnittstelle (1)
- Schrumpfung (1)
- Schwache Formulierung (1)
- Schwache Konvergenz (1)
- Schwache Lösung (1)
- Schwingungsisolierung (1)
- Schädigung (1)
- Scientific Community Analysis (1)
- Scientific Computing (1)
- Second Order Conditions (1)
- Sediment gas storage (1)
- See (1)
- Self-X (1)
- Self-directed learning (1)
- Self-organization (1)
- Self-splitting objects (1)
- Self-supervised Learning (1)
- Semantic Communications (1)
- Semantic Desktop (1)
- Semantic Index (1)
- Semantic Wikis (1)
- Semantische Modellierung (1)
- Semantische Reasoner (1)
- Semantisches Datenmodell (1)
- Semi-infinite optimization (1)
- Sendesignalverarbeitung (1)
- Sensing (1)
- Sensor (1)
- Sensor Fusion (1)
- Sensors (1)
- Sequenzieller Algorithmus (1)
- Serre functor (1)
- Serumalbumine (1)
- Service-oriented Architecture (1)
- Settlement Appropriateness and Thresholds (1)
- Shallow Water Equations (1)
- Shape Memory Alloy Hybrid Composite (1)
- Shape optimization (1)
- Shared Resource Modeling (1)
- Sheet extrusion (1)
- Shrinking smart (1)
- Sicherheitsanalyse (1)
- Sicherheitstechnik (1)
- Silanisierung (1)
- Silanization (1)
- Silberkomplexe (1)
- Siliciumdioxid (1)
- Silicon dioxide nanoparticles (1)
- Similarity Join (1)
- Similarity Joins (1)
- Simulation acceleration (1)
- Single Cell Analysis (1)
- Singly Occupied Molecular Orbital (SOMO) (1)
- Singular <Programm> (1)
- Singularity theory (1)
- Singularität (1)
- Singularitätentheorie (1)
- Skalar (1)
- Skelettmuskel (1)
- Slender body theory (1)
- Smart City (1)
- Smart Device (1)
- Smart Mobile Device (1)
- Smart Textile (1)
- Smartphone (1)
- Smartwatch (1)
- Sobolev spaces (1)
- Sobolev-Raum (1)
- Social movement (1)
- Socio-Semantic Web (1)
- Soft Spaces (1)
- Software Comprehension (1)
- Software Dependencies (1)
- Software Evolution (1)
- Software Maintenance (1)
- Software Measurement (1)
- Software Testing (1)
- Software Visualization (1)
- Software engineering (1)
- Software transactional memory (1)
- Software-Architektur (1)
- Softwareentwicklung (1)
- Softwaremetrie (1)
- Softwareproduktionsumgebung (1)
- Softwarewartung (1)
- Solvency II (1)
- Solvency-II-Richtlinie (1)
- Sound Simulation (1)
- Soziale Ungleichheit (1)
- Spannungs-Dehn (1)
- Spannungsregelung (1)
- Spatial Statistics (1)
- Spatial regression models (1)
- Species sensitivity distribution (1)
- Spectral Method (1)
- Spectral theory (1)
- Speech recognition (1)
- Speed (1)
- Speed management (1)
- Spektralanalyse <Stochastik> (1)
- Spherical Fast Wavelet Transform (1)
- Spherical Location Problem (1)
- Sphärische Approximation (1)
- Spiders (1)
- Spiking Neural ADC (1)
- Spinnen (1)
- Spiritual leadership (1)
- Spirituality (1)
- Spline-Approximation (1)
- Split Operator (1)
- Splitoperator (1)
- Sprachdefinition (1)
- Sprachprofile (1)
- Spritzgusstechnologie (1)
- Sprödbruch (1)
- Stabile Vektorbundle (1)
- Stable vector bundles (1)
- Stadtplanung (1)
- Standard basis (1)
- Standortprobleme (1)
- Static Program Analysis (1)
- Static light scattering (1)
- Statine (1)
- Stationary Light (1)
- Stationäres Licht (1)
- Statistical Independence (1)
- Statistics (1)
- Steady state (1)
- Step-Scan FTIR-Technik (1)
- Steuer (1)
- Stickstoff (1)
- Stickstoffaktivierung (1)
- Stimmungsbasierte Musikempfehlungen (1)
- Stochastic Dependence (1)
- Stochastic Impulse Control (1)
- Stochastic Network Calculus (1)
- Stochastic Processes (1)
- Stochastic optimization (1)
- Stochastische Differentialgleichung (1)
- Stochastische Inhomogenitäten (1)
- Stochastische Prozesse (1)
- Stochastische optimale Kontrolle (1)
- Stochastischer Prozess (1)
- Stochastisches Modell (1)
- Stoffaustausch (1)
- Stokes Equations (1)
- Stokes-Gleichung (1)
- Stop- und Spieloperator (1)
- Stornierung (1)
- Stoßdämpfer (1)
- Strahlentherapie (1)
- Strahlungstransport (1)
- Streaming (1)
- Streptococcus (1)
- Streptococcus pneumoniae (1)
- Streptomyces (1)
- Stress management (1)
- Structural Reliability (1)
- Structure-property relationships (1)
- Strukturiertes Finanzprodukt (1)
- Strukturiertes Gitter (1)
- Strukturoptimierung (1)
- Strömung (1)
- Strömungsdynamik (1)
- Student types (1)
- Subgradient (1)
- Sublimation (1)
- Subset Simulationen (1)
- Superior olivary complex (1)
- Supramolecular chemistry (1)
- Surface Reconstruction (1)
- Survival Analysis (1)
- Susceptor (1)
- Sustainability (1)
- Swakopmund (1)
- Swift Heavy Ion (1)
- Switched Linear System (1)
- Symbolic execution (1)
- Symmetrie (1)
- Symmetriebrechung (1)
- Symmetry (1)
- Synchronnetze (1)
- Synchronous Control Asynchronous Dataflow (1)
- System Identification (1)
- SystemC (1)
- Systemarchitektur (1)
- Systematics (1)
- Systematik (1)
- Systemdesign (1)
- Systemic Constructivist Approach (1)
- Systemidentifikation (1)
- Systemische Konstruktivistischen Ansatz (1)
- Systems Engineering (1)
- Sägezahneffekt (1)
- TCDD (1)
- TD-CDMA (1)
- TIPARP (1)
- TPC Bauteile (1)
- TTEthernet (1)
- TVET teachers’ education (1)
- Tail Dependence Koeffizient (1)
- Taktrückgewinnungsschaltungen (1)
- Task and Trajectory Planning (1)
- Task-based (1)
- Temporal Decoupling (1)
- Temporal Variational Autoencoders (1)
- Temporal data processing (1)
- Tensor (1)
- Tensorfeld (1)
- Tessellation (1)
- Test for Changepoint (1)
- Tethered Machines (1)
- Tetrachlordibenzodioxine (1)
- Tetraeder (1)
- Tetraedergitter (1)
- Tetrahedral Grid (1)
- Tetrahedral Mesh (1)
- Textual CBR (1)
- Texture Orientation (1)
- Texturrichtung (1)
- Thecla (1)
- Thekla (1)
- Thematic analysis (1)
- Themenbasierte Empfehlungen von Ressourcen (1)
- Thermal Comfort (1)
- Thermal conductive polymer composites (1)
- Thermisch leitfähige Polymerkomposite (1)
- Thermodynamics (1)
- Thermodynamik (1)
- Thermomechanische Behandlung (1)
- Thermophoresis (1)
- Thermoplast (1)
- Thermoset (1)
- Thin film approximation (1)
- Thylakoid (1)
- Tichonov-Regularisierung (1)
- Time series classification (1)
- Time-Series (1)
- Time-Triggered (1)
- Time-delay-Netz (1)
- Time-slotted (1)
- Tire-soil interaction (1)
- ToF (1)
- Top-down (1)
- Topic-based Resource Recommendations (1)
- Topologie (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Topology visualization (1)
- Toxizität (1)
- Tracking (1)
- Traffic flow (1)
- Trajektorienplanung (1)
- Trans-European Transport Networks (1)
- Transaction Level Modeling (TLM) (1)
- Transaction costs (1)
- Transaktionskosten (1)
- Transeuropäische Verkehrsnetze (1)
- Transfektion (1)
- Transferred proteins (1)
- Transformation (1)
- Transient modeling (1)
- Transient state (1)
- Transkription (1)
- Transport (1)
- Transport Protocol (1)
- Traversability Analysis (1)
- Trennschärfe <Statistik> (1)
- Tribologie (1)
- Triplettzustand (1)
- Tropfenkoaleszenz (1)
- Tropfenzerfall (1)
- Tropical Grassmannian (1)
- Tropical Intersection Theory (1)
- Tube Drawing (1)
- Turnover <Ökologie> (1)
- Two-Scale Convergence (1)
- Two-phase flow (1)
- Type building (1)
- UCP2 (1)
- UML (1)
- UML Activity (1)
- UV-VIS-Spektroskopie (1)
- Ubiquitous system (1)
- Ultraviolettspektroskopie (1)
- Umweltgerechtigkeit (1)
- Umweltpsychologie (1)
- Uncertainty Estimation (1)
- Unobtrusive instrumentations (1)
- Unorganized Data (1)
- Unreinheitsfunktion (1)
- Unspezifische Adsorption (1)
- Unstrukturiertes Gitter (1)
- Untermannigfaltigkeit (1)
- Upper bound (1)
- Upwind-Verfahren (1)
- Urban Flooding (1)
- Urban Water Supply (1)
- Urban design (1)
- Urban sprawl (1)
- UrbanSim (1)
- Usability (1)
- Usage modeling (1)
- User Model (1)
- User-Centred Product Development (1)
- User-Experience (1)
- Utility (1)
- VALBM (1)
- VOF Model (1)
- VOF Modell (1)
- VSCPT (1)
- Validierung (1)
- Value at Risk (1)
- Value at risk (1)
- Value-at-Risk (1)
- Values in Action (VIA) (1)
- Variational autoencoders (1)
- Variationsrechnung (1)
- Vector (1)
- Vector Field (1)
- Vectorfield approximation (1)
- Vegetationsentwicklung (1)
- Vektor (1)
- Vektorfeldapproximation (1)
- Vektorfelder (1)
- Vektorkugelfunktionen (1)
- Verdampfung (1)
- Vergütungsstahl (1)
- Verification (1)
- Verkehrspolitik (1)
- Verkehrssicherheit (1)
- Verschwindungssatz (1)
- Versicherung (1)
- Vertrautheit (1)
- Verzerrungstensor (1)
- Verzweigung <Mathematik> (1)
- Virtual Environments (1)
- Virtual spaces (1)
- Virtuelle Realität (1)
- Virulence (1)
- Viscosity Adaptive Lattice Boltzmann Method (1)
- Viskoelastische Flüssigkeiten (1)
- Viskose Transportschemata (1)
- Viskosität (1)
- Visual Queries (1)
- Visualization Theory (1)
- Vitamin C (1)
- Vitamin C-Derivate (1)
- Vocational education and training (1)
- Voltage Control (1)
- Volume rendering (1)
- Volumen-Rendering (1)
- Vorkonditionierer (1)
- Voronoi diagram (1)
- Voronoi-Diagramm (1)
- Vorverarbeitung (1)
- WCET (1)
- Wahrnehmungspsychologie (1)
- Waldfragmentierung (1)
- Waldökosystem (1)
- Walkability (1)
- Wasserstoffbrückenbindungen (1)
- Water (1)
- Water reservoir management (1)
- Water resources (1)
- Wave Based Method (1)
- Wavelet-Theorie (1)
- Wavelet-Theory (1)
- Weak Memory Model (1)
- Weakest-link model (1)
- Wearable Computing (1)
- Wedderburn number (1)
- Weißes Rauschen (1)
- Wetland Conservation (1)
- Wetting (1)
- White Noise (1)
- White Noise Analysis (1)
- Wi-Fi (1)
- Wide-column stores (1)
- Wirbelabtrennung (1)
- Wirbelströmung (1)
- Wirtsspezifität (1)
- Wissen (1)
- Wissensbasiertes System (1)
- Wissenserwerb (1)
- Worst-Case (1)
- Wälzlager (1)
- Wärmeleitfähigkeit (1)
- Wärmeleitung (1)
- Wärmeübertragung (1)
- XDBMS (1)
- XFEM (1)
- XMCD (1)
- XML (1)
- XML query estimation (1)
- XML summary (1)
- Yaglom limits (1)
- Yaroslavskiy-Bentley-Bloch Quicksort (1)
- Zeitintegrale Modelle (1)
- Zeitreihe (1)
- Zentrenprobleme (1)
- Zeolite MCM-71 (1)
- Zeolite SSZ-53 (1)
- Zeolite UTD-1 (1)
- Zeolith (1)
- Zeolith ITQ-21 (1)
- Zeolith MCM-71 (1)
- Zeolith SSZ-53 (1)
- Zeolith UTD-1 (1)
- Zero-dimensional schemes (1)
- Zielverfolgung (1)
- Zigarettenrauchen (1)
- Zigarrenrauchen (1)
- Zufälliges Feld (1)
- Zugesicherte Eigenschaft (1)
- Zustandsgleichung (1)
- Zweiphasenströmung (1)
- Zweiphotonenspektroskopie (1)
- Zwischenmolekulare Kraft (1)
- acetate (1)
- acetylcholine receptor (1)
- acidification (1)
- acoustic modeling (1)
- actively steered implement (1)
- adaptive algorithm (1)
- adhesion (1)
- adhesive joints in concrete (1)
- adulthood (1)
- affective user interface (1)
- affine arithmetic (1)
- aging (1)
- algebraic attack (1)
- algebraic correspondence (1)
- algebraic function fields (1)
- algebraic geometry (1)
- algebraic number fields (1)
- algebraic topology (1)
- algebraische Korrespondenzen (1)
- algebraische Topologie (1)
- algebroid curve (1)
- alkali (1)
- alkin (1)
- alkyne (1)
- alpha shape method (1)
- alternating minimization (1)
- alternating optimization (1)
- amid (1)
- amide (1)
- analoge Mikroelektronik (1)
- analysis (1)
- analysis of algorithms (1)
- angewandte Mathematik (1)
- angewandte Topologie (1)
- anharmonic CH modes (1)
- anharmonic vibrations (1)
- anionic receptors (1)
- anisotropen Viskositätsmodell (1)
- anisotropic viscosity (1)
- anoxia (1)
- anserine (1)
- anthropogenic effects (1)
- apoptosis (1)
- applied mathematics (1)
- apprehension (1)
- aquatic (1)
- arbitrary Lagrangian-Eulerian methods (ALE) (1)
- archimedean copula (1)
- artificial neural network (1)
- aryl hydrocarbon receptor (1)
- ascorbate (1)
- ascorbic acid (1)
- ascorbyl radical (1)
- asian option (1)
- aspartam (1)
- aspartame (1)
- assembly tasks (1)
- associations (1)
- asymmetric carboxylate stretch vibrations (1)
- asymptotic-preserving (1)
- auto-pruning (1)
- automated theorem proving (1)
- automotive (1)
- autonomous networking (1)
- average-case analysis (1)
- axis orientation (1)
- basic carboxylates (1)
- basket option (1)
- beam refocusing (1)
- beating rate (1)
- behaviour-based system (1)
- benders decomposition (1)
- bending strip method (1)
- benzo[a]pyrene (1)
- benzol (1)
- bifurcation (1)
- binary analysis (1)
- binary countdown protocol (1)
- binomial tree (1)
- bioactive metabolites (1)
- bioavailability (1)
- biochemical characterisation (1)
- biology of knowledge (1)
- biomarker (1)
- biomechanics (1)
- biosensors (1)
- bitvector (1)
- black bursts (1)
- blackout period (1)
- bocses (1)
- body-IMU calibration (1)
- boundary value problem (1)
- bounded model checking (1)
- brittle fracture (1)
- bursting disk (1)
- butterfly molecule (1)
- c-Abl (1)
- calving (1)
- canonical ideal (1)
- canonical module (1)
- carboxylate bridge (1)
- carboxylates (1)
- carnosine (1)
- carrier-grade point-to-point radio networks (1)
- catalysis (1)
- cells on chips (1)
- change detection (1)
- changing market coefficients (1)
- changing urbanisation patterns (1)
- character strengths (1)
- characteristic polynomial (1)
- characterization of Structures (1)
- chemical effect prediction (1)
- chemoprevention (1)
- chromium (1)
- climate (1)
- climate change (1)
- closure approximation (1)
- clustering methods (1)
- code coverage analysis (1)
- coffee (1)
- cognition (1)
- cohesive cracks (1)
- cohesive elements (1)
- cohesive interface (1)
- collaborative information visualization (1)
- collaborative mobile sensing (1)
- collective intelligence (1)
- collision induced dissociation (1)
- colonization (1)
- combination band (1)
- combinatorics (1)
- community assembly (1)
- composite (1)
- composite materials (1)
- composites (1)
- computational biology (1)
- computational dynamics (1)
- computational finance (1)
- computational modelling (1)
- computer algebra (1)
- computer-based systems (1)
- computer-supported cooperative work (1)
- computeralgebra (1)
- conceptual process design (1)
- concurrent (1)
- condition number (1)
- configurational mechanics (1)
- conflict (1)
- conserving time integration (1)
- consistent integration (1)
- constrained mechanical systems (1)
- constraint exploration (1)
- content-and-structure summary (1)
- context awareness (1)
- context management (1)
- context-aware topology control (1)
- continuous master theorem (1)
- continuum damage (1)
- continuum damage mechanics (1)
- continuum fracture mechanics (1)
- controller (1)
- convergence behaviour (1)
- convex constraints (1)
- convex optimization (1)
- coordinated backhaul networks in rural areas (1)
- coordinative flexibility (1)
- core strengths (1)
- correlated errors (1)
- coupled problems (1)
- coupling methods (1)
- crack path tracking (1)
- crash (1)
- crash application (1)
- crash hedging (1)
- crashworthiness (1)
- credit risk (1)
- cross section (1)
- crossphase modulation (1)
- crowd condition estimation (1)
- crowd density estimation (1)
- crowd scanning (1)
- crowd sensing (1)
- crowdsourcing (1)
- crystallization (1)
- cumulative IRMPD (1)
- curvature (1)
- curves and surfaces (1)
- cutting edges (1)
- cutting simulation (1)
- cyclic peptides (1)
- cytotoxicity (1)
- damage tolerance (1)
- data annotation (1)
- data race (1)
- data sets (1)
- data-flow (1)
- dataset (1)
- decidability (1)
- decision support (1)
- decision support systems (1)
- decoding (1)
- default time (1)
- defect interaction (1)
- degenerations of an elliptic curve (1)
- dense univariate rational interpolation (1)
- density gradient theory (1)
- dependable systems (1)
- depth sensing (1)
- design (1)
- design automation (1)
- determinant (1)
- deterministic arbitration (1)
- deuterierung (1)
- development (1)
- diatoms (1)
- dielectric elastomers (1)
- diffusion coefficient (1)
- diffusion measurement (1)
- diffusion model (1)
- diffusion models (1)
- digital design (1)
- digital methodologies (1)
- digitale Methodik (1)
- digitaler Entwurf (1)
- dioxin-like compounds (1)
- directed graphs (1)
- dischargeable mass flow rate (1)
- discontinuous finite elements (1)
- discrepancy (1)
- dispersal (1)
- distributed (1)
- distributed real-time systems (1)
- distributed tasks (1)
- disulfide bond transfer (1)
- diurnal cycle (1)
- diversification (1)
- diversity (1)
- domain parametrization (1)
- domain switching (1)
- double exponential distribution (1)
- downward continuation (1)
- driver assistance (1)
- driver status and intention prediction (1)
- drowsiness detection (1)
- dynamic (1)
- dynamic calibration (1)
- dynamic combinatorial chemistry (1)
- dynamic fracture mechanics (1)
- dynamic model (1)
- dysprosium (1)
- echtzeitsystem (1)
- ecology (1)
- economic development (1)
- ecosystem function (1)
- edge computing (1)
- effective refractive index (1)
- efficiency loss (1)
- elastoplasticity (1)
- electrical (1)
- electrical conductivity (1)
- electro-hydraulic systems (1)
- electrolyte solutions (1)
- electronically excited states (1)
- elektronisch angeregte Zustände (1)
- elliptical distribution (1)
- embedded (1)
- embedded mixed-criticality systems (1)
- embedding (1)
- emergent aquatic insects (1)
- emotion visualization (1)
- empirical review (1)
- enamid (1)
- end-to-end learning (1)
- endolithic (1)
- endomorphism ring (1)
- engineering (1)
- enrichment (1)
- ensemble (1)
- entrepreneurial orientation (1)
- entrepreneurship (1)
- enumerative geometry (1)
- environment perception (1)
- environmental noise (1)
- environmental risk assessment (1)
- epiphytes (1)
- epoxy (1)
- equation of state (1)
- equilibrium strategies (1)
- equisingular families (1)
- esterases (1)
- event segmentation (1)
- evolutionary algorithm (1)
- explainability (1)
- face value (1)
- fallible knowledge (1)
- fatigue (1)
- fault-tolerant control (1)
- fehlertolerante Regelung (1)
- fermi resonance (1)
- ferroelectric fatigue (1)
- ferroelektrische Ermüdung (1)
- ferroelektrischer Perowskit (1)
- fiber reinforced silicon carbide (1)
- fibre lay-down dynamics (1)
- fictitious configurations (1)
- filter (1)
- filtration (1)
- financial mathematics (1)
- finite Elasto-Plastizität (1)
- finite elasto-plasticity (1)
- finite groups of Lie type (1)
- finite spin group (1)
- firewall (1)
- first hitting time (1)
- fish (1)
- flexible multibody dynamics (1)
- float glass (1)
- flood risk (1)
- flow cytometry (1)
- flow visualization (1)
- fluid interfaces (1)
- fluid structure (1)
- fluid structure interaction (1)
- fluid-structure interaction (FSI) (1)
- folding rocks (1)
- forest management (1)
- formal (1)
- formal analysis (1)
- formaldehyde (1)
- formale Analyse (1)
- formate (1)
- forward-shooting grid (1)
- foundational translation validation (1)
- fracture mechanics (1)
- fragmentation channel (1)
- free surface (1)
- free-living (1)
- freie Oberfläche (1)
- freshwater lentic systems (1)
- front loader (1)
- functional safety (1)
- fuzzy Q-learning (1)
- fuzzy logic (1)
- gas bearing, aerostatic, porous, theoretical model (1)
- gas phase reaction (1)
- gas transfer at the water-atmosphere interface (1)
- gasphase (1)
- gaussian filter (1)
- gebietszerlegung (1)
- gelonin (1)
- generalized plasticity (1)
- generic character table (1)
- generic self-x sensor systems (1)
- generic sensor interface (1)
- genotoxicity (1)
- geographic information systems (1)
- geology (1)
- geometrically exact beams (1)
- gitter (1)
- glioblastoma (1)
- global tracking (1)
- glycine neurotransmission (1)
- good semigroup (1)
- governance (1)
- grape berry moth (1)
- grapevine moth (1)
- graph drawing algorithm (1)
- graph embedding (1)
- graph layout (1)
- graph p-Laplacian (1)
- gravitation (1)
- greenhouse gases (1)
- group action (1)
- groups of Lie type (1)
- großer Investor (1)
- hand pose, hand shape, depth image, convolutional neural networks (1)
- handover optimization (1)
- haptotaxis (1)
- hardware (1)
- hedging (1)
- heterogeneous access management (1)
- heterogenous catalysis (1)
- heuristic (1)
- hexadiendiale (1)
- hierarchical matrix (1)
- hierarchical structure (1)
- higher order accurate conserving time integrators (1)
- higher-order continuum (1)
- historical documents (1)
- host preference (1)
- host-range (1)
- human body motion tracking (1)
- hybrid lightweight structures (1)
- hybrid material (1)
- hybrid materials (1)
- hybrid structure (1)
- hybride Leichtbaustrukturen (1)
- hydrogen bonds (1)
- hydrogenation (1)
- hyperbolic systems (1)
- hyperelliptic function field (1)
- hyperelliptische Funktionenkörper (1)
- hypergraph (1)
- hyperspectral unmixing (1)
- hypocoercivity (1)
- hypolithic (1)
- hypoxia (1)
- iB2C (1)
- idealclass group (1)
- identity (1)
- image denoising (1)
- imaging (1)
- immobilization (1)
- immunotoxins (1)
- implement (1)
- implementation (1)
- impulse control (1)
- impurity functions (1)
- incompressible elasticity (1)
- inelastic multibody systems (1)
- inelastische Mehrkörpersysteme (1)
- inertial measurement unit (1)
- inertial sensors (1)
- infinite-dimensional analysis (1)
- infinite-dimensional manifold (1)
- inflation-linked product (1)
- information systems (1)
- infrared spectroscopy (1)
- infrarot (1)
- inhibition (1)
- injection molding (1)
- insecticide tolerance (1)
- integer programming (1)
- integral constitutive equations (1)
- intellectual disability (1)
- intensity (1)
- interaction networks (1)
- interface (1)
- interference (1)
- intermediate stops (1)
- internal seiche (1)
- interpolation (1)
- interpretation (1)
- interval arithmetic (1)
- intrusion detection (1)
- invariant (1)
- inverse coordination (1)
- inverse optimization (1)
- inverse problem (1)
- ion-sensitive field-effect transistor (1)
- ionization (1)
- isogeometric analysis (IGA) (1)
- jenseits der dritten Generation (1)
- joint channel estimation (1)
- jump table analysis (1)
- jump-diffusion process (1)
- kalman (1)
- kernel (1)
- kinematic (1)
- kinematic model (1)
- kinetic equations (1)
- kinetic isotope effect (1)
- kinetischer Isotopeneffekt (1)
- konsistente Integration (1)
- kontinuumsatomistischer Ansatz (1)
- lake classification (1)
- lake modeling (1)
- landsat (1)
- language definition (1)
- language modeling (1)
- language profiles (1)
- lanthanide (1)
- large investor (1)
- large neighborhood search (1)
- large scale integer programming (1)
- lattice Boltzmann (1)
- layout (1)
- leaf-cutting ants (1)
- letter (1)
- leukemia (1)
- level K-algebras (1)
- level set method (1)
- life insurance (1)
- life-history (1)
- life-strategy (1)
- light scattering optimization (1)
- limit theorems (1)
- linear code (1)
- linear systems (1)
- linked data (1)
- lipases (1)
- lipid content (1)
- liquid-liquid extraction (1)
- loader (1)
- local-global conjectures (1)
- localizing basis (1)
- logic synthesis (1)
- long short-term memory (1)
- long tail (1)
- longevity bonds (1)
- loss analysis (1)
- low-rank approximation (1)
- lung cancer (1)
- mHealth (1)
- machine code analysis (1)
- machine-checkable proof (1)
- macro derivative (1)
- macroinvertebrate community (1)
- macroinvertebrates (1)
- macrophytes (1)
- magnetic field based localization (1)
- magnetism (1)
- magnetometer calibration (1)
- manganese (1)
- marine bacteria (1)
- market crash (1)
- market manipulation (1)
- martingale optimality principle (1)
- mass spectrometry (1)
- material characterisation (1)
- materielle Kräfte (1)
- mathematical modelling (1)
- mathematical morphology (1)
- matrix problems (1)
- matrix visualization (1)
- matroid flows (1)
- mehreren Übertragungszweigen (1)
- mesh deformation (1)
- mesoporous (1)
- message-passing (1)
- meta-analysis (1)
- metabolism (1)
- metadata (1)
- metaheuristics (1)
- metal fibre (1)
- metal organic frameworks (1)
- metals (1)
- miRNA (1)
- micro lead (1)
- microelectronics ontology (1)
- micromechanics (1)
- micromorphic continua (1)
- microstructures (1)
- minimal polynomial (1)
- mixed convection (1)
- mixed methods (1)
- mixed multiscale finite element methods (1)
- mixed-signal (1)
- mobile radio (1)
- mobile radio systems (1)
- mobile scale (1)
- mobility robustness optimization (1)
- modal derivatives (1)
- model (1)
- model order reduction (1)
- model-based fault diagnosis (1)
- modularisation (1)
- moduli space (1)
- molecular capsules (1)
- molecular dynamics (1)
- molecular simulations (1)
- molekulare Simulation (1)
- moment (1)
- monotone Konvergenz (1)
- monotropic programming (1)
- muconaldehyde (1)
- multi scale (1)
- multi-asset option (1)
- multi-carrier (1)
- multi-class image segmentation (1)
- multi-core processors (1)
- multi-domain modeling and evaluation methodology (1)
- multi-level Monte Carlo (1)
- multi-object tracking (1)
- multi-phase flow (1)
- multi-scale model (1)
- multi-user (1)
- multicategory (1)
- multicore (1)
- multidimensional datasets (1)
- multifilament superconductor (1)
- multifunctionality (1)
- multigrid method (1)
- multileaf collimator (1)
- multinomial regression (1)
- multiobjective optimization (1)
- multipatch (1)
- multiplicative decomposition (1)
- multiplicative noise (1)
- multiplikative Zerlegung (1)
- multiscale analysis (1)
- multiscale denoising (1)
- multiscale methods (1)
- multitemporal (1)
- multithreading (1)
- multitype code coupling (1)
- multiuser detection (1)
- multiuser transmission (1)
- multivariate chi-square-test (1)
- multiway partitioning (1)
- myasthenia gravis (1)
- n-Decane hydroconversion (1)
- naive diversification (1)
- nanocomposites (1)
- nanofiber (1)
- nanoparticle (1)
- natural products (1)
- necrosis (1)
- negative refraction (1)
- neonatal rat ventricular cardiomyocytes (1)
- neonatale ventrikuläre Kardiomyozyten der Ratte (1)
- nestable tangibles (1)
- network flows (1)
- network synthesis (1)
- netzgenerierung (1)
- neural networks (1)
- neurotrophin 3 (1)
- nicht-newtonsche Strömungen (1)
- nichtlineare Druckkorrektor (1)
- nichtlineare Modellreduktion (1)
- nichtlineare Netzwerke (1)
- nickel (1)
- niob (1)
- non square linear system solving (1)
- non-conventional (1)
- non-desarguesian plane (1)
- non-equilibrium thermodynamics (1)
- non-newtonian flow (1)
- nonconvex optimization (1)
- nonlinear circuits (1)
- nonlinear diffusion filtering (1)
- nonlinear elasticity (1)
- nonlinear elastodynamics (1)
- nonlinear model reduction (1)
- nonlinear pressure correction (1)
- nonlinear term structure dependence (1)
- nonlinear vibration analysis (1)
- nonlocal filtering (1)
- nonnegative matrix factorization (1)
- nonwovens (1)
- normalization (1)
- nucleofection (1)
- null model (1)
- number fields (1)
- numerical irreducible decomposition (1)
- numerical methods (1)
- numerical time integration (1)
- numerische Dynamik (1)
- numerische Strömungssimulation (1)
- numerisches Verfahren (1)
- oblique derivative (1)
- optical code multiplex (1)
- optical imaging (1)
- optimal (1)
- optimal capital structure (1)
- optimal consumption and investment (1)
- optimal stopping (1)
- optimization (1)
- optimization correctness (1)
- option pricing (1)
- option valuation (1)
- orbit (1)
- organic micropollutants (1)
- oscillating magnetic fields (1)
- out-of-order (1)
- output feedback approximation (1)
- overtone (1)
- oxidative DNA Damage (1)
- oxidative DNA Schäden (1)
- oxo centered transition metal complexes (1)
- oxygen consumption (1)
- p300 (1)
- p53 (1)
- parallel (1)
- parametric design (1)
- parametrisches Design (1)
- partial hydrolysis (1)
- partial information (1)
- participatory sensing (1)
- particle dynamics (1)
- particle finite element method (1)
- particle size distribution (1)
- path (1)
- path cost models (1)
- path relinking (1)
- path tracking (1)
- path-dependent options (1)
- pattern (1)
- pattern recognition (1)
- penalty methods (1)
- penalty-free formulation (1)
- peripheral blood mononuclear cells (1)
- pesticides (1)
- pesticides and wastewater (1)
- petroleum exploration (1)
- phase equilibria (1)
- phase equilibrium (1)
- phase field modeling (1)
- phenothiazine (1)
- photonic crystals (1)
- photonic crystals filter (1)
- photonic structures (1)
- photonics (1)
- piezoelectricity (1)
- pivot sampling (1)
- planar polynomial (1)
- planning (1)
- planning systems (1)
- planning theory (1)
- plant-herbivore interactions (1)
- plasticity (1)
- platin (1)
- platinum (1)
- point cloud (1)
- point defects (1)
- political ecology (1)
- polymer blends (1)
- polymer compound (1)
- polymer morphology (1)
- polymer nanocomposites (1)
- polyphenol (1)
- population balance modelling (1)
- population genetics (1)
- poroelasticity (1)
- porous media (1)
- portfolio (1)
- portfolio decision (1)
- portfolio-optimization (1)
- poröse Medien (1)
- position detection (1)
- posterior collapse (1)
- potential (1)
- preconditioners (1)
- preprocessing (1)
- pressure correction (1)
- pressure drop (1)
- pressure relief (1)
- preventive maintenance (1)
- primal-dual algorithm (1)
- probabilistic modeling (1)
- probability distribution (1)
- probability of dangerous failure on demand (1)
- probe pruning (1)
- processing (1)
- projective surfaces (1)
- proof generating optimizer (1)
- propagating discontinuities (1)
- property checking (1)
- protein adducts (1)
- protein analysis (1)
- protein conjugate (1)
- proximation (1)
- proxy modeling (1)
- pulsed and stirred columns (1)
- pulsierte und gerührte Kolonnen (1)
- pyrrolizidine alkaloids (1)
- quadrinomial tree (1)
- quality assurance (1)
- quantitative analysis (1)
- quantum gas (1)
- quasi-Monte Carlo (1)
- quasi-variational inequalities (1)
- quasihomogeneity (1)
- quasiregular group (1)
- quasireguläre Gruppe (1)
- radiation therapy (1)
- radiotherapy (1)
- rainfall rate (1)
- rank-one convexity (1)
- rare disasters (1)
- rat liver cell systems (1)
- rate of convergence (1)
- raum-zeitliche Analyse (1)
- ray casting (1)
- ray tracing (1)
- reaction coordinate (1)
- reaction kinetics (1)
- reactive oxygen species (1)
- readout system (1)
- reaktionskinetik (1)
- real quadratic number fields (1)
- real-time (1)
- real-time scheduling (1)
- real-time tasks (1)
- reasoning (1)
- receiver orientation (1)
- receptors for anions (1)
- reconstruction (1)
- reconstructions (1)
- redundant constraint (1)
- reflectionless boundary condition (1)
- reflexionslose Randbedingung (1)
- regime-shift model (1)
- regional planning (1)
- regularity (1)
- regularization methods (1)
- reinforcement learning (1)
- relative effect potencies (1)
- relative toxic potencies (1)
- relaxed memory models (1)
- remote sensing (1)
- resilience (1)
- respiratory chain (1)
- reverse (1)
- reverse logistics (1)
- rhabdomyolysis (1)
- rheology (1)
- ribosome-inactivating proteins (1)
- riparian food web (1)
- risk analysis (1)
- risk management (1)
- risk measures (1)
- risk reduction (1)
- river typology system (1)
- robustness (1)
- runtime monitoring (1)
- rupture disk (1)
- ruthenium-vinylidene (1)
- safety and security (1)
- safety-related systems (1)
- sampling (1)
- satisfiability (1)
- sawtooth effect (1)
- scalar and vectorial wavelets (1)
- scalar field (1)
- scaled boundary isogeometric analysis (1)
- scaled boundary parametrizations (1)
- scene flow (1)
- seasonal variability (1)
- second class group (1)
- secondary structure prediction (1)
- seismic tomography (1)
- self calibration (1)
- self-optimizing networks (1)
- self-regulation (1)
- semigroup of values (1)
- semisprays (1)
- sensitization effect (1)
- sequential circuit (1)
- serum albumin (1)
- service area (1)
- sheaf theory (1)
- short scales (1)
- shrinking cities (1)
- silica (1)
- silicon nanowire (1)
- similarity measures (1)
- singularities (1)
- skeletal muscle cells (1)
- sliding wear (1)
- small-multiples node-link visualization (1)
- smart decline (1)
- social cohesion (1)
- social-ecological systems (1)
- software (1)
- software comprehension (1)
- software engineering (1)
- software engineering task (1)
- solid interfaces (1)
- solvation (1)
- sparse interpolation of multivariate rational functions (1)
- sparse multivariate polynomial interpolation (1)
- sparse-to-dense (1)
- sparsity (1)
- spatial statistics (1)
- spectroscopy (1)
- spherical approximation (1)
- spin (1)
- spin flip (1)
- sputtering process (1)
- srtm (1)
- stability (1)
- stabilization (1)
- star-shaped domain (1)
- static instrumentation (1)
- static software structure (1)
- statin (1)
- stationary sensing (1)
- stationär (1)
- statistics (1)
- steel fibre (1)
- stochastic coefficient (1)
- stochastic optimal control (1)
- stochastic processes (1)
- stop- and play-operator (1)
- strain localization (1)
- stratifolds (1)
- stream pollution (1)
- streams (1)
- structural summary (1)
- structural tensors (1)
- students (1)
- subgradient (1)
- subjective evaluation (1)
- subjectivity (1)
- sulfonic (1)
- superposed fluids (1)
- supramolecular chemistry (1)
- surface measures (1)
- surface tension (1)
- surrender options (1)
- surrogate algorithm (1)
- suzuki coupling (1)
- symbolic simulation (1)
- symmetric carboxylate stretch vibrations (1)
- symmetry (1)
- synchronization (1)
- synchronous (1)
- system architecture (1)
- syzygies (1)
- tabletop (1)
- tail dependence coefficient (1)
- target sensitivity (1)
- task sequence (1)
- tax (1)
- technische und berufliche Aus- und Weiterbildung Lehrer lernen (1)
- technology mapping (1)
- tensions (1)
- tensor (1)
- tensorfield (1)
- terrain rendering (1)
- tetrachlorodibenzo-p-dioxin (1)
- texture orientation (1)
- thermal analysis (1)
- thermodynamic model (1)
- thermoplastische Verbundwerkstoffe (1)
- thiazolium (1)
- thiol-disulfide exchange (1)
- time delays (1)
- time utility functions (1)
- time-dependent (1)
- time-varying flow fields (1)
- timeliness (1)
- tipping points (1)
- top-down (1)
- topological asymptotic expansion (1)
- topological insulator (1)
- toric geometry (1)
- torische Geometrie (1)
- total suspended solids (1)
- total variation (1)
- total variation spatial regularization (1)
- touch surfaces (1)
- toxic equivalency factor (TEF) concept (1)
- toxicity (1)
- tracking (1)
- trade-off (1)
- traffic safety (1)
- transfer film (1)
- transfer hydrogenation (1)
- transient (1)
- transition metal (1)
- transition metal complexes (1)
- transition metals (1)
- translation contract (1)
- translation invariant spaces (1)
- translinear circuits (1)
- translineare Schaltungen (1)
- transmission conditions (1)
- transport (1)
- tropical ecology (1)
- tropical geometry (1)
- tropical mountain reservoirs (1)
- tropical rainforest (1)
- tropischer Regenwald (1)
- ultrasound signals (1)
- unimodular certification (1)
- unimodularity (1)
- urban planning (1)
- urban policy (1)
- urban stormwater quality (1)
- user-centered design (1)
- value semigroup (1)
- valuing contracts (1)
- variable neighborhood search (1)
- variable selection (1)
- variational methods (1)
- variational model (1)
- vector bundles (1)
- vector field visualization (1)
- vector spherical harmonics (1)
- vectorfield (1)
- vectorial wavelets (1)
- vehicle routing (1)
- vertical velocity (1)
- vertikale Geschwindigkeiten (1)
- virtual reality (1)
- virtual training (1)
- viscoelastic fluids (1)
- viscoelastic modeling (1)
- viscosity model (1)
- visual analytics (1)
- visual structure (1)
- voltage sensitive dye (1)
- vortex separation (1)
- waveguides (1)
- wavelength multiplex (1)
- weak localization (1)
- wear (1)
- wearable systems (1)
- weighing (1)
- weighted finite-state transducers (1)
- well-posedness (1)
- wheel side-slip estimation (1)
- whole genome microarray analysis (1)
- wireless communications system (1)
- wireless networks (1)
- wireless sensor network (1)
- wireless signal (1)
- worker assistance (1)
- worst-case (1)
- worst-case scenario (1)
- xai (1)
- zeitabhängige Strömungen (1)
- zinc (1)
- Ähnlichkeit (1)
- Äquisingularität (1)
- Ökodesign (1)
- Ökologie (1)
- Ökosystem (1)
- Ökotoxizität (1)
- Überflutung (1)
- Überflutungsrisiko (1)
- Übergangsbedingungen (1)
- Übergangsmetall (1)
- Übersetzung (1)
Faculty / Organisational entity
- Kaiserslautern - Fachbereich Mathematik (278)
- Kaiserslautern - Fachbereich Informatik (218)
- Kaiserslautern - Fachbereich Maschinenbau und Verfahrenstechnik (143)
- Kaiserslautern - Fachbereich Chemie (79)
- Kaiserslautern - Fachbereich Elektrotechnik und Informationstechnik (64)
- Kaiserslautern - Fachbereich Biologie (53)
- Kaiserslautern - Fachbereich Sozialwissenschaften (26)
- Landau - Fachbereich Natur- und Umweltwissenschaften (22)
- Kaiserslautern - Fachbereich Wirtschaftswissenschaften (19)
- Kaiserslautern - Fachbereich Physik (9)
The aim of this dissertation is to explain processes in recruitment by gaining a better understanding of how perceptions evolve and how recruitment outcomes and perceptions are influenced. To do so, this dissertation takes a closer look at the formation of fit perceptions, the effects of top employer awards on pre-hire recruitment outcomes, and on how perceptions about external sources are influenced.
Matter-wave Optics of Dark-state Polaritons: Applications to Interferometry and Quantum Information
(2006)
The present work "Matter-wave Optics of Dark-state Polaritons: Applications to Interferometry and Quantum Information" deals in a broad sense with the subject of dark states and in particular with the so-called dark-state polaritons introduced by M. Fleischhauer and M. D. Lukin. Dark-state polaritons can be regarded as a combined excitation of electromagnetic fields and spin/matter waves. Within the framework of this thesis the special optical properties of this combined excitation are studied: on the one hand, a new procedure to spatially manipulate and to increase the excitation density of stored photons is described; on the other hand, these properties are used to construct a new type of Sagnac hybrid interferometer. The thesis is divided into four parts. The introduction presents all notions necessary to understand the work, e.g. electromagnetically induced transparency (EIT), dark-state polaritons and the Sagnac effect. The second chapter considers the method developed by A. Andre and M. D. Lukin to create stationary light pulses in specially dressed EIT media. In a first step, a set of field equations is derived and simplified by introducing a new set of normal modes. The absorption of one of the normal modes leads to the phenomenon of pulse matching for the other mode and thereby to a diffusive spreading of its field envelope. All these considerations are based on a homogeneous field setup of the EIT preparation laser. If this restriction is dropped, one finds that a drift motion is superimposed on the diffusive spreading. By choosing a special laser configuration, the drift motion can be tailored such that an effective force is created that counteracts the spreading. Moreover, the force can not only be strong enough to compensate the diffusive spreading but can even exceed this dynamics and hence compress the field envelope of the excitation. The compression can be described using a Fokker-Planck equation of the Ornstein-Uhlenbeck type.
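For orientation (this is the generic textbook form, with symbols \(\gamma\) and \(D\) chosen here, not necessarily the thesis's notation), a Fokker-Planck equation of the Ornstein-Uhlenbeck type for an envelope density \(P(z,t)\) reads

\[ \partial_t P(z,t) = \gamma\,\partial_z\big[z\,P(z,t)\big] + D\,\partial_z^2 P(z,t), \]

where the linear drift term with coefficient \(\gamma\) describes a restoring force that counteracts the spreading and \(D\) is the diffusion coefficient. Its stationary solution is a Gaussian of variance \(D/\gamma\), which is why a sufficiently strong drift can arrest and even reverse the diffusive broadening of the envelope.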
The investigations show that the compression leads to an excitation of higher-order modes, which decay very fast. In the last section of the chapter this excitation is discussed in more detail, and conditions are given under which the excitation of higher-order modes can be avoided or suppressed. All results of the chapter are supported by numerical simulations. The third chapter studies the matter-wave optical properties of the dark-state polaritons, which are used to construct a light-matter-wave hybrid Sagnac interferometer. First, the principal setup of such an interferometer is sketched and the relevant equations of motion of light-matter interaction in a rotating frame are derived. These form the basis of the subsequent analysis of the dark-state polariton dynamics with and without the influence of external trapping potentials on the matter-wave part of the polariton. It is shown that a sensitivity enhancement compared to a passive laser gyroscope can be anticipated if the gaseous medium is initially in a superfluid quantum state in a ring-trap configuration. To achieve this enhancement, a simultaneous coherence and momentum transfer is furthermore necessary. In the last part of the chapter, the quantum sensitivity limit of the hybrid interferometer is derived using the one-particle density-matrix equations incorporating the motion of the particles. To this end, the Maxwell-Bloch equations are treated perturbatively in the rotation rate of the noninertial frame of reference, and the susceptibility of the considered three-level \(\Lambda\)-type system is derived to arbitrary order in the probe field in order to determine the optimum operation point. With its help, the anticipated quantum sensitivity of the light-matter-wave hybrid Sagnac interferometer is calculated at the shot-noise limit, and the results are compared to state-of-the-art laser and matter-wave Sagnac interferometers.
The last chapter of the thesis originates from a joint theoretical and experimental project with the AG Bergmann. It no longer considers the dark-state polaritons of the previous two chapters but deals with the more general concept of dark states, in particular with the transient velocity-selective dark states introduced by E. Arimondo et al. In the experiment we could measure these states for the first time. The chapter starts with an introduction to the concept of velocity-selective dark states as they occur in a \(\Lambda\)-configuration. We then introduce the transient velocity-selective dark states as they occur in a particular extension of the \(\Lambda\)-system. For later use in the simulations, the relevant equations of motion are derived in detail. The simulations are based on the solution of the generalized optical Bloch equations. Finally, the experimental setup and procedure are explained, and the theoretical and experimental results are compared.
A series of (oligo)phenothiazine-, thiazolium-salt- and sulfonic-acid-functionalized organic/inorganic hybrid materials was synthesized. The organic groups were covalently bound to the inorganic surface through reactions of organosilane precursors with TEOS or with the silanol groups of the material surface; these synthetic methods are called co-condensation and post-grafting. The structures and textural parameters of the resulting hybrid materials were characterized by XRD, N2 adsorption-desorption measurements, SEM and TEM. The incorporation of the organic groups was verified by elemental analysis, thermogravimetric analysis, FT-IR, UV-Vis, EPR, CV, as well as by 13C CP-MAS NMR and 29Si CP-MAS NMR spectroscopy. The introduction of various organic groups endows these hybrid materials with different physical and chemical properties. The (oligo)phenothiazines provide a group of novel redox-active hybrid materials with special electronic and optical properties. The thiazolium-salt-modified materials were applied as heterogenized organocatalysts for the benzoin condensation and for the cross-coupling of aldehydes with acylimines to yield α-amido ketones. The sulfonic-acid-containing materials can not only be used as Broensted acid catalysts but can also serve as ion-exchangeable supports for further modifications and applications.
Nanoparticle-Filled Thermoplastics and Thermoplastic Elastomer: Structure-Property Relationships
(2012)
The present work focuses on the structure-property relationships of
particulate-filled thermoplastics and thermoplastic elastomer (TPE). In this work
two thermoplastics and one TPE were used as polymer matrices, i.e. amorphous
bisphenol-A polycarbonate (PC), semi-crystalline isotactic polypropylene (iPP),
and a block copolymer poly(butylene terephthalate)-block-poly(tetramethylene
glycol) TPE(PBT-PTMG). For PC, a type selected from various Aerosil® nano-SiO2 grades was used as filler to improve the thermal and mechanical properties while maintaining the transparency of the PC matrix. For iPP, different types of SiO2 and TiO2 nanoparticles with different surface polarity were used; the goal was to examine the influence of the surface polarity and chemical nature of the nanoparticles on the thermal, mechanical and morphological properties of the iPP composites. For TPE(PBT-PTMG), three TiO2 particle grades were used: one grade carries hydroxyl groups on the particle surface, while the other two grades are surface-modified with metal and metal oxides, respectively. The influence of the primary size and the dispersion quality of the TiO2 particles on the properties of the TPE(PBT-PTMG)/TiO2 composites was determined and discussed.
All polymer composites were produced by direct melt blending in a twin-screw
extruder via masterbatch technique. The dispersion of particles was examined by
using scanning electron microscopy (SEM) and micro-computerized tomography
(μCT). The thermal and crystalline properties of polymer composites were characterized by using thermogravimetric analysis (TGA) and differential
scanning calorimetry (DSC). The mechanical and thermomechanical properties
were determined by using mechanical tensile testing, compact tension and
Charpy impact as well as dynamic-mechanical thermal analysis (DMTA).
The SEM results show that nanoparticles with a nonpolar modified surface are better dispersed in polymer matrices such as iPP than nanoparticles with a polar surface, especially in the case of Aeroxide® TiO2 nanoparticles. The Aeroxide® TiO2 nanoparticles
with a polar surface due to Ti-OH groups result in a very high degree of
agglomeration in both iPP and TPE matrices because of strong van der Waals
interactions among particles (hydrogen bonding). Compared to unmodified
Aeroxide® TiO2 nanoparticles, the other grades of surface-modified TiO2 particles are very homogeneously dispersed in the iPP and TPE(PBT-PTMG) used. The
incorporation of SiO2 nanoparticles into bisphenol-A PC significantly increases
the mechanical properties of PC/SiO2 nanocomposites, particularly the resistance
against environmental stress crazing (ESC). However, the transparency of
PC/SiO2 nanocomposites decreases with increasing nanoparticle content and
size due to a mismatch of the refractive indices of PC and the SiO2 particles. The different
surface polarity of the nanoparticles in iPP has an evident influence on the properties of
iPP composites. Among iPP/SiO2 nanocomposites, the nanocomposite
containing SiO2 nanoparticles with a higher degree of hydrophobicity shows
improved fracture and impact toughness compared to the other iPP/SiO2
composites. The TPE(PBT-PTMG)/TiO2 composites show much better thermal and mechanical properties than neat TPE(PBT-PTMG) due to strong chemical interactions between the polymer matrix and the TiO2 particles. In addition, the better dispersion quality of the TiO2 particles in the TPE(PBT-PTMG) used leads to dramatically improved mechanical properties of the TPE(PBT-PTMG)/TiO2 composites.
Planar force or pressure is a fundamental physical aspect of any people-vs-people and people-vs-environment activity and interaction. It is as significant as the more established linear and angular acceleration (usually acquired by inertial measurement units). There have been several studies involving planar pressure in the discipline of activity recognition, as reviewed in the first chapter. These studies have shown that planar pressure is a promising sensing modality for activity recognition. However, they still occupy only a niche within the discipline, using ad hoc systems and data analysis methods, and mostly were not followed by further elaborative work. The situation calls for a general framework that can help push planar pressure sensing into the mainstream.
This dissertation systematically investigates using planar pressure distribution sensing technology for ubiquitous and wearable activity recognition purposes. We propose a generic Textile Pressure Mapping (TPM) Framework, which encapsulates (1) design knowledge and guidelines, (2) a multi-layered tool including hardware, software and algorithms, and (3) an ensemble of empirical study examples. Through validation with various empirical studies, the unified TPM framework covers the full scope of application recognition, including the ambient, object, and wearable subspaces.
The hardware part constructs a general architecture and implementations in the large-scale and mobile directions separately. The software toolkit consists of four heterogeneous tiers: driver, data processing, machine learning, visualization/feedback. The algorithm chapter describes generic data processing techniques and a unified TPM feature set. The TPM framework offers a universal solution for other researchers and developers to evaluate TPM sensing modality in their application scenarios.
The significant findings from the empirical studies have shown that TPM is a versatile sensing modality. Specifically, in the ambient subspace, a sports mat or carpet with TPM sensors embedded underneath can distinguish different sports activities or different people's gait based on the dynamic change of the body print; a pressure-sensitive tablecloth can detect various dining actions by the force propagated from the cutlery through the plates to the tabletop. In the object subspace, swivel office chairs with TPM sensors under the cover can be used to detect the sitter's real-time posture; TPM can be used to detect emotion-related touch interactions with smart objects, toys or robots. In the wearable subspace, TPM sensors can be used to perform pressure-based mechanomyography to detect muscle and body movement; they can also be tailored to cover the surface of a soccer shoe to distinguish different kicking angles and intensities.
All the empirical evaluations resulted in accuracies well above the chance level for the corresponding number of classes; e.g., the `swivel chair' study reached a classification accuracy of 79.5% over 10 posture classes, and in the `soccer shoe' study the accuracy is 98.8% among 17 combinations of angle and intensity.
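To make the flavor of such TPM data processing concrete, the following minimal sketch (not the thesis's actual toolkit; the function name and the feature choice are illustrative) computes basic frame-level features from a single pressure matrix: a total-force proxy, the active area, and the center of pressure.

```python
import numpy as np

def pressure_features(frame):
    """Basic features from one pressure frame (2D array), as commonly
    used in pressure-mapping pipelines: total force proxy, number of
    active cells, and center of pressure (row, col)."""
    frame = np.asarray(frame, dtype=float)
    total = float(frame.sum())
    active = int((frame > 0).sum())
    if total > 0:
        rows, cols = np.indices(frame.shape)
        cop = (float((rows * frame).sum() / total),
               float((cols * frame).sum() / total))
    else:
        cop = (float("nan"), float("nan"))
    return {"total": total, "active_cells": active, "cop": cop}

# Example: a single contact in the lower-right corner of a 4x4 mat
frame = np.zeros((4, 4))
frame[2, 3] = 2.0
frame[3, 3] = 2.0
feats = pressure_features(frame)
```

Tracking such features over time (e.g. the trajectory of the center of pressure) is what turns raw pressure frames into activity-recognition inputs.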
Whole-body vibrations (WBV) have adverse effects on ride comfort and human health. Suspension seats have an important influence on the WBV severity. In this study, WBV were measured on a medium-sized compact wheel loader (CWL) in its typical operations. The effect of short-term exposure to the WBV on ride comfort was evaluated according to ISO 2631-1:1985 and ISO 2631-1:1997. ISO 2631-1:1997 and ISO 2631-5:2004 were adopted to evaluate the effect of long-term exposure to the WBV on human health, and the reasons for the different evaluation results obtained according to these two standards are explained in this study. The WBV measurements were carried out in cases where the driver wore a lap belt or a four-point seat harness and in the case where the driver did not wear any safety belt. The seat effective amplitude transmissibility (SEAT) and the seat transmissibility in the frequency domain in these three cases were analyzed to investigate the effect of a safety belt on the seat transmissibility. Seat tests were performed on a multi-axis shaking table in the laboratory to study the dynamic behavior of a suspension seat under the vibration excitations measured on the CWL. The WBV intensity was reduced by optimizing the vertical and the longitudinal seat suspension systems with the help of computational simulations. For this optimization, multi-body models of the seat-dummy system in the laboratory seat tests and of the seat-driver system in the field vibration measurements were built and validated.
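As an illustration of the SEAT metric mentioned above, here is a minimal sketch assuming the standard definition of SEAT as the ratio of RMS accelerations on the seat surface and at the seat base; in practice the signals are first passed through an ISO 2631-1 frequency-weighting filter (e.g. Wk for vertical vibration), which is omitted here, and all names and signals are illustrative.

```python
import numpy as np

def seat_value(a_seat, a_base):
    """SEAT value: RMS acceleration on the seat surface divided by RMS
    acceleration at the seat base. ISO 2631-1 frequency weighting would
    be applied to both signals before the RMS; omitted in this sketch."""
    rms = lambda a: float(np.sqrt(np.mean(np.square(a))))
    return rms(a_seat) / rms(a_base)

# Synthetic example: the suspension halves the 4 Hz base excitation,
# so the SEAT value is 0.5 (i.e. 50 %, the seat attenuates the vibration).
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
a_base = np.sin(2 * np.pi * 4 * t)        # base excitation
a_seat = 0.5 * np.sin(2 * np.pi * 4 * t)  # attenuated at the seat surface
ratio = seat_value(a_seat, a_base)
```

A SEAT value below 100 % indicates that the seat reduces the vibration exposure; values above 100 % mean the suspension amplifies it.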
The main purpose of the study was to improve the physical realism of models of compressed materials, especially fibrous materials. Fibrous materials are finding increasing application in industry, and most of these materials are compressed for their applications. In this situation we are interested in how the fibres are arranged, e.g. which distribution they follow. For a given material it is possible to obtain a three-dimensional image via micro-computed tomography. Since some physical parameters, e.g. the fibre lengths or the local fibre directions, can be estimated from this image by other methods, it is beneficial to improve the physical model by adjusting these parameters in the image.
In this thesis, we present a new maximum-likelihood approach for estimating the parameters of a parametric distribution on the unit sphere which is as versatile as several well-known distributions, e.g. the von Mises-Fisher distribution or the Watson distribution, and fits some models better. The consistency and asymptotic normality of the maximum-likelihood estimator are proven. As the second main part of this thesis, a general model of mixtures of these distributions on a hypersphere is discussed. We derive numerical approximations of the parameters in an expectation-maximization (EM) setting. Furthermore, we introduce a non-parametric variant of the EM algorithm for the mixture model. Finally, we present some applications to the statistical analysis of fibre composites.
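For one of the classical special cases mentioned above, the following minimal sketch fits a von Mises-Fisher distribution by maximum likelihood, using the widely used closed-form approximation for the concentration parameter by Banerjee et al. (2005); this is an illustration only, not the estimator developed in the thesis.

```python
import numpy as np

def fit_vmf(samples):
    """Maximum-likelihood fit of a von Mises-Fisher distribution on the
    unit sphere. The mean direction is the normalized resultant vector;
    the concentration uses the closed-form approximation
    kappa ~ rbar*(p - rbar^2)/(1 - rbar^2) (Banerjee et al., 2005)."""
    X = np.asarray(samples, dtype=float)
    n, p = X.shape
    resultant = X.sum(axis=0)
    rbar = np.linalg.norm(resultant) / n        # mean resultant length
    mu = resultant / np.linalg.norm(resultant)  # mean direction
    kappa = rbar * (p - rbar**2) / (1.0 - rbar**2)
    return mu, kappa

# Directions tightly clustered near the north pole of S^2 give a mean
# direction close to (0, 0, 1) and a large concentration kappa.
rng = np.random.default_rng(0)
X = rng.normal([0.0, 0.0, 1.0], 0.05, size=(500, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
mu, kappa = fit_vmf(X)
```

The same resultant-vector statistic also drives the E-step responsibilities in EM for mixtures of such distributions, which is why this estimator is the natural building block there.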
In recent years, nanofiller-reinforced polymer composites have attracted considerable
interest from numerous researchers, since they can offer unique mechanical,
electrical, optical and thermal properties compared to the conventional polymer
composites filled with micron-sized particles or short fibers. With this background, the
main objective of the present work was to investigate the various mechanical
properties of polymer matrices filled with different inorganic rigid nanofillers, including
SiO2, TiO2, Al2O3 and multi-walled carbon nanotubes (MWNT). Further, special
attention was paid to the fracture behaviours of the polymer nanocomposites. The
polymer matrices used in this work contained two types of epoxy resin (cycloaliphatic
and bisphenol-F) and two types of thermoplastic polymer (polyamide 66 and isotactic
polypropylene).
The epoxy-based nanocomposites (filled with nano-SiO2) were formed in situ by a
special sol-gel technique supplied by nanoresins AG. Excellent nanoparticle
dispersion was achieved even at rather high particle loading. The almost
homogeneously distributed nanoparticles can improve the elastic modulus and
fracture toughness (characterized by KIc and GIc) simultaneously. According to
dynamic mechanical and thermal analysis (DMTA), the nanosilica particles in epoxy
resins possessed considerable "effective volume fraction" in comparison with their
actual volume fraction, due to the presence of the interphase. Moreover, AFM and
high-resolution SEM observations also suggested that the nanosilica particles were
coated with a polymer layer and therefore a core-shell structure of particle-matrix was
expected. Furthermore, based on SEM fractography, several toughening
mechanisms were considered to be responsible for the improvement in toughness,
which included crack deflection, crack pinning/bowing and plastic deformation of
matrix induced by nanoparticles.
The PA66 or iPP-based nanocomposites were fabricated by a conventional melt-extrusion
technique. Here, the nanofiller content was kept constant at 1 vol.%. Relatively good particle dispersion was found, though some small aggregates still
existed. The elastic modulus of both PA66 and iPP was moderately improved after
incorporation of the nanofillers. The fracture behaviours of these materials were
characterized by an essential work of fracture (EWF) approach. In the case of the PA66
system, the EWF experiments were carried out over a broad temperature range
(23~120 °C). It was found that the EWF parameters exhibited high temperature
dependence. At most testing temperatures, a small amount of nanoparticles could
produce obvious toughening effects at the cost of reduction in plastic deformation of
the matrix. In light of SEM fractographs and crack opening displacement (COD) analysis, the
crack blunting induced by nanoparticles might be the major source of this toughening.
The fracture behaviours of PP filled with MWNTs were investigated over a broad
temperature range (-196~80 °C) in terms of notched impact resistance. It was found
that MWNTs could enhance the notched impact resistance of PP matrix significantly
once the testing temperature was higher than the glass transition temperature (Tg) of
neat PP. At the relevant temperature range, the longer the MWNTs, the better was
the impact resistance. SEM observation revealed three failure modes of nanotubes:
nanotube bridging, debonding/pullout and fracture. All of them would contribute to
impact toughness to some degree. Moreover, nanotube fracture was considered to be
the major failure mode. In addition, the smaller spherulites induced by the nanotubes
would also benefit toughness.
Nowadays, accounting, charging and billing of users' network resource consumption are commonly used to facilitate reasonable network usage, control congestion, allocate cost, gain revenue, etc. In traditional IP traffic accounting systems, IP addresses are used to identify the corresponding consumers of the network resources. However, there are situations in which IP addresses cannot be used to identify users uniquely, for example, in multi-user systems. In these cases, network resource consumption can only be ascribed to the owners of the hosts instead of the real users who have consumed the network resources. Therefore, accurate accountability in these systems is practically impossible. This is a flaw of the traditional IP-address-based IP traffic accounting technique. This dissertation proposes a user-based IP traffic accounting model which facilitates collecting network resource usage information on the basis of users. With user-based IP traffic accounting, IP traffic can be distinguished not only by IP addresses but also by users. In this dissertation, three different schemes that realize the user-based IP traffic accounting mechanism are discussed in detail. The in-band scheme utilizes the IP header to convey the user information of the corresponding IP packet. The Accounting Agent residing in the measured host intercepts IP packets passing through it, identifies the users of these packets and inserts user information into them. With this mechanism, a meter located at a key position in the network can intercept the IP packets tagged with user information and extract not only statistical information but also IP addresses and user information from them to generate accounting records with user information. The out-of-band scheme is the counterpart of the in-band scheme. It also uses an Accounting Agent to intercept IP packets and identify the users of IP traffic.
However, the user information is transferred through a separate channel, distinct from the transmission of the corresponding IP packets. The Multi-IP scheme provides a different solution for identifying the users of IP traffic: it assigns each user in a measured host a unique IP address. In this way, an IP address identifies a user uniquely and without ambiguity, so traditional IP-address-based accounting techniques can be applied to achieve the goal of user-based IP traffic accounting. This dissertation also introduces a user-based IP traffic accounting prototype system developed according to the out-of-band scheme, and discusses the application of the user-based IP traffic accounting model in a distributed computing environment.
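The core of user-based accounting, aggregating consumption per (IP address, user) pair rather than per IP address alone, can be sketched with a minimal meter. Class and field names below are illustrative choices, not identifiers from the prototype system described in the thesis.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TaggedPacket:
    # An IP packet after the Accounting Agent has attached user
    # information (as in the in-band scheme).
    src_ip: str
    dst_ip: str
    user: str
    length: int  # packet size in bytes

class UserMeter:
    """Generates accounting records keyed by (source IP, user), so two
    users of the same multi-user host are distinguished."""
    def __init__(self):
        self.records = defaultdict(int)

    def observe(self, pkt):
        self.records[(pkt.src_ip, pkt.user)] += pkt.length

meter = UserMeter()
meter.observe(TaggedPacket("10.0.0.5", "10.0.1.9", "alice", 1500))
meter.observe(TaggedPacket("10.0.0.5", "10.0.1.9", "bob", 400))
meter.observe(TaggedPacket("10.0.0.5", "10.0.1.9", "alice", 600))
# Traditional IP-based accounting would ascribe all 2500 bytes to host
# 10.0.0.5; the user-based records separate alice (2100) from bob (400).
```

The same record structure serves all three schemes; they differ only in how the user tag reaches the meter.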
This thesis is devoted to dealing with stochastic optimization problems in various situations with the aid of the martingale method. Chapter 2 discusses the martingale method and its applications to basic optimization problems, which are well addressed in the literature (for example, [15], [23] and [24]). In Chapter 3, we study the problem of maximizing expected utility of real terminal wealth in the presence of an index bond. Chapter 4, which is a modification of the original research paper written jointly with Korn and Ewald [39], investigates an optimization problem faced by a DC pension fund manager under inflationary risk. Although the problem is addressed in the context of a pension fund, it presents a way to deal with optimization problems in the case where there is a (positive) endowment. In Chapter 5, we turn to a situation where additional income, other than the income from returns on investment, is gained by supplying labor. Chapter 6 concerns a situation where the market considered is incomplete; a trick for completing an incomplete market is presented there. The general theory supporting the subsequent discussion is summarized in the first chapter.
Automata theory has given rise to a variety of automata models that consist
of a finite-state control and an infinite-state storage mechanism. The aim
of this work is to provide insights into how the structure of the storage
mechanism influences the expressiveness and the analyzability of the
resulting model. To this end, it presents generalizations of results about
individual storage mechanisms to larger classes. These generalizations
characterize those storage mechanisms for which the given result remains
true and for which it fails.
In order to speak of classes of storage mechanisms, we need an overarching
framework that accommodates each of the concrete storage mechanisms we wish
to address. Such a framework is provided by the model of valence automata,
in which the storage mechanism is represented by a monoid. Since the monoid
serves as a parameter specifying the storage mechanism, our aim
translates into the question: For which monoids does the given
(automata-theoretic) result hold?
As a first result, we present an algebraic characterization of those monoids
over which valence automata accept only regular languages. In addition, it
turns out that for each monoid, this is the case if and only if valence
grammars, an analogous grammar model, can generate only context-free
languages.
Furthermore, we are concerned with closure properties: We study which
monoids result in a Boolean closed language class. For every language class
that is closed under rational transductions (in particular, those induced by
valence automata), we show: If the class is Boolean closed and contains any
non-regular language, then it already includes the whole arithmetical
hierarchy.
This work also introduces the class of graph monoids, which are defined by
finite graphs. By choosing appropriate graphs, one can realize a number of
prominent storage mechanisms, but also combinations and variants thereof.
Examples are pushdowns, counters, and Turing tapes. We can therefore relate
the structure of the graphs to computational properties of the resulting
storage mechanisms.
In the case of graph monoids, we study (i) the decidability of the emptiness
problem, (ii) which storage mechanisms guarantee semilinear Parikh images,
(iii) when silent transitions (i.e. those that read no input) can be
avoided, and (iv) which storage mechanisms permit the computation of
downward closures.
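The valence automaton model can be made concrete with a small interpreter. The sketch below, an illustrative reconstruction rather than code from this thesis, runs a valence automaton over the monoid (Z, +), i.e. a blind counter: a word is accepted iff some run reaches a final state with the accumulated monoid element equal to the identity 0.

```python
def accepts(word, transitions, start, finals, identity, op):
    # A configuration is (state, accumulated monoid element); the word
    # is accepted iff some run ends in a final state with the identity.
    configs = {(start, identity)}
    for sym in word:
        configs = {(q2, op(m, v))
                   for (q, m) in configs
                   for (q1, a, v, q2) in transitions
                   if q1 == q and a == sym}
    return any(q in finals and m == identity for (q, m) in configs)

# Storage monoid (Z, +) with identity 0: a blind counter. This
# automaton accepts { a^n b^n : n >= 0 }: each a adds 1, each b
# subtracts 1, and acceptance forces the total back to 0.
trans = [("p", "a", 1, "p"), ("p", "b", -1, "q"), ("q", "b", -1, "q")]
add = lambda m, v: m + v
```

Replacing the monoid (and the operation `op`) changes the storage mechanism without touching the interpreter, which is exactly the parameterization the thesis exploits.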
Continuum Mechanical Modeling of Dry Granular Systems: From Dilute Flow to Solid-Like Behavior
(2014)
In this thesis, we develop a granular hydrodynamic model which covers the three principal regimes observed in granular systems, i.e. the dilute flow, the dense flow and the solid-like regime. We start from a kinetic model valid at low density and extend its validity to granular solid-like behavior. Analytical and numerical results show that this model reproduces many complex phenomena, for instance slow viscoplastic motion, critical states and the pressure dip in sand piles. Finally, we formulate a 1D version of the full model and develop a numerical method to solve it. We present two numerical examples: a filling simulation and the flow on an inclined plane, in which all three regimes appear.
Today, information systems are often distributed to achieve high availability and low latency.
These systems can be realized by building on a highly available database to manage the distribution of data.
However, it is well known that high availability and low latency are not compatible with strong consistency guarantees.
For application developers, the lack of strong consistency on the database layer can make it difficult to reason about their programs and ensure that applications work as intended.
We address this problem from the perspective of formal verification.
We present a specification technique, which allows specifying functional properties of the application.
In addition to data invariants, we support history properties.
These let us express relations between events, including invocations of the application API and operations on the database.
To address the verification problem, we have developed a proof technique that handles concurrency using invariants and thereby reduces the problem to sequential verification.
The underlying system semantics, technique and its soundness proof are all formalized in the interactive theorem prover Isabelle/HOL.
Additionally, we have developed a tool named Repliss which uses the proof technique to enable partially automated verification and testing of applications.
For verification, Repliss generates verification conditions via symbolic execution and then uses an SMT solver to discharge them.
Fucoidan is a class of biopolymers mainly found in brown seaweeds. Due to its diverse medical importance, a homogeneous supply as well as a GMP-compliant product is of special interest. Therefore, in addition to optimizing its extraction and purification from classical resources, other techniques were explored (e.g., marine tissue culture and heterologous expression of enzymes involved in its biosynthesis). Results showed that 17.5% (w/w) crude fucoidan was obtained from the brown macroalga F. vesiculosus after pre-treatment and extraction. Purification by affinity chromatography improved purity relative to the commercially purified product. Furthermore, biological investigations revealed improved anti-coagulant and anti-viral activities compared with crude fucoidan. In addition, callus-like and protoplast cultures as well as bioreactor cultivation were developed from F. vesiculosus, representing a new horizon for producing fucoidan biotechnologically. Moreover, heterologous expression in E. coli of several enzymes involved in its biosynthesis (e.g., FucTs and STs) demonstrated the possibility of obtaining active enzymes that could be utilized in enzymatic in vitro synthesis of fucoidan. Together, these complementary techniques could help meet the global demand for fucoidan.
The fifth generation mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization enabled by Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major
interest from both academic and industrial stakeholders.
Building on these concepts, Network Slicing has emerged as the main driver of a novel business model in which mobile operators may open, i.e., “slice”, their infrastructure to new business players and offer independent, isolated and self-contained sets of network functions
and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed.
End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthauling/backhauling links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical service requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests
such that the overall system revenues are maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users' mobility and instantaneous wireless channel statistics. Certainly, a mobile network represents a fast-changing scenario characterized by complex
spatio-temporal relationship connecting end-users’ traffic demand with social activities and economy. Legacy models that aim at providing dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects.
To close this gap, machine learning-aided solutions are quickly emerging as promising technologies to sustain, in a scalable manner, the set of operations required by the network slicing context. How to implement such resource allocation schemes among slices, while
trying to make the most efficient use of the networking resources composing the mobile infrastructure, are key problems underlying the network slicing paradigm, which will be addressed in this thesis.
On the Extended Finite Element Method for the Elasto-Plastic Deformation of Heterogeneous Materials
(2015)
This thesis is concerned with the extended finite element method (XFEM) for deformation analysis of three-dimensional heterogeneous materials. Using the "enhanced abs enrichment", the XFEM is able to reproduce kinks in the displacements, and therewith jumps in the strains, within elements of the underlying tetrahedral finite element mesh. A complex model for the microstructure reconstruction of the aluminum matrix composite AMC225xe and the modeling of its macroscopic thermo-mechanical plastic deformation behavior is presented, using the XFEM. Additionally, a novel stabilization algorithm is introduced for the XFEM. This algorithm requires preprocessing only.
With the technological advancement in the field of robotics, it is now quite realistic to expect social robots to be a part of humans' daily life in the coming decades. Concerning human-robot interaction (HRI), the basic expectations of a social robot are to perceive words, emotions, and behaviours in order to draw conclusions and adapt its behaviour to realize natural HRI. Hence, assessment of human personality traits is essential to bring a sense of appeal and acceptance towards the robot during interaction.
Knowledge of human personality is highly relevant for natural and efficient HRI. The idea is taken from human behaviourism, with humans behaving differently based on the personality traits of their communication partners. This thesis contributes to the development of a personality trait assessment system for intelligent human-robot interaction.
The personality trait assessment system is organized in three separate levels. The first level, known as the perceptual level, is responsible for enabling the robot to perceive, recognize and understand human actions in the surrounding environment in order to make sense of the situation. Using psychological concepts and theories, several percepts have been extracted. A study has been conducted to validate the significance of these percepts for personality traits.
The second level, known as the affective level, helps the robot to connect the knowledge acquired at the first level to make higher-order evaluations such as assessment of human personality traits. The affective system of the robot is responsible for analysing human personality traits. To the best of our knowledge, this thesis is the first work in the field of human-robot interaction that presents an automatic assessment of human personality traits in real time using visual information. Drawing on psychology and cognitive studies, several theories have been examined. Two have been used to build the personality trait assessment system: Big Five personality trait assessment and the temperament framework for personality trait assessment.
By using the information from the perceptual and affective levels, the last level, known as the behavioural level, enables the robot to synthesize an appropriate behaviour adapted to human personality traits. Multiple experiments have been conducted with different scenarios. It has been shown that the robot, ROBIN, assesses personality traits correctly during interaction and uses the similarity-attraction principle to behave with a similar personality type. For example, if the person is found to be an extrovert, the robot also behaves like an extrovert. However, it also uses the complementary-attraction theory to adapt its behaviour and complement the personality of the interaction partner. For example, if the person is found to be self-centred, the robot behaves agreeably in order to foster the human-robot interaction.
This thesis focuses on novel methods to establish the utility of wearable devices along with machine learning and pattern recognition methods for formal education and address the open research questions posed by existing methods. Firstly, state-of-the-art methods are proposed to analyse the cognitive activities in the learning process, i.e., reading, writing, and their correlation. Furthermore, this thesis presents real-time applications in wearable space as an experimental tool in Physics education, and an air-writing system.
There are two critical components in analysing reading behaviour, i.e., WHERE a person looks (gaze analysis) and WHAT a person looks at (content analysis). This thesis proposes novel methods to classify the reading content to address the WHAT component. The proposed methods are based on a hybrid approach, which fuses traditional computer vision methods with deep neural networks. These methods, when evaluated on publicly available datasets, yield state-of-the-art results in defining the structure of document images. Moreover, extensive efforts were made to refine and correct the ICDAR2017-POD dataset and to create a completely new FFD dataset.
Traditionally, handwriting research focuses on character and number recognition without looking into the type of writing, i.e. text, math, and drawing. This thesis reports multiple contributions for on-line handwriting classification. First, it presents a public dataset for on-line handwriting classification, OnTabWriter, collected using iPen and an iPad. In addition, a new feature set is introduced for on-line handwriting classification to establish the benchmark on the proposed dataset to classify handwriting as plain text, mathematical expression, and plot/graph. An ablation study is conducted to evaluate the performance of the proposed feature set in comparison to existing feature sets. Lastly, this thesis evaluates the importance of context for on-line handwriting classification.
Analysing reading and writing activities individually is not enough to identify a student's expertise unless their correlations are analysed. This thesis presents a study where reading data from wearable eye-trackers and writing data from a sensor pen are analysed together to correlate users' expertise in Physics education with their actual knowledge. Initial results show a strong correlation between an individual's expertise and understanding of the subject.
Augmented and virtual reality applications can play a vital role in making classroom environments more interactive and engaging for both teachers and learners. To validate this hypothesis, different applications are developed and evaluated. First, smart glasses are used as an experimental tool in Physics education to help learners perform experiments, providing assistance and feedback on a head-mounted display for understanding acoustics concepts. Second, a real-time air-writing application, the FAirWrite system, in which the finger writes on an imaginary canvas using a single IMU, is also presented. FAirWrite is further equipped with DL methods to classify the air-written characters.
Recent studies on the environmental performance of additive manufacturing (AM) have shown that AM exhibits both complex potentials and challenges at different life stages compared to conventional manufacturing. To assess and ensure the environmental benefits of AM during the design phase, an eco-design approach is required. Existing eco-design for AM approaches described in the literature mainly focus on the use of lifecycle assessment (LCA) to analyze the environmental impacts of AM-specific design solutions. However, since LCA requires a full-process chain model and detailed inventory data, it can only be performed after the design process or in a subsequent design stage. To integrate evaluation activities into the middle stage of the design process, energy performance assessment can be used as an alternative evaluation tool in eco-design for AM. However, the literature still lacks an eco-design for AM method based on energy performance quantification and assessment. By addressing this research problem, this dissertation contributes to the development of a holistic framework to implement eco-design for AM using energy performance assessment. This framework consists of the following three parts: a simulation tool for energy prediction in the design phase; an energy performance assessment model for AM; and a method for carrying out activities in eco-design for AM. To demonstrate the feasibility of the proposed method, three use cases are performed. Based on these use cases, it is concluded that with the use of the proposed method, AM designers will be able to select and develop optimal design solutions based on the energy performance of AM in the middle design stage.
Distributed Optimization of Constraint-Coupled Systems via Approximations of the Dual Function
(2024)
This thesis deals with the distributed optimization of constraint-coupled systems. This problem class is often encountered in systems consisting of multiple individual subsystems, which are coupled through shared limited resources. The goal is to optimize each subsystem in a distributed manner while still ensuring that system-wide constraints are satisfied. By introducing dual variables for the system-wide constraints the system-wide problem can be decomposed into individual subproblems. These resulting subproblems can then be coordinated by iteratively adapting the dual variables. This thesis presents two new algorithms that exploit the properties of the dual optimization problem. Both algorithms compute a quadratic surrogate function of the dual function in each iteration, which is optimized to adapt the dual variables. The Quadratically Approximated Dual Ascent (QADA) algorithm computes the surrogate function by solving a regression problem, while the Quasi-Newton Dual Ascent (QNDA) algorithm updates the surrogate function iteratively via a quasi-Newton scheme. Both algorithms employ cutting planes to take the nonsmoothness of the dual function into account. The proposed algorithms are compared to algorithms from the literature on a large number of different benchmark problems, showing superior performance in most cases. In addition to general convex and mixed-integer optimization problems, dual decomposition-based distributed optimization is applied to distributed model predictive control and distributed K-means clustering problems.
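The coordination scheme can be illustrated with a plain dual subgradient baseline, the kind of method the quadratic-surrogate algorithms QADA and QNDA are designed to improve upon. The toy problem below (quadratic subsystems sharing one resource budget) is an illustrative choice, not a benchmark from this thesis.

```python
def local_step(target, lam, upper):
    # One subsystem's subproblem: min (x - target)^2 + lam * x over
    # [0, upper]; lam prices the shared resource, and the minimizer
    # is available in closed form here.
    return min(max(target - lam / 2.0, 0.0), upper)

def dual_subgradient(targets, upper, budget, step=0.2, n_iter=500):
    # Coordinate the subsystems by iteratively adapting the dual
    # variable of the coupling constraint sum(x_i) <= budget:
    # projected subgradient ascent on the concave dual function.
    lam = 0.0
    for _ in range(n_iter):
        x = [local_step(t, lam, upper) for t in targets]
        lam = max(0.0, lam + step * (sum(x) - budget))
    return x, lam

x, lam = dual_subgradient(targets=[4.0, 3.0], upper=10.0, budget=5.0)
# The price settles at lam = 2 and the allocation at x = [3, 2],
# where the coupling constraint holds with equality.
```

Each iteration solves the subproblems independently given the current price, which is exactly the decomposition structure that makes the approach distributed; the surrogate-based algorithms replace the fixed-step price update with a model of the dual function.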
This work introduces a promising concept for the preparation of new nano-sized receptors. Mixed monolayer protected gold nanoparticles (AuNPs) for low molecular weight compounds were prepared featuring functional groups on their surfaces. It has been shown that these AuNPs can engage in interactions with peptides in aqueous media. Quantitative binding information was obtained from DOSY-NMR titrations indicating that nanoparticles containing a combination of three orthogonal functional groups are more efficient in binding to dipeptides than mono or difunctionalised analogues. The strategy is highly modular and easily allows adapting the receptor selectivity to a
given substrate by varying the type, number, and ratio of binding sites on the nanoparticle
surface.
The safety of embedded systems is becoming more and more important nowadays. Fault Tree Analysis (FTA) is a widely used technique for analyzing the safety of embedded systems. A standardized tree-like structure called a Fault Tree (FT) models the failures of the systems. The Component Fault Tree (CFT) provides an advanced modeling concept for adapting the traditional FTs to the hierarchical architecture model in system design. Minimal Cut Set (MCS) analysis is a method that works for qualitative analysis based on the FTs. Each MCS represents a minimal combination of component failures of a system called basic events, which may together cause the top-level system failure. The ordinary representations of MCSs consist of plain text and data tables with little additional supporting visual and interactive information. Importance analysis based on FTs or CFTs estimates the contribution of each potential basic event to a top-level system failure. The resulting importance values of basic events are typically represented in summary views, e.g., data tables and histograms. There is little visual integration between these forms and the FT (or CFT) structure. The safety of a system can be improved using an iterative process, called the safety improvement process, based on FTs taking relevant constraints into account, e.g., cost. Typically, relevant data regarding the safety improvement process are presented across multiple views with few interactive associations. In short, the ordinary representation concepts cannot effectively facilitate these analyses.
We propose a set of visualization approaches to address the issues mentioned above and thereby facilitate these analyses in terms of their representations.
Contribution:
1. To support the MCS analysis, we propose a matrix-based visualization that allows detailed data of the MCSs of interest to be viewed while maintaining a satisfactory overview of a large number of MCSs for effective navigation and pattern analysis. Engineers can also intuitively analyze the influence of MCSs of a CFT.
2. To facilitate the importance analysis based on the CFT, we propose a hybrid visualization approach that combines icicle-layout-style architectural views with the CFT structure. This approach facilitates identifying vulnerable components while taking the hierarchies of the system architecture into account, and investigating the logical failure propagation of the important basic events.
3. We propose a visual safety improvement process that integrates an enhanced decision tree with a scatter plot. This approach allows one to visually investigate the detailed data related to individual steps of the process while maintaining an overview of the process, and facilitates constructing and analyzing solutions for improving the safety of a system.
Using our visualization approaches, the MCS analysis, the importance analysis, and the safety improvement process based on the CFT can be facilitated.
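For the minimal cut sets themselves, the underlying computation (independent of any visualization) can be sketched as a bottom-up expansion over the fault tree with a minimization step. The tuple encoding of the tree below is an illustrative choice, not the CFT data model of the thesis.

```python
from itertools import product

def minimal_cut_sets(node):
    # node is ('basic', name), ('and', children) or ('or', children).
    kind = node[0]
    if kind == "basic":
        return [frozenset([node[1]])]
    child = [minimal_cut_sets(c) for c in node[1]]
    if kind == "or":
        sets = [s for cs in child for s in cs]   # union of alternatives
    else:
        # 'and': one cut set from each child must occur together
        sets = [frozenset().union(*combo) for combo in product(*child)]
    sets = list(dict.fromkeys(sets))             # drop duplicates
    # a cut set is minimal if no proper subset is also a cut set
    return [s for s in sets if not any(t < s for t in sets)]

# Top event: (A or B) and (A or C). The minimal cut sets are {A}
# and {B, C}; the non-minimal combinations {A, B} and {A, C} are
# filtered out by the minimization step.
tree = ("and", [("or", [("basic", "A"), ("basic", "B")]),
                ("or", [("basic", "A"), ("basic", "C")])])
```

Each resulting frozenset is one minimal combination of basic events causing the top-level failure, i.e. one row of the matrix-based MCS overview described above.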
The noise issue in manufacturing systems is widely discussed from legal and health perspectives. Based on the existing laws and guidelines, various investigation methods are implemented in industry. The sound pressure level can be measured and reduced using established approaches. However, a straightforward and low-cost approach to studying noise using existing digital factory models has been missing.
This thesis attempts to develop a novel concept for sound pressure level investigation in a virtual environment. With this, the factory planners are able to investigate the noise issue during factory design and layout planning phase.
Two computer-aided tools are used in this approach: acoustic simulation and virtual reality (VR). The former enables the planner to simulate the sound pressure level for a given factory layout and the sound features of its facilities. The latter provides a visualization environment in which to view and explore the simulation results. The combination of these two powerful tools gives planners a new way to analyze the noise in a factory.
To validate the simulations, acoustic measurements were carried out in a real factory; sound pressure level and sound intensity were determined respectively. Furthermore, a software tool was implemented using the introduced concept and approach. With this software, the simulation results are represented in a Cave Automatic Virtual Environment (CAVE).
This thesis describes the development of the approach, the measurement of sound features, the design of the visualization framework, and the implementation of the VR software. Based on this know-how, industrial users are able to design their own methods and software for noise investigation and analysis.
The broad engineering applications of polymers and composites have become the
state of the art due to their numerous advantages over metals and alloys, such as
lightweight, easy processing and manufacturing, as well as acceptable mechanical
properties. However, a general deficiency of thermoplastics is their relatively poor
creep resistance, impairing service durability and safety, which is a significant barrier
to further their potential applications. In recent years, polymer nanocomposites have
attracted increasing attention as a novel field in materials science. There are still many
open scientific questions about how these materials attain optimal property
combinations. The major task of the current work is to study the improved creep
resistance of thermoplastics filled with various nanoparticles and multi-walled carbon
nanotubes.
A systematic study of three different nanocomposite systems by means of
experimental observation and modeling and prediction was carried out. In the first
part, a nanoparticle/PA system was prepared to undergo creep tests under different
stress levels (20, 30, 40 MPa) at various temperatures (23, 50, 80 °C). The aim was
to understand the effect of different nanoparticles on creep performance. 1 vol. % of
300 nm and 21 nm TiO2 nanoparticles and nanoclay was considered. Surface
modified 21 nm TiO2 particles were also investigated. Static tensile tests were
conducted at those temperatures accordingly. It was found that creep resistance was
significantly enhanced to different degrees by the nanoparticles, without sacrificing
static tensile properties. Creep was characterized by isochronous stress-strain curves,
creep rate, and creep compliance under different temperatures and stress levels.
Orientational hardening, as well as thermally and stress-activated processes, were
briefly introduced to further the understanding of the creep mechanisms of these
nanocomposites. The second material system was PP filled with 1 vol. % of 300 nm and 21 nm TiO2
nanoparticles, which was used to obtain more information about the effect of particle
size on creep behavior based on another matrix material with a much lower Tg. In
particular, it was found that small nanoparticles could significantly improve creep resistance.
Additionally, creep lifetime under high stress levels was noticeably extended by
smaller nanoparticles. The improvement in creep resistance was attributed to a very
dense network formed by the small particles that effectively restricted the mobility of
polymer chains. Changes in the spherulite morphology and crystallinity in specimens
before and after creep tests confirmed this explanation.
In the third material system, the objective was to explore the creep behavior of PP
reinforced with multi-walled carbon nanotubes. Short and long aspect ratio nanotubes
with 1 vol. % were used. It was found that nanotubes markedly improved the creep
resistance of the matrix, with reduced creep deformation and rate. In addition, the
creep lifetime of the composites was dramatically extended by 1,000 % at elevated
temperatures. This enhancement was attributed to efficient load transfer between
carbon nanotubes and the surrounding polymer chains.
Finally, a modeling analysis and prediction of long-term creep behavior provided a
comprehensive understanding of creep in the materials studied here. Both the
Burgers model and the Findley power law were applied and simulated the
experimental data satisfactorily. The parameter analysis based on the Burgers model
provided an explanation of structure-property relationships. Due to their intrinsic
differences, the power law was more capable of predicting long-term behavior than
the Burgers model. The time-temperature-stress superposition principle was adopted
to predict long-term creep performance from the short-term experimental data,
making it possible to forecast the future performance of materials.
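The Findley power-law fit and extrapolation described above can be sketched numerically. The sketch below uses invented material constants (eps0, A, n are illustrative, not values from the thesis): a synthetic short-term creep curve is fitted by linear regression in log-log coordinates and then extrapolated beyond the measured window.

```python
import numpy as np

# Findley power law: eps(t) = eps0 + A * t**n  (illustrative parameters).
def findley(t, eps0, A, n):
    return eps0 + A * t ** n

t = np.linspace(1.0, 1000.0, 200)        # "short-term" times, in hours
eps0, A, n = 0.010, 0.002, 0.25          # invented material constants
strain = findley(t, eps0, A, n)          # synthetic creep curve

# With the instantaneous strain eps0 assumed known, A and n follow from a
# linear regression in log-log coordinates:
#   log(eps - eps0) = log A + n * log t
n_fit, logA_fit = np.polyfit(np.log(t), np.log(strain - eps0), 1)
A_fit = np.exp(logA_fit)

# Extrapolate well beyond the measured window (the point of the power law).
strain_10k = findley(1.0e4, eps0, A_fit, n_fit)
```

On noise-free synthetic data the regression recovers the generating parameters exactly; with real creep data the same least-squares step yields the best-fit A and n.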
Estimation and Portfolio Optimization with Expert Opinions in Discrete-time Financial Markets
(2021)
In this thesis, we mainly discuss the problems of parameter estimation and
portfolio optimization with partial information in discrete time. In the portfolio optimization problem, we specifically aim at maximizing the utility of
terminal wealth, focusing on the logarithmic and power utility functions. We consider expert opinions as an additional observation besides stock returns, in order to improve the estimation of drift and volatility parameters at different times and for the purpose of portfolio optimization.
In the first part, we assume that the drift term has a fixed distribution and
that the volatility term is constant. We use the Kalman filter to combine the two
types of observations. Moreover, we discuss how to transform this problem
into a non-linear problem with Gaussian noise when the expert opinions are uniformly distributed. A generalized Kalman filter is used to estimate the parameters in this setting.
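The idea of fusing the two observation streams can be illustrated with a minimal scalar Kalman filter. This is not the model of the thesis (which uses a fixed drift distribution and, in the generalized case, uniformly distributed expert opinions); it merely treats both asset returns and expert opinions as noisy measurements of a constant drift, with invented noise levels:

```python
import numpy as np

def kalman_drift_filter(returns, experts, mu0, var0, sigma_r2, sigma_e2):
    """Estimate a constant hidden drift mu from two observation streams:
    noisy returns (variance sigma_r2) and expert opinions (variance
    sigma_e2). Illustrative static-state Kalman filter, not the thesis model."""
    mu, P = mu0, var0
    for r, e in zip(returns, experts):
        for y, R in ((r, sigma_r2), (e, sigma_e2)):
            K = P / (P + R)            # Kalman gain
            mu = mu + K * (y - mu)     # posterior mean update
            P = (1.0 - K) * P          # posterior variance update
    return mu, P

rng = np.random.default_rng(0)
true_mu = 0.05
rets = true_mu + 0.20 * rng.standard_normal(250)   # noisy daily returns
exps = true_mu + 0.05 * rng.standard_normal(250)   # more precise opinions
mu_hat, P_hat = kalman_drift_filter(rets, exps, 0.0, 1.0, 0.20**2, 0.05**2)
```

Because the expert opinions carry a much smaller observation variance, they dominate the posterior; dropping them (sigma_e2 large) degrades the drift estimate markedly, which is exactly the motivation for including them.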
In the second part, we assume that the drift and volatility of asset returns are both driven by a Markov chain. We mainly use the change-of-measure technique to estimate the various quantities required by the EM algorithm. In addition,
we focus on different ways to combine the two observations, expert opinions and asset returns. First, we use a linear combination method and discuss how to use a logistic regression model to quantify expert
opinions. Second, we assume that expert opinions follow a mixed Dirichlet distribution. Under this assumption, we use another probability measure to
estimate the unnormalized filters needed for the EM algorithm.
In the third part, we assume that expert opinions follow a mixed Dirichlet distribution and focus on how to obtain approximately optimal portfolio
strategies in different observation settings. We derive the approximate strategies from the dynamic programming equations in the different settings and analyze their dependence on the discretization step. Finally, we compare the different
observation settings in a simulation study.
Elastomers and their various composites and blends are frequently used as engineering parts subjected to rolling friction. This fact alone substantiates the importance of a study addressing the rolling tribological properties of elastomers and their compounds. It is worth noting that until now research and development work on the friction and wear of rubber materials has mostly focused on abrasion and, to a lesser extent, on sliding types of loading. As tribological knowledge acquired with various non-rubber counterparts can hardly be adapted to rubbers, there is a substantial need to study the latter. Therefore, the present work aimed at investigating the rolling friction and wear properties of different kinds of elastomers against steel under unlubricated conditions. The rolling friction and wear properties of various rubber materials were studied in home-made rolling ball-on-plate test configurations under dry conditions. The materials inspected were ethylene/propylene/diene rubber (EPDM) without and with carbon black (EPDM_CB), hydrogenated acrylonitrile/butadiene rubber (HNBR) without and with carbon black/silica/multiwall carbon nanotubes (HNBR_CB/silica/MWCNT), a rubber-rubber hybrid (HNBR and fluororubber, HNBR-FKM) and a rubber-thermoplastic blend (HNBR and cyclic butylene terephthalate oligomers, HNBR-CBT). The dominant wear mechanisms were investigated by scanning electron microscopy (SEM) and analyzed as a function of composition and testing conditions. Differential scanning calorimetry (DSC), dynamic-mechanical thermal analysis (DMTA), atomic force microscopy (AFM), and transmission electron microscopy (TEM), along with other auxiliary measurements, were adopted to determine the phase structure and network-related properties of the rubber systems. The changes in friction and wear as a function of the type and amount of the additives were explored.
The friction process of selected rubbers was also modelled by making use of the finite element method (FEM). The results show that the incorporation of fillers generally enhanced the wear resistance, hardness, stiffness (storage modulus), and apparent crosslinking of the related rubbers (EPDM-, HNBR- and HNBR-FKM-based ones), but did not affect their glass transition temperature. Filling of rubbers usually reduced the coefficient of friction (COF). However, the tribological parameters also depended strongly on the test set-up and test duration. High wear loss was noticed for systems showing the occurrence of a Schallamach-type wavy pattern. The blends HNBR-FKM and HNBR-CBT were two-phase structured. In HNBR-FKM, the FKM was dispersed in the form of large micro-scaled domains in the HNBR matrix. This phase structure did not change upon incorporation of MWCNT. It was established that the MWCNT were preferentially embedded in the HNBR matrix. Blending HNBR with FKM reduced the stiffness and the degree of apparent crosslinking of the blend, which was traced to the dilution of the cure recipe by FKM. Contrary to expectation, the coefficient of friction increased with increasing FKM content. On the other hand, the specific wear rate (Ws) changed only marginally with increasing FKM content. In the HNBR-CBT hybrids, HNBR formed the matrix, irrespective of the rather high CBT content. Both the partly and the mostly polymerized CBT ((p)CBT and pCBT, respectively) in the hybrids acted as active fillers and thus increased the stiffness and hardness. The COF and Ws decreased with increasing CBT content. The FEM results with respect to the COF, obtained on systems possessing very different structures and thus properties (EPDM_30CB, HNBR-FKM 100-100 and HNBR-(p)CBT 100-100, respectively), were in accordance with the experimental results. This verifies that FEM can properly capture the complex viscoelastic behaviour of rubber materials under dry rolling conditions.
Indoor positioning systems (IPS) have become increasingly popular in recent years in industrial, scientific and medical areas. The rapidly growing demand for accurate position information attracts much attention and effort in developing various kinds of positioning systems that are characterized by parameters like accuracy, robustness, latency, cost, etc. These systems have been successfully used in many applications such as automation in manufacturing, patient tracking in hospitals, and action detection for human-machine interaction.
The different performance requirements of various applications have led to a great diversity of technologies, which can be categorized into two groups: inertial positioning (involving momentum sensors embedded in the device to be located) and external sensing (geometry estimation based on signal measurements). In positioning systems based on external sensing, the input signal used for locating can come from many sources, such as visual or infrared signals in optical methods, sound or ultrasound in acoustic methods, and radio frequency based methods. This dissertation gives a recapitulative survey of a number of existing popular solutions for indoor positioning systems. The basic principles of the individual technologies are demonstrated and discussed. By comparing performance figures like accuracy, robustness, and cost, a comprehensive review of the properties of each technology is presented, which yields guidance for designing location sensing systems for indoor applications. The thesis then focuses on presenting the development of a high-precision IPS prototype system based on RF signals, from the concept to the implementation and evaluation. The development phases of this work include the positioning scenario, the involved technologies, hardware development, algorithm development, firmware generation, and prototype evaluation. The developed prototype is a narrow-band RF system suitable for flexible frequency selection in the UHF (300 MHz-3 GHz) and SHF (3 GHz-30 GHz) bands, enabling this technology to meet broad service preferences. Fundamentally, the proposed system is a hyperbolic position fix system, which estimates a location by solving non-linear equations derived from time difference of arrival (TDoA) measurements. As the positioning accuracy largely depends on the temporal resolution of the signal acquisition, a dedicated RF front-end system is developed to achieve a time resolution in the range of multiple picoseconds down to less than one picosecond. On the algorithmic side, two processing units, a TDoA estimator and a hyperbolic equation solver, form the digital signal processing system. In order to implement a real-time positioning system, the processing system is implemented on an FPGA platform. The corresponding firmware is generated from the algorithms modeled in MATLAB/Simulink, using the high-level synthesis (HLS) tool HDL Coder. The prototype system is evaluated and an accuracy of better than 1 cm is achieved. Better performance is potentially feasible by manipulating some of the controlling conditions, such as the ADC sampling rate, ADC resolution, the interpolation process, a higher frequency, or a more stable antenna. Although the proposed system is initially dedicated to indoor applications, it could also be a competitive candidate for outdoor positioning services.
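The hyperbolic position fix amounts to solving non-linear TDoA equations by least squares. A minimal Gauss-Newton sketch (not the FPGA implementation of the thesis; the 2-D setting, anchor positions and noise-free measurements are invented for illustration):

```python
import numpy as np

C = 299792458.0  # speed of light in m/s

def tdoa_residuals(p, anchors, tdoa):
    """Residuals of the hyperbolic TDoA equations relative to anchor 0."""
    d = np.linalg.norm(anchors - p, axis=1)
    return (d[1:] - d[0]) - C * tdoa

def solve_tdoa(anchors, tdoa, p0, iters=50):
    """Gauss-Newton solve of the non-linear TDoA equations (sketch)."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        eps = 1e-6
        r0 = tdoa_residuals(p, anchors, tdoa)
        # numerical Jacobian of the residuals w.r.t. the position
        J = np.column_stack([
            (tdoa_residuals(p + eps * np.eye(len(p))[k], anchors, tdoa) - r0) / eps
            for k in range(len(p))
        ])
        step, *_ = np.linalg.lstsq(J, -r0, rcond=None)
        p = p + step
    return p

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - target, axis=1)
tdoa = (d[1:] - d[0]) / C            # ideal, noise-free TDoA measurements
p_hat = solve_tdoa(anchors, tdoa, p0=[5.0, 5.0])
```

With four anchors and three TDoA measurements the 2-D position is over-determined; in the noisy case the same least-squares machinery yields the maximum-likelihood estimate for Gaussian timing errors.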
The objective of this thesis is to develop systematic event-triggered control designs for specified event generators, an important alternative to traditional periodic sampling control. The sporadic sampling inherent in event-triggered control is determined by the event-triggering conditions. This feature calls for new control theory, in analogy to the traditional sampled-data theory in computer control.
Developing controllers coupled with the applied event-triggering condition so as to maximize control performance is the essence of event-triggered control design. In the design, the stability of the control system must be ensured with first priority. Various control aims should be clearly incorporated in the design procedures. With regard to applications in embedded control systems, efficient implementation requires a low complexity of the embedded software architecture. This thesis aims at offering such a design to further complete the theory of event-triggered control.
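A common relative event-triggering condition samples the state only when the sampling error exceeds a fraction of the current state magnitude. The following sketch simulates this for a scalar plant; the plant, gains, and threshold are invented for illustration and are not taken from the thesis:

```python
# Event-triggered state feedback for a scalar unstable plant
#   x' = a*x + b*u,  u = -k * x(t_i) held constant between events.
a, b, k = 1.0, 1.0, 3.0     # illustrative plant and controller gains
dt, T = 1e-3, 5.0           # Euler step and simulation horizon
sigma = 0.1                 # relative event-triggering threshold

x, x_event = 1.0, 1.0       # current state and last sampled state
events = 0
for _ in range(int(T / dt)):
    # event condition: sampling error exceeds sigma * |x|
    if abs(x - x_event) > sigma * abs(x):
        x_event = x         # transmit a new state sample
        events += 1
    u = -k * x_event                # control based on last event sample
    x = x + dt * (a * x + b * u)    # forward Euler integration
```

The state decays to (near) zero while the controller updates only at the roughly one hundred event instants rather than at all five thousand integration steps, which is the resource saving event-triggered control aims for.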
Agricultural intensification has increased substantially in the last century to meet the globally growing demand for food, fodder, and bioenergy, such that agricultural cropland became the largest terrestrial biome globally. Pesticides became a central tool of this intensification strategy, and their application rose drastically over the last sixty years to secure or increase crop yields. However, pesticides are by design biologically active and known to contaminate non-target ecosystems, thereby adversely affecting their function or structure. Even though ecotoxicological knowledge about probable fate and effects has grown, little remains known about the spatiotemporal occurrence, potential effects, and risk drivers of pesticides on larger, i.e. macro, scales.
Consequently, the thesis gathered primarily pesticide exposure data via meta-analysis and from public monitoring databases to describe (i) detailed risks in aquatic ecosystems, (ii) the underlying risk drivers, (iii) associated spatiotemporal trends, (iv) the effect of land use and land-protection and (v) the protectiveness of regulatory frameworks. First, a meta-analysis of insecticides occurring in US surface waters (n = 5,817, 259 studies) revealed large-scale risks for aquatic ecosystems based on the exceedance of regulatory threshold levels (RTL) and identified high-risk substances, particularly pyrethroids, with increasing application trends (publication I). Following this, spatiotemporal factors driving insecticide risks were identified via model-building demonstrating that toxicity-weighted pesticide use was the primary driver in surface waters with subsequent model application generating a spatially comprehensive risk assessment for the United States (publication II). The toxicity-weighted pesticide use was subsequently expanded to an ongoing project covering additional species groups and all pesticides used in the US from 1992 – 2016, highlighting a drastic shift of toxic pressures from vertebrates to aquatic invertebrates. Large-scale monitoring data from European surface waters (n > 8.3 million) of 352 organic chemicals identified pesticides as the main class of organic contaminants causing risks in aquatic ecosystems. Additional analyses established links between agricultural intensity and resulting environmental risks for aquatic invertebrates and plants on this macro scale (publication III). Finally, high-resolution monitoring data from Saxony, Germany, provided, for the first time, detailed insights into the occurrence and resulting risks of organic contaminants (primarily pesticides) in protected surface waters of nature conservation areas (publication IV).
In summary, the thesis gathered and used large-scale datasets to analyze the impact of agricultural intensification – and later anthropogenic land use – on ecosystems to reduce knowledge deficits in ecotoxicology on macro scales. Insecticides were shown to be important and spatially extensive agents of impairments to surface water quality and being directly linked to their use in respective landscapes. Changes in the pesticide use composition over time shifted environmental risks from vertebrates to other central species groups (e.g. aquatic invertebrates), highlighting a new challenge to the integrity of aquatic environments. The thesis provided novel insights into contaminants' individual risk characteristics, their interaction with various spatiotemporal drivers and their relevance on various macro scales. Overall, a discrepancy remains evident between estimated environmental impacts of pesticides derived during regulatory approval processes contrasted by a posteriori field measurements detailing larger than assumed adverse exposures and effects. This discrepancy led to pesticides being the most impactful chemical stressor for aquatic ecosystems compared to other organic contaminants on a continental scale; a threat that even increased for some species groups. The extensive use of pesticides has reached levels where even strictly protected surface waters in Germany are regularly exposed adversely, hence threatening conservation areas’ function as ecological refugia. Taken together, the thesis provides new macro-scale evidence regarding the contribution of pesticides (and associated drivers) to large-scale changes in biological systems evidenced over the last decades, underlining their likely contribution to the ongoing freshwater biodiversity crisis globally. Particularly agricultural systems will require substantial changes going forward to protect or reestablish the integrity of aquatic ecosystems and their provision of vital ecological services.
In this thesis, a new concept to prove Mosco convergence of gradient-type Dirichlet forms within the \(L^2\)-framework of K.~Kuwae and T.~Shioya for varying reference measures is developed.
The goal is to impose as few additional conditions as possible on the sequence of reference measures \({(\mu_N)}_{N\in \mathbb N}\), apart from weak convergence of measures.
Our approach combines the method of Finite Elements from numerical analysis with the topic of Mosco convergence.
We tackle the problem first on a finite-dimensional substructure of the \(L^2\)-framework, which is induced by finitely many basis functions on the state space \(\mathbb R^d\).
These are shifted and rescaled versions of the archetype tent function \(\chi^{(d)}\).
For \(d=1\) the archetype tent function is given by
\[\chi^{(1)}(x):=\big((-x+1)\land(x+1)\big)\lor 0,\quad x\in\mathbb R.\]
For \(d\geq 2\) we define a natural generalization of \(\chi^{(1)}\) as
\[\chi^{(d)}(x):=\Big(\min_{i,j\in\{1,\dots,d\}}\big(\big\{1+x_i-x_j,1+x_i,1-x_i\big\}\big)\Big)_+,\quad x\in\mathbb R^d.\]
Our strategy to obtain Mosco convergence of
\(\mathcal E^N(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu_N\) towards \(\mathcal E(u,v)=\int_{\mathbb R^d}\langle\nabla u,\nabla v\rangle_\text{euc}d\mu\) for \(N\to\infty\)
involves as a preliminary step to restrict those bilinear forms to arguments \(u,v\) from the vector space spanned by the finite family \(\{\chi^{(d)}(\frac{\,\cdot\,}{r}-\alpha)\) \(|\alpha\in Z\}\) for
a finite index set \(Z\subset\mathbb Z^d\) and a scaling parameter \(r\in(0,\infty)\).
In a diagonal procedure, we consider a zero-sequence of scaling parameters and a sequence of index sets exhausting \(\mathbb Z^d\).
The original problem of Mosco convergence, \(\mathcal E^N\) towards \(\mathcal E\) w.r.t.~arguments \(u,v\) from the respective minimal closed form domains extending the pre-domain \(C_b^1(\mathbb R^d)\), can be solved
by such a diagonal procedure if we ask for some additional conditions on the Radon-Nikodym derivatives \(\rho_N(x)=\frac{d\mu_N(x)}{d x}\), \(N\in\mathbb N\). The essential requirement reads
\[\frac{1}{(2r)^d}\int_{[-r,r]^d}|\rho_N(x)- \rho_N(x+y)|d y \quad \overset{r\to 0}{\longrightarrow} \quad 0 \quad \text{in } L^1(d x),\,
\text{uniformly in } N\in\mathbb N.\]
As an intermediate step towards a setting with an infinite-dimensional state space, we let $E$ be a Suslin space and analyse the Mosco convergence of
\(\mathcal E^N(u,v)=\int_E\int_{\mathbb R^d}\langle\nabla_x u(z,x),\nabla_x v(z,x)\rangle_\text{euc}d\mu_N(z,x)\) with reference measure \(\mu_N\) on \(E\times\mathbb R^d\) for \(N\in\mathbb N\).
The form \(\mathcal E^N\) can be seen as a superposition of gradient-type forms on \(\mathbb R^d\).
Subsequently, we derive an abstract result on Mosco convergence for classical gradient-type Dirichlet forms
\(\mathcal E^N(u,v)=\int_E\langle \nabla u,\nabla v\rangle_Hd\mu_N\) with reference measure \(\mu_N\) on a Suslin space $E$ and a tangential Hilbert space \(H\subseteq E\).
The preceding analysis of superposed gradient-type forms can be used on the component forms \(\mathcal E^{N}_k\), which provide the decomposition
\(\mathcal E^{N}=\sum_k\mathcal E^{N}_k\). The index of the component \(k\) runs over a suitable orthonormal basis of admissible elements in \(H\).
For the asymptotic form \(\mathcal E\) and its component forms \(\mathcal E^k\), we have to assume \(D(\mathcal E)=\bigcap_kD(\mathcal E^k)\) regarding their domains, which is equivalent to the Markov uniqueness of \(\mathcal E\).
The abstract results are tested on an example from statistical mechanics.
Under a scaling limit, tightness of the family of laws for a microscopic dynamical stochastic interface model over \((0,1)^d\) is shown and its asymptotic Dirichlet form identified.
The considered model is based on a sequence of weakly converging Gaussian measures \({(\mu_N)}_{N\in\mathbb N}\) on \(L^2((0,1)^d)\), which are
perturbed by a class of physically relevant non-log-concave densities.
Production, purification and analysis of novel peptide antibiotics from terrestrial cyanobacteria
(2024)
Cyanobacteria are a known source for bioactive compounds, of which several also show antibiotic activity. In regard to the growing number of multi-resistant pathogens, the search for novel antibiotic substances is of great importance and unexploited sources should be explored. So, this thesis initially dealt with the identification of productive strains, especially within the group of the terrestrial cyanobacteria, which are less well studied than marine and freshwater strains. Amongst these, Chroococcidiopsis cubana, an extremely desiccation and radiation tolerant, unicellular cyanobacterium was found to produce an extracellular antimicrobial metabolite effective against the Gram-positive indicator bacterium Micrococcus luteus as well as the pathogenic yeast Candida auris. However, as the sole identification of a productive cyanobacterium is not sufficient for further analysis and a future production scale-up, the second part of this thesis targeted the identification of compound synthesis prerequisites. As a result, a limitation of nitrogen was shown to be the production trigger, a finding that was used for the establishment of a continuous production system. The increased compound formation was then used for purification and analysis steps. As a second approach, in silico identified bacteriocin gene clusters from C. cubana were cloned and heterologously expressed in Escherichia coli. By this, the bacteriocin B135CC was identified as a strong bacteriolytic agent, active predominantly against the Gram-positive strains Staphylococcus aureus and Mycobacterium phlei. The peptide showed no cytotoxic effects against mouse neuroblastoma (N2a-) cells and a high temperature tolerance up to 60 °C. In order to facilitate the whole project, two standard protocols, specifically adapted for the work with cyanobacteria, were established. 
First, a method for a quick and easy in vivo vitality estimation of phototrophic cells and second, an approach for a high-throughput determination of nitrate concentrations in microalgal cultures. Both methods greatly helped to advance the main objectives of this work, the first one by simplifying the development of suitable cryopreservation protocols for individual cyanobacteria strains and the second one by accelerating the determination of the optimal nitrate concentration for the production of the antimicrobial compound from C. cubana. In the course of this cultivation optimization, the ability of cyanobacteria to utilize organic carbon sources for accelerated cell growth was examined in greater detail. It could be shown that C. cubana reaches significantly higher growth rates when mixotrophically cultivated with fructose or glucose. Interestingly, this effect was even further enhanced when light intensity was decreased. Under these low-light conditions, phototrophically cultivated C. cubana cells showed a clearly decreased cell growth. This effect might be extremely useful for a quick and economic preparation of precultures.
Modern digital imaging technologies, such as digital microscopy or micro-computed tomography, deliver such large amounts of 2D and 3D-image data that manual processing becomes infeasible. This leads to a need for robust, flexible and automatic image analysis tools in areas such as histology or materials science, where microstructures are being investigated (e.g. cells, fiber systems). General-purpose image processing methods can be used to analyze such microstructures. These methods usually rely on segmentation, i.e., a separation of areas of interest in digital images. As image segmentation algorithms rarely adapt well to changes in the imaging system or to different analysis problems, there is a demand for solutions that can easily be modified to analyze different microstructures, and that are more accurate than existing ones. To address these challenges, this thesis contributes a novel statistical model for objects in images and novel algorithms for the image-based analysis of microstructures. The first contribution is a novel statistical model for the locations of objects (e.g. tumor cells) in images. This model is fully trainable and can therefore be easily adapted to many different image analysis tasks, which is demonstrated by examples from histology and materials science. Using algorithms for fitting this statistical model to images results in a method for locating multiple objects in images that is more accurate and more robust to noise and background clutter than standard methods. On simulated data at high noise levels (peak signal-to-noise ratio below 10 dB), this method achieves detection rates up to 10% above those of a watershed-based alternative algorithm. While objects like tumor cells can be described well by their coordinates in the plane, the analysis of fiber systems in composite materials, for instance, requires a fully three dimensional treatment. 
Therefore, the second contribution of this thesis is a novel algorithm to determine the local fiber orientation in micro-tomographic reconstructions of fiber-reinforced polymers and other fibrous materials. Using simulated data, it will be demonstrated that the local orientations obtained from this novel method are more robust to noise and fiber overlap than those computed using an established alternative gradient-based algorithm, both in 2D and 3D. The property of robustness to noise of the proposed algorithm can be explained by the fact that a low-pass filter is used to detect local orientations. But even in the absence of noise, depending on fiber curvature and density, the average local 3D-orientation estimate can be about 9° more accurate compared to that alternative gradient-based method. Implementations of that novel orientation estimation method require repeated image filtering using anisotropic Gaussian convolution filters. These filter operations, which other authors have used for adaptive image smoothing, are computationally expensive when using standard implementations. Therefore, the third contribution of this thesis is a novel optimal non-orthogonal separation of the anisotropic Gaussian convolution kernel. This result generalizes a previous one reported elsewhere, and allows for efficient implementations of the corresponding convolution operation in any dimension. In 2D and 3D, these implementations achieve an average performance gain by factors of 3.8 and 3.5, respectively, compared to a fast Fourier transform-based implementation. The contributions made by this thesis represent improvements over state-of-the-art methods, especially in the 2D-analysis of cells in histological resections, and in the 2D and 3D-analysis of fibrous materials.
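The computational benefit of separable Gaussian filtering can be illustrated with the standard axis-aligned case, where a d-dimensional anisotropic Gaussian factors into d one-dimensional convolutions; the thesis generalizes this idea to a non-orthogonal separation covering arbitrarily oriented kernels. A minimal 2-D sketch (kernel sizes and sigmas are illustrative):

```python
import numpy as np

def gauss_kernel1d(sigma):
    """Normalized 1-D Gaussian kernel truncated at 4 sigma."""
    radius = int(4 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def separable_gaussian(img, sigmas):
    """Axis-aligned anisotropic Gaussian smoothing as a sequence of 1-D
    convolutions, one per axis: d cheap passes instead of one expensive
    d-dimensional convolution. (The thesis generalizes this to a
    non-orthogonal separation for arbitrarily oriented kernels.)"""
    out = img.astype(float)
    for axis, s in enumerate(sigmas):
        k = gauss_kernel1d(s)
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out

img = np.zeros((65, 65))
img[32, 32] = 1.0                               # impulse response = kernel
smoothed = separable_gaussian(img, (4.0, 1.0))  # elongated along axis 0
```

For a kernel of radius r per axis, the separable version costs O(d*r) operations per voxel instead of O(r**d) for the direct d-dimensional convolution, which is what makes the repeated anisotropic filtering in orientation estimation affordable.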
The thesis at hand deals with the numerical solution of multiscale problems arising in the modeling of processes in fluid and thermodynamics. Many of these processes, governed by partial differential equations, are relevant in engineering, geoscience, and environmental studies. More precisely, this thesis discusses the efficient numerical computation of effective macroscopic thermal conductivity tensors of high-contrast composite materials. The term "high-contrast" refers to large variations in the conductivities of the constituents of the composite. Additionally, this thesis deals with the numerical solution of Brinkman's equations. This system of equations adequately models viscous flows in (highly) permeable media. It was introduced by Brinkman in 1947 to reduce the deviations between the measurements for flows in such media and the predictions according to Darcy's model.
Most of today’s wireless communication devices operate on unlicensed bands with uncoordinated spectrum access, with the consequence that RF interference and collisions are impairing the overall performance of wireless networks. In the classical design of network protocols, both packets in a collision are considered lost, such that channel access mechanisms attempt to avoid collisions proactively. However, with the current proliferation of wireless applications, e.g., WLANs, car-to-car networks, or the Internet of Things, this conservative approach is increasingly limiting the achievable network performance in practice. Instead of shunning interference, this thesis questions the notion of "harmful" interference and argues that interference can, when generated in a controlled manner, be used to increase the performance and security of wireless systems. Using results from information theory and communications engineering, we identify the causes for reception or loss of packets and apply these insights to design system architectures that benefit from interference. Because the effect of signal propagation and channel fading, receiver design and implementation, and higher layer interactions on reception performance is complex and hard to reproduce by simulations, we design and implement an experimental platform for controlled interference generation to strengthen our theoretical findings with experimental results. Following this philosophy, we introduce and evaluate a system architecture that leverages interference.
First, we identify the conditions for successful reception of concurrent transmissions in wireless networks. We focus on the inherent ability of angular modulation receivers to reject interference when the power difference of the colliding signals is sufficiently large, the so-called capture effect. Because signal power fades over distance, the capture effect enables two or more sender–receiver pairs to transmit concurrently if they are positioned appropriately, in turn boosting network performance. Second, we show how to increase the security of wireless networks with a centralized network access control system (called WiFire) that selectively interferes with packets that violate a local security policy, thus effectively protecting legitimate devices from receiving such packets. WiFire’s working principle is as follows: a small number of specialized infrastructure devices, the guardians, are distributed alongside a network and continuously monitor all packet transmissions in the proximity, demodulating them iteratively. This enables the guardians to access the packet’s content before the packet fully arrives at the receiver. Using this knowledge the guardians classify the packet according to a programmable security policy. If a packet is deemed malicious, e.g., because its header fields indicate an unknown client, one or more guardians emit a limited burst of interference targeting the end of the packet, with the objective to introduce bit errors into it. Established communication standards use frame check sequences to ensure that packets are received correctly; WiFire leverages this built-in behavior to prevent a receiver from processing a harmful packet at all. 
This paradigm of "over-the-air" protection without requiring any prior modification of client devices enables novel security services such as the protection of devices that cannot defend themselves because their performance limitations prohibit the use of complex cryptographic protocols, or of devices that cannot be altered after deployment.
This thesis makes several contributions. We introduce the first software-defined-radio-based experimental platform that is able to generate selective interference with the timing precision needed to evaluate the novel architectures developed in this thesis. It implements a real-time receiver for IEEE 802.15.4, giving it the ability to react to packets in a channel-aware way. Extending this system design and implementation, we introduce a security architecture that enables remote protection of wireless clients: the wireless firewall. We augment our system with a rule checker (similar in design to Netfilter) to enable rule-based selective interference. We analyze the security properties of this architecture using physical-layer modeling and validate our analysis with experiments in diverse environmental settings. Finally, we perform an analysis of concurrent transmissions. We introduce a new model that captures the physical properties correctly and show its validity with experiments, improving the state of the art in the design and analysis of cross-layer protocols for wireless networks.
Dual-Pivot Quicksort and Beyond: Analysis of Multiway Partitioning and Its Practical Potential
(2016)
Multiway Quicksort, i.e., partitioning the input in one step around several pivots, has received much attention since Java 7’s runtime library adopted a new dual-pivot method that far outperforms the old Quicksort implementation. The success of dual-pivot Quicksort is most likely due to more efficient usage of the memory hierarchy, which gives reason to believe that further improvements are possible with multiway Quicksort.
In this dissertation, I conduct a mathematical average-case analysis of multiway Quicksort including the important optimization to choose pivots from a sample of the input. I propose a parametric template algorithm that covers all practically relevant partitioning methods as special cases, and analyze this method in full generality. This allows me to analytically investigate in depth what effect the parameters of the generic Quicksort have on its performance. To model the memory-hierarchy costs, I also analyze the expected number of scanned elements, a measure for the amount of data transferred from memory that is known to also approximate the number of cache misses very well. The analysis unifies previous analyses of particular Quicksort variants under particular cost measures in one generic framework.
A main result is that multiway partitioning can reduce the number of scanned elements significantly, while it does not save many key comparisons; this explains why the earlier studies of multiway Quicksort did not find it promising. A highlight of this dissertation is the extension of the analysis to inputs with equal keys. I give the first analysis of Quicksort with pivot sampling and multiway partitioning on an input model with equal keys.
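The partitioning scheme under discussion can be illustrated with a minimal sketch, assuming a Yaroslavskiy-style dual-pivot split without the pivot sampling analyzed in the dissertation:

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """In-place dual-pivot Quicksort: partition around two pivots p <= q
    into segments < p, between p and q, and > q. Illustrative sketch
    without pivot sampling or insertion-sort cutoffs."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]
    lt, i, gt = lo + 1, lo + 1, hi - 1
    while i <= gt:
        if a[i] < p:                 # belongs to the left segment
            a[i], a[lt] = a[lt], a[i]
            lt += 1
            i += 1
        elif a[i] > q:               # belongs to the right segment
            a[i], a[gt] = a[gt], a[i]
            gt -= 1                  # re-examine the swapped-in element
        else:                        # middle segment: p <= a[i] <= q
            i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]      # move pivots to their final positions
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)
    return a

print(dual_pivot_quicksort([5, 1, 9, 3, 9, 2, 7]))  # [1, 2, 3, 5, 7, 9, 9]
```

Note how each element is compared and moved within a single left-to-right scan; it is this scanning pattern, rather than the comparison count, that the analysis identifies as the source of dual-pivot Quicksort's advantage.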
In the first part of the thesis we develop the theory of standard bases in free modules over (localized) polynomial rings. Assuming that linear equations are solvable over the coefficients of the polynomials, we introduce an algorithm to compute standard bases with respect to arbitrary (module) monomial orderings. Moreover, we pay special attention to principal ideal rings that may contain zero divisors. For these rings we design modified algorithms which are new and much faster than the general ones. These algorithms were motivated by current limitations in the formal verification of microelectronic System-on-Chip designs. We show that our novel approach using computational algebra is able to overcome these limitations in important classes of applications coming from industrial challenges.
The second part is based on research in collaboration with Jason Morton, Bernd Sturmfels and Anne Shiu. We devise a general method to describe and compute a certain class of rank tests motivated by statistics. The class of rank tests may loosely be described as being based on computing the number of linear extensions to given partial orders. In order to apply these tests to actual data we developed two algorithms and used our implementations to apply the methodology to gene expression data created at the Stowers Institute for Medical Research. The dataset is concerned with the development of the vertebra. Our rankings proved valuable to the biologists.
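The combinatorial core of such rank tests, counting the linear extensions of a partial order, can be sketched with a bitmask dynamic program. This is only an exponential-time illustration of the underlying count, not the optimized algorithms applied to the gene expression data:

```python
from functools import lru_cache

def count_linear_extensions(n, precedes):
    """Count linear extensions of a partial order on elements 0..n-1.
    `precedes` is a set of pairs (a, b) meaning a must come before b.
    Exponential in n; a sketch of the combinatorial core only."""
    pred = {b: set() for b in range(n)}
    for a, b in precedes:
        pred[b].add(a)

    @lru_cache(maxsize=None)
    def count(placed):
        # `placed` is a bitmask of the elements already ordered.
        if placed == (1 << n) - 1:
            return 1
        total = 0
        for x in range(n):
            if not placed & (1 << x) and all(placed & (1 << p) for p in pred[x]):
                total += count(placed | (1 << x))
        return total

    return count(0)

# A 2-chain 0 < 1 plus an incomparable element 2: three linear extensions.
print(count_linear_extensions(3, {(0, 1)}))  # 3
```

An empty order on n elements yields n! extensions, while a total order yields exactly one, which makes the count a natural measure of how constrained the observed ranking is.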
Within toxicology, reproductive toxicology is a highly relevant and socially particularly sensitive field. It encompasses all toxicological processes within the reproductive cycle and therefore includes many effects and modes of action. This makes the assessment of reproductive toxicity very challenging despite the established in vivo studies. In addition, the in vivo studies are very demanding both in terms of their conduct and interpretation, and there is scope for decision-making on both aspects. As a result, the interpretation of study results may vary from laboratory to laboratory. For the final classification, the assessment of relevance for humans is decisive. The problem here is that relatively little is known about the species differences between humans and the usual test animals (rat and rabbit). The rabbit in particular has hardly been studied at the molecular level. The aim of the dissertation was to develop approaches for a better assessment of reproductive toxicity, with two different foci. The first aim was to investigate species differences, focusing on the expression of xenobiotic transporters during ontogeny. Xenobiotic transporters of the superfamilies of ATP-binding cassette transporters (ABC) and solute carriers (SLC) are known to transport exogenous substances in addition to their endogenous substrates and therefore play an important role in the absorption, distribution and excretion of xenobiotics. Species differences in kinetics can in turn have a major impact on toxic effects. In the study, the expression of 20 xenobiotic transporters during ontogeny was investigated at the mRNA level in the liver, kidney and placenta of rats and rabbits and compared with that of humans. This revealed major differences in the expression of the transporters between the species. However, further studies on the functionality and activity of the xenobiotic transporters are needed to fully assess the kinetic impact of the observed species differences. Overall, the study provides a valid starting point for further systematic investigations of species differences at the protein level. Furthermore, it provides previously unavailable data on the expression of xenobiotic transporters during ontogeny in rabbits, which is an important step in the molecular biological study of this species.
The second part focused on investigating the predictive power of in silico models for reproductive toxicity in relation to pesticides. Both the commercial and the freely available models did not perform adequately in the evaluation. Three reasons could be identified for this: (1) many pesticides are outside the chemical space of the models, (2) differing definitions and assessments of reproductive toxicity, and (3) problems in detecting similarity between molecules. To solve these problems, an extension of the databases on reproductive toxicity in relation to pesticides, respecting a uniform nomenclature, is needed. Furthermore, endpoint-specific models should be developed which, in addition to the usual structure-based fingerprints, use descriptors for, for example, biological activity.
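The similarity-detection problem named as the third reason can be illustrated with the standard Tanimoto coefficient on binary structural fingerprints; the fingerprints below are invented for illustration, not derived from real compounds:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two binary structural
    fingerprints, given as sets of 'on' bit positions. This is the usual
    similarity measure behind read-across in in silico toxicology."""
    if not fp_a and not fp_b:
        return 1.0
    common = len(fp_a & fp_b)
    return common / (len(fp_a) + len(fp_b) - common)

# Hypothetical fingerprints of a query pesticide and two database compounds.
query = {1, 4, 7, 9, 12}
hit = {1, 4, 7, 12, 20}
miss = {2, 3, 5}
print(tanimoto(query, hit))   # 4 shared bits of 6 total: ~0.67
print(tanimoto(query, miss))  # no shared substructures: 0.0
```

If a query pesticide shares few fingerprint bits with everything in the training data, similarity-based read-across has nothing to anchor on, which is exactly the chemical-space problem described above.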
Overall, the dissertation shows how essential it is to further research the modes of action of reproductive toxicity. This knowledge is necessary to correctly assess in vivo studies and their relevance to humans, as well as to improve the predictive power of in silico models by incorporating this information.
Model uncertainty is a challenge that is inherent in many applications of mathematical models in various areas, for instance in mathematical finance and stochastic control. Optimization procedures in general take place under a particular model. This model, however, might be misspecified due to statistical estimation errors and incomplete information. In that sense, any specified model must be understood as an approximation of the unknown "true" model. Difficulties arise since a strategy which is optimal under the approximating model might perform rather badly in the true model. A natural way to deal with model uncertainty is to consider worst-case optimization.
The optimization problems that we are interested in are utility maximization problems in continuous-time financial markets. It is well known that drift parameters in such markets are notoriously difficult to estimate. To obtain strategies that are robust with respect to a possible misspecification of the drift we consider a worst-case utility maximization problem with ellipsoidal uncertainty sets for the drift parameter and with a constraint on the strategies that prevents a pure bond investment.
By a dual approach we derive an explicit representation of the optimal strategy and prove a minimax theorem. This enables us to show that the optimal strategy converges to a generalized uniform diversification strategy as uncertainty increases.
To come up with a reasonable uncertainty set, investors can use filtering techniques to estimate the drift of asset returns based on return observations as well as external sources of information, so-called expert opinions. In a Black-Scholes type financial market with a Gaussian drift process we investigate the asymptotic behavior of the filter as the frequency of expert opinions tends to infinity. We derive limit theorems stating that the information obtained from observing the discrete-time expert opinions is asymptotically the same as that from observing a certain diffusion process which can be interpreted as a continuous-time expert. Our convergence results carry over to convergence of the value function in a portfolio optimization problem with logarithmic utility.
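The effect of expert opinions on the drift estimate can be sketched with conjugate Gaussian updates for a static scalar drift. This is a strong simplification of the continuous-time filtering described above, and all numbers are illustrative; the point is only that each observation, whether return-based or expert, shrinks the posterior variance, with the more precise expert opinion contributing the larger correction:

```python
def gaussian_update(mean, var, obs, obs_var):
    """Conjugate Bayesian update of a Gaussian belief about the drift,
    given one Gaussian observation (a return increment or an expert
    opinion). Static scalar drift; values are illustrative only."""
    k = var / (var + obs_var)  # Kalman gain
    return mean + k * (obs - mean), (1.0 - k) * var

mean, var = 0.0, 0.04                  # diffuse prior on the drift
# Noisy return-based observations of the drift ...
for r in [0.08, 0.05, 0.11]:
    mean, var = gaussian_update(mean, var, r, obs_var=0.25)
# ... and one more precise expert opinion tightens the estimate further.
mean, var = gaussian_update(mean, var, 0.07, obs_var=0.01)
print(mean, var)
```

Letting the opinion frequency grow while scaling the opinion noise corresponds to the diffusion limit in the text: the stream of discrete expert opinions becomes indistinguishable from observing a continuous-time expert.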
Lastly, we use our observations about how expert opinions improve drift estimates for our robust utility maximization problem. We show that our duality approach carries over to a financial market with non-constant drift and time-dependence in the uncertainty set. A time-dependent uncertainty set can then be defined based on a generic filter. We apply this to various investor filtrations and investigate which effect expert opinions have on the robust strategies.
In the presented work, I evaluate whether and how Virtual Reality (VR) technologies can be used to support researchers working in the geosciences by providing immersive, collaborative visualization systems as well as virtual tools for data analysis. Technical challenges encountered in the development of these systems are identified and solutions for these are provided.
To enable geologists to explore large digital terrain models (DTMs) in an immersive, explorative fashion within a VR environment, a suitable terrain rendering algorithm is required. For realistic perception of planetary curvature at large viewer altitudes, spherical rendering of the surface is necessary. Furthermore, rendering must sustain interactive frame rates of about 30 frames per second to avoid sensory confusion of the user. At the same time, the data structures used for visualization should also be suitable for efficiently computing spatial properties such as height profiles or volumes in order to implement virtual analysis tools. To address these requirements, I have developed a novel terrain rendering algorithm based on tiled quadtree hierarchies using the HEALPix parametrization of a sphere. For evaluation purposes, the system is applied to a 500 GiB dataset representing the surface of Mars.
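A view-dependent tile selection over such a tiled quadtree can be sketched as follows. The error and distance functions below stand in for the real HEALPix tile geometry and are purely hypothetical; the sketch only shows the refinement logic that keeps screen-space error bounded while refining near the viewer:

```python
def select_tiles(tile, viewer_dist, geom_error, tau=2.0):
    """Recursively select quadtree tiles for rendering: a tile is refined
    while its geometric error, scaled by proximity to the viewer, exceeds
    the tolerance `tau`. Tiles are (level, index) pairs; viewer_dist and
    geom_error are stand-ins for real tile geometry."""
    level, idx = tile
    if geom_error(tile) / max(viewer_dist(tile), 1e-9) <= tau:
        return [tile]                    # coarse tile is good enough
    out = []
    for child in range(4):               # quadtree: four children per tile
        out += select_tiles((level + 1, 4 * idx + child),
                            viewer_dist, geom_error, tau)
    return out

# Toy setup: error halves per level, and the viewer hovers over tile
# index 0, so refinement concentrates there.
err = lambda t: 100.0 / (2 ** t[0])
dist = lambda t: 1.0 if t[1] == 0 else 50.0
tiles = select_tiles((0, 0), dist, err, tau=2.0)
print(len(tiles), max(lvl for lvl, _ in tiles))
```

Distant tiles stop at coarse levels while the tile chain under the viewer is refined several levels deeper, which is what keeps both frame rate and perceived detail acceptable on datasets of hundreds of gigabytes.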
Considering the current development of inexpensive remote surveillance equipment such as quadcopters, it seems inevitable that these devices will play a major role in future disaster management applications. Virtual reality installations in disaster management headquarters which provide an immersive visualization of near-live, three-dimensional situational data could then be a valuable asset for rapid, collaborative decision making. Most terrain visualization algorithms, however, require a computationally expensive pre-processing step to construct a terrain database.
To address this problem, I present an on-the-fly pre-processing system for cartographic data. The system consists of a frontend for rendering and interaction as well as a distributed processing backend executing on a small cluster which produces tiled data in the format required by the frontend on demand. The backend employs a CUDA based algorithm on graphics cards to perform efficient conversion from cartographic standard projections to the HEALPix-based grid used by the frontend.
Measurement of spatial properties is an important step in quantifying geological phenomena. When performing these tasks in a VR environment, a suitable input device and abstraction for the interaction (a “virtual tool”) must be provided. This tool should enable the user to precisely select the location of the measurement even under a perspective projection. Furthermore, the measurement process should be accurate to the resolution of the data available and should not have a large impact on the frame rate in order to not violate interactivity requirements.
I have implemented virtual tools based on the HEALPix data structure for measurement of height profiles as well as volumes. For interaction, a ray-based picking metaphor was employed, using a virtual selection ray extending from the user’s hand holding a VR interaction device. To provide maximum accuracy, the algorithms access the quad-tree terrain database at the highest available resolution level while at the same time maintaining interactivity in rendering.
Geological faults are cracks in the earth’s crust along which a differential movement of rock volumes can be observed. Quantifying the direction and magnitude of such translations is an essential requirement in understanding earth’s geological history. For this purpose, geologists traditionally use maps in top-down projection which are cut (e.g. using image editing software) along the suspected fault trace. The two resulting pieces of the map are then translated in parallel against each other until surface features which have been cut by the fault motion come back into alignment. The amount of translation applied is then used as a hypothesis for the magnitude of the fault action. In the scope of this work it is shown, however, that performing this study in a top-down perspective can lead to the acceptance of faulty reconstructions, since the three-dimensional structure of topography is not considered.
To address this problem, I present a novel terrain deformation algorithm which allows the user to trace a fault line directly within a 3D terrain visualization system and interactively deform the terrain model while inspecting the resulting reconstruction from arbitrary perspectives. I demonstrate that the application of 3D visualization allows for a more informed interpretation of fault reconstruction hypotheses. The algorithm is implemented on graphics cards and performs real-time geometric deformation of the terrain model, guaranteeing interactivity with respect to all parameters.
Paleoceanography is the study of the prehistoric evolution of the ocean. One of the key data sources used in this research are coring experiments which provide point samples of layered sediment depositions at the ocean floor. The samples obtained in these experiments document the time-varying sediment concentrations within the ocean water at the point of measurement. The task of recovering the ocean flow patterns based on these deposition records is a challenging inverse numerical problem, however.
To support domain scientists working on this problem, I have developed a VR visualization tool to aid in the verification of model parameters by providing simultaneous visualization of experimental data from coring as well as the resulting predicted flow field obtained from numerical simulation. Earth is visualized as a globe in the VR environment, with coring data being presented using a billboard rendering technique while the time-variant flow field is indicated using line integral convolution (LIC). To study individual sediment transport pathways and their correlation with the depositional record, interactive particle injection and real-time advection is supported.
In this dissertation we consider complex, projective hypersurfaces with many isolated singularities. The leading questions concern the maximal number of prescribed singularities of such hypersurfaces in a given linear system, and geometric properties of the equisingular stratum. In the first part a systematic introduction to the theory of equianalytic families of hypersurfaces is given. Furthermore, the patchworking method for constructing hypersurfaces with singularities of prescribed types is described. In the second part we present new existence results for hypersurfaces with many singularities. Using the patchworking method, we show asymptotically proper results for hypersurfaces in P^n with singularities of corank less than two. In the case of simple singularities, the results are even asymptotically optimal. These statements improve all previous general existence results for hypersurfaces with these singularities. Moreover, the results are also transferred to hypersurfaces defined over the real numbers. The last part of the dissertation deals with the Castelnuovo function for studying the cohomology of ideal sheaves of zero-dimensional schemes. Parts of the theory of this function for schemes in P^2 are generalized to the case of schemes on general surfaces in P^3. As an application we show an H^1-vanishing theorem for such schemes.
Accurate path tracking control of tractors became a key technology for automation in agriculture. Increasingly sophisticated solutions, however, revealed that accurate path tracking control of implements is at least equally important. Therefore, this work focuses on accurate path tracking control of both tractors and implements. The latter, as a prerequisite for improved control, are equipped with steering actuators like steerable wheels or a steerable drawbar, i.e., the implements are actively steered. This work contributes both new plant models and new control approaches for those kinds of tractor-implement combinations. Plant models comprise dynamic vehicle models accounting for forces and moments causing the vehicle motion as well as simplified kinematic descriptions. All models have been derived in a systematic and automated manner to allow for variants of implements and actuator combinations. Path tracking controller design begins with a comprehensive overview and discussion of existing approaches in related domains. Two new approaches have been proposed, combining the systematic setup and tuning of a Linear-Quadratic Regulator with the simplicity of a static output feedback approximation. The first approach ensures accurate path tracking on slopes and curves by including integral control for a selection of controlled variables. The second approach, instead, ensures this by adding disturbance feedforward control based on side-slip estimation using a non-linear kinematic plant model and an Extended Kalman Filter. For both approaches a feedforward control approach for curved path tracking has been newly derived. In addition, a straightforward extension of control accounting for the implement orientation has been developed. All control approaches have been validated in simulations and experiments carried out with a mid-size tractor and a custom-built demonstrator implement.
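The LQR design step can be illustrated on a deliberately minimal scalar cross-track error model. All values below are invented; the work itself uses full multivariable vehicle models and a static output feedback approximation, while this sketch only shows the Riccati-based gain computation and the resulting error decay:

```python
def lqr_gain(a, b, q, r, iters=500):
    """Scalar discrete-time LQR: iterate the Riccati recursion to a fixed
    point and return the state-feedback gain K for u = -K * x.
    A toy stand-in for the multivariable design in the text."""
    p = q
    for _ in range(iters):
        k = a * p * b / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return a * p * b / (r + b * p * b)

# Cross-track error model: the error integrates the steering command,
# x_{k+1} = x_k + 0.1 * u_k (dt and speed folded into b; illustrative).
K = lqr_gain(a=1.0, b=0.1, q=1.0, r=0.01)

# Closed loop: the tracking error decays toward the path.
x = 1.0
for _ in range(50):
    x = x + 0.1 * (-K * x)
print(K, abs(x))
```

The ratio q/r trades tracking accuracy against steering effort; the same knob appears, per state and input, in the multivariable tuning the thesis systematizes.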
To support scientific work with large and complex data, the field of scientific visualization emerged in computer science; it produces images through computational analysis of the data. Frameworks for the combination of different analysis and visualization modules allow the user to create flexible pipelines for this purpose and set the standard for interactive scientific visualization used by domain scientists.
Existing frameworks employ a thread-parallel message-passing approach to parallel and distributed scalability, leaving the field of scientific visualization in high performance computing to specialized ad-hoc implementations. The task-parallel programming paradigm proves promising to improve scalability and portability in high performance computing implementations and thus, this thesis aims towards the creation of a framework for distributed, task-based visualization modules and pipelines.
The major contribution of the thesis is the establishment of modules for Merge Tree construction and (based on the former) topological simplification. Such modules already form a necessary first step for most visualization pipelines and can be expected to increase in importance for larger and more complex data produced and/or analysed by high performance computing.
To create a task-parallel, distributed Merge Tree construction module the construction process has to be completely revised. We derive a novel property of Merge Tree saddles and introduce a novel task-parallel, distributed Merge Tree construction method that has both good performance and scalability. This forms the basis for a module for topological simplification which we extend by introducing novel alternative simplification parameters that aim to reduce the importance of prior domain knowledge to increase flexibility in typical high performance computing scenarios.
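For contrast with the task-parallel method, the classic sequential join-tree construction that it replaces can be sketched with a value-ordered sweep and union-find; the 1-D scalar field at the end is a toy example:

```python
def join_tree(values, adjacency):
    """Sequential join-tree construction: sweep vertices in increasing
    scalar value and track connected components of the sublevel sets
    with union-find, emitting an arc whenever components merge at a
    saddle. The sequential baseline, not the task-parallel method."""
    n = len(values)
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    rep = {}    # component root -> latest critical point on its arc
    arcs = []   # (lower critical point, saddle) arcs of the join tree
    for v in sorted(range(n), key=lambda i: values[i]):
        roots = {find(u) for u in adjacency[v] if values[u] < values[v]}
        if not roots:
            rep[v] = v                     # v starts a component: a minimum
            continue
        if len(roots) > 1:                 # v is a join saddle
            for r in roots:
                arcs.append((rep[r], v))
        surviving = rep[next(iter(roots))] if len(roots) == 1 else v
        for r in roots:
            parent[r] = v                  # merge all components into v
        rep[v] = surviving
    return arcs

# 1-D field with minima at indices 1 and 3 joining at saddle index 2.
values = [4.0, 0.0, 2.0, 1.0, 5.0]
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(join_tree(values, adj))  # arcs (1, 2) and (3, 2)
```

The global sort and the strictly sequential union order are exactly what make this baseline hard to distribute, which motivates the saddle property and the revised construction developed in the thesis.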
Both modules lay the groundwork for continuative analysis and visualization steps and form a fundamental step towards an extensive task-parallel visualization pipeline framework for high performance computing.
Crowd condition monitoring concerns both crowd safety and business performance metrics. The research problem to be solved is a crowd condition estimation approach to enable and support the supervision of mass events by first responders and marketing experts, but it is also targeted towards supporting social scientists, journalists, historians, public relations experts, community leaders, and political researchers. Real-time insight into the crowd condition is desired for quick reactions, and historic crowd condition measurements are desired for profound post-event crowd condition analysis.
This thesis aims to provide a systematic understanding of different approaches for crowd condition estimation relying on 2.4 GHz signals and their variation in crowds of people, proposes and categorizes possible sensing approaches, applies supervised machine learning algorithms, and demonstrates experimental evaluation results. I categorize four sensing approaches: first, stationary sensors sensing crowd-centric signal sources; second, stationary sensors sensing other stationary signal sources (either opportunistic or special-purpose signal sources); third, a few volunteers within the crowd equipped with sensors sensing other surrounding crowd-centric device signals (either individually, in a single group, or collaboratively) within a small region; fourth, a small subset of participants within the crowd equipped with sensors and roaming throughout a whole city to sense wireless crowd-centric signals.
I present and evaluate an approach with meshed stationary sensors sensing crowd-centric devices. This was demonstrated and empirically evaluated within an industrial project during three of the world's largest automotive exhibitions. With over 30 meshed stationary sensors in an optimized setup across 6400 m², I achieved a mean absolute error of the crowd density of just 0.0115 people per square meter, which equals an average of below 6% mean relative error from the ground truth. I validate the contextual crowd condition anomaly detection method during the visit of Chancellor Merkel and during a large press conference at the exhibition. I present the approach of opportunistically sensing stationary wireless signal variations and validate it during the Hannover CeBIT exhibition with 80 opportunistic sources, achieving a crowd condition estimation relative error of below 12% relying only on surrounding signals influenced by humans. Pursuing this approach, I present an approach with dedicated signal sources and sensors to estimate the condition of shared office environments. I demonstrate methods viable to detect even low-density static crowds, such as people sitting at their desks, and evaluate this in an eight-person office scenario. I present the approach of mobile crowd density estimation by a group of sensors detecting other crowd-centric devices in the proximity with a classification accuracy of the crowd density of 66% (an improvement of over 22% over an individual sensor) during the crowded Oktoberfest event. I propose a collaborative mobile sensing approach which makes the system more robust against variations that may result from the background of the people rather than the crowd condition, with differential features taking into account information about the link structure between actively scanning devices, the ratio between values observed by different devices, the ratio of discovered crowd devices over time, the team-wise diversity of discovered devices, the number of semi-continuous device visibility periods, and device visibility durations. I validate the approach on multiple experiments, including the Kaiserslautern European soccer championship public viewing event, and evaluate the collaborative mobile sensing approach with a crowd condition estimation accuracy of 77%, outperforming previous methods by 21%.
I demonstrate the feasibility of deploying the wireless crowd condition sensing approach at citywide scale during an event in Zurich with 971 actively sensing participants, outperforming the reference method by 24% on average.
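The two figures of merit quoted throughout, mean absolute error in people per square meter and mean relative error against ground truth, can be computed as follows; the sample densities are made up, not measured data:

```python
def density_errors(predicted, actual):
    """Mean absolute error (people per square meter) and mean relative
    error of crowd density estimates against ground truth."""
    abs_err = [abs(p - a) for p, a in zip(predicted, actual)]
    rel_err = [e / a for e, a in zip(abs_err, actual) if a > 0]
    return sum(abs_err) / len(abs_err), sum(rel_err) / len(rel_err)

# Invented estimates vs. ground-truth densities for four zones.
predicted = [0.21, 0.35, 0.18, 0.50]
actual = [0.20, 0.33, 0.20, 0.52]
mae, mre = density_errors(predicted, actual)
print(round(mae, 4), round(mre, 4))
```

Reporting both metrics matters because a small absolute error can still be a large relative error in sparse zones, which is why the text quotes them side by side.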
The present work investigated three important constructs in the field of psychology: creativity, intelligence and giftedness. The major objective was to clarify some aspects about each one of these three constructs, as well as some possible correlations between them. Of special interest were: (1) the relationship between creativity and intelligence, particularly the validity of the threshold theory; (2) the development of these constructs within average and above-average intelligent children and throughout grade levels; and (3) the comparison between the development of intelligence and creativity in above-average intelligent primary school children that participated in a special program for children classified as “gifted”, called Entdeckertag (ET), against an age-, class- and IQ-matched control group. The ET is a pilot program which was implemented in 2004 by the Ministry for Education, Science, Youth and Culture of the state of Rhineland-Palatinate, Germany. The central goals of this program are the early recognition of gifted children and intervention, based on the areas of German language, general science and mathematics, and also to foster the development of a child’s creativity, social ability, and more. Five hypotheses were proposed and analyzed, and reported separately within five chapters. To analyze these hypotheses, a sample of 217 children recruited from first to fourth grade, and between the ages of six and ten years, was tested for intelligence and creativity. Children performed three tests: Standard Progressive Matrices (SPM) for the assessment of classical intelligence, Test of Creative Thinking – Drawing Production (TCT-DP) for the measurement of classical creativity, and Creative Reasoning Task (CRT) for the evaluation of convergent and divergent thinking, both in open problem spaces.
Participants were divided according to two general cohorts: an intervention group (N = 43), composed of children participating in the Entdeckertag program, and a non-intervention group (N = 174), composed of children from the regular primary school. For the testing of the hypotheses, children were placed into more specific groups according to the particular hypothesis that was being tested. It could be concluded that creativity and intelligence were not significantly related and the threshold theory was not confirmed. Additionally, intelligence accounted for less than 1% of the variance within creativity; moreover, scores on intelligence were unable to predict later creativity scores. The development of classical intelligence and classical creativity throughout grade levels also presented different patterns; intelligence increased continually, whereas creativity stagnated after the third grade. Finally, the ET program proved to be beneficial for classical intelligence after two years of attendance, but no effect was found for creativity. Overall, results indicate that organizations and institutions such as schools should not look solely to intelligence performance, especially when aiming to identify and foster gifted or creative individuals.
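The variance-explained claim rests on the relation between a Pearson correlation r and shared variance r². A minimal sketch with invented scores (not the study's data) shows the computation:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient; r**2 is the share of variance in
    one variable statistically accounted for by the other."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical IQ and creativity scores for seven children.
iq = [95, 102, 110, 118, 124, 131, 140]
creativity = [48, 61, 42, 55, 64, 47, 58]
r = pearson_r(iq, creativity)
print(r, r * r)  # a weak r leaves r**2 explaining little variance
```

A finding like "intelligence accounted for less than 1% of the variance" corresponds to |r| below 0.1, which is why the study concludes the two constructs are practically unrelated in this sample.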
Backward compatibility of class libraries ensures that an old implementation of a library can safely be replaced by a new implementation without breaking existing clients.
Formal reasoning about backward compatibility requires an adequate semantic model to compare the behavior of two library implementations.
In the object-oriented setting with inheritance and callbacks, finding such models is difficult, as the interfaces between library implementations and clients are complex.
Furthermore, handling these models in a way to support practical reasoning requires appropriate verification tools.
This thesis proposes a formal model for library implementations and a reasoning approach for backward compatibility that is implemented using an automatic verifier. The first part of the thesis develops a fully abstract trace-based semantics for class libraries of a core sequential object-oriented language. Traces abstract from the control flow (stack) and data representation (heap) of the library implementations. The construction of a most general context is given that abstracts exactly from all possible clients of the library implementation.
Soundness and completeness of the trace semantics as well as the most general context are proven using specialized simulation relations on the operational semantics. The simulation relations also provide a proof method for reasoning about backward compatibility.
The second part of the thesis presents the implementation of the simulation-based proof method for an automatic verifier to check backward compatibility of class libraries written in Java. The approach works for complex library implementations, with recursion and loops, in the setting of unknown program contexts. The verification process relies on a coupling invariant that describes a relation between programs that use the old library implementation and programs that use the new library implementation. The thesis presents a specification language to formulate such coupling invariants. Finally, an application of the developed theory and tool to typical examples from the literature validates the reasoning and verification approach.
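A minimal Python analogue (not the thesis' Java setting) illustrates the kind of trace difference the semantics must capture: two library versions with identical signatures whose behavior differs only in the interaction order visible to a client callback:

```python
class CounterV1:
    """Old library: notifies the client's callback after updating state."""
    def __init__(self, observer):
        self.observer, self.count = observer, 0

    def inc(self):
        self.count += 1
        self.observer(self)   # callback observes the new count

class CounterV2:
    """Reimplementation with the same signatures, but it notifies before
    updating state. Not backward compatible: a client callback can
    observe the difference in the interaction trace."""
    def __init__(self, observer):
        self.observer, self.count = observer, 0

    def inc(self):
        self.observer(self)   # callback observes the old count
        self.count += 1

def client(lib_cls):
    """A client whose callback records what it sees at each notification."""
    seen = []
    c = lib_cls(lambda counter: seen.append(counter.count))
    c.inc()
    c.inc()
    return seen

print(client(CounterV1))  # [1, 2]
print(client(CounterV2))  # [0, 1]
```

Without callbacks the two versions would be indistinguishable; it is precisely such client-visible interaction traces, abstracted from stack and heap, that the fully abstract semantics compares.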
For many decades, the search for language classes that extend the context-free languages enough to include various languages that arise in practice, while still keeping as many of the useful properties that context-free grammars have (most notably cubic parsing time), has been one of the major areas of research in formal language theory. In this thesis we add a new family of classes to this field, namely position-and-length-dependent context-free grammars. Our classes use the approach of regulated rewriting, where derivations in a context-free base grammar are allowed or forbidden based on, e.g., the sequence of rules used in a derivation or the sentential forms each rule is applied to. For our new classes we look at the yield of each rule application, i.e., the subword of the final word that is eventually derived from the symbols introduced by the rule application. The position and length of the yield in the final word define the position and length of the rule application, and each rule is associated with a set of positions and lengths where it is allowed to be applied.
We show that - unless the sets of allowed positions and lengths are particularly complex - the languages in our classes can be parsed in the same time as context-free languages, using slight adaptations of well-known parsing algorithms. We also show that the classes form a proper hierarchy above the context-free languages, and we examine their relation to language classes defined by other types of regulated rewriting.
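The parsing claim can be made concrete with a small sketch (our own illustration, not an algorithm taken from the thesis): a CYK-style recognizer for a base grammar in Chomsky normal form in which every rule additionally carries a predicate over the position and length of its yield. As long as each predicate can be evaluated in constant time, the extra checks do not change the cubic running time.

```python
def cyk_positional(word, terminal_rules, binary_rules, start):
    """CYK recognizer with position-and-length restrictions on rules.

    terminal_rules: list of (lhs, terminal, allowed) with allowed(pos, length) -> bool
    binary_rules:   list of (lhs, rhs1, rhs2, allowed)
    """
    n = len(word)
    # table[i][l] = set of nonterminals deriving word[i:i+l]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, a in enumerate(word):
        for lhs, t, allowed in terminal_rules:
            if t == a and allowed(i, 1):
                table[i][1].add(lhs)
    for l in range(2, n + 1):           # span length
        for i in range(n - l + 1):      # span start position
            for split in range(1, l):
                for lhs, r1, r2, allowed in binary_rules:
                    # the rule may only be applied if its yield's
                    # (position, length) lies in the allowed set
                    if (r1 in table[i][split]
                            and r2 in table[i + split][l - split]
                            and allowed(i, l)):
                        table[i][l].add(lhs)
    return start in table[0][n]
```

With predicates that always return True this is exactly classical CYK; restricting a rule to certain positions or lengths simply filters table entries during the same cubic loop.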
We complete the treatment of the language classes by introducing pushdown automata with position counter, an extension of traditional pushdown automata that recognizes the languages generated by position-and-length-dependent context-free grammars, and we examine various closure and decidability properties of our classes. Additionally, we gather the corresponding results for the subclasses that use right-linear and left-linear base grammars, respectively, and for the corresponding class of automata, finite automata with position counter.
Finally, as an application of our idea, we introduce length-dependent
stochastic context-free grammars and show how they can be employed to
improve the quality of predictions for RNA secondary structures.
Industrial robots are vital in automation technology, but their limitations become evident in applications requiring high path accuracy. This research focuses on improving the dynamic path accuracy of industrial robots by integrating additional sensor technology and employing intelligent feed-forward control. Specifically, the inclusion of secondary encoder sensors enables explicit measurement and compensation of robot gear deformations. Three types of model-based feed-forward controllers, namely physics-based, data-based, and hybrid, are developed to effectively counteract dynamic effects.
Firstly, a physics-based feed-forward control method is proposed, explicitly modeling joint deformations, hydraulic weight compensation, and other relevant features. Nonlinear friction parameters are accurately identified using a globally optimized design of experiments. The resulting physics-based model is fully continuously differentiable, facilitating its transformation into a code-optimized flatness-based feed-forward control.
Secondly, a data-based feed-forward control approach is introduced, leveraging a continuous-time neural network. The continuous-time approach demonstrates enhanced model generalization capabilities even with limited data. Furthermore, a time domain normalization method is introduced, significantly improving numerical properties by concurrently normalizing measurement timelines, robot states, and state derivatives. Based on previous work, a method ensuring input-to-state and global asymptotic stability is presented, employing a Lyapunov function. Model stability is enforced already during training using constrained optimization techniques. Moreover, the data-based methods are evaluated on public benchmarks, extending their applicability beyond the field of robotics.
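The time domain normalization idea can be sketched as follows (a hypothetical minimal version; the function name and interface are our own): rescaling time to tau = (t - t0)/s_t and states to x_n = (x - mu_x)/s_x forces, by the chain rule, a consistent rescaling of the state derivatives, dx_n/dtau = (s_t/s_x) dx/dt.

```python
import numpy as np

def normalize_trajectory(t, x, dxdt):
    """Jointly normalize timeline, states, and state derivatives.

    t: (N,) time stamps; x: (N, d) states; dxdt: (N, d) state derivatives.
    A sketch only: assumes non-constant states (x.std(axis=0) != 0).
    """
    t0, s_t = t[0], t[-1] - t[0]
    mu_x, s_x = x.mean(axis=0), x.std(axis=0)
    tau = (t - t0) / s_t                 # normalized time in [0, 1]
    x_n = (x - mu_x) / s_x               # standardized states
    dxdt_n = dxdt * (s_t / s_x)          # chain rule: d x_n / d tau
    return tau, x_n, dxdt_n
```

The key point is the third line of the return values: if the derivatives were normalized with the state scale alone, the training targets of a continuous-time model would be inconsistent with the rescaled timeline.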
Both the physics-based and data-based models are combined into a hybrid model. Comparative analysis of the three models reveals that the continuous-time neural network yields the highest model accuracy, while the physics-based model delivers the best safety properties. The effectiveness of all three models is experimentally validated using an industrial robot.
In this thesis we present a new method for nonlinear frequency response analysis of mechanical vibrations.
For an efficient spatial discretization of nonlinear partial differential equations of continuum mechanics we employ the concept of isogeometric analysis. Isogeometric finite element methods have already been shown to possess advantages over classical finite element discretizations in terms of exact geometry representation and higher accuracy of numerical approximations using spline functions.
For computing nonlinear frequency response to periodic external excitations, we rely on the well-established harmonic balance method. It expands the solution of the nonlinear ordinary differential equation system resulting from spatial discretization as a truncated Fourier series in the frequency domain.
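As a toy illustration of harmonic balance (our own single-degree-of-freedom example, not the thesis's large-scale finite element systems), consider the undamped Duffing oscillator x'' + x + beta x^3 = F cos(w t). Substituting the one-term Fourier ansatz x(t) = A cos(w t) and balancing the cos(w t) coefficients (using cos^3 u = (3 cos u + cos 3u)/4) turns the ODE into a scalar algebraic equation for the amplitude A.

```python
def residual(A, w, beta, F):
    # Balance of the cos(w t) terms: (1 - w^2) A + (3/4) beta A^3 - F = 0
    return (1.0 - w**2) * A + 0.75 * beta * A**3 - F

def amplitude(w, beta, F, A0=0.1, iters=50):
    """Solve the one-harmonic balance equation by Newton's method."""
    A = A0
    for _ in range(iters):
        dr = (1.0 - w**2) + 2.25 * beta * A**2   # d residual / d A
        A -= residual(A, w, beta, F) / dr
    return A
```

For beta = 0 this reduces to the linear frequency response A = F/(1 - w^2); in the thesis's setting the same principle is applied with many harmonics and many spatial degrees of freedom, yielding a large nonlinear algebraic system in the Fourier coefficients.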
A fundamental aspect for enabling large-scale and industrial application of the method is model order reduction of the spatial discretization of the equation of motion. Therefore we propose the utilization of a modal projection method enhanced with modal derivatives, providing second-order information. We investigate the concept of modal derivatives theoretically and using computational examples we demonstrate the applicability and accuracy of the reduction method for nonlinear static computations and vibration analysis.
Furthermore, we extend nonlinear vibration analysis to incompressible elasticity using isogeometric mixed finite element methods.
Utilization of Correlation Matrices in Adaptive Array Processors for Time-Slotted CDMA Uplinks
(2002)
It is well known that the performance of mobile radio systems can be significantly enhanced by the application of adaptive antennas, which consist of multi-element antenna arrays plus signal processing circuitry. In the thesis the utilization of such antennas as receive antennas in the uplink of mobile radio air interfaces of the type TD-CDMA is studied. Especially, the incorporation of covariance matrices of the received interference signals into the signal processing algorithms is investigated with a view to improving the system performance as compared to state-of-the-art adaptive antenna technology. These covariance matrices implicitly contain information on the directions of incidence of the interference signals, and this information may be exploited to reduce the effective interference power when processing the signals received by the array elements. As a basis for the investigations, first directional models of the mobile radio channels and of the interference impinging on the receiver are developed, which can be implemented on the computer at low cost. These channel models cover both outdoor and indoor environments. They are partly based on measured channel impulse responses and, therefore, allow a description of the mobile radio channels which comes sufficiently close to reality. Concerning the interference models, two cases are considered. In one case, the interference signals arriving from different directions are correlated, and in the other case these signals are uncorrelated. After a visualization of the potential of adaptive receive antennas, data detection and channel estimation schemes for the TD-CDMA uplink are presented, which rely on such antennas and take interference covariance matrices into consideration. Of special interest is the detection scheme MSJD (Multi Step Joint Detection), which is a novel iterative approach to multi-user detection.
Concerning channel estimation, the incorporation of the knowledge of the interference covariance matrix and of the correlation matrix of the channel impulse responses is enabled by an MMSE (Minimum Mean Square Error) based channel estimator. The presented signal processing concepts using covariance matrices for channel estimation and data detection are merged in order to form entire receiver structures. Important tasks to be fulfilled in such receivers are the estimation of the interference covariance matrices and the reconstruction of the received desired signals. These reconstructions are required when applying MSJD in data detection. The considered receiver structures are implemented on the computer in order to enable system simulations. The obtained simulation results show that the developed schemes are very promising in cases where the impinging interference is highly directional, whereas in cases with the interference directions more homogeneously distributed over the azimuth the consideration of the interference covariance matrices is of only limited benefit. The thesis can serve as a basis for practical system implementations.
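The role of the interference covariance matrix in MMSE estimation can be sketched generically (textbook notation, not the thesis's exact formulation): for an observation y = G h + n with channel correlation matrix R_h and interference covariance R_n, the linear MMSE estimate is h_hat = R_h G^H (G R_h G^H + R_n)^{-1} y, so directional interference enters the estimator directly through R_n.

```python
import numpy as np

def mmse_channel_estimate(y, G, R_h, R_n):
    """Linear MMSE estimate of h from y = G h + n.

    R_h: correlation matrix of the channel coefficients.
    R_n: covariance matrix of the interference-plus-noise.
    """
    C = G @ R_h @ G.conj().T + R_n          # covariance of the observation
    return R_h @ G.conj().T @ np.linalg.solve(C, y)
```

When R_n is small the estimator approaches the least-squares solution; when the interference dominates, the estimate is shrunk toward zero along the interference directions encoded in R_n.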
Today, polygonal models occur everywhere in graphical applications, since they are easy to render and to process, and a huge set of tools exists for the generation and manipulation of polygonal data. But modern scanning devices that allow a high-quality and large-scale acquisition of complex real-world models often deliver a large set of points as the resulting data structure of the scanned surface. A direct triangulation of those point clouds does not always result in good models. They often contain problems like holes, self-intersections and non-manifold structures. Also, one often loses important surface structures like sharp corners and edges during a usual surface reconstruction. It is therefore worthwhile to stay a little longer in the point-based world: to analyze the point cloud data with respect to such features, and afterwards to apply a surface reconstruction method that is known to construct continuous and smooth surfaces, extended so that it reconstructs sharp features as well.
This dissertation was developed in the context of the BMBF- and EU/ECSEL-funded projects GENIAL! and Arrowhead Tools. In these projects the chair examines methods of specification and cooperation in the automotive value chain from OEM to Tier 1 to Tier 2. The goal of the projects is to improve communication and collaborative planning, especially in early development stages. Besides SysML, the use of agreed vocabularies and ontologies for modeling requirements, overall context, variants, and many other items is targeted. This thesis proposes a web database in which data from the collaborative requirements elicitation is combined with an ontology-based approach that uses reasoning capabilities.
For this purpose, state-of-the-art ontologies covering domains like hardware/software, roadmapping, IoT, context, and innovation have been investigated and integrated. New ontologies have been designed, such as a HW/SW allocation ontology and a domain-specific "eFuse ontology", as well as some prototypes. The result is a modular ontology suite and the GENIAL! Basic Ontology, which allows us to model automotive and microelectronic functions, components, properties, and the dependencies among these elements based on the ISO 26262 standard. Furthermore, context knowledge that influences design decisions, such as future trends in legislation, society, and the environment, is included. These knowledge bases are integrated into a novel tool that allows for collaborative innovation planning and requirements communication along the automotive value chain. To start off the work of the project, an architecture and a prototype tool were developed. Designing ontologies and knowing how to use them proved to be a non-trivial task, requiring a lot of context and background knowledge. Some of this background knowledge has been selected for presentation and was utilized either in designing models or for later immersion. Examples are basic foundations like design guidelines for ontologies, ontology categories, and a continuum of expressiveness of languages, as well as advanced content like multi-level theory, foundational ontologies and reasoning. Finally, we demonstrate the overall framework and show the ontology with reasoning, the database, APPEL/SysMD (AGILA ProPErty and Dependency Description Language / System MarkDown), and the constraints of the hardware/software knowledge base. There, by example, we explore and solve roadmap constraints that are coupled with a car model through a constraint solver.
Software is becoming increasingly concurrent: parallelization, decentralization, and reactivity necessitate asynchronous programming in which processes communicate by posting messages/tasks to others’ message/task buffers. Asynchronous programming has been widely used to build fast servers and routers, embedded systems and sensor networks, and is the basis of Web programming using Javascript. Languages such as Erlang and Scala have adopted asynchronous programming as a fundamental concept with which highly scalable and highly reliable distributed systems are built.
Asynchronous programs are challenging to implement correctly: the loose coupling between asynchronously executed tasks makes the control and data dependencies difficult to follow. Even subtle design and programming mistakes can introduce erroneous or divergent behaviors. As asynchronous programs are typically written to provide a reliable, high-performance infrastructure, there is a critical need for analysis techniques that guarantee their correctness.
In this dissertation, I provide scalable verification and testing tools to make asynchronous programs more reliable. I show that the combination of counter abstraction and partial order reduction is an effective approach for the verification of asynchronous systems by presenting PROVKEEPER and KUAI, two scalable verifiers for two types of asynchronous systems. I also provide a theoretical result proving that a counter-abstraction-based algorithm called expand-enlarge-check is an asymptotically optimal algorithm for the coverability problem of branching vector addition systems, a model that captures many asynchronous programs. In addition, I present BBS and LLSPLAT, two testing tools for asynchronous programs that efficiently uncover many subtle memory violation bugs.
The present PhD thesis is mainly focused on the synthesis, characterization and catalytic application of functionalized triphenylphosphine (TPP) ligands and their complexes. We developed a simple and effective strategy to immobilize TPP: a methyl ester group attached to one of the phenyl rings of TPP allows the derivatization of the ligand with 3-trimethoxysilylpropylamine, a typical silane coupling agent used for the covalent immobilization of organic compounds on silica surfaces. The resulting functionalized TPP was further coordinated to Pd, Rh and Ru precursors to achieve homogeneous complexes which can be tethered on silica by the post-synthetic grafting method and the co-condensation method. The obtained heterogeneous catalysts exhibited excellent activity, selectivity and reusability in Suzuki, hydrogenation and transfer hydrogenation reactions. In order to investigate the stability of the catalysts, different types of characterization such as TEM and solid-state NMR of the used catalysts as well as AAS of the filtrate and leaching tests were carried out. The results prove the practicability and efficiency of our method. This strategy was further modified to generate an anionic side chain linked to the TPP core by simply replacing the trimethoxysilylpropylamine group by sodium (3-amino-1-propanesulfonate), which allows the immobilization on imidazolium-modified SBA-15 through electrostatic interaction. The obtained material was further reacted with PdCl2(CNPh)2 and the resulting hybrid material was used for the hydrogenation of olefins under mild reaction conditions. The catalyst shows excellent activity, selectivity and stability, and it can furthermore be reused at least ten times without any loss of activity. TEM images of the used catalyst clearly show the absence of palladium nanoparticles, proving the high stability of the palladium compound.
By AAS no palladium could be detected in the products, and further leaching tests verified the reaction to be truly heterogeneous. This concept of non-covalent immobilization guarantees a tight bonding of the catalytically active species to the surface in combination with a high mobility, which should be favorable for other catalytic applications as well.
This thesis consists of two parts: the theoretical background of (R)ABSDEs, including basic theorems, theoretical proofs and properties (Chapters 2-4), as well as numerical algorithms and simulations for (R)ABSDEs (Chapter 5). For the theoretical part, we study ABSDEs (Chapter 2), RABSDEs with one obstacle (Chapter 3) and RABSDEs with two obstacles (Chapter 4) in the defaultable setting, including the existence and uniqueness theorems, applications, the comparison theorem for ABSDEs, and their relations with PDEs and stochastic differential delay equations (SDDEs). The numerical part (Chapter 5) introduces two main algorithms, a discrete penalization scheme and a discrete reflected scheme, based on a random walk approximation of the Brownian motion as well as a discrete approximation of the default martingale; we give convergence results for the algorithms and provide a numerical example and an application to American game options in order to illustrate their performance.
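The random walk approximation underlying both discrete schemes can be sketched as follows (an illustrative fragment; names are our own): on the grid t_k = k h the Brownian motion is replaced by the scaled walk W_k = sqrt(h)(e_1 + ... + e_k) with i.i.d. Bernoulli steps e_i in {-1, +1}, which converges to Brownian motion as h tends to 0 by Donsker's theorem.

```python
import numpy as np

def random_walk_brownian(n_steps, h, rng):
    """Scaled symmetric random walk approximating Brownian motion.

    Returns the path (W_0, ..., W_{n_steps}) on the grid t_k = k*h,
    with increments of size +-sqrt(h).
    """
    steps = rng.choice([-1.0, 1.0], size=n_steps)
    return np.concatenate([[0.0], np.sqrt(h) * np.cumsum(steps)])
```

In the discrete penalization and reflected schemes, the conditional expectations of the backward equation are then computed over the two equally likely successor nodes of this binary tree.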
The work presented in this thesis discusses the thermal and power management of multi-core processors (MCPs) with both two-dimensional (2D) and three-dimensional (3D) package chips. Thermal and power management/balancing is of increasing concern, poses a technological challenge to MCP development, and will be a main performance bottleneck for future MCPs. This thesis develops optimal thermal and power management policies for MCPs. The system thermal behavior of both 2D and 3D package chips is analyzed and mathematical models are developed. Thereafter, the optimal thermal and power management methods are introduced.
Nowadays, chips are generally packaged using the 2D technique, which means that there is only one layer of dies in the chip. The chip's thermal behavior can be described by a 3D heat conduction partial differential equation (PDE). As the target is to balance the thermal behavior and power consumption among the cores, a group of one-dimensional (1D) PDEs, derived from the developed 3D PDE heat conduction equation, is proposed to describe the thermal behavior of each core. Therefore, the thermal behavior of the MCP is described by a group of 1D PDEs. An optimal controller is designed to manage the power consumption and balance the temperature among the cores based on the proposed 1D model.
3D packaging is an advanced packaging technology in which at least two layers of dies are stacked in one chip. In contrast to the 2D package, a cooling system has to be installed between the layers to reduce the internal temperature of the chip. In this thesis, a micro-channel liquid cooling system is considered, and the heat transfer characteristics of the micro-channel are analyzed and modeled as an ordinary differential equation (ODE). The dies are discretized into blocks based on the chip layout, with each block modeled as a thermal resistance and capacitance (R-C) circuit. Thereafter, the micro-channels are discretized. The thermal behavior of the whole system is modeled as an ODE system. The micro-channel liquid velocity is set according to the workload and the temperature of the dies. For each velocity, the system can be described by a linear ODE model, and the whole system is a switched linear system. An H-infinity observer is designed to estimate the states. The model predictive control (MPC) method is employed to design the thermal and power management/balancing controller for each submodel.
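A drastically simplified version of such a block-wise R-C model can illustrate the structure (a hypothetical sketch, not the thesis's model or its MPC controller): each block integrates its power input, conduction to neighbouring blocks through pairwise resistances, and convection to the coolant, whose effective resistance would switch with the micro-channel liquid velocity.

```python
import numpy as np

def simulate(T0, P, R, C, T_cool, R_cool, dt, steps):
    """Forward-Euler integration of a lumped R-C thermal network.

    C_i dT_i/dt = P_i + sum_j (T_j - T_i)/R_ij + (T_cool - T_i)/R_cool
    R[i][j] is the block-to-block resistance (None if not connected);
    R_cool is the effective block-to-coolant resistance for the
    currently selected micro-channel velocity.
    """
    T = np.array(T0, dtype=float)
    n = len(T)
    for _ in range(steps):
        dT = np.zeros(n)
        for i in range(n):
            q = P[i] + (T_cool - T[i]) / R_cool    # power in + cooling out
            for j in range(n):
                if j != i and R[i][j] is not None:
                    q += (T[j] - T[i]) / R[i][j]   # conduction to neighbours
            dT[i] = q / C[i]
        T += dt * dT
    return T
```

Switching R_cool as a function of velocity gives exactly the switched-linear structure mentioned above, with one linear ODE model per velocity level.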
The models and controllers developed in this thesis are verified by simulation experiments via MATLAB. The IBM cell 8 cores processor and water micro-channel cooling system developed by IBM Research in collaboration with EPFL and ETHZ are employed as the experiment objects.
In the present work, the phase transitions in different Fe/Fe-C systems were studied using molecular dynamics simulations with the Meyer-Entel interaction potential (and the Johnson potential for the Fe-C interaction). Fe bicrystal, thin film, Fe-C bulk and Fe-C nanowire systems were investigated to study the behaviour of the phase transition, analysing the energetics, dynamics and transformation pathways.
Sterically demanding cyclopentadienyl ligands were employed to stabilize new mono(cyclopentadienyl) compounds of the heavy alkaline earth metals, and the possibility of functionalizing these species was demonstrated exemplarily by the synthesis of neutral triple-decker sandwich complexes. The resulting molecular structures can be reliably predicted by DFT calculations. In this context the cyclononatetraenyl ligand, whose coordination properties had so far been studied only insufficiently, was also employed. Within this work the synthesis of bis(cyclononatetraenyl)barium, Ba(C9H9)2, and its spectroscopic characterization were achieved. DFT calculations predict for this complex a metallocene structure with nearly parallel rings and a Ba-ring distance of 2.37 Å. Using the tetraisopropylcyclopentadienyl (4Cp) and tri(tert-butyl)cyclopentadienyl (Cp') ligands, bis- and mono(cyclopentadienyl) compounds of the early and late lanthanides were synthesized. Particularly interesting in this context is the successful preparation of the azido cluster [Na(dme)3]2[4Cp6Yb6(N3)14] (4Cp = (Me2CH)4C5H), which combines the different coordination modes of the azido ligand in a single complex. Comparable complexes were previously unknown in organolanthanide chemistry. Substitution on the cyclopentadienyl system allows its electronic and steric properties to be altered significantly. The consequences of these effects can be demonstrated very impressively with manganocene complexes, in which the low-spin and high-spin states differ only very little in energy. The electronic ground state of a series of differently substituted manganocene complexes was determined by solid-state magnetism, ESR, X-ray structure analysis, EXAFS and variable-temperature UV-Vis spectroscopy, and was correlated with the substitution pattern on the cyclopentadienyl system.
Spin equilibria could be demonstrated for [(Me3C)C5H4]2Mn, [(Me3C)2C5H3]2Mn and [(Me3C)(Me3Si)C5H3]2Mn. Theoretical calculations postulate that cerocene, Ce(C8H8)2, is an example of a molecule with a mixed-configuration ground state, which can be described by 80 % [(Ce)f1e2u(cot)e2u3] and 20 % [(Ce)f0e2u(cot)e2u4]. Although this molecule has been known since 1976, its electronic structure is still highly controversial today. Within this work new synthetic concepts for this compound were developed and its electronic structure was investigated by magnetic measurements in the solid state as well as EXAFS and XANES studies. The data obtained are in very good agreement with the theoretical calculations and demonstrate the importance of a mixed-configuration ground state for the bonding in organometallic complexes of the f-block metals. While cerocene exhibits only temperature-independent paramagnetism (TIP), a strong temperature dependence of the magnetic susceptibility is found in ytterbium systems of the type Cp'2Yb(bipy') [Cp' and bipy' are substituted cyclopentadienyl or 4,4'-substituted 2,2'-bipyridyl ligands]. Temperature-dependent XANES experiments demonstrate that a mixed-configuration ground state, which can be described by [(Yb)f14(bipy)b1()0] and [(Yb)f13(bipy)b1()1], is also present in these systems. The relative contribution of the two wave functions to the ground state is significantly influenced by substitution on the 2,2'-bipyridyl or cyclopentadienyl system. Models with which this behavior can be described qualitatively were developed within this work. A kinetically stabilized, adduct-free titanocene was prepared using the di(tert-butyl)cyclopentadienyl ligand, and its reactivity towards small molecules, e.g. CO, N2 and H2, was investigated.
As part of the reactivity studies, 2,2'-bipyridyl adducts of the Cp'2Ti fragment were also synthesized and their magnetic properties were investigated. By varying the 2,2'-bipyridyl system, the singlet-triplet splitting in this system can be tuned in a targeted manner.
Due to tremendous improvements of high-performance computing resources as well as numerical advances, computational simulation has become a common tool for modern engineers. Nowadays, the simulation of complex physics increasingly substitutes for a large number of physical experiments. While the vast compute power of large-scale high-performance systems has enabled the simulation of more complex numerical equations, handling the ever-increasing amount of data with high spatial and temporal resolution poses new challenges to scientists. Huge hardware and energy costs call for efficient utilization of high-performance systems. At the same time, the increasing complexity of simulations raises the risk of failing simulations, so that a single simulation may have to be restarted multiple times. Computational steering is a promising approach for interacting with running simulations that can prevent such simulation crashes. The growing data volumes also widen the gap between the amount of data that can be computed and the amount that can be processed; extreme-scale simulations produce more data than can even be stored. In this thesis, I propose several methods that enhance the process of steering, exploring, visualizing, and analyzing ongoing numerical simulations.
In this study, 27 marine bacteria were screened for the production of bioactive metabolites. Two strains from the surface of the soft coral Sinularia polydactyla, collected from the Red Sea, and three strains from different habitats in the North Sea were selected as promising candidates for the isolation of antimicrobial substances. A total of 50 compounds were isolated from the selected bacterial strains. Of these metabolites, 25 substances were known from natural sources, 10 substances were known as synthetic chemicals and are herein reported as new natural products, and 13 metabolites are new. Two substances are still under elucidation. All new compounds were chemically and biologically characterized. Pseudoalteromonas sp. T268 produced simple phenol and oxindole derivatives. The production of homogentisic acid and WZ 268S-6 by this bacterium was affected by salinity stress. WZ 268S-6 shows antimicrobial and cytotoxic activities. Its target is still unclear. The isolation of isatin from this strain points to the possibility of using this substance as a chemotaxonomic marker for Alteromonas-like bacteria. A large number of nitro-substituted aromatic compounds were isolated from both Salegentibacter sp. T436 and Vibrio sp. WMBA1-4. They may be derived from the metabolism of phenylalanine or tyrosine. From Salegentibacter sp. T436, 24 compounds were isolated, of which four compounds are new and six compounds were known as synthetic chemicals. WZ 436S-16 (dinitro-β-styrene) is the most potent antimicrobial and cytotoxic compound. It inhibits the oxygen uptake of N. coryli and causes apoptosis in human promyelocytic leukaemia (HL-60) cells. From Vibrio sp. WMBA1-4, 13 new alkaloids were isolated, of which four were known as synthetic products and are herein reported as new substances from natural sources. The majority of these compounds show antimicrobial and cytotoxic activities.
The cytotoxic activity of WMB4S-11 against mouse lymphocytic leukaemia (L1210) cells is due to inhibition of protein biosynthesis, while the remaining cytotoxic alkaloids have no effect on the synthesis of macromolecules in this cell line. The antibacterial activity of WMB4S-2, -11, -12 and -13 and the antifungal activity of WMB4S-9 are not due to inhibition of macromolecule biosynthesis or of the oxygen uptake of the microorganisms. The biological activity of these nitro-aromatic compounds from Salegentibacter sp. T436 and Vibrio sp. WMBA1-4 is influenced by the presence of a nitro group and its position with respect to the hydroxyl group, the number of nitro groups, and the type of substitution on the side chain. In diaryl-maleimide derivatives, the type and position of substituents on the aryl rings and on the maleimide moiety, as well as the hydrophobicity of the aryl ring itself, lead to variations in the extent of the bioactivity of these derivatives. This is the first time that vibrindole (WMB4S-14) and turbomycin B or its non-cationic form (WMB4S-15), isolated from Vibrio sp., are reported as cytotoxic compounds. WMB4S-15 inhibits the biosynthesis of macromolecules in L1210 cells. The structural similarity between some of the metabolites in this study and previously reported compounds from sponges, ascidians, and bryozoans indicates that a microbial origin of these compounds must be considered.
In this text we survey some large deviation results for diffusion processes. The first chapters present results from the literature such as the Freidlin-Wentzell theorem for diffusions with small noise. We use these results to prove a new large deviation theorem about diffusion processes with strong drift. This is the main result of the thesis. In the later chapters we give another application of large deviation results, namely to determine the exponential decay rate for the Bayes risk when separating two different processes. The final chapter presents techniques which help to experiment with rare events for diffusion processes by means of computer simulations.
In this thesis we explicitly solve several portfolio optimization problems in a very realistic setting. The fundamental assumptions on the market setting are motivated by practical experience and the resulting optimal strategies are challenged in numerical simulations.
We consider an investor who wants to maximize expected utility of terminal wealth by trading in a high-dimensional financial market with one riskless asset and several stocks.
The stock returns are driven by a Brownian motion and their drift is modelled by a Gaussian random variable. We consider a partial information setting, where the drift is unknown to the investor and has to be estimated from the observable stock prices in addition to some analyst's opinion, as proposed in [CLMZ06]. The best estimate given these observations is provided by the well-known Kalman-Bucy filter. We then consider an innovations process to transform the partial information setting into a market with complete information and an observable Gaussian drift process.
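A discretized scalar analogue of the filtering step may help fix ideas (a hypothetical sketch; the thesis uses the continuous-time Kalman-Bucy filter with additional expert opinions): estimating a constant Gaussian drift mu from observed log-returns r_k = mu dt + sigma sqrt(dt) z_k reduces to a conjugate Gaussian update of the mean and variance of the drift estimate.

```python
import numpy as np

def estimate_drift(returns, dt, sigma, mu0, var0):
    """Kalman/Bayesian update of a Gaussian belief N(mu, var) about the drift.

    Each observation r ~ N(mu*dt, sigma^2*dt); the posterior stays Gaussian.
    """
    mu, var = mu0, var0
    for r in returns:
        obs_var = sigma**2 * dt
        K = var * dt / (var * dt**2 + obs_var)   # Kalman gain
        mu = mu + K * (r - mu * dt)              # innovation update
        var = var * obs_var / (var * dt**2 + obs_var)
    return mu, var
```

The shrinking posterior variance mirrors the role of the conditional variance in the continuous-time filter: early on the estimate leans on the prior (or the analyst's opinion), and with more price observations it concentrates around the realized drift.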
The investor is restricted to portfolio strategies satisfying several convex constraints.
These constraints can be due to legal restrictions, fund design or clients' specifications. We cover in particular no-short-selling and no-borrowing constraints.
One popular approach to constrained portfolio optimization is the convex duality approach of Cvitanic and Karatzas. In [CK92] they introduce auxiliary stock markets with shifted market parameters and obtain a dual problem to the original portfolio optimization problem that can be easier to solve than the primal problem.
Hence we consider this duality approach and using stochastic control methods we first solve the dual problems in the cases of logarithmic and power utility.
Here we apply a reverse separation approach in order to obtain areas where the corresponding Hamilton-Jacobi-Bellman differential equation can be solved. It turns out that these areas have a straightforward interpretation in terms of the resulting portfolio strategy. The areas differ between active and passive stocks, where active stocks are invested in, while passive stocks are not.
Afterwards we solve the auxiliary market given the optimal dual processes in a more general setting, allowing for various market settings and various dual processes.
We obtain explicit analytical formulas for the optimal portfolio policies and provide an algorithm that determines the correct formula for the optimal strategy in any case.
We also show optimality of our resulting portfolio strategies in different verification theorems.
Subsequently we test our theoretical results in a historical and an artificial simulation whose settings are closer to a real-world market than the one used to derive them. We still obtain compelling results, indicating that our optimal strategies can outperform common benchmarks in real markets.
In group theory, a large and important family of infinite groups is given by the algebraic groups, whose structure is already well understood. In representation theory, the study of the unipotent variety in algebraic groups - and, by extension, the study of the nilpotent variety in the associated Lie algebra - is of particular interest.
Let \( G \) be a connected reductive algebraic group over an algebraically closed field \(\mathbf{k}\), and let \(\operatorname{Lie}(G)\) be its associated Lie algebra. By now, the orbits in the nilpotent and unipotent variety under the action of \(G\) are completely known and can be found for example in a book of Liebeck and Seitz. There exists, however, no uniform description of these orbits that holds in both good and bad characteristic. With this in mind, Lusztig defined a partition of the unipotent variety of \(G\) in 2011. Equivalently, one can consider certain subsets of the nilpotent variety of \(\operatorname{Lie}(G)\) called the nilpotent pieces. This approach appears in the same paper by Lusztig in which he explicitly determines the nilpotent pieces for simple algebraic groups of classical type.
The nilpotent pieces for the exceptional groups of type \(G_2, F_4, E_6, E_7,\) and \(E_8\) in bad characteristic have not yet been determined.
This thesis gives an introduction to the definition of the nilpotent pieces and presents a solution to this problem for groups of type \(G_2, F_4, E_6\), and partly for \(E_7\). The solution relies heavily on computational work which we elaborate on in later chapters.
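The thesis treats the exceptional types; as a small side illustration of how nilpotent orbits are classified combinatorially in classical type A (not part of the thesis itself), the \(GL_n\)-orbit of a nilpotent matrix is determined by its Jordan type, which can be recovered from the ranks of its powers:

```python
import numpy as np

def jordan_type(N):
    """Partition of n encoding the GL_n-orbit of a nilpotent matrix N.
    Uses the fact that the number of Jordan blocks of size >= k equals
    rank(N^(k-1)) - rank(N^k). Assumes N is nilpotent."""
    n = N.shape[0]
    ranks = []
    P = np.eye(n)
    while True:
        ranks.append(np.linalg.matrix_rank(P))
        if ranks[-1] == 0:
            break
        P = P @ N
    # blocks_ge[k] = number of Jordan blocks of size >= k+1
    blocks_ge = [ranks[k] - ranks[k + 1] for k in range(len(ranks) - 1)]
    partition = []
    for size in range(len(blocks_ge), 0, -1):
        nxt = blocks_ge[size] if size < len(blocks_ge) else 0
        partition += [size] * (blocks_ge[size - 1] - nxt)
    return partition

# Nilpotent 5x5 matrix with Jordan blocks of sizes 3 and 2:
N = np.zeros((5, 5))
N[0, 1] = N[1, 2] = N[3, 4] = 1.0
```

Calling `jordan_type(N)` on this matrix recovers the partition [3, 2] of 5, i.e. its orbit.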
The biodiversity of the cyanobacterial lichen flora of Vietnam is chronically understudied. Previous studies often neglected lichens inhabiting lowlands, especially outcrops and sand dunes, which are common habitats in Vietnam.
A cyanolichen collection was gathered from the lowlands of central and southern Vietnam to study their diversity and distribution. At the same time, photobionts cultured from those lichens were used for a polyphasic taxonomic approach.
A total of 66 cyanolichens were recorded from lowland regions of central and southern Vietnam, doubling the number of cyanolichens known for the country. 80% of them are new records for Vietnam, among them the newly described species Pyrenopsis melanophthalma and two new, as yet unidentified lichinacean taxa.
A notable floristic segregation by habitat was evident in the communities. Saxicolous Lichinales dominated coastal outcrops, accounting for 56% of the lichen species richness there. Lecanoralean cyanolichens and basidiolichens were found in the lowland forests. Precipitation correlated negatively with species richness in this study, indicating a competitive relationship.
Eleven cyanobacterial strains including 8 baeocyte-forming members of the genus Chroococcidiopsis and 3 heterocyte-forming species of the genera Nostoc and Scytonema were successfully isolated from lichens.
Phylogenetic and morphological analyses indicated that Chroococcidiopsis was the sole photobiont in Peltula. New morphological characters were found in two Chroococcidiopsis strains: (1) the purple cell content of a photobiont strain isolated from a new lichinacean taxon, and (2) pseudofilamentous growth by binary division in a strain isolated from Porocyphus dimorphus.
With respect to heterocyte-forming cyanobionts, Scytonema was confirmed as the photobiont of the ascolichen Heppia lutosa using the polyphasic method. The genus Scytonema in the basidiolichen genus Cyphellostereum was examined morphologically in lichen thalli. For the first time, the intracellular haustorial system of the basidiolichen genus Cyphellostereum was observed and investigated.
Phylogenetic analysis of Nostoc photobiont strains from Pannaria tavaresii and Parmeliella brisbanensis indicated high photobiont selectivity in Parmeliella brisbanensis samples from different regions of the world, whereas low photobiont selectivity occurred among Pannaria tavaresii samples from different geographical regions.
This dissertation is thus an important contribution to the lichen flora of Vietnam and significantly improves the current knowledge of cyanolichens in this country.
In current practices of system-on-chip (SoC) design a trend can be observed to integrate more and more low-level software components into the system hardware at different levels of granularity. The implementation of important control functions and communication structures is frequently shifted from the SoC’s hardware into its firmware. As a result, the tight coupling of hardware and software at a low level of granularity raises substantial verification challenges since the conventional practice of verifying hardware and software independently is no longer sufficient. This calls for new methods for verification based on a joint analysis of hardware and software.
This thesis proposes hardware-dependent models of low-level software for performing formal verification. The proposed models are conceived to represent the software integrated with its hardware environment according to current SoC design practices. Two hardware/software integration scenarios are addressed in this thesis, namely, speed-independent communication of the processor with its hardware periphery and cycle-accurate integration of firmware into an SoC module. For speed-independent hardware/software integration, an approach for equivalence checking of hardware-dependent software is proposed and evaluated. For the case of cycle-accurate hardware/software integration, a model for hardware/software co-verification has been developed and experimentally evaluated by applying it to property checking.
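Equivalence checking in the thesis operates on formal hardware/software models; as a drastically simplified analogue of the underlying idea, one can exhaustively check that an optimized firmware-style implementation agrees with a reference specification on every input of a small input space. The parity example below is illustrative only:

```python
def parity_spec(x):
    """Reference specification: parity of an 8-bit value, bit by bit."""
    p = 0
    for i in range(8):
        p ^= (x >> i) & 1
    return p

def parity_impl(x):
    """Optimized implementation using XOR folding."""
    x ^= x >> 4
    x ^= x >> 2
    x ^= x >> 1
    return x & 1

# Exhaustive equivalence check over the full 8-bit input space:
assert all(parity_spec(x) == parity_impl(x) for x in range(256))
```

For realistic designs the input space is far too large for enumeration, which is why symbolic methods such as those developed in the thesis are needed.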
Nitrogen removal from wastewater is increasingly important to protect natural water sources and has proven a challenge for wastewater treatment plants in different countries. Strict discharge norms for nitrogen components and unfavourable wastewater quality are among the main challenges observed.
An example WWTP (450,000 PE(COD,120)), representative of these challenges (i.e. strict discharge norms for NH4-N and TN, partially unfavourable wastewater composition for upstream denitrification), was modelled with the software SIMBA. The model was calibrated and validated using different statistical parameters. The model was used for dynamic simulation to test different operational and automation strategies to improve nitrogen removal.
The tested strategies considered the bypass of primary clarifiers, changes in the configuration of the anaerobic, anoxic, and aerobic reactors, changes in the aeration system (DO setpoint, inclusion of online sensors, and different control approaches in the aeration loop), the adjustment of the internal recirculation rate, and the implementation of intermittent denitrification, among others. The addition of an anaerobic digestion stage, considering the adjustment of the sludge age in the biological treatment and the treatment of the centrate (including the nitrogen backload), was tested as well.
To evaluate the strategies' performance, an evaluation criteria chart was created to select the best strategies from an overall perspective, considering the improvements or deterioration in norm compliance, aeration requirements, pollutant emissions to the environment, and biogas production (if applicable).
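An evaluation chart of this kind amounts to a weighted scoring of strategies across criteria. The sketch below uses hypothetical criteria, weights, and scores; none of these numbers are taken from the thesis:

```python
# Hypothetical criteria and weights, for illustration only.
WEIGHTS = {"norm_compliance": 0.4, "aeration_energy": 0.2,
           "emissions": 0.3, "biogas": 0.1}

def score(strategy):
    """Weighted sum of normalized criterion scores in [0, 1]."""
    return sum(WEIGHTS[c] * strategy[c] for c in WEIGHTS)

# Made-up strategy scores (higher = better on that criterion):
strategies = {
    "baseline":        {"norm_compliance": 0.5, "aeration_energy": 0.5,
                        "emissions": 0.5, "biogas": 0.0},
    "NH4_control":     {"norm_compliance": 0.8, "aeration_energy": 0.7,
                        "emissions": 0.6, "biogas": 0.0},
    "intermittent_DN": {"norm_compliance": 0.9, "aeration_energy": 0.6,
                        "emissions": 0.7, "biogas": 0.0},
}
best = max(strategies, key=lambda s: score(strategies[s]))
```

The weights encode the relative importance of norm compliance versus energy and emissions; changing them changes which strategy is selected, which is why objective criteria matter.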
The best overall results were obtained with strategies that improved the denitrification capacity (e.g. increasing the anoxic volume by reducing the aerobic volume), adjusted the air requirements (e.g. including an NH4-N online measurement in the aeration control loop), and provided flexibility (e.g. intermittent denitrification). With the right combination of strategies, norm compliance improved significantly, e.g. from 31 to 4 exceedances per year, and the emissions to the environment were reduced as well.
The inclusion of an anaerobic digestion stage for sewage sludge treatment challenges nitrogen removal even further, but similar optimisation strategies based on the same approach were able to improve norm compliance.
However, none of the combinations, with or without anaerobic digestion, achieved total norm compliance. Therefore, a different technology was designed: the A2/O system in the computer model was replaced by an SBR treatment stage, providing increased operational flexibility. This configuration showed the best results based on the previously defined criteria, achieving total norm compliance.
Based on the lessons learned from the design, redesign, and tested strategies, a guideline for an integral optimisation of nitrogen removal was developed. It rests on six pillars: a detailed WWTP operational analysis, the use of dynamic simulation as a tool, the testing of known and simple optimisation approaches, the definition of clear and objective evaluation criteria, the consideration of anaerobic digestion (including the nitrogen backload), and finally the re-evaluation of the type of technology for biological wastewater treatment.
Induction welding can be used both for welding thermoplastic fiber-reinforced polymer composites and for joining metal to fiber-reinforced polymer composites. After examining the possibilities of such joints, it was found that the joint quality is determined by the surface pretreatment of the metallic and the polymeric joining partner and by the process conditions.
Several new tools (e.g. special specimen holders, a temperature-controlled pressure stamp, a heating and consolidation roller) were developed and integrated into the induction welding machine for producing metal/fiber-reinforced polymer joints. Topographic analyses by scanning electron microscopy and laser profilometry show a strong influence of the pretreatment methods on the surface roughness. In addition, the pretreatment changes the physical (surface energy) and chemical properties (atomic concentration). The joint properties were first investigated by lap-shear tests and, in parallel, by surface analyses. The results of these investigations show:
• The pretreatment methods corundum blasting and acid pickling of the metallic joining partner lead to the highest joint strengths. Atmospheric plasma cleaning of the polymeric joining partner yields an increase in lap-shear strength of about 10 % as well as a narrower confidence interval.
• The lap-shear strength depends on the process pressure and thus on the flow behavior of the polymer in the joining zone.
• The orientation of the test force relative to the fiber orientation has no influence on the lap-shear strength of the fiber-reinforced materials used.
• The plain weave, with more polymer-rich zones, leads to a slight increase in lap-shear strength compared to a 1/4 satin weave. Owing to fiber displacement, the non-crimp fabric yields strengths similar to the plain weave. This shows that the joint strength is determined by the polymer.
• The lap-shear strength increases considerably with an additional polymer film in the joining zone. Micrographs show a polymer interlayer thickness of 5 to 20 μm for AlMg3-CF/PA66.
• Through the targeted combination of pretreatment methods (corundum blasting with an additional polymer film), the lap-shear strength can be doubled compared with the untreated state, up to 14 MPa for AlMg3-CF/PA66 joints and 18 MPa for DC01-CF/PEEK joints.
Further investigations of the process parameters showed for DC01-CF/PEEK joints that the following settings lead to a further increase of the lap-shear strength to 19 MPa:
• a starting temperature of the pressure stamp of 370 °C,
• a holding time of 7 minutes,
• a cooling rate of 6 °C/min.
For AlMg3-CF/PA66, a stamp temperature of 10 °C led to a lap-shear strength of 14.5 MPa. These two lap-shear strengths are only 10 - 15 % lower than those of adhesive bonds produced under optimal conditions.
First investigations show that galvanic corrosion of metal/FRP joints leads to a rapid decrease in lap-shear strength. For this purpose, the specimens were stored in water for three weeks. With direct contact between carbon fiber and aluminum, this is explained by corrosion in the joining zone; the lap-shear strengths of these specimens drop to 5 MPa. Specimens with a glass-fiber ply as an insulating layer show no corrosion products, and their lap-shear strength decreases by 30 % to 8 - 9 MPa.
For specimens stored in salt water, the galvanic corrosion is much more pronounced. After only one week, the acetone-cleaned specimens with additional polymer retain a residual lap-shear strength of only 3 to 4 MPa. The corundum-blasted specimens show corrosion products at the edge of and within the joining zone, but still exhibit a lap-shear strength of about 10 MPa. The glass-fiber-insulated specimens show neither corrosion products nor a decrease in lap-shear strength.
Dynamic thermographic analyses were carried out in different ambient gases in order to determine the decomposition temperature of the fiber-reinforced polymer. For CF/PA66 this did not enlarge the process window, since the decomposition is mainly thermal and not thermo-oxidative. The measured decomposition temperature of CF/PEEK in air was 550 °C. For CF/PA66 the enlargement of the process window is small, and welding in nitrogen showed no increase in lap-shear strength either. Nevertheless, induction welding under inert gas has great potential for saturated hydrocarbons such as glass-fiber-reinforced polypropylene, whose decomposition temperature increased from 230 °C in air to 390 °C in nitrogen.
A demonstrator consisting of an aluminum profile and a CF/PA66 plate was produced, demonstrating that the acquired knowledge can also be applied industrially. The inductive heating was successfully reproduced by means of analytical models and FE computations.
It is well known that the microscopic structure strongly influences the macroscopic properties of materials. Moreover, advances in imaging technologies make it possible to capture the complexity of structures at ever smaller scales. Therefore, more sophisticated image analysis techniques are needed.
This thesis provides tools to geometrically characterize different types of three-dimensional
structures with applications to industrial production and to materials science. Our goal is to
enhance methods that allow the extraction of geometric features from images and the automatic
processing of the information.
In particular, we investigate which characteristics are sufficient and necessary to infer the desired information, such as particle classification for technical cleanliness and the fitting of stochastic models in materials science.
In automotive production lines, dirt particles collect on the surfaces of mechanical components. Residual dirt may reduce the performance and durability of the assembled products. Geometric characterization of these particles makes it possible to assess their potential danger. While the current standards are based on 2d microscopic images, we extend the characterization to 3d.
In particular, we provide a collection of parameters that exhaustively describe size and shape
of three-dimensional objects and can be efficiently estimated from binary images. Furthermore,
we show that only a few features are sufficient to classify particles according to the standards
of technical cleanliness.
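Size and shape parameters of this kind can be estimated from binary images by voxel counting. The sketch below computes a few basic features of a single particle; the feature names and the selection are illustrative and do not reproduce the thesis's actual parameter collection:

```python
import numpy as np

def particle_features(mask, voxel=1.0):
    """Basic size/shape features of one particle in a 3d binary image.
    `mask` is a boolean array, `voxel` the edge length of a voxel."""
    idx = np.argwhere(mask)
    volume = len(idx) * voxel**3                 # voxel-count volume estimate
    extent = idx.max(0) - idx.min(0) + 1         # bounding box in voxels
    L = np.sort(extent)[::-1] * voxel            # box edge lengths, descending
    elongation = L[0] / L[2] if L[2] > 0 else np.inf
    return {"volume": volume, "box_length": L[0], "elongation": elongation}

# A 6x2x2 voxel "fiber-like" particle:
m = np.zeros((8, 4, 4), dtype=bool)
m[1:7, 1:3, 1:3] = True
f = particle_features(m)
```

Even such crude features already separate compact from elongated particles, which is the kind of distinction a cleanliness classification relies on.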
In the context of materials science, we consider two types of microstructures: fiber systems
and foams.
Stochastic geometry provides the foundations for versatile models able to capture the
geometry observed in the samples. To allow automatic model fitting, we need rules stating which
parameters of the model yield the best-fitting characteristics. However, the validity of such
rules strongly depends on the properties of the structures and on the choice of the model.
For instance, isotropic orientation distribution yields the best theoretical results for Boolean
models and Poisson processes of cylinders with circular cross sections. Nevertheless, fiber
systems in composites are often anisotropic.
Starting from analytical results from the literature, we derive formulae for anisotropic
Poisson processes of cylinders with polygonal cross sections that can be directly used in
applications. We apply this procedure to a sample of medium-density fiber board. Even
though the image resolution does not allow reliable estimation of the characteristics of single fibers,
we can fit Boolean models and Poisson cylinder processes. In particular, we show the complete
model fitting and validation procedure for cylinders with circular and square cross sections.
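A stationary Poisson cylinder process can be sampled directly. The sketch below uses a generic angular central Gaussian to model anisotropy of the cylinder directions; this parameterization and all numbers are illustrative and do not reproduce the orientation distributions or fitting formulae of the thesis:

```python
import numpy as np

def sample_cylinders(lam, box, beta, radius, length, rng):
    """Sample a stationary Poisson process of cylinders in a box.
    Directions follow an angular central Gaussian with anisotropy
    parameter beta (beta < 1: directions concentrate in the x-y plane,
    beta > 1: along the z-axis)."""
    vol = np.prod(box)
    n = rng.poisson(lam * vol)            # Poisson number of cylinders
    centers = rng.random((n, 3)) * box    # uniform center points in the box
    g = rng.standard_normal((n, 3)) * [1.0, 1.0, beta]
    dirs = g / np.linalg.norm(g, axis=1, keepdims=True)
    return centers, dirs, np.full(n, radius), np.full(n, length)

rng = np.random.default_rng(42)
centers, dirs, r, h = sample_cylinders(
    lam=5.0, box=np.array([10.0, 10.0, 10.0]), beta=0.2,
    radius=0.1, length=4.0, rng=rng)
```

With beta well below 1 the sampled directions lie close to the x-y plane, mimicking the anisotropy typical of pressed fiber materials.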
Different problems arise when modeling cellular materials. Motivated by the physics of foams,
random Laguerre tessellations are a good choice to model the pore system of foams.
Considering tessellations generated by systems of non-overlapping spheres makes it possible to control the
cell size distribution, but comes at the cost of losing an analytical description of the model.
Nevertheless, automatic model fitting can still be obtained by approximating the characteristics
of the tessellation depending on the parameters of the model. We investigate how to improve
the choice of the model parameters. Angles between facets and between edges had not been considered
so far. We show that the distributions of angles in Laguerre tessellations
depend on the model parameters; thus, including the moments of the angles still allows automatic
model fitting. Moreover, we propose an algorithm to estimate angles from images of real foams.
We observe that angles are matched well by random Laguerre tessellations even when they are not
employed to choose the model parameters. Then we concentrate on the edge length distribution:
Laguerre tessellations contain many more short edges than real foams. To deal with this problem,
we consider relaxed models. Relaxation refers to topological and structural modifications
of a tessellation in order to make it comply with Plateau's laws of mechanical equilibrium. We inspect
samples of different types of foams, closed and open cell foams, polymeric and metallic. By comparing
the geometric characteristics of the model and of the relaxed tessellations, we conclude that whether
the relaxation improves the edge length distribution strongly depends on the type of foam.
In the increasingly competitive public-cloud marketplace, improving the efficiency of data centers is a major concern. One way to improve efficiency is to consolidate as many VMs onto as few physical cores as possible, provided that performance expectations are not violated. However, as a prerequisite for increased VM densities, the hypervisor’s VM scheduler must allocate processor time efficiently and in a timely fashion. As we show in this thesis, contemporary VM schedulers leave substantial room for improvements in both regards when facing challenging high-VM-density workloads that frequently trigger the VM scheduler. As root causes, we identify (i) high runtime overheads and (ii) unpredictable scheduling heuristics.
To better support high VM densities, we propose Tableau, a VM scheduler that guarantees a minimum processor share and a maximum bound on scheduling delay for every VM in the system. Tableau combines a low-overhead, core-local, table-driven dispatcher with a fast on-demand table-generation procedure (triggered on VM creation/teardown) that employs scheduling techniques typically used in hard real-time systems. Further, we show that, owing to its focus on efficiency and scalability, Tableau provides comparable or better throughput than existing Xen schedulers in dedicated-core scenarios as are commonly employed in public clouds today.
Tableau also extends this design by providing the ability to use idle cycles in the system to perform low-priority background work, without affecting the performance of primary VMs, a common requirement in public clouds.
Finally, VM churn and workload variations in multi-tenant public clouds result in changing interference patterns at runtime, resulting in performance variation. In particular, variation in last-level cache (LLC) interference has been shown to have a significant impact on virtualized application performance in cloud environments. Tableau employs a novel technique for dealing with dynamically changing interference, which involves periodically regenerating tables with the same guarantees on utilization and scheduling latency for all VMs in the system, but having different LLC interference characteristics. We present two strategies to mitigate LLC interference: a randomized approach, and one that uses performance counters to detect VMs running cache-intensive workloads and selectively mitigate interference.
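The core of a table-driven dispatcher is an O(1) slot lookup in a precomputed table. The sketch below is a minimal illustration with a hand-written table and made-up VM names; Tableau's actual tables are generated on demand by real-time scheduling techniques and consulted core-locally:

```python
# Minimal sketch of core-local, table-driven dispatching.
SLOT_US = 1000                        # slot length in microseconds
TABLE = ["vmA", "vmA", "vmB", None,   # None = idle slot, usable for
         "vmA", "vmC", "vmB", None]   # low-priority background work

def dispatch(now_us):
    """O(1) lookup: which VM runs at absolute time now_us on this core."""
    slot = (now_us // SLOT_US) % len(TABLE)
    return TABLE[slot]
```

Because the table repeats with a fixed hyperperiod, each VM's processor share (here 3/8 of the core for vmA) and its maximum wait until its next slot are guaranteed by construction, with no scheduling heuristics at dispatch time.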
The present thesis is concerned with the simulation of the loading behaviour of both hybrid lightweight structures and piezoelectric mesostructures, with a special focus on solid interfaces on the meso scale. Furthermore, an analytical review of bifurcation modes of continuum-interface problems is included. The inelastic interface behaviour is characterised by elastoplastic, viscous, damaging, and fatigue-motivated models. For the related numerical computations, the Finite Element Method is applied. In this context, so-called interface elements play an important role. The simulation results are illustrated by numerous examples, some of which are correlated with experimental data.
Fragmentation of tropical rain forests is pervasive and results in various modifications of ecosystem functioning such as … It has long been noticed that the colony densities of a dominant herbivore in the neotropics - the leaf-cutting ant (LCA) - increase in fragmentation-related habitats like forest edges and small fragments; however, the reasons for this increase are not clear. The aim of the study was to test the hypothesis that bottom-up control of LCA populations is less effective in fragmented compared to continuous forests and thus explains the increase in LCA colony densities in these habitats. In order to test for less effective bottom-up control, I proposed four working hypotheses: that LCA colonies in fragmented habitats (1) find more palatable vegetation due to low plant defences, (2) forage on few dominant species, resulting in a narrow diet breadth, (3) possess small foraging areas, and (4) increase the herbivory rate at the colony level. The study was conducted in the remnants of the Atlantic rainforest in NE Brazil. Two fragmentation-related forest habitats were included: the edge of a 3500-ha continuous forest and the interior of a 50-ha forest fragment. The interior of the continuous forest served as a control habitat for the study. All working hypotheses can be generally accepted. The results indicate that the abundance of LCA host plant species in the habitats created by forest fragmentation, along with the weaker chemical defence of those species (especially the lack of terpenoids), allows ants to forage predominantly on palatable species and thus reduce foraging costs. This is supported by the narrower ant diet breadth in these habitats. Similarly, the small foraging areas in edge habitats and in small forest fragments indicate that there the ants do not have to go far to find suitable host species and thus save foraging costs.
Increased LCA herbivory rates indicate that the damage (i.e., the amount of harvested foliage) caused by LCA is greater in fragmentation-related habitats, which are more vulnerable to LCA herbivory due to the high availability of palatable plants and a low total amount of foliage (LAI). (1) Few plant defences, (2) a narrower ant diet breadth, (3) reduced colony foraging areas, and (4) increased herbivory rates clearly indicate weaker bottom-up control of LCA in fragmented habitats. Weak bottom-up control in fragmentation-related habitats decreases the foraging costs of an LCA colony, and the colonies might use the resulting surplus of energy to increase colony growth, reproduction, and turnover. If correct, this explains why fragmented habitats support more LCA colonies at a given time than continuous forest habitats. Further studies are urgently needed to estimate LCA colony growth and turnover rates. There are indications that edge effects of forest fragmentation might be more important in regulating LCA populations than area or isolation effects. This emphasizes the need to conserve forest fragments large enough not to fall below a critical size and to retain their regular shape. Weak bottom-up control of LCA populations has various consequences for forested ecosystems. I suggest a feedback loop between forest fragmentation and LCA population dynamics: the increased LCA colony densities, along with weaker bottom-up control, increase LCA herbivory pressure on the forest and thus inevitably amplify the deleterious effects of fragmentation. These effects include the direct consequences of leaf removal by ants and various indirect effects on ecosystem functioning. This study contributes to our understanding of how primary fragmentation effects, via the alteration of trophic interactions, may translate into higher-order effects on ecosystem functions.
In the past, information and knowledge dissemination was relegated to the
brick-and-mortar classrooms, newspapers, radio, and television. As these
processes were simple and centralized, the models behind them were well
understood and so were the empirical methods for optimizing them. In today’s
world, the internet and social media have become powerful tools for information
and knowledge dissemination: Wikipedia gets more than 1 million edits per day,
Stack Overflow has more than 17 million questions, 25% of the US population visits
Yahoo! News for articles and discussions, Twitter has more than 60 million
active monthly users, and Duolingo has 25 million users learning languages
online. These developments have introduced a paradigm shift in the process of
dissemination. Not only has the nature of the task moved from being centralized
to decentralized, but the developments have also blurred the boundary between
the creator and the consumer of the content, i.e., information and knowledge.
These changes have made it necessary to develop new models, which are better
suited to understanding and analysing the dissemination, and to develop new
methods to optimize them.
At a broad level, we can view the participation of users in the process of
dissemination as falling in one of two settings: collaborative or competitive.
In the collaborative setting, the participants work together in crafting
knowledge online, e.g., by asking questions and contributing answers, or by
discussing news or opinion pieces. In contrast, as competitors, they vie for
the attention of their followers on social media. This thesis investigates both
these settings.
The first part of the thesis focuses on the understanding and analysis of
content being created online collaboratively. To this end, I propose models for
understanding the complexity of the content of collaborative online discussions
by looking exclusively at the signals of agreement and disagreement expressed
by the crowd. This leads to a formal notion of complexity of opinions and
online discussions. Next, I turn my attention to the participants of the crowd,
i.e., the creators and consumers themselves, and propose an intuitive model for
both the evolution of their expertise and the value of the content they
collaboratively contribute to and learn from on online Q&A forums. The
second part of the thesis explores the competitive setting. It provides methods
to help the creators gain more attention from their followers on social media.
In particular, I consider the problem of controlling the timing of the posts of
users with the aim of maximizing the attention that their posts receive under
the idealized setting of full-knowledge of timing of posts of others. To solve
it, I develop a general reinforcement learning based method which is shown to
have good performance on the when-to-post problem and which can be employed in
many other settings as well, e.g., determining the reviewing times for spaced
repetition which lead to optimal learning. The last part of the thesis looks at
methods for relaxing the idealized assumption of full knowledge. The basic
question of determining the visibility of one’s posts in the followers’ feeds
becomes difficult to answer at internet scale, where constantly observing the feeds
of all the followers is infeasible. I explore the links of this problem to
the well-studied problem of web-crawling to update a search engine’s index and
provide algorithms with performance guarantees for feed observation policies
which minimize the error in the estimate of visibility of one’s posts.
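A standard idealization behind such visibility questions: if a follower's feed is ordered by recency and competing users post as a Poisson process with rate lam, a post remains at the top of that feed after time t with probability exp(-lam * t). This toy formula (not the thesis's full model) can be stated as:

```python
import math

def prob_on_top(lam, t):
    """Probability a post is still at the top of a recency-ordered feed
    t time units after posting, when competitors post as a Poisson
    process with rate lam. A standard idealization, used here only to
    illustrate why posting times matter for visibility."""
    return math.exp(-lam * t)

# With competitors posting 2 times per hour, after half an hour a post
# is still on top with probability e^(-1), about 0.37.
p = prob_on_top(lam=2.0, t=0.5)
```

The expected time a post spends at the top is then 1/lam, which is why posting when competitors are quiet (small lam) yields more attention, the intuition the when-to-post methods exploit.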
The task of printed Optical Character Recognition (OCR), though considered ``solved'' by many, still poses several challenges. The complex grapheme structure of many scripts, such as Devanagari and Urdu Nastaleeq, greatly lowers the performance of state-of-the-art OCR systems.
Moreover, the digitization of historical and multilingual documents still requires much investigation. The lack of benchmark datasets further complicates the development of reliable OCR systems. This thesis aims to address some of these challenges using contemporary machine learning technologies. Specifically, Long Short-Term Memory (LSTM) networks have been employed to OCR modern as well as historical monolingual documents. The excellent OCR results obtained on these have led us to extend their application to multilingual documents.
The first major contribution of this thesis is to demonstrate the usability of LSTM networks for monolingual documents. The LSTM networks yield very good OCR results on various modern and historical scripts, without using sophisticated features and post-processing techniques. The set of modern scripts include modern English, Urdu Nastaleeq and Devanagari. To address the challenge of OCR of historical documents, this thesis focuses on Old German Fraktur script, medieval Latin script of the 15th century, and Polytonic Greek script. LSTM-based systems outperform the contemporary OCR systems on all of these scripts. To cater for the lack of ground-truth data, this thesis proposes a new methodology, combining segmentation-based and segmentation-free OCR approaches, to OCR scripts for which no transcribed training data is available.
Another major contribution of this thesis is the development of a novel multilingual OCR system. A unified framework for dealing with different types of multilingual documents has been proposed. The core motivation behind this generalized framework is the human reading ability to process multilingual documents, where no script identification takes place.
In this design, the LSTM networks recognize multiple scripts simultaneously without the need to identify different scripts. The first step in building this framework is the realization of a language-independent OCR system which recognizes multilingual text in a single step. This language-independent approach is then extended to script-independent OCR that can recognize multiscript documents using a single OCR model. The proposed generalized approach yields a low error rate (1.2%) on a test corpus of English-Greek bilingual documents.
In summary, this thesis aims to extend the research in document recognition from modern Latin scripts to Old Latin, to Greek, and to other ``under-privileged'' scripts such as Devanagari and Urdu Nastaleeq.
It also attempts to add a different perspective on dealing with multilingual documents.
Cell migration is essential for embryogenesis, wound healing, immune surveillance, and progression of diseases, such as cancer metastasis. For the migration to occur, cellular structures such as actomyosin cables and cell-substrate adhesion clusters must interact. As cell trajectories exhibit a random character, so must such interactions. Furthermore, migration often occurs in a crowded environment, where the collision outcome is determined by altered regulation of the aforementioned structures. In this work, guided by a few fundamental attributes of cell motility, we construct a minimal stochastic cell migration model from the ground up. The resulting model couples a deterministic actomyosin contractility mechanism with stochastic cell-substrate adhesion kinetics, and yields a well-defined piecewise deterministic process. The signaling pathways regulating the contractility and adhesion are considered as well. The model is extended to include cell collectives. Numerical simulations of single cell migration reproduce several experimentally observed results, including anomalous diffusion, tactic migration, and contact guidance. The simulations of colliding cells explain the observed outcomes in terms of contact-induced modification of contractility and adhesion dynamics. These explained outcomes include modulation of collision response and group behavior in the presence of an external signal, as well as invasive and dispersive migration. Moreover, from the single cell model we deduce a population scale formulation for the migration of non-interacting cells. In this formulation, the relationships concerning actomyosin contractility and adhesion clusters are maintained. Thus, we construct a multiscale description of cell migration, whereby single, collective, and population scale formulations are deduced from the relationships on the subcellular level in a mathematically consistent way.
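The coupling of a deterministic flow with stochastic switching is the defining feature of a piecewise deterministic process. A toy sketch of that structure (hypothetical rates and a made-up one-dimensional flow, not the thesis's model): a binary adhesion state jumps between bound and unbound as a Markov process, while the cell speed relaxes deterministically toward a target set by the current state.

```python
import random

def simulate_pdmp(t_end=10.0, dt=0.01, k_on=2.0, k_off=1.0, seed=1):
    """Toy piecewise deterministic process: a binary adhesion state s
    switches with rates k_on/k_off (Markov jumps); between jumps the
    speed v follows the deterministic flow dv/dt = s - v."""
    rng = random.Random(seed)
    s, v = 0, 0.0
    trajectory = []
    n_steps = int(round(t_end / dt))
    for i in range(n_steps):
        v += (s - v) * dt                              # deterministic relaxation
        if rng.random() < (k_on if s == 0 else k_off) * dt:
            s = 1 - s                                  # stochastic adhesion switch
        trajectory.append((i * dt, s, v))
    return trajectory

traj = simulate_pdmp()
print(len(traj), max(v for _, _, v in traj))
```

Between jumps the dynamics are fully deterministic; randomness enters only through the jump times, which is precisely what makes such processes analytically tractable compared to diffusion models.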
Compared to canonical model organisms, the genetic toolbox of Kinetoplastid parasites has a considerable gap in the transgenic techniques available. The implementation of the CRISPR/Cas9 technology is poised to transform the way we perform genetic manipulations and offers a new and exciting horizon for molecular parasitology. In this study, we use the Kinetoplastid parasite Leishmania tarentolae as a model organism. This unicellular eukaryote is an attractive model for both basic and applied research. Understanding Leishmania’s basic biology is valuable for pinpointing differences from the host that might help to treat infectious diseases. Furthermore, it also provides new examples of non-conserved mechanisms that will help to understand the fundamental principles of the biology of eukaryotes and their evolution. In this work, the CRISPR/Cas9 system was used to study mitochondrial protein import.
Here I show the efficacy of CRISPR/Cas9 in generating knockout and knockin mutants. The proof-of-concept gene PF16 was used to generate immotile knockout parasites and fluorescent knockin mutants fused with mCherry. The APRT gene was also knocked out, conferring resistance to APP.
In addition, I generated endogenous mutants of a constituent of the mitochondrial import machineries, the sulfhydryl oxidoreductase Erv. I showed that the KISS domain and cysteine 17 are dispensable for survival, ruling out that these features underlie the essential function(s) of Erv. I report that the ERV gene and the intervening sequences of its shuttle-pair cysteines are refractory to ablation and modification, respectively, indicating that they are essential for survival. I also generated Erv interactomes using full-length and mutant (ErvΔKISS) baits, revealing candidates with hitherto unknown functions that might be related to Erv function.
I also tested the glmS riboswitch and generated endogenous mutants with CRISPR/Cas9. We asked whether it was possible in Leishmania to obtain knockdown mutants with this technique. The evidence of this study indicates that the system is inefficient in provoking a knockdown phenotype for the genes characterized.
An alternative negative marker was also developed in this work. I propose the APRT gene as a novel and efficient counter-selectable marker compared to the current yFCU and TK genes. The implementation of this system could enable the first gene-shuffling experiments, so far not feasible in Leishmania, further highlighting the value of this model organism.
Materials in general can be divided into insulators, semiconductors and conductors, depending on their degree of electrical conductivity. Polymers are classified as electrically insulating materials, having electrical conductivity values lower than 10^-12 S/cm. Due to their favourable characteristics, e.g. their good physical properties and their low density, which results in weight reduction, polymers are also considered for applications where a certain degree of conductivity is required. The main aim of this study was to develop electrically conductive composite materials based on an epoxy (EP) matrix, and to study their thermal, electrical, and mechanical properties. The target values of electrical conductivity were mainly in the range of electrostatic discharge protection (ESD, 10^-9 to 10^-6 S/cm).
Carbon fibres (CF) were the first type of conductive filler used. It was established that the fibre aspect ratio has a significant influence on the electrical properties of the fabricated composite materials: with longer CF, the percolation threshold could be reached at lower concentrations. In addition to the homogeneous CF/EP composites, graded samples were also developed. By the use of a centrifugation method, the CF formed a graded distribution along one dimension of the samples. The effect of the different processing parameters on the resulting graded structures, and consequently on the gradients in the electrical and mechanical properties, was systematically studied.
An intrinsically conductive polyaniline (PANI) salt was also used to enhance the electrical properties of the EP. In this case, a much lower percolation threshold was observed compared to that of CF. PANI was found to have, up to a particular concentration, a minimal influence on the thermal and mechanical properties of the EP system.
Furthermore, the two above-mentioned conductive fillers were jointly added to the EP matrix. Improved electrical and mechanical properties were observed for this combination, and a synergistic effect between the two fillers was found with regard to the electrical conductivity of the composites.
The last part of this work was concerned with the application of existing theoretical models for the prediction of the electrical conductivity of the developed polymer composites. A good correlation between the simulations and the experiments was observed.
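Theoretical models for filler/matrix composites commonly describe the conductivity above the percolation threshold with a power law, sigma = sigma_0 * (phi - phi_c)^t. A minimal sketch of that relation (all parameter values illustrative, not the values fitted in this work):

```python
def percolation_conductivity(phi, phi_c=0.01, sigma_0=1.0, t=2.0):
    """Classical percolation power law sigma = sigma_0 * (phi - phi_c)**t
    above the threshold phi_c; treated as insulating (matrix-dominated)
    below it. phi is the filler volume fraction; values are illustrative."""
    if phi <= phi_c:
        return 0.0
    return sigma_0 * (phi - phi_c) ** t

for phi in (0.005, 0.02, 0.05):
    print(phi, percolation_conductivity(phi))
```

Fitting phi_c and the exponent t to measured conductivity-versus-concentration data is the usual way such models are compared with experiments, as done in the last part of this study.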
This thesis focuses on dealing with some new aspects of continuous time portfolio optimization by using the stochastic control method.
First, we extend the Busch-Korn-Seifried model for a large investor by using the Vasicek model for the short rate, and solve the resulting problem explicitly for two types of intensity functions.
Next, we justify the existence of the constant proportion portfolio insurance (CPPI) strategy in a framework containing a stochastic short rate and a Markov switching parameter. The effect of the Vasicek short rate on the CPPI strategy was studied by Horsky (2012). This part of the thesis extends his research by including a Markov switching parameter; the generalization is based on the Bäuerle-Rieder investment problem. Explicit solutions are obtained both for the portfolio problem without the Money Market Account and for the portfolio problem with the Money Market Account.
Finally, we apply the method used in the Busch-Korn-Seifried investment problem to explicitly solve the portfolio optimization problem with a stochastic benchmark.
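The CPPI strategy keeps the risky exposure proportional to the cushion, i.e., the portfolio value above a guaranteed floor. A minimal discrete-time sketch of that mechanism (constant rates and an illustrative multiplier, not the stochastic short-rate, regime-switching setting treated in the thesis):

```python
def cppi_step(value, floor, multiplier, risky_return, safe_return):
    """One rebalancing step of CPPI: invest multiplier * cushion in the
    risky asset and the remainder at the safe rate."""
    cushion = max(value - floor, 0.0)
    exposure = min(multiplier * cushion, value)  # cap: no leverage beyond value
    safe = value - exposure
    return exposure * (1 + risky_return) + safe * (1 + safe_return)

v, floor = 100.0, 80.0
for r in (0.05, -0.10, 0.02):  # hypothetical risky returns per period
    floor *= 1.01              # floor accrues at the safe rate of 1%
    v = cppi_step(v, floor, multiplier=3.0, risky_return=r, safe_return=0.01)
print(round(v, 2))
```

Because the exposure shrinks with the cushion, the portfolio value stays above the floor in this discrete sketch as long as the risky loss per period does not exceed 1/multiplier; in continuous time this protection is exact, which is the starting point for the existence results above.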
In this thesis we address two instances of duality in commutative algebra.
In the first part, we consider value semigroups of non-irreducible singular algebraic curves and their fractional ideals. These are submonoids of Z^n closed under minima, with a conductor, and fulfilling special compatibility properties on their elements. Subsets of Z^n fulfilling these three conditions are known in the literature as good semigroups and their ideals, and their class strictly contains the class of value semigroup ideals. We examine good semigroups both independently and in relation to their algebraic counterpart. In the combinatorial setting, we define the concept of a good system of generators, and we show that minimal good systems of generators are unique. On the algebraic side, we give an intrinsic definition of canonical semigroup ideals, which yields a duality on good semigroup ideals. We prove that this semigroup duality is compatible with the Cohen-Macaulay duality under taking values. Finally, using the duality on good semigroup ideals, we show a symmetry of the Poincaré series of good semigroups with special properties.
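As an illustration of the combinatorial side, one of the defining conditions — closure under componentwise minima — is easy to check on a finite set of values (a toy check only; a full good-semigroup test would also verify the conductor and the compatibility property):

```python
def closed_under_minima(values):
    """Check closure of a finite subset of Z^n under componentwise
    minima, one of the good-semigroup conditions described above."""
    pts = set(values)
    return all(
        tuple(min(a, b) for a, b in zip(p, q)) in pts
        for p in pts for q in pts
    )

print(closed_under_minima({(0, 0), (1, 1), (2, 1), (1, 3)}))  # closed
print(closed_under_minima({(0, 0), (2, 1), (1, 3)}))  # (1, 1) = min is missing
```

In the second example the componentwise minimum of (2, 1) and (1, 3) is (1, 1), which does not belong to the set, so the closure condition fails.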
In the second part, we treat Macaulay’s inverse system, a one-to-one correspondence
which is a particular case of Matlis duality and an effective method to construct Artinian k-algebras with chosen socle type. Recently, Elias and Rossi gave the structure of the inverse system of positive dimensional Gorenstein k-algebras. We extend their result by establishing a one-to-one correspondence between positive dimensional level k-algebras and certain submodules of the divided power ring. We give several examples to illustrate
our result.
Attention-awareness is a key topic for the upcoming generation of computer-human interaction. A human moves his or her eyes to visually attend to a particular region in a scene. Consequently, he or she can process visual information rapidly and efficiently without being overwhelmed by the vast amount of information from the environment. This physiological function, called visual attention, provides a computer system with valuable information about the user, from which it can infer his or her activity and the surrounding environment. For example, a computer can infer whether the user is reading text by analyzing his or her eye movements. Furthermore, it can infer with which object he or she is interacting by recognizing the object the user is looking at. Recent developments in mobile eye tracking technologies enable us to capture human visual attention in ubiquitous everyday environments. There are various types of applications where attention-aware systems may be effectively incorporated. Typical examples are augmented reality (AR) applications such as Wikitude which overlay virtual information onto physical objects. This type of AR application presents augmentative information about recognized objects to the user. However, if it presents information about all recognized objects at once, the overflow of information can be obtrusive to the user. As a solution to this problem, attention-awareness can be integrated into a system: if a system knows to which object the user is attending, it can present only the information of relevant objects to the user.
Towards attention-aware systems in everyday environments, this thesis presents approaches
for analysis of user attention to visual content. Using a state-of-the-art wearable eye tracking device, one can measure the user's eye movements in a mobile scenario. By capturing the user's eye gaze position in a scene and analyzing the image where the eyes focus, a computer can recognize the visual content the user is currently attending to. I propose several image analysis methods to recognize the user-attended visual content in a scene image. For example, I present an application called Museum Guide 2.0. In Museum Guide 2.0, image-based object recognition and eye gaze analysis are combined together to recognize user-attended objects in a museum scenario. Similarly, optical character recognition
(OCR), face recognition, and document image retrieval are also combined with eye gaze analysis to identify the user-attended visual content in respective scenarios. In addition to Museum Guide 2.0, I present other applications in which these combined frameworks are effectively used. The proposed applications show that the user can benefit from active information presentation which augments the attended content in a virtual environment with
a see-through head-mounted display (HMD).
In addition to the individual attention-aware applications mentioned above, this thesis
presents a comprehensive framework that combines all recognition modules to recognize the user-attended visual content when various types of visual information resources such as text, objects, and human faces are present in one scene. In particular, two processing strategies are proposed. The first one selects an appropriate image analysis module according to the user's current cognitive state. The second one runs all image analysis modules simultaneously and merges the analytic results later. I compare these two processing strategies in terms of user-attended visual content recognition when multiple visual information resources are present in the same scene.
Furthermore, I present novel interaction methodologies for a see-through HMD using eye gaze input. A see-through HMD is a suitable device for a wearable attention-aware system for everyday environments because the user can also view his or her physical environment
through the display. I propose methods for the user's attention engagement estimation with the display, eye gaze-driven proactive user assistance functions, and a method for interacting
with a multi-focal see-through display.
Contributions of this thesis include:
• An overview of the state-of-the-art in attention-aware computer-human interaction
and attention-integrated image analysis.
• Methods for the analysis of user-attended visual content in various scenarios.
• Demonstration of the feasibilities and the benefits of the proposed user-attended visual content analysis methods with practical user-supportive applications.
• Methods for interaction with a see-through HMD using eye gaze.
• A comprehensive framework for recognition of user-attended visual content in a complex
scene where multiple visual information resources are present.
This thesis opens a novel field of wearable computer systems in which computers can understand the user's attention in everyday environments and provide what the user wants. I show the potential of such wearable attention-aware systems for everyday environments for the next generation of pervasive computer-human interaction.
The main focus of this dissertation is the synthesis and characterization of more recent zeolites with different pore architectures. The unique shape-selective properties of the zeolites are important in various chemical processes and the new zeolites containing novel internal pore architectures are of high interest, since they could lead to further improvement of existing processes or open the way to new applications. This dissertation is organized in the following way: The first part is focused on the synthesis of selected recent zeolites with different pore architectures and their modification to the acidic and bifunctional forms. The second part comprises the characterization of the physicochemical properties of the prepared zeolites by selected physicochemical methods, viz. powder X-ray diffractometry (XRD), N2 adsorption, thermogravimetric analysis (TGA/DTA/MS), ultraviolet-visible (UV-Vis) spectroscopy, atomic absorption spectroscopy (AAS), infrared (IR) spectroscopy, scanning electron microscopy (SEM), 27Al and 29Si magic angle spinning nuclear magnetic resonance (MAS NMR) spectroscopy, temperature-programmed reduction (TPR), temperature-programmed desorption of pyridine (pyridine TPD) and adsorption experiments with hydrocarbon adsorptives. The third part of this work is devoted to the application of test reactions, i.e., the acid catalyzed disproportionation of ethylbenzene and the bifunctional hydroconversion of n-decane, to characterize the pore size and architecture of the prepared zeolites. They are known to be valuable tools for exploring the pore structure of zeolites. Finally, an additional test, viz. the competitive hydrogenation of 1-hexene and 2,4,4-trimethyl-1-pentene, has been applied to probe the location of noble metals in a medium pore zeolite.
The synthesis of the following zeolite molecular sieves was successfully performed in the frame of this thesis (they are ranked according to the largest window size in the respective structure): • 14-MR pores: UTD-1, CIT-5, SSZ-53 and IM-12 • 12-MR pores: ITQ-21 and MCM-68 • 10-MR pores: SSZ-35 and MCM-71 All of them were obtained as pure phase (except zeolite MCM-71 with a minor impurity phase that is hard to avoid and is also present in samples shown in the patent literature). The synthesis conditions are very critical with respect to the formation of the zeolite with a given structure. In this work, the recommended synthesis recipes are included. Among the 14-MR zeolites, the aluminosilicates UTD-1 (nSi/nAl = 28), CIT-5 (nSi/nAl = 116) and SSZ-53 (nSi/nAl = 55) with unidimensional extra-large pore opening formed from 14-MR rings exhibit promising catalytic properties with high thermal stability and they possess strong Brønsted-acid sites. By contrast, the germanosilicate IM-12 with a structure containing 14-MR channels intersecting with 12-MR channels is unstable toward moisture. It was found that UTD-1 and SSZ-53 zeolites are highly active catalysts for the acid catalyzed disproportionation of ethylbenzene and n-decane hydroconversion due to their high Brønsted acidity. To explore their pore structures, the two applied test reactions suggest that UTD-1, CIT-5 and SSZ-53 zeolites contain a very open pore system (12-MR or larger pore systems) because the product distributions are not hampered by too small pores. ITQ-21, a germanoaluminosilicate zeolite with a three-dimensional pore system and large spherical cages accessible through six 12-MR windows, can be synthesized with nSi/nAl ratios between 27 and >200. It possesses a large amount of Brønsted-acid sites. The aluminosilicate zeolite MCM-68 (nSi/nAl = 9) is an extremely active catalyst in the disproportionation of ethylbenzene and in the n-decane hydroconversion.
This is due to the presence of a high density of strong Brønsted-acid sites in its structure. The disproportionation of ethylbenzene suggests that MCM-68 is a large pore (i.e., at least 12-MR) zeolite, in agreement with its crystallographic structure. In the hydroconversion of n-decane, the presence of tribranched and ethylbranched isomers and a high isopentane yield of 58 % in the hydrocracked products suggest the presence of large (12-MR) pores in its structure. By contrast, a relatively high value for CI* (modified constraint index) of 2.9 suggests the presence of medium (10-MR) pores in its structure. As a whole, the results are in line with the crystallographic structure of MCM-68. SSZ-35, a 10-MR zeolite, can be synthesized in a broad range of nSi/nAl ratios between 11 and >500. This zeolite is interesting in terms of shape selectivity resulting from its unusual pore system having unidimensional channels alternating between 10-MR windows and large 18-MR cages. This thermally very stable zeolite contains both strong Brønsted- and strong Lewis-acid sites. The disproportionation of ethylbenzene classifies SSZ-35 as a large pore zeolite. In the hydroconversion of n-decane, the suppression of bulky ethyloctanes and propylheptane clearly suggests the presence of 10-MR sections in the pore system. By contrast, the low CI* values of 1.2-2.3 and the high isopentane yields of 56-60 % in the hydrocracked products suggest that SSZ-35 also possesses larger intracrystalline voids, i.e., the 18-MR cages. The results from the catalytic characterization are in good agreement with the crystallographic structure of zeolite SSZ-35. It was also found that the nSi/nAl ratio influences the crystallite size and therefore the external surface area. As a consequence, product selectivities are also influenced: The lowest nSi/nAl ratio or the smallest crystallite size sample produces larger amounts of the relatively bulky products.
The formation of these products probably results from the higher conversion or they are preferentially formed on the external surface area of the catalyst. Zeolite MCM-71 (nSi/nAl = 8) possesses an extremely thermally stable structure and contains a high concentration of Brønsted-acid sites. Its structure allows for the separation of n-alkanes from branched alkanes by selective adsorption. MCM-71 exhibits unique shape-selective properties towards the product distribution in ethylbenzene disproportionation, which is different from that obtained with the medium pore zeolite SSZ-35. All reaction parameters are fulfilled to classify MCM-71 as a medium pore zeolite and this is in good agreement with its reported structure consisting of a two-dimensional network of elliptical 10-MR channels and orthogonal sinusoidal 8-MR channels. The competitive hydrogenation of 1-hexene and 2,4,4-trimethyl-1-pentene was exploited to show that the major part of the noble metal is located inside the intracrystalline void volume of the medium pore zeolite SSZ-35.
The high data throughput demanded for communication between units in a system can be covered by short-haul optical communication and high-speed serial data communication. In these schemes, the receiver has to extract the corresponding clock from the serial data stream by means of a clock and data recovery circuit (CDR). Data transceiver nodes have their own local reference clocks for their data transmission and data processing units. These reference clocks normally differ slightly even if they are specified to have the same frequency. Therefore, data communication transceivers always work in a plesiochronous condition, i.e., an operation with slightly different reference frequencies. The difference in data rates is covered by an elastic buffer. In a data readout system for an experiment in particle physics, such as a particle detector, the data of analog-to-digital converters (ADCs) in all detector nodes are transmitted over the network. The plesiochronous condition in such networks is undesirable because it complicates time stamping, which is used to indicate the relative time between events. A separate clock distribution network is normally required to overcome this problem. If the existing data communication network can also support the clock distribution function, the system complexity can be largely reduced. The CDRs on all detector nodes then have to operate without a local reference clock and provide recovered clocks of sufficiently good quality to serve as the reference timing for the local data processing units. In this thesis, a low-jitter clock and data recovery circuit for large synchronous networks is presented. It possesses a 2-loop topology, consisting of a clock and data recovery loop and a clock jitter filter loop. In the CDR loop, a rotational frequency detector is applied to increase the frequency capture range, so that operation without a local reference clock is possible.
Its loop bandwidth can be freely adjusted to meet the specified jitter tolerance. The 1/4-rate time-interleaving architecture is used to reduce the operating frequency and optimize the power consumption. The clock jitter filter loop is applied to improve the jitter of the recovered clock. It uses a low-jitter LC voltage controlled oscillator (VCO). The loop bandwidth of the clock jitter filter is minimized to suppress the jitter of the recovered clock. The 1/4-rate CDR with frequency detector and the clock jitter filter with LC-VCO were implemented in 0.18 µm CMOS technology. Both circuits occupy an area of 1.61 mm² and consume 170 mW from a 1.8 V supply. The CDR can cover data rates from 1 to 2 Gb/s. Its loop bandwidth is configurable from 700 kHz to 4 MHz. Its jitter tolerance complies with the SONET standard. The clock jitter filter has configurable input/output frequencies from 9.191 to 78.125 MHz. Its loop bandwidth is adjustable from 100 kHz to 3 MHz. The high-frequency clock is also available for a serial data transmitter. The CDR with clock jitter filter can generate a clock with a jitter of 4.2 ps rms from incoming serial data with an inter-symbol-interference jitter of 150 ps peak-to-peak.
Spin and orbital magnetic moments of isolated single molecule magnets and transition metal clusters
(2015)
In the present work, magnetic moments of isolated Single Molecule Magnets (SMMs) and transition metal clusters were investigated. Gas phase X-ray Magnetic Circular Dichroism (XMCD) in combination with sum rule analysis served to separate the total magnetic moments of the investigated species into their spin and orbital contributions. Two different mass spectrometry based setups were used for the presented investigations on transition metal clusters (GAMBIT setup) and on single molecule magnets (NanoClusterTrap). Both experiments were coupled to the UE52-PGM beamline at the BESSY II synchrotron facility (Helmholtz-Zentrum Berlin), which provided the necessary polarized X-ray photons. The investigation of the given compounds as isolated molecules in the gas phase enabled a determination of their intrinsic magnetic properties, free of influences from, e.g., a surrounding bulk or a supporting surface.
The thesis discusses discrete-time dynamic flows over a finite time horizon T. These flows take time, called travel time, to pass an arc of the network. Travel times, as well as other network attributes, such as costs, arc and node capacities, and supply at the source node, can be constant or time-dependent. Here we review results on discrete-time dynamic network flow problems (DTDNFP) with constant attributes and develop new algorithms to solve several DTDNFPs with time-dependent attributes. Several dynamic network flow problems are discussed: the maximum dynamic flow, earliest arrival flow, and quickest flow problems. We generalize the hybrid capacity scaling and shortest augmenting path algorithm for the static network flow problem to account for the time dependency of the network attributes. The result is used to solve the maximum dynamic flow problem with time-dependent travel times and capacities. We also develop a new algorithm to solve earliest arrival flow problems under the same assumptions on the network attributes. The possibility to wait (or park) at a node before departing on an outgoing arc is also taken into account. We prove that the complexity of the new algorithm is reduced when infinite waiting is allowed, and we report a computational analysis of this algorithm. The results are then used to solve quickest flow problems. Additionally, we discuss time-dependent bicriteria shortest path problems. Here we generalize the classical shortest path problem in two ways: we consider two, in general conflicting, objective functions and introduce a time dependency of the cost caused by a travel time on each arc. These problems have several interesting practical applications, but have not received much attention in the literature. We develop two new algorithms, one of which requires weaker assumptions than previous research on the subject. Numerical tests show the superiority of the new algorithms.
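With constant attributes, the maximum dynamic flow problem reduces classically to static maximum flow on a time-expanded network. A compact sketch of that reduction (a tiny hand-made instance, plain Edmonds-Karp augmenting paths, and waiting modeled by holdover arcs; not the capacity-scaling algorithm of the thesis):

```python
from collections import defaultdict, deque

def max_flow(cap, source, sink):
    """Edmonds-Karp maximum flow; cap maps (u, v) -> capacity."""
    flow = defaultdict(int)
    adj = defaultdict(set)
    for u, v in cap:
        adj[u].add(v)
        adj[v].add(u)

    def residual(u, v):
        return cap.get((u, v), 0) - flow[(u, v)]

    total = 0
    while True:
        parent, queue = {source: None}, deque([source])
        while queue:                       # BFS for a shortest augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(residual(u, v) for u, v in path)
        for u, v in path:                  # push b units, update residuals
            flow[(u, v)] += b
            flow[(v, u)] -= b
        total += b

def max_dynamic_flow(arcs, source, sink, T):
    """Maximum dynamic flow via the time-expanded network: each arc
    (u, v, capacity, travel_time) becomes a static arc from copy (u, t)
    to copy (v, t + travel_time); holdover arcs model waiting at nodes."""
    cap = {}
    nodes = {u for u, v, c, tau in arcs} | {v for u, v, c, tau in arcs}
    for u, v, c, tau in arcs:
        for t in range(T - tau + 1):
            cap[((u, t), (v, t + tau))] = c
    for v in nodes:
        for t in range(T):
            cap[((v, t), (v, t + 1))] = float('inf')  # waiting at v
    for t in range(T + 1):                             # super source and sink
        cap[('S', (source, t))] = float('inf')
        cap[((sink, t), 'D')] = float('inf')
    return max_flow(cap, 'S', 'D')

# Path s -> a -> d, unit travel times, capacity 2, horizon T = 3:
# departures at t = 0 and t = 1 both arrive in time, so 4 units are sent.
print(max_dynamic_flow([('s', 'a', 2, 1), ('a', 'd', 2, 1)], 's', 'd', 3))
```

The expansion multiplies the network size by the horizon T, which is exactly why the thesis develops specialized algorithms rather than solving the expanded static problem directly when attributes are time-dependent.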
We then apply dynamic network flow models and their associated solution algorithms to determine lower bounds of the evacuation time, evacuation routes, and maximum capacities of inhabited areas with respect to safety requirements. As a macroscopic approach, our dynamic network flow models are mainly used to produce good lower bounds for the evacuation time and do not consider any individual behavior during the emergency situation. These bounds can be used to analyze existing buildings or help in the design phase of planning a building.
Since their introduction, robots have primarily influenced the industrial world, providing new opportunities and challenges for humans and machinery. With the introduction of lightweight robots and mobile robot platforms, the field of robot applications has been expanded, diversified, and brought closer to society. The increased degree of digitalization and the personalization of goods and products require an enhanced and flexible robot deployment by operating several multi-robot systems along production processes, industrial applications, assembly and packaging lines, transport systems, etc.
Efficient and safe robot operation relies on successful task planning followed by the computation and execution of task-performing motion trajectories. This thesis addresses these issues by developing, implementing, and validating optimization-based methods for task and trajectory planning in robotics, considering certain optimality and performance criteria. The focus is mainly on the time optimality of the presented approaches with respect to both execution and computation time without compromising safe robot use.
Driven by a systematic approach, the basis for the algorithm development is established first by modeling the kinematics and dynamics of the considered robots and identifying required dynamic parameters. In a further step, time-optimal task and trajectory planning algorithms for a single robotic arm are developed. Initially, a hierarchical approach is introduced consisting of two decoupled optimization-based control policies, a binary problem for task planning, and a continuous model predictive trajectory planning problem. The two layers of the hierarchical structure are then merged into a monolithic layer, resulting in a hybrid structure in the form of a mixed-integer optimization problem for inherent task and trajectory planning.
Motivated by a multi-robot deployment, the hierarchical control structure for time-optimal task and trajectory planning is extended for the case of a two-arm robotic system with highly overlapping operational spaces, leading to challenging robot motions with high inter-robot collision potential. To this end, a novel predictive approach for collision avoidance is proposed based on a continuous approximation of the robot geometry, resulting in a nonlinear optimization problem capable of online applications with real-time requirements. Towards a mobile and flexible robot platform, a model predictive path-following controller for an omnidirectional mobile robot is introduced. Here, a time-minimal approach is also applied, which consists of the robot following a given parameterized path as accurately as possible and at maximum speed.
The performance of the proposed algorithms and methods is experimentally analyzed and validated under real conditions on robot demonstrators. Implementation details, including the resulting hardware and software architecture, are presented, followed by a detailed description of the results. Concrete and industry-oriented demonstrators for integrating robotic arms in existing manual processes and the indoor navigation of a mobile robot complete the work.
Diversification is one of the main pillars of investment strategies. The prominent 1/N portfolio, which puts equal weight on each asset, is, apart from its simplicity, a method that is hard to outperform in realistic settings, as many studies have shown. However, depending on the number of considered assets, this method can lead to very large portfolios. On the other hand, optimization methods like the mean-variance portfolio suffer from estimation errors, which often destroy the theoretical benefits. We investigate the performance of the equal-weight portfolio when using fewer assets. For this we explore different naive portfolios, from selecting the assets with the best Sharpe ratios to exploiting knowledge about correlation structures using clustering methods. The clustering techniques separate the possible assets into non-overlapping clusters, and the assets within a cluster are ordered by their Sharpe ratio. Then the best asset of each cluster is chosen to be a member of the new portfolio with equal weights, the cluster portfolio. We show that this portfolio inherits the advantages of the 1/N portfolio and can even outperform it empirically. For this we use real data and several simulation models. We prove these findings from a statistical point of view using the framework of DeMiguel, Garlappi and Uppal (2009). Moreover, we show the superiority regarding the Sharpe ratio in a setting where the assets in each cluster are comonotonic. In addition, we recommend the consideration of a diversification-risk ratio to evaluate the performance of different portfolios.
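The cluster-portfolio construction above can be sketched in a few lines, using a simple greedy correlation grouping as a stand-in for the thesis's clustering techniques; the asset names, returns, and the 0.7 correlation threshold are hypothetical toy data.

```python
from statistics import mean, stdev

def sharpe(r):
    return mean(r) / stdev(r)

def corr(x, y):
    """Pearson correlation of two return series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def cluster_portfolio(returns, threshold=0.7):
    """Greedily group assets whose returns correlate above `threshold`,
    pick the best-Sharpe asset per cluster, and weight the picks equally."""
    clusters = []                      # each cluster is a list of asset names
    for name in returns:
        for c in clusters:
            if corr(returns[name], returns[c[0]]) > threshold:
                c.append(name)
                break
        else:
            clusters.append([name])
    picks = [max(c, key=lambda a: sharpe(returns[a])) for c in clusters]
    return {a: 1.0 / len(picks) for a in picks}

# Toy monthly returns: A and B are strongly correlated, C is not.
returns = {
    'A': [0.01, 0.02, -0.01, 0.03],
    'B': [0.02, 0.05, -0.02, 0.06],
    'C': [0.00, -0.01, 0.02, 0.01],
}
weights = cluster_portfolio(returns)   # one pick per cluster, equal weights
```

Here A and B fall into one cluster, B wins on Sharpe ratio, and the result is an equal-weight portfolio over {B, C} instead of all three assets.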
The recently established technologies in the areas of distributed measurement and intelligent information processing systems, e.g., Cyber-Physical Systems (CPS), Ambient Intelligence/Ambient Assisted Living systems (AmI/AAL), the Internet of Things (IoT), and Industry 4.0, have increased the demand for the development of intelligent integrated multi-sensory systems to serve rapidly growing markets [1, 2]. These trends increase the significance of complex measurement systems that incorporate numerous advanced methodological implementations, including electronic circuits, signal processing, and multi-sensory information fusion. In multi-sensory cognition applications in particular, designing such systems involves skill-demanding tasks, e.g., method selection, parameterization, model analysis, and processing chain construction, which conventionally are carried out manually by an expert designer with immense effort. Moreover, strong technological competition imposes even more complicated design problems with multiple constraints, e.g., cost, speed, power consumption, flexibility, and reliability. Thus, the conventional human-expert-based design approach may not be able to cope with the increasing demand in numbers, complexity, and diversity. To alleviate this issue, design automation has been the topic of numerous research works [3-14] and has been commercialized in several products [15-18]. Additionally, the dynamic adaptation of intelligent multi-sensor systems is a potential solution for developing dependable and robust systems. The intrinsic evolution approach and self-x properties [19], which include self-monitoring, self-calibrating/trimming, and self-healing/repairing, are among the best candidates for this purpose. Motivated by these ongoing research trends and based on the background of our research work [12, 13], which is among the pioneers in this topic, this thesis contributes to the design automation of intelligent integrated multi-sensor systems.
In this research work, the Design Automation for Intelligent COgnitive systems with self-X properties (DAICOX) architecture is presented with the aim of reducing the design effort and providing high-quality and robust solutions for multi-sensor intelligent systems. The DAICOX architecture is therefore conceived with the goals listed below:
- Perform complete front-to-back processing chain design with automated method selection and parameterization,
- Provide a rich choice of pattern recognition methods in the design method pool,
- Associate design information via an interactive user interface and visualization along with intuitive visual programming,
- Deliver high-quality solutions outperforming conventional approaches by using multi-objective optimization,
- Attain adaptability, reliability, and robustness of the designed solutions through self-x properties.
Derived from these goals, several scientific methodological developments and implementations, particularly in the areas of pattern recognition and computational intelligence, are pursued as part of the DAICOX architecture in this thesis. The method pool is intended to contain a rich choice of methods and algorithms covering data acquisition and sensor configuration, signal processing and feature computation, dimensionality reduction, and classification. These methods are selected and parameterized automatically by the DAICOX design optimization to construct a multi-sensory cognition processing chain. A collection of non-parametric feature quality assessment functions for the Dimensionality Reduction (DR) process is presented. In addition to standard DR methods, variations of feature selection, in particular feature weighting, are proposed. Three different classification categories are incorporated in the method pool. A hierarchical classification approach is proposed and developed to serve as a multi-sensor fusion architecture at the decision level. Besides multi-class classification, one-class classification methods, e.g., One-Class SVM and NOVCLASS, are presented to extend the functionality of the solutions, in particular for anomaly and novelty detection. DAICOX is conceived to effectively handle the problem of method selection and parameter setting for a particular application, yielding high-performance solutions. The processing chain construction tasks are carried out by meta-heuristic optimization methods, e.g., Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), with a multi-objective optimization approach and model analysis for robust solutions. In addition to the automated system design mechanisms, DAICOX facilitates the design tasks with intuitive visual programming and various options for visualization. The design database concept of DAICOX allows the reusability and extensibility of designed solutions gained from previous knowledge. Thus, the cooperative design of machine and expert knowledge can also be utilized to obtain fully enhanced solutions. In particular, the integration of self-x properties as well as intrinsic optimization into the system is proposed to attain enduring reliability and robustness. Hence, DAICOX allows the inclusion of dynamically reconfigurable hardware instances in the designed solutions in order to realize intrinsic optimization and self-x properties.
As a result of the research work in this thesis, a comprehensive intelligent multi-sensor system design architecture with automated method selection, parameterization, and model analysis has been developed in compliance with open-source multi-platform software. It is integrated with an intuitive design environment, which includes a visual programming concept and design information visualizations. Thus, the design effort is minimized, as investigated in three case studies with different application backgrounds: food analysis (LoX), driving assistance (DeCaDrive), and magnetic localization. Moreover, DAICOX achieved better solution quality than the manual approach in all cases: the classification rate was increased by 5.4%, 0.06%, and 11.4% in the LoX, DeCaDrive, and magnetic localization cases, respectively. In the LoX case study, the design time was reduced by 81.87% compared to the conventional approach. At the current state of development, the novel contributions of the thesis are outlined below.
- Automated processing chain construction and parameterization for the design of signal processing and feature computation.
- Novel dimensionality reduction methods, e.g., GA- and PSO-based feature selection and feature weighting with multi-objective feature quality assessment.
- A modified non-parametric compactness measure for feature space quality assessment.
- A decision-level sensor fusion architecture based on the proposed hierarchical classification approach, i.e., H-SVM.
- A collection of one-class classification methods and a novel variation, i.e., NOVCLASS-R.
- Automated design toolboxes supporting front-to-back design with automated model selection and information visualization.
Due to the complexity of the task, not all of the identified goals have been comprehensively reached yet in this research work, nor has the complete architecture definition been fully implemented. Based on the currently implemented tools and frameworks, ongoing development of DAICOX is progressing toward the complete architecture. Potential future improvements are the extension of the method pool with a richer choice of methods and algorithms, processing chain breeding via a graph-based evolution approach, the incorporation of intrinsic optimization, and the integration of self-x properties. With these features, DAICOX will improve its aptness for designing advanced systems to serve the rapidly growing technologies of distributed intelligent measurement systems, in particular CPS and Industry 4.0.
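The automated method selection and parameterization at the heart of such a design loop can be pictured with a tiny genetic algorithm over a processing-chain "chromosome". Everything here is invented for illustration: the method pool, parameter ranges, and the surrogate fitness function are hypothetical stand-ins, not DAICOX's actual method pool or its multi-objective optimizer.

```python
import random

# Hypothetical method pool: one method per processing-chain stage,
# each with a tunable parameter range.
POOL = {
    'feature':    [('fft', (8, 64)), ('wavelet', (2, 8))],
    'reduction':  [('pca', (2, 16)), ('selection', (2, 16))],
    'classifier': [('svm', (0, 10)), ('knn', (1, 15))],
}
STAGES = list(POOL)

def random_chain():
    chain = {}
    for s in STAGES:
        m, (lo, hi) = random.choice(POOL[s])
        chain[s] = (m, random.uniform(lo, hi))
    return chain

def fitness(chain):
    """Toy surrogate for the classification rate of a chain."""
    score = 0.0
    if chain['feature'][0] == 'fft':
        score += 0.3
    if chain['classifier'][0] == 'svm':
        score += 0.4
    score += 0.3 / (1.0 + abs(chain['reduction'][1] - 8))  # sweet spot at 8
    return score

def evolve(generations=30, popsize=20, seed=1):
    random.seed(seed)
    pop = [random_chain() for _ in range(popsize)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:popsize // 2]                 # keep the better half
        children = []
        for _ in range(popsize - len(elite)):
            a, b = random.sample(elite, 2)
            child = {s: random.choice((a[s], b[s])) for s in STAGES}  # crossover
            if random.random() < 0.3:              # mutation: redraw one stage
                s = random.choice(STAGES)
                m, (lo, hi) = random.choice(POOL[s])
                child[s] = (m, random.uniform(lo, hi))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best_chain = evolve()
```

A real system would replace the surrogate fitness with cross-validated classification performance (and further objectives such as cost or chain complexity), which is exactly where the bulk of the design-automation effort lies.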
Due to remarkable technological advances in the last three decades the capacity of computer systems has improved tremendously. Considering Moore's law, the number of transistors on integrated circuits has doubled approximately every two years and the trend is continuing. Likewise, developments in storage density, network bandwidth, and compute capacity show similar patterns. As a consequence, the amount of data that can be processed by today's systems has increased by orders of magnitude. At the same time, however, the resolution of screens has increased by hardly a factor of ten. Thus, there is a gap between the amount of data that can be processed and the amount of data that can be visualized. Large high-resolution displays offer a way to deal with this gap and provide a significantly increased screen area by combining the images of multiple smaller display devices. The main objective of this dissertation is the development of new visualization and interaction techniques for large high-resolution displays.
In embedded systems, there is a trend of integrating several different functionalities on a common platform. This has been enabled by increasing processing power and the rise of integrated systems-on-chip.
The composition of safety-critical and non-safety-critical applications results in mixed-criticality systems. Certification Authorities (CAs) demand the certification of safety-critical applications with strong confidence in the execution time bounds. As a consequence, CAs use conservative assumptions in the worst-case execution time (WCET) analysis which result in more pessimistic WCETs than the ones used by designers. The existence of certified safety-critical and non-safety-critical applications can be represented by dual-criticality systems, i.e., systems with two criticality levels.
In this thesis, we focus on the scheduling of mixed-criticality systems which are subject to certification. Scheduling policies cognizant of the mixed-criticality nature of the systems and the certification requirements are needed for efficient and effective scheduling. Furthermore, we aim at reducing the certification costs to allow faster modification and upgrading, and less error-prone certification. Besides certification aspects, requirements of different operational modes result in challenging problems for the scheduling process. Despite the mentioned problems, schedulers require a low runtime overhead for an efficient execution at runtime.
The presented solutions are centered around time-triggered systems which feature a low runtime overhead. We present a transformation to include event-triggered activities, represented by sporadic tasks, already into the offline scheduling process. Further, this transformation can also be applied on periodic tasks to shorten the length of schedule tables which reduces certification costs. These results can be used in our method to construct schedule tables which creates two schedule tables to fulfill the requirements of dual-criticality systems using mode changes at runtime. Finally, we present a scheduler based on the slot-shifting algorithm for mixed-criticality systems. In a first version, the method schedules dual-criticality jobs without the need for mode changes. An already certified schedule table can be used and at runtime, the scheduler reacts to the actual behavior of the jobs and thus, makes effective use of the available resources. Next, we extend this method to schedule mixed-criticality job sets with different operational modes. As a result, we can schedule jobs with varying parameters in different modes.
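One way to picture the two-schedule-table idea with a runtime mode change is the sketch below. The tables, budgets, and job names are hypothetical, and real slot-shifting bookkeeping (spare capacities, intervals) is far richer; this only shows the switch from an optimistic LO-mode table to a certified HI-mode table when a high-criticality job overruns.

```python
# Hypothetical schedule tables (slot -> job). The LO table is built with the
# designers' optimistic WCETs and contains every job; the certified HI table
# uses the certification authority's pessimistic WCETs and keeps only the
# high-criticality jobs.
LO_TABLE = {0: 'J1', 2: 'J2', 3: 'J3'}
HI_TABLE = {0: 'J1', 4: 'J3'}
LO_WCET  = {'J1': 2, 'J2': 1, 'J3': 2}
HI_JOBS  = {'J1', 'J3'}

def dispatch(actual_exec, horizon=8):
    """Table-driven dispatching with a mode change: run the LO table until a
    high-criticality job overruns its LO budget, then switch to the HI table."""
    mode, executed, slot = 'LO', [], 0
    while slot < horizon:
        table = LO_TABLE if mode == 'LO' else HI_TABLE
        job = table.get(slot)
        if job is None:
            slot += 1                  # idle slot
            continue
        c = actual_exec[job]           # observed execution time
        executed.append(job)
        if mode == 'LO' and job in HI_JOBS and c > LO_WCET[job]:
            mode = 'HI'                # overrun: abandon the LO table
        slot += c
    return executed, mode

normal  = dispatch({'J1': 2, 'J2': 1, 'J3': 2})  # everyone meets the LO budget
overrun = dispatch({'J1': 3, 'J2': 1, 'J3': 2})  # J1 overruns, J2 is shed
```

In the overrun scenario the low-criticality job J2 is dropped and J3 still completes under its pessimistic HI-table reservation, which is the behavior certification reasons about.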
Cells and organelles are enclosed by membranes that consist of a lipid bilayer harboring highly
diverse membrane proteins (MPs). These carry out vital functions, and α-helical MPs, in
particular, are of outstanding pharmacological importance, as they comprise more than half of
all drug targets. However, knowledge from MP research is limited, as MPs require membrane-mimetic environments to retain their native structures and functions and, thus, are not readily
amenable to in vitro studies. To gain insight into vectorial functions, as in the case of channels
and transporters, and into topology, which describes MP conformation and orientation in the
context of a membrane, purified MPs need to be reconstituted, that is, transferred from detergent
micelles into a lipid-bilayer system.
The ultimate goal of this thesis was to elucidate the membrane topology of Mistic, which is
an essential regulator of biofilm formation in Bacillus subtilis consisting of four α-helices. The
conformational stability of Mistic has been shown to depend on the presence of a hydrophobic
environment. However, Mistic is characterized by an uncommonly hydrophilic surface, and
its helices are significantly shorter than transmembrane helices of canonical integral MPs.
Therefore, the means by which its association with the hydrophobic interior of a lipid bilayer
is accomplished is a subject of much debate. To tackle this issue, Mistic was produced and
purified, reconstituted, and subjected to topological studies.
Reconstitution of Mistic in the presence of lipids was performed by lowering the detergent
concentration to subsolubilizing concentrations via addition of cyclodextrin. To fully exploit
the advantages offered by cyclodextrin-mediated detergent removal, a quantitative model was
established that describes the supramolecular state of the reconstitution mixture and allows
for the prediction of reconstitution trajectories and their cross points with phase boundaries.
Automated titrations enabled spectroscopic monitoring of Mistic reconstitutions in real time.
On the basis of the established reconstitution protocol, the membrane topology of Mistic was
investigated with the aid of fluorescence quenching experiments and oriented circular dichroism
spectroscopy. The results of these experiments reveal that Mistic appears to be an exception
from the commonly observed transmembrane orientation of α-helical MPs, since it exhibits
a highly unusual in-plane topology, which goes in line with recent coarse-grained molecular
dynamics simulations.
Neural networks have been extensively used for tasks based on image sensors. Over the past decade, these models have consistently performed better than other machine learning methods on computer vision tasks. It is understood that transfer learning from neural networks trained on large datasets can reduce the total data requirement when training new neural network models. These methods, however, tend not to perform well when the data-recording sensor or the recording environment differs from those of the existing large datasets. The machine learning literature provides various methods for including prior information in a learning model, such as designing biases into the data representation vectors or enforcing priors or physical constraints on the models. Including such information in neural networks for image-frame and image-sequence classification is hard because of the very high-dimensional neural network mapping function and the limited knowledge about the relations among the neural network parameters. In this thesis, we introduce methods for evaluating the statistically learned data representations and for combining these information descriptors. We further introduce methods for including existing model or task information in neural networks, demonstrated in a series of experiments: 1) adding architectural constraints based on the physical shape information of the input data, 2) including weight priors on neural networks by training them to mimic statistical and physical properties of the data (hand shapes), and 3) including knowledge about the classes involved in the classification tasks to modify the neural network outputs. These methods are demonstrated, and their positive influence on hand-shape and hand-gesture classification tasks is reported.
This thesis also proposes methods for combining statistical and physical models with parametrized learning models and shows improved performance at constant data size. Eventually, these proposals are tied together to develop an in-car hand-shape and hand-gesture classifier based on a time-of-flight sensor.
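The third idea above, modifying network outputs with class knowledge, can be illustrated by restricting a softmax prediction to a task-given subset of plausible classes and renormalizing. The gesture classes, logits, and allowed set below are hypothetical; this is one simple post-hoc masking scheme, not necessarily the thesis's exact mechanism.

```python
import math

def softmax(logits):
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def masked_prediction(logits, classes, allowed):
    """Renormalize the network's class probabilities over a task-given subset
    of plausible classes -- a simple post-hoc way to inject class knowledge."""
    probs = softmax(logits)
    kept = {c: p for c, p in zip(classes, probs) if c in allowed}
    z = sum(kept.values())
    return {c: p / z for c, p in kept.items()}

# Hypothetical gesture classes and raw network outputs.
classes = ['fist', 'open', 'point', 'swipe']
logits = [2.0, 1.0, 0.5, 1.5]
pred = masked_prediction(logits, classes, allowed={'fist', 'swipe'})
```

If the current task is known to involve only two gestures, probability mass from the excluded classes is redistributed over the allowed ones, which can only sharpen the decision between them.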
The use of wearable sensors for motion analysis has become a central component of medicine and sports. In recent years, inertial measurement units (IMUs) in particular have been on the rise. By fusing several sensors, IMU systems make it possible to obtain complex information such as joint angles and spatio-temporal parameters (STP). Many of the IMU systems available today are still in the development phase and have not yet been adequately tested for validity and reliability for clinical or sport-specific use. From a scientific point of view, such testing is indispensable before a system can be used for biomechanical analysis and, for example, clinical decisions can be made based on its results. Consequently, in the present work a newly developed IMU system, which computes spatio-temporal gait parameters and joint angles of the lower extremity from accelerometer and gyroscope data, was evaluated with respect to these criteria. For this purpose, data from movements of varying dynamics were recorded with this IMU system in two different groups of subjects, a healthy young group and a group of patients after total hip arthroplasty (THA). From these data, the 3D angles of the hip, knee, and ankle joints as well as the global motion of the pelvis were computed. Furthermore, gait-specific STP, e.g., step length, stride length, and cadence, were computed, as well as STP that can typically only be measured reliably with alternative systems, e.g., step width and swing width. The results of the IMU system were compared against an established reference system in motion analysis, a marker-based stereophotogrammetric system.
The present results show, in both groups, a strong correlation between the systems for the joint angles in the sagittal and frontal planes as well as for the STP. However, it also became apparent that the agreement of the IMU system with the camera-based system decreases slightly for the angles in the transverse plane, i.e., rotational movements, and here especially at the knee joint. Furthermore, the accuracy of the IMU system also decreases for more dynamic movements. Regarding test-retest reliability, the current data show a high reliability of the measurement results.
In a second step, the data of the now validated IMU system were used to attempt to differentiate pathological gait patterns, in this specific case the gait patterns of patients after THA, from physiological ones. For this purpose, a machine learning algorithm was applied to perform a classification based on selected, clinically relevant parameters. This method was likewise evaluated on both the IMU data and the data of the reference system. There was no difference in classification accuracy between the systems; the accuracy with which pathological gait patterns were detected was above 96% in both cases.
The present work describes in detail the advantages and disadvantages of a newly developed, mobile IMU system that captures complex kinematic parameters with high accuracy and reliability. In particular, the successful evaluation of this system in a clinically relevant application demonstrates the great potential of IMU systems in clinical practice.
Image restoration and enhancement methods that respect important features such as edges play a fundamental role in digital image processing. In the last decades, a large variety of methods has been proposed. Nevertheless, the correct restoration and preservation of, e.g., sharp corners, crossings, or texture in images is still a challenge, in particular in the presence of severe distortions. Moreover, in the context of image denoising, many methods are designed for the removal of additive Gaussian noise, and their adaptation for other types of noise occurring in practice usually requires additional effort.
The aim of this thesis is to contribute to these topics and to develop and analyze new
methods for restoring images corrupted by different types of noise:
First, we present variational models and diffusion methods which are particularly well
suited for the restoration of sharp corners and X junctions in images corrupted by
strong additive Gaussian noise. For their deduction we present and analyze different
tensor based methods for locally estimating orientations in images and show how to
successfully incorporate the obtained information in the denoising process. The advantageous
properties of the obtained methods are shown theoretically as well as by
numerical experiments. Moreover, the potential of the proposed methods is demonstrated
for applications beyond image denoising.
Afterwards, we focus on variational methods for the restoration of images corrupted
by Poisson and multiplicative Gamma noise. Here, different methods from the literature
are compared and the surprising equivalence between a standard model for
the removal of Poisson noise and a recently introduced approach for multiplicative
Gamma noise is proven. Since this Poisson model has not been considered for multiplicative
Gamma noise before, we investigate its properties further for more general
regularizers including also nonlocal ones. Moreover, an efficient algorithm for solving
the involved minimization problems is proposed, which can also handle an additional
linear transformation of the data. The good performance of this algorithm is demonstrated
experimentally and different examples with images corrupted by Poisson and
multiplicative Gamma noise are presented.
In the final part of this thesis new nonlocal filters for images corrupted by multiplicative
noise are presented. These filters are deduced in a weighted maximum likelihood
estimation framework and for the definition of the involved weights a new similarity measure for the comparison of data corrupted by multiplicative noise is applied. The
advantageous properties of the new measure are demonstrated theoretically and by
numerical examples. Besides, denoising results for images corrupted by multiplicative
Gamma and Rayleigh noise show the very good performance of the new filters.
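The structure of such a nonlocal filter can be sketched as follows. The ratio-based similarity used here is an illustrative choice standing in for the thesis's similarity measure (it only captures the key property that comparisons under multiplicative noise should depend on ratios rather than differences), and the data and bandwidth are hypothetical; positive-valued signals are assumed.

```python
import math

def ratio_similarity(a, b):
    """Similarity for positive data under multiplicative noise: depends only
    on the ratio a/b and is maximal (0) when a == b. An illustrative choice,
    not the measure derived in the thesis."""
    return -(math.log(a / b) ** 2)

def nonlocal_filter(signal, h=0.5):
    """Weighted maximum-likelihood denoising: under Gamma-distributed
    multiplicative noise, the weighted arithmetic mean is the weighted ML
    estimate of each value, with weights from the similarity of values."""
    out = []
    for x in signal:
        ws = [math.exp(ratio_similarity(x, y) / h) for y in signal]
        out.append(sum(w * y for w, y in zip(ws, signal)) / sum(ws))
    return out

smoothed = nonlocal_filter([2.0, 2.0, 2.0])   # a constant signal is unchanged
```

In a real filter the weights would compare whole patches rather than single values, and the similarity measure would be derived from the noise statistics, which is precisely the contribution described above.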
We present a new efficient and robust algorithm for topology optimization of 3D cast parts. Special constraints are fulfilled to make it possible to incorporate a simulation of the casting process into the optimization: In order to keep track of the exact position of the boundary and to provide a full finite element model of the structure in each iteration, we use a twofold approach for the structural update. A level set function technique for boundary representation is combined with a new tetrahedral mesh generator for geometries specified by implicit boundary descriptions. Boundary conditions are mapped automatically onto the updated mesh. For sensitivity analysis, we employ the concept of the topological gradient. Modification of the level set function is reduced to an efficient summation of several level set functions, and the finite element mesh is adapted to the modified structure in each iteration of the optimization process. We show that the resulting meshes are of high quality. A domain decomposition technique is used to keep the computational costs of remeshing low. The capabilities of our algorithm are demonstrated by industrial-scale optimization examples.
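The level-set bookkeeping behind such structural updates can be illustrated with a two-dimensional toy analogue (the thesis works in 3D with finite element remeshing): geometry is stored implicitly as a function that is negative inside the structure, and carving out a hole proposed by the topological gradient is a pointwise combination of two level set functions. The shapes and coordinates below are hypothetical.

```python
# Implicit 2D geometry via level set functions (negative = inside).

def disc(cx, cy, r):
    """Signed distance to a disc of radius r centred at (cx, cy)."""
    return lambda x, y: ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 - r

def carve(phi, hole):
    """Set difference (structure minus hole): phi_new = max(phi, -hole)."""
    return lambda x, y: max(phi(x, y), -hole(x, y))

plate = disc(0.0, 0.0, 2.0)                 # a disc of radius 2
phi = carve(plate, disc(1.0, 0.0, 0.5))     # remove material around (1, 0)
inside = lambda x, y: phi(x, y) < 0
```

The zero level line of `phi` is the exact boundary of the modified structure; in the algorithm described above, a tetrahedral mesh generator then produces a full finite element model conforming to this implicit boundary in every iteration.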
The booming global market for nanomaterials in the last few decades has led to the inevitable emission of these materials into aquatic environments; hence, understanding their physical, chemical, and biological transformations has become a major concern for environmental scientists. Despite a great deal of effort to understand the mobility, fate, and risk assessment of, e.g., TiO2 nanoparticles, it is still unclear whether results obtained under lab-controlled conditions can be generalized to realistically released nanoparticles in aquatic environments, since the complex dynamics of environmental conditions are not completely reproducible under controlled conditions.
In the present study, we proposed a new approach to expose TiO2 nanoparticles to environmental conditions of natural surface waters by making use of dialysis membranes as passive reactors. The function of these reactors is based on the permeability of the membrane to the dissolved matter of surface waters while TiO2 nanoparticles do not pass through the membrane. These systems benefit from the fact that although the complexity and temporal variability of most of the environmental parameters of surface waters are reproducible inside the reactors, colloidal and particulate interferences remain separated. Furthermore, no significant reduction in pore size i.e., membrane fouling is observed in dialysis bags after exposure to surface waters which validates the efficiency of the system.
Taking advantage of these reactors to expose nanoparticles to surface waters, we investigated the influential physicochemical parameters of the surface waters on the formation of natural coatings on nanoparticles. Hence, dialysis bags were used to expose TiO2 nanoparticles, in situ, to ten different surface waters in the spring and summer of 2019. Due to the complexity of the natural dissolved matter of the surface waters as well as its low natural concentrations, we needed to use a combination of analytical techniques and multivariate data analysis to investigate the coatings. The initial findings were similar to those of the lab-controlled exposure studies in the literature, showing pH, electrical conductivity, and Ca2+ and Mg2+ concentrations as the three most important parameters of surface waters controlling the formation of coatings. Nonetheless, we came across a phenomenon that had been overlooked under lab-controlled conditions: natural coatings are composed not only of organics (DOM: dissolved organic matter) but also of inorganics (carbonate), which implies that realistic coatings are more complex than what previous studies described.
The second part of this thesis focused on investigating the interactions of more realistic nanoparticles (TiO2 nanoparticles extracted from 11 sunscreens) with DOM. Using ToF-SIMS combined with high-dimensional data analysis, we tried to find a general DOM-sorption pattern among TiO2 nanoparticles, since finding such a pattern could ultimately have opened a way to assess the fate of (more) realistic nanoparticles in aquatic environments. Contrary to our expectations, the results showed a unique sorption pattern for each sunscreen, controlled by the composition of the sunscreen, implying that the sorption pattern of each sunscreen should be investigated individually. In the next step of this study, we used random forests to extract the most important fragments of DOM sorbed onto each sunscreen, followed by an effort to assign these important masses to chemical fragments.
Trying to provide a comprehensive understanding of the interactions of released n-TiO2 in aquatic environments, in future studies we are going to expand our coating research to different types of TiO2 nanoparticles, such as particles extracted from paint, where the reaction media (surface waters) cover a wide range of water parameters representative of various ecosystems. Making use of state-of-the-art techniques as well as multivariate data analysis, we will try to achieve a model describing the sorption mechanisms of the dissolved matter of surface waters onto nanoparticles. Such studies can eventually lead us to a better understanding of the fate of released nanoparticles under natural conditions.
Analog sensor electronics requires special care during design in order to increase signal quality and precision and the lifetime of the product. Nevertheless, it can experience static deviations due to manufacturing tolerances and dynamic deviations due to operation in non-ideal environments. Therefore, advanced applications such as MEMS technology employ a calibration loop to deal with the deviations; unfortunately, this loop is considered only in the digital domain, which cannot cope with all analog deviations, such as saturation of the analog signal. On the other hand, rapid prototyping is essential to decrease the development time and the cost of products in small quantities. Recently, evolvable hardware has been developed with the motivation of coping with the mentioned sensor electronic problems. However, industrial specifications and requirements are not considered in the hardware learning loop, which merely minimizes the error between the required output and the real output generated for a given test signal. The aim of this thesis is to synthesize generic organic-computing sensor electronics and to return hardware with predictable behavior for embedded system applications that gains industrial acceptance; therefore, the hardware topology is constrained to standard topologies, the standard hardware specifications are included in the optimization, and a hierarchical optimization is abstracted from the synthesis tools to evolve first the building blocks and then the abstract level that employs these optimized blocks. On the other hand, measuring some of the industrial specifications needs expensive equipment, and measuring others is time-consuming, which is unfavorable for embedded system applications.
Therefore, the novel approach of "mixtrinsic multi-objective optimization" is proposed: it simulates/estimates the set of specifications that is hard to measure due to cost or time requirements, while it measures intrinsically the set of specifications that has high sensitivity to deviations. These approaches succeed in optimizing the hardware to meet the industrial specifications with a low-cost measurement setup, which is essential for embedded system applications.
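The selection step of such a mixtrinsic multi-objective optimization can be sketched as a Pareto-dominance filter over candidate circuits, where some objective values come from simulation and others from intrinsic measurement. This is a minimal illustration only; the helper names and the mocked objective functions are assumptions, not taken from the thesis:

```python
# Hedged sketch of a mixtrinsic multi-objective selection step:
# cheap-to-simulate specs are estimated extrinsically, while
# deviation-sensitive specs are "measured" (mocked here);
# candidates survive if they are Pareto-non-dominated.

def dominates(a, b):
    """True if objective vector a is at least as good as b everywhere
    and strictly better somewhere (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scores):
    # keep every score vector that no other score vector dominates
    return [s for s in scores if not any(dominates(t, s) for t in scores if t is not s)]

def simulated_gain_error(params):
    # placeholder for an extrinsically simulated specification
    return abs(params["gain"] - 10.0)

def measured_offset(params):
    # placeholder for an intrinsically measured, deviation-sensitive spec
    return abs(params["offset"])

candidates = [
    {"gain": 9.5, "offset": 0.02},
    {"gain": 10.1, "offset": 0.10},
    {"gain": 8.0, "offset": 0.50},
]
scores = [(simulated_gain_error(c), measured_offset(c)) for c in candidates]
front = pareto_front(scores)  # the non-dominated trade-offs between the two specs
```

The third candidate is dominated (worse in both objectives) and is discarded; the first two represent the trade-off between the simulated and the measured specification.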
Analyzing Centrality Indices in Complex Networks: an Approach Using Fuzzy Aggregation Operators
(2018)
The identification of entities that play an important role in a system is one of the fundamental analyses performed in network studies. This topic is mainly related to centrality indices, which quantify node centrality with respect to several properties of the represented network. The nodes identified in such an analysis are called central nodes. Although centrality indices are very useful for these analyses, several challenges remain regarding which index fits a given network best. In addition, if the use of only one index for determining central nodes leads to under- or overestimation of the importance of nodes and is insufficient for finding important nodes, the question becomes how multiple indices can be used in conjunction in such an evaluation. Thus, in this thesis an approach is proposed that includes multiple indices of nodes, each indicating one aspect of importance, in the evaluation, and in which all aspects of a node's centrality are analyzed in an explorative manner. To achieve this aim, the proposed idea uses fuzzy operators, including a parameter for generating different types of aggregations over multiple indices. In addition, several preprocessing methods for the normalization of those values are proposed and discussed. We investigate whether different decisions regarding the aggregation of the values change the ranking of the nodes. It is revealed that (1) there are nodes that remain stable among the top-ranking nodes, which makes them the most central nodes, and nodes that remain stable among the bottom-ranking nodes, which makes them the least central nodes; and (2) there are nodes that show high sensitivity to the choice of normalization methods and/or aggregations. We explain both cases and the reasons why the nodes' rankings are stable or sensitive to the corresponding choices in various networks, such as social networks, communication networks, and air transportation networks.
Lithium-ion batteries are broadly used nowadays in all kinds of portable electronics, such as laptops, cell phones, tablets, e-book readers, and digital cameras. They are preferred to other types of rechargeable batteries due to their superior characteristics, such as light weight, high energy density, no memory effect, and a large number of charge/discharge cycles. The high demand for and applicability of Li-ion batteries naturally give rise to the unceasing necessity of developing better batteries in terms of performance and lifetime. The aim of the mathematical modelling of Li-ion batteries is to help engineers test different battery configurations and electrode materials faster and more cheaply. Lithium-ion batteries are multiscale systems. A typical Li-ion battery consists of multiple connected electrochemical battery cells. Each cell has two electrodes, an anode and a cathode, as well as a separator between them that prevents a short circuit.
Both electrodes have a porous structure composed of two phases, solid and electrolyte. We call the lengthscale of the whole electrode the macroscale, and the lengthscale at which the complex porous structure of the electrodes can be distinguished the microscale. We start from a Li-ion battery model derived on the microscale. The model is based on nonlinear diffusion-type equations for the transport of lithium ions and charges in the electrolyte and in the active material. Electrochemical reactions on the solid-electrolyte interface couple the two phases. The interface kinetics is modelled by the highly nonlinear Butler-Volmer interface conditions. Direct numerical simulation with standard methods, such as the Finite Element Method or the Finite Volume Method, leads to ill-conditioned problems with a huge number of degrees of freedom, which are difficult to solve. Therefore, the aim of this work is to derive upscaled models on the lengthscale of the whole electrode so that we do not have to resolve all the small-scale features of the porous microstructure, thus reducing the computational time and cost. We do this by applying two different upscaling techniques: the Asymptotic Homogenization Method and the Multiscale Finite Element Method (MsFEM). We consider the electrolyte and the solid as two self-complementary perforated domains, and we exploit this idea with both upscaling methods. The first method is restricted to periodic media and periodically oscillating solutions, while the second can be applied to randomly oscillating solutions and is based on the Finite Element Method framework. We apply the Asymptotic Homogenization Method to derive a coupled macro-micro upscaled model under the assumption of a periodic electrode microstructure. A crucial step in the homogenization procedure is the upscaling of the Butler-Volmer interface conditions.
We rigorously determine the asymptotic order of the interface exchange current densities and perform a comprehensive numerical study to validate the derived homogenized Li-ion battery model. To upscale the microscale battery problem in the case of a random electrode microstructure, we apply the MsFEM, extended to problems in perforated domains with Neumann boundary conditions on the holes. We conduct a detailed numerical investigation of the proposed algorithm and show numerical convergence of the method that we design. We also apply the developed technique to a simplified two-dimensional Li-ion battery problem and show numerical convergence of the solution obtained with the MsFEM to the reference microscale one.
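For orientation, the Butler-Volmer interface condition referred to above is commonly written in the following textbook form (this is the standard notation, which may deviate from the symbols used in the thesis):

```latex
% exchange current on the solid-electrolyte interface (standard form)
i_{se} \;=\; i_0 \left[ \exp\!\left(\frac{\alpha_a F \eta}{R T}\right)
  \;-\; \exp\!\left(-\frac{\alpha_c F \eta}{R T}\right) \right],
\qquad \eta \;=\; \phi_s - \phi_e - U_0(c_s),
```

where $i_0$ is the exchange current density, $\alpha_a$ and $\alpha_c$ are the anodic and cathodic transfer coefficients, $F$ the Faraday constant, $R$ the gas constant, $T$ the temperature, $\eta$ the surface overpotential, and $U_0(c_s)$ the open-circuit potential as a function of the surface concentration. The exponential dependence on $\eta$ is the source of the strong nonlinearity mentioned above.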
Lithium-ion batteries are increasingly becoming a ubiquitous part of our everyday life: they are present in mobile phones, laptops, tools, cars, etc. However, there are still many concerns about their longevity and their safety. In this work we focus on the simulation of several degradation mechanisms on the microscopic scale, where one can resolve the active materials inside the electrodes of lithium-ion batteries as porous structures. We mainly study two aspects: heat generation and mechanical stress. For the former, we consider an electrochemical non-isothermal model on the spatially resolved porous scale to observe the temperature increase inside a battery cell, as well as the individual heat sources, in order to assess their contributions to the total heat generation. As a result of our experiments, we determined that the temperature has very small spatial variance for our test cases, which allows for an ODE formulation of the heat equation.
The second aspect that we consider is the generation of mechanical stress as a result of the insertion of lithium ions into the electrode materials. We study two approaches: small strain models and finite strain models. For the small strain models, the initial geometry and the current geometry coincide. The model consists of a diffusion equation for the lithium ions and an equilibrium equation for the mechanical stress. First, we test a single perforated cylindrical particle using different boundary conditions for the displacement and Neumann boundary conditions for the diffusion equation. We also test cylindrical particles with boundary conditions for the diffusion equation in the electrodes coming from an isothermal electrochemical model for the whole battery cell. For the finite strain models, we take into account the deformation of the initial geometry as a result of the intercalation and the mechanical stress. We compare two elastic models to study the sensitivity of the predicted elastic behavior to the specific model used. We also consider a softening of the active material dependent on the concentration of the lithium ions, using data for silicon electrodes. We recover the general behavior of the stress known from physical experiments.
Some models, like the mechanical models we use, depend on the local values of the concentration to predict the mechanical stress. For this reason, we perform a short comparative study between the Finite Element Method with tetrahedral elements and the Finite Volume Method with voxel volumes for an isothermal electrochemical model.
The spatial discretization of the PDEs is done using the Finite Element Method. For some models we have discontinuous quantities, for which we adapt the FEM accordingly. The time derivatives are discretized using the implicit Backward Euler method, and the resulting nonlinear systems are linearized using the Newton method. All of the discretized models are implemented in a C++ framework developed during the thesis.
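The time-stepping and linearization pipeline described above can be illustrated on a scalar model problem. The equation du/dt = -u^3 below is a stand-in chosen only for brevity, not an equation from the thesis; each implicit Backward Euler step yields a nonlinear algebraic equation that is solved with Newton's method:

```python
# Sketch of Backward Euler time stepping with a Newton solver for the
# toy nonlinear ODE du/dt = -u^3 (illustrative stand-in problem).

def backward_euler_newton(u0, dt, steps, tol=1e-12, maxit=50):
    u = u0
    for _ in range(steps):
        # implicit step: solve F(x) = x - u_prev + dt * x^3 = 0
        x = u                      # previous value as initial Newton guess
        for _ in range(maxit):
            F = x - u + dt * x ** 3
            dF = 1.0 + 3.0 * dt * x * x   # Jacobian of F (here a scalar)
            dx = F / dF
            x -= dx                 # Newton update
            if abs(dx) < tol:
                break
        u = x
    return u
```

For a PDE, `x` becomes the vector of FEM degrees of freedom, `F` the discrete residual, and `dF` the Jacobian matrix, but the structure of the loop is the same.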
In this dissertation, we discuss how to price American-style options. Our aim is to study and improve regression-based Monte Carlo methods. In order to have good benchmarks to compare them against, we also study tree methods.
In the second chapter, we investigate tree methods, first within the Black-Scholes model and then within the Heston model. In the Black-Scholes model, based on Müller's work, we illustrate how to price one-dimensional and multidimensional American options, American Asian options, American lookback options, American barrier options, and so on. In the Heston model, based on Sayer's research, we implement his algorithm to price one-dimensional American options. In this way, we obtain good benchmarks for various American-style options and collect them in the appendix.
In the third chapter, we focus on regression-based Monte Carlo methods, both theoretically and numerically. First, we introduce two variants, the so-called Tsitsiklis-Roy method and the Longstaff-Schwartz method. Second, we illustrate the approximation of an American option by its Bermudan counterpart. Third, we explain the sources of low and high bias. Fourth, we compare the two methods using in-the-money paths versus all paths. Fifth, we examine the effect of using different numbers and forms of basis functions. Finally, we study the Andersen-Broadie method and present the resulting lower and upper bounds.
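A minimal sketch of the Longstaff-Schwartz method for an American put under Black-Scholes may clarify the regression step discussed here. The parameters, the quadratic basis, and the restriction to in-the-money paths are illustrative choices, not the thesis's exact configuration:

```python
# Hedged sketch of the Longstaff-Schwartz (LSM) algorithm for an
# American put: simulate GBM paths, then perform backward induction,
# regressing discounted continuation values on the basis [1, S, S^2].
import math
import random

def simulate_paths(s0, r, sigma, T, steps, n_paths, seed=7):
    rng = random.Random(seed)
    dt = T / steps
    paths = []
    for _ in range(n_paths):
        s, path = s0, [s0]
        for _ in range(steps):
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
            path.append(s)
        paths.append(path)
    return paths

def solve3(A, b):
    # Gaussian elimination with partial pivoting for the 3x3 normal equations
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, 3):
            f = M[i][col] / M[col][col]
            for c in range(col, 4):
                M[i][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def lsm_american_put(s0, K, r, sigma, T, steps, n_paths):
    paths = simulate_paths(s0, r, sigma, T, steps, n_paths)
    disc = math.exp(-r * T / steps)
    cash = [max(K - p[-1], 0.0) for p in paths]          # payoff at maturity
    for t in range(steps - 1, 0, -1):                    # backward induction
        cash = [c * disc for c in cash]                  # discount one step
        itm = [i for i, p in enumerate(paths) if K - p[t] > 0.0]
        if len(itm) < 3:
            continue
        A = [[0.0] * 3 for _ in range(3)]
        b = [0.0] * 3
        for i in itm:                                    # normal equations
            phi = [1.0, paths[i][t], paths[i][t] ** 2]
            for j in range(3):
                b[j] += phi[j] * cash[i]
                for k in range(3):
                    A[j][k] += phi[j] * phi[k]
        beta = solve3(A, b)
        for i in itm:                                    # exercise decision
            s = paths[i][t]
            cont = beta[0] + beta[1] * s + beta[2] * s * s
            if K - s > cont:
                cash[i] = K - s
    return disc * sum(cash) / len(cash)

price = lsm_american_put(100.0, 100.0, 0.05, 0.2, 1.0, 50, 4000)
```

This yields a low-biased estimate of the American put price; pairing it with a duality-based upper bound, as in the Andersen-Broadie method mentioned above, brackets the true value.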
In the fourth chapter, we study two machine learning techniques to improve the regression part of the Monte Carlo methods: the Gaussian kernel method and the kernel-based support vector machine. In order to choose a proper smoothing parameter, we compare a fixed bandwidth, the global optimum, and a suboptimum from a finite set. We also point out that scaling the training data to [0,1] can avoid numerical difficulties. When out-of-sample paths of stock prices are simulated, the kernel method is robust and in several cases even performs better than the Tsitsiklis-Roy and Longstaff-Schwartz methods. The support vector machine further improves on the kernel method and needs fewer representations of old stock prices when predicting the option continuation value for a new stock price.
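The Gaussian kernel regression idea and the [0,1] scaling can be sketched with a Nadaraya-Watson estimator. This is a generic stand-in for the chapter's kernel method; the bandwidth value and the toy data below are assumptions for illustration:

```python
# Sketch: Gaussian kernel (Nadaraya-Watson) regression on data scaled
# to [0, 1], as a stand-in for regressing option continuation values.
import math

def scale01(xs):
    # scale features to [0, 1] to avoid numerical difficulties
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def gaussian_kernel_predict(x, xs, ys, h):
    # locally weighted average with a Gaussian kernel of bandwidth h
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# toy continuation-value-style data: y = x^2 on a grid of scaled prices
xs = scale01([90.0 + 2.0 * i for i in range(11)])   # prices 90..110 -> [0, 1]
ys = [x * x for x in xs]
pred = gaussian_kernel_predict(0.5, xs, ys, h=0.1)  # close to 0.25
```

Unlike this estimator, which keeps every training point, a support vector machine retains only the support vectors, which is the sparsity advantage noted in the abstract.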
In the fifth chapter, we switch to the hardware (FPGA) implementation of the Longstaff-Schwartz method and propose novel reversion formulas for the stock price and volatility within the Black-Scholes and Heston models. Tests of these formulas within the Black-Scholes model show that data storage, and hence the corresponding energy consumption, is reduced.