### Refine

#### Year of publication

#### Document Type

- Doctoral Thesis (665)

#### Language

- English (665)

#### Keywords

- Visualisierung (13)
- finite element method (9)
- Finite-Elemente-Methode (7)
- Visualization (7)
- Algebraische Geometrie (6)
- Numerische Strömungssimulation (6)
- Computergraphik (5)
- Finanzmathematik (5)
- Mobilfunk (5)
- Optimization (5)
- Portfolio Selection (5)
- Computeralgebra (4)
- Kontinuumsmechanik (4)
- Navier-Stokes-Gleichung (4)
- Nichtlineare Finite-Elemente-Methode (4)
- Portfolio-Optimierung (4)
- Simulation (4)
- Stochastische dynamische Optimierung (4)
- computational mechanics (4)
- portfolio optimization (4)
- verification (4)
- Elastizität (3)
- Evaluation (3)
- Finite-Volumen-Methode (3)
- Flüssig-Flüssig-Extraktion (3)
- Geoinformationssystem (3)
- Homogenisierung <Mathematik> (3)
- Infrarotspektroskopie (3)
- Inverses Problem (3)
- Layout (3)
- MIMO (3)
- Mehrskalenmodell (3)
- Model checking (3)
- Mustererkennung (3)
- NURBS (3)
- Numerische Mathematik (3)
- OFDM (3)
- Partial Differential Equations (3)
- Portfolio Optimization (3)
- Ray casting (3)
- Robotik (3)
- Scientific Visualization (3)
- Semantic Web (3)
- Transaction Costs (3)
- Tropische Geometrie (3)
- Verifikation (3)
- Wavelet (3)
- cobalt (3)
- computer graphics (3)
- document analysis (3)
- isogeometric analysis (3)
- optical character recognition (3)
- optimales Investment (3)
- visualization (3)
- ADAS (2)
- Activity recognition (2)
- Algorithmus (2)
- Apoptosis (2)
- Asymptotic Expansion (2)
- Asymptotik (2)
- B-Spline (2)
- B-splines (2)
- Bewertung (2)
- Bildverarbeitung (2)
- Blattschneiderameisen (2)
- Bottom-up (2)
- CFD (2)
- CYP1A1 (2)
- Cluster-Analyse (2)
- Clusterion (2)
- Computational Fluid Dynamics (2)
- Datenanalyse (2)
- Datenbank (2)
- Domänenumklappen (2)
- Effizienter Algorithmus (2)
- Elasticity (2)
- Elastomer (2)
- Elastoplasticity (2)
- Elastoplastizität (2)
- Endlicher Automat (2)
- Erdmagnetismus (2)
- Erwarteter Nutzen (2)
- Evolution (2)
- Extrapolation (2)
- FEM (2)
- Festkörper (2)
- Filtergesetz (2)
- Filtration (2)
- Finite Pointset Method (2)
- Funktionale Sicherheit (2)
- Geometric Ergodicity (2)
- Gröbner-Basis (2)
- Hochskalieren (2)
- Homogenization (2)
- Hydrodynamics (2)
- Hydrodynamik (2)
- IMRT (2)
- IR-MPD (2)
- IRMPD (2)
- Isogeometrische Analyse (2)
- Knowledge Management (2)
- Kreditrisiko (2)
- Level-Set-Methode (2)
- Lineare Elastizitätstheorie (2)
- Local smoothing (2)
- Machine learning (2)
- Mehrskalenanalyse (2)
- Merkmalsextraktion (2)
- Microarray (2)
- Model Checking (2)
- Modellierung (2)
- Modulraum (2)
- Molekulardynamik (2)
- Morphology (2)
- Nanocomposites (2)
- Natural Language Processing (2)
- Neural Networks (2)
- Nichtlineare Kontinuumsmechanik (2)
- Optimierung (2)
- Optionspreistheorie (2)
- Optische Zeichenerkennung (2)
- Pattern Recognition (2)
- Phasengleichgewicht (2)
- Piezoelektrizität (2)
- Plastizität (2)
- Polymere (2)
- Populationsbilanzen (2)
- Portfoliomanagement (2)
- Poröser Stoff (2)
- Protonentransfer (2)
- Raumakustik (2)
- Recommender Systems (2)
- Regularisierung (2)
- Room acoustics (2)
- Räumliche Statistik (2)
- SOC (2)
- Scattered-Data-Interpolation (2)
- Schnitttheorie (2)
- Stochastic Control (2)
- System-on-Chip (2)
- Technische Mechanik (2)
- Teilchen (2)
- Topology (2)
- Tropenökologie (2)
- Uncertainty Visualization (2)
- Upscaling (2)
- Vektorwavelets (2)
- Verbundwerkstoffe (2)
- Viskoelastizität (2)
- Wahrscheinlichkeitsfunktion (2)
- Wearable computing (2)
- ab initio (2)
- air interface (2)
- anisotropy (2)
- auditory brainstem (2)
- benzene (2)
- beyond 3G (2)
- bottom-up (2)
- classification (2)
- cluster (2)
- computational fluid dynamics (2)
- computational homogenization (2)
- configurational forces (2)
- continuum mechanics (2)
- curve singularity (2)
- deuteration (2)
- dipeptide (2)
- domain decomposition (2)
- duality (2)
- elastomer (2)
- enamide (2)
- finite deformations (2)
- finite elements (2)
- finite volume method (2)
- forest fragmentation (2)
- gas phase (2)
- geomagnetism (2)
- homogenization (2)
- ice shelves (2)
- illiquidity (2)
- image analysis (2)
- image processing (2)
- impedance spectroscopy (2)
- interface problem (2)
- iron (2)
- langfaserverstärkte Thermoplaste (2)
- layout analysis (2)
- lichen (2)
- linear kinetics theory (2)
- lineare kinetische Theorie (2)
- machine learning (2)
- material forces (2)
- mesh generation (2)
- metal (2)
- metal cluster (2)
- numerische Mechanik (2)
- optimal investment (2)
- phase field model (2)
- probabilistic approach (2)
- rate-dependency (2)
- rolling friction (2)
- ruthenium (2)
- single molecule magnet (2)
- social media (2)
- splines (2)
- tractor (2)
- virtual acoustics (2)
- viscoelasticity (2)
- "Slender-Body"-Theorie (1)
- "Stress-Mentor" (1)
- 150 bar loop (1)
- 17beta-Estradiol (1)
- 1D-CFD (1)
- 2D-CFD (1)
- 3D Gene Expression (1)
- 3D Point Data (1)
- 3D image analysis (1)
- 3D printing (1)
- A-infinity-bimodule (1)
- A-infinity-category (1)
- A-infinity-functor (1)
- A/D conversion (1)
- AFDX (1)
- ALE-Methode (1)
- AMC225xe (1)
- ASM (1)
- AUTOSAR (1)
- Ab-initio-Rechnung (1)
- Ableitungsfreie Optimierung (1)
- Ableitungsschätzung (1)
- Abrechnungsmanagement (1)
- Abstraction (1)
- Abstraktion (1)
- Accounting Agent (1)
- Achslage (1)
- Adaptive Antennen (1)
- Adaptive Data Structure (1)
- Adaptive Entzerrung (1)
- Adaptive time step (1)
- Additionsreaktion (1)
- Addukt (1)
- Adhäsion (1)
- Adjoint method (1)
- Adsorption (1)
- Adult learning (1)
- Advanced Encryption Standard (1)
- Aerosol (1)
- Aerosol Particles (1)
- Aerosol Partikeln (1)
- Affine Arithmetic (1)
- Ah-Rezeptor (1)
- AhR/ER Crosstalk (1)
- AhRR (1)
- Ahr Knockout Model (1)
- Algebraic dependence of commuting elements (1)
- Algebraic geometry (1)
- Algebraische Abhängigkeit der kommutierenden Elemente (1)
- Algebraischer Funktionenkörper (1)
- Algorithm (1)
- Algorithmic Differentiation (1)
- Amination (1)
- Analysis (1)
- Analytical method (1)
- Ananasgewächse (1)
- Angewandte Mathematik (1)
- Anion recognition (1)
- Anisotropie (1)
- Annulus (1)
- Ansäuerung (1)
- Anthropogener Einfluss (1)
- Anti-diffusion (1)
- Antidiffusion (1)
- Application Framework (1)
- Approximationsalgorithmus (1)
- Arc distance (1)
- Archimedische Kopula (1)
- Arithmetic data-path (1)
- Artificial Intelligence (1)
- Aryl hydrocarbon Receptor (1)
- Ascorbat (1)
- Ascorbinsäure (1)
- Ascorbylradikal (1)
- Asiatische Option (1)
- Association (1)
- Asympotic Analysis (1)
- Asymptotic Analysis (1)
- Asymptotische Entwicklung (1)
- Atmungskette (1)
- Atom optics (1)
- Atomoptik (1)
- Ausfallrisiko (1)
- Austin (1)
- Automat <Automatentheorie> (1)
- Automatic Image Captioning (1)
- Automation (1)
- Automatische Differentiation (1)
- Automatisches Beweisverfahren (1)
- Automorphismengruppe (1)
- Autonomer Roboter (1)
- Autoregressive Hilbertian model (1)
- Avirulence (1)
- Backlog (1)
- Baeocyte (1)
- Barriers (1)
- Basisband (1)
- Basket Option (1)
- Bayes-Entscheidungstheorie (1)
- Beam models (1)
- Beam orientation (1)
- Befahrbarkeitsanalyse (1)
- Benetzung (1)
- Benutzer (1)
- Benutzerfreundlichkeit (1)
- Benzol (1)
- Berufliche Entwicklung (1)
- Beschichtungsprozess (1)
- Beschränkte Arithmetik (1)
- Beschränkte Krümmung (1)
- Betrachtung des Schlimmstmöglichen Falles (1)
- Bifurkation (1)
- Bildsegmentierung (1)
- Binomialbaum (1)
- Bio-inspired (1)
- Biogeographie (1)
- Biogeography (1)
- Bioinformatik (1)
- Biomarker (1)
- Biomechanik (1)
- Bionas (1)
- Biophysics (1)
- Biorthogonalisation (1)
- Biot Poroelastizitätgleichung (1)
- Biot-Savart Operator (1)
- Biot-Savart operator (1)
- Biotrophy (1)
- Bioturbation (1)
- Bipedal Locomotion (1)
- Bitvektor (1)
- Bluetooth (1)
- Boltzmann Equation (1)
- Boosting (1)
- Bootstrap (1)
- Boundary Value Problem / Oblique Derivative (1)
- Bounded Model Checking (1)
- Brandenburg-Lubuskie (1)
- Brinkman (1)
- Brownian Diffusion (1)
- Brownian motion (1)
- Brownsche Bewegung (1)
- Bruchmechanik (1)
- Buchstabe (1)
- Buffer (1)
- Buffer Zone Method (1)
- Business Sustainability (1)
- Büyükçekmece and Mogan Lake (1)
- CAD (1)
- CDS (1)
- CDSwaption (1)
- CFD Simulation (1)
- CFRP (1)
- CHAMP (1)
- CID (1)
- CMOS (1)
- CMOS-Schaltung (1)
- CPDO (1)
- CYP1B1 (1)
- Caching (1)
- Carbon footprint (1)
- Castelnuovo Funktion (1)
- Castelnuovo function (1)
- Cauchy-Born Regel (1)
- Cauchy-Born Rule (1)
- Cauchy-Born rule (1)
- Cauchy-Navier-Equation (1)
- Cauchy-Navier-Gleichung (1)
- Cell crosstalk (1)
- Censoring (1)
- Center Location (1)
- Change Point Analysis (1)
- Change Point Test (1)
- Change-point Analysis (1)
- Change-point estimator (1)
- Change-point test (1)
- Channel estimation (1)
- Chi-Quadrat-Test (1)
- Chlamydomonas reinhardii (1)
- Chloride regulation (1)
- Cholesky-Verfahren (1)
- Chow Quotient (1)
- Chromatin (1)
- Chromatographiesäule (1)
- Chroococcales (1)
- Chroococcidiopsis (1)
- Chroococcidiopsis cubana (1)
- Chroococcidiopsis thermalis (1)
- Chroococcidiopsisdaceae (1)
- Circle Location (1)
- Classification (1)
- Click chemistry (1)
- Clock and Data Recovery Circuits (1)
- Closure (1)
- Cluster (1)
- Clusterverbindungen (1)
- Coarse graining (1)
- Code Generation (1)
- Codierung (1)
- Cognitive Amplification (1)
- Cohen-Lenstra heuristic (1)
- Collaboration (1)
- Collision Induced Dissociation (1)
- Combinatorial Optimization (1)
- Combinatorial Testing (1)
- Combined IR/UV spectroscopy (1)
- Commodity Index (1)
- Competence (1)
- Composites (1)
- Computational Homogenization (1)
- Computational Mechanics (1)
- Computer Algebra (1)
- Computer Algebra System (1)
- Computer Graphic (1)
- Computer Supported Cooperative Work (1)
- Computer algebra (1)
- Computer graphics (1)
- Computeralgebra System (1)
- Computerphysik (1)
- Computersimulation (1)
- Computertomographie (1)
- Computervision (1)
- Concrete experience (1)
- Concurrent data structures (1)
- Conditional Value-at-Risk (1)
- Configurational Forces (1)
- Conservation laws (1)
- Consistency analysis (1)
- Consistent Price Processes (1)
- Construction of hypersurfaces (1)
- Constructivism (1)
- Context Awareness (1)
- Continuum Damage (1)
- Continuum-Atomistic Multiscale Algorithm (1)
- Continuum-Atomistics (1)
- Control Engineering (1)
- Cook Wilson (1)
- Copula (1)
- Corridors (1)
- Coupled PDEs (1)
- Crack resistance (1)
- Crash (1)
- Crash Hedging (1)
- Crash modelling (1)
- Crash-Charakteristiken (1)
- Crashmodellierung (1)
- Credit Default Swap (1)
- Credit Risk (1)
- Cross-Cultural Product Development (1)
- Cross-border regions (1)
- Cross-border transport (1)
- Curvature (1)
- Curved viscous fibers (1)
- Cyanobacteria (1)
- Cyanobakterien (1)
- Cyanobakterium (1)
- Cycle Accuracy (1)
- Cytochromes P450 (1)
- Cytochrom P-450 (1)
- DCE <Programm> (1)
- DFG (1)
- DFT (1)
- DFT calculation (1)
- DL-PCBs (1)
- DNA adducts (1)
- DNS-Schädigung (1)
- DOSY (1)
- DPN (1)
- DSM (1)
- DSMC (1)
- Damage (1)
- Dark-state Polariton (1)
- Darstellungstheorie (1)
- Das Urbild eines Ideals unter einem Morphismus von Algebren (1)
- Data Modeling (1)
- Data Spreading (1)
- Data path (1)
- Dataset (1)
- Datenrückgewinnungsschaltungen (1)
- Datenspreizung (1)
- Decision Support Systems (1)
- Deep Learning (1)
- Defaultable Options (1)
- Defektinteraktion (1)
- Deformationstheorie (1)
- Dekonsolidierung (1)
- Delaunay (1)
- Delaunay triangulation (1)
- Delaunay triangulierung (1)
- Delaunay-Triangulierung (1)
- Derivat <Wertpapier> (1)
- Derivative Estimation (1)
- Deuterierung (1)
- Dicarbonsäuren (1)
- Differenzmenge (1)
- Diffusion (1)
- Diffusion processes (1)
- Diffusionskoeffizient (1)
- Diffusionsmessung (1)
- Diffusionsmodell (1)
- Diffusionsprozess (1)
- Digitalmodulation (1)
- Dioxin (1)
- Dioxin-like Compounds (1)
- Dipeptide (1)
- Direct Numerical Simulation (1)
- Discrete Event Simulation (DES) (1)
- Discriminatory power (1)
- Diskontinuität (1)
- Diskrete Fourier-Transformation (1)
- Diskrete Simulation (1)
- Dislocations (1)
- Dispersionsrelation (1)
- Disproportionierung von Ethylbenzol (1)
- Dissertation (1)
- Distributed Rendering (1)
- Distributed system (1)
- Disulfidbrücken-Transfer (1)
- Diversifikation (1)
- Domain switching (1)
- Doppelresonanz (1)
- Downlink (1)
- Dreidimensionale Bildverarbeitung (1)
- Dreidimensionale Rekonstruktion (1)
- Dreidimensionale Strömung (1)
- Droplet breakage (1)
- Droplet coalescence (1)
- Druckabfall (1)
- Druckkorrektur (1)
- Drug delivery systems (1)
- Distribution (1)
- Dunkelzustandspolariton (1)
- Duplicate Identification (1)
- Duplikaterkennung (1)
- Dynamically reconfigurable analog circuits (1)
- Dyslexie (1)
- Dünnfilmapproximation (1)
- EM algorithm (1)
- EPDM (1)
- EPR (1)
- EPR Spectroscopy (1)
- EPR Spektroskopie (1)
- EROD (1)
- ESR (1)
- Ecology (1)
- Edwards Model (1)
- Effective Conductivity (1)
- Efficiency (1)
- Effizienz (1)
- Eikonal equation (1)
- Einzelzell-Analyse (1)
- Elastische Deformation (1)
- Elektrisch (1)
- Elektrohydraulik (1)
- Elektromagnetische Streuung (1)
- Elektronenspinresonanz (1)
- Elektroporation (1)
- Eliminationsverfahren (1)
- Elliptische Verteilung (1)
- Elliptisches Randwertproblem (1)
- Empfangssignalverarbeitung (1)
- Empfehlungssysteme (1)
- Empfängerorientierung (1)
- Enamide (1)
- Endliche Geometrie (1)
- Endliche Gruppe (1)
- Ensemble Visualization (1)
- Entscheidungsbaum (1)
- Entscheidungsproblem (1)
- Entscheidungsunterstützung (1)
- Entwurf (1)
- Entwurfsautomation (1)
- Enumerative Geometrie (1)
- Environmental inequality (1)
- Environmental stress cracking resistance (1)
- Epiphyten (1)
- Epoxydharz (1)
- Erdöl Prospektierung (1)
- Erfüllbarke (1)
- Erhaltungsgleichungen (1)
- Erkenntnistheorie (1)
- Erreichbarkeit (1)
- Erwachsenenbildung (1)
- Estradiol (1)
- Estradiolrezeptor (1)
- Ethernet (1)
- Ethylbenzene disproportionation (1)
- European Pollutant Release and Transfer Register (E-PRTR) (1)
- European Territorial Cooperation (1)
- European Union (1)
- European Union policy-making (1)
- European integration (1)
- Europeanisation (1)
- Europäische Territoriale Zusammenarbeit (1)
- Eventual consistency (1)
- Evolutionary Algorithm (1)
- Expected shortfall (1)
- Experiential learning (1)
- Experiment (1)
- Experimentation (1)
- Exponential Utility (1)
- Exponentieller Nutzen (1)
- Extended Finite-Elemente-Methode (1)
- Extended Kalman Filter (1)
- Extended Mind (1)
- Extreme Events (1)
- Extreme value theory (1)
- Eyewear Computing (1)
- FFT (1)
- FPM (1)
- Faden (1)
- Fahrerassistenzsystem (1)
- Fahrtkostenmodelle (1)
- Fatty acids (1)
- Fault Tree Analysis (1)
- Feasibility study (1)
- Feature (1)
- Feature Detection (1)
- Feature Extraction (1)
- Feature extraction (1)
- Feedforward Neural Networks (1)
- Fehlerbaumanalyse (1)
- Femtosecond Laser (1)
- Fernerkundung (1)
- Festkörpergrenzschichten (1)
- Fettsäuren (1)
- Feynman Integrals (1)
- Fiber suspension flow (1)
- Fifth generation (5G) mobile networks (1)
- Financial Engineering (1)
- Finanzkrise (1)
- Finanznumerik (1)
- Finite Element Method (1)
- Finite Elemente Methode (1)
- Finite Elementes (1)
- Finite Elements (1)
- Finite element method (1)
- Finite-Elemente-Simulation (1)
- Finite-Punktmengen-Methode (1)
- Firmware (1)
- Firmwertmodell (1)
- First Order Optimality System (1)
- Flachwasser (1)
- Flachwassergleichungen (1)
- Flechten (1)
- Fließanalyse (1)
- Flow Visualization (1)
- Fluid dynamics (1)
- Fluid-Feststoff-Strömung (1)
- Fluid-Struktur-Kopplung (1)
- Fluid-Struktur-Wechselwirkung (1)
- Foam decay (1)
- Fokker-Planck-Gleichung (1)
- Formal Verification (1)
- Formale Beschreibungstechnik (1)
- Formale Grammatik (1)
- Formale Methode (1)
- Formale Sprache (1)
- Formaler Beweis (1)
- Fourier-Transformation (1)
- Fracture behavior (1)
- Fragmentierung (1)
- Framework (1)
- Fredholmsche Integralgleichung (1)
- Functional Safety (1)
- Functional autoregression (1)
- Functional time series (1)
- Funktionenkörper (1)
- Fusion (1)
- Future Internet (1)
- Füllkörpersäule (1)
- GARCH (1)
- GARCH Modelle (1)
- GPU (1)
- Galerkin Verfahren (1)
- Galerkin methods (1)
- Galerkin-Methode (1)
- Gamification (1)
- Gamma-Konvergenz (1)
- Garbentheorie (1)
- Gasphase (1)
- Gateway (1)
- Gauß-Filter (1)
- Gebietszerlegung (1)
- Gebietszerlegungsmethode (1)
- Gebogener viskoser Faden (1)
- Gefahren- und Risikoanalyse (1)
- Gemeinsame Kanalschätzung (1)
- Gen-Expression (1)
- Gene Expression (1)
- Gene expression programming (1)
- Generalisierte Plastizität (1)
- Generierung (1)
- Genome analysis (1)
- Genregulation (1)
- Geo-referenced data (1)
- Geodesie (1)
- Geographic Information System (GIS) (1)
- Geometrical Nonlinear Thermomechanics (1)
- Geometrische Ergodizität (1)
- Geovisualization (1)
- German census (1)
- Geschwindigkeitsbegrenzung (1)
- Geschwindigkeitsregelung (1)
- Geschwindigkeitswahrnehmung (1)
- Gewichteter Sobolev-Raum (1)
- Giga bit per second (1)
- Gitterbaufehler (1)
- Gittererzeugung (1)
- Glassy polymers (1)
- Gleichgewichtsstrategien (1)
- Gleitverschleiß (1)
- GlyHis (1)
- Gold nanoparticles (1)
- Google Earth (1)
- Gradient based optimization (1)
- Granular (1)
- Granular flow (1)
- Granulat (1)
- Gravitationsfeld (1)
- Greater Region Saar-Lor-Lux+ (1)
- Green's functions (1)
- Green-Funktion (1)
- Grenzfläche (1)
- Grenzflächenspannung (1)
- Gromov Witten (1)
- Gromov-Witten-Invariante (1)
- Group and Organizational Learning (1)
- Grouping by similarity (1)
- Große Abweichung (1)
- Großregion Saar-Lor-Lux+ (1)
- Gruppenoperation (1)
- Gruppentheorie (1)
- Gröbner bases (1)
- Gröbner basis (1)
- Grüne Chemie (1)
- Gyroscopic (1)
- H/D exchange (1)
- HIF-1 (1)
- HPC (1)
- HSF (1)
- HSF1 (1)
- HSP (1)
- HSP70 (1)
- Hadamard manifold (1)
- Hadamard space (1)
- Hadamard-Mannigfaltigkeit (1)
- Hadamard-Raum (1)
- Hamilton-Jacobi-Differentialgleichung (1)
- Hamiltonian Path Integrals (1)
- Hamiltonian systems (1)
- Hand gestures (1)
- Handelsstrategien (1)
- Hardware/Software co-verification (1)
- Hardwareverifikation (1)
- Harmonische Analyse (1)
- Harmonische Spline-Funktion (1)
- Harold Arthur (1)
- Haustoria (1)
- Hazard Analysis (1)
- Hazard Functions (1)
- Heat stress response (1)
- Heavy-tailed Verteilung (1)
- Hedging (1)
- Heißes Elektron (1)
- Helmholtz Type Boundary Value Problems (1)
- Heterogene Katalyse (1)
- Heterogeneous (1)
- Hidden Markov models for Financial Time Series (1)
- Hierarchische Matrix (1)
- High-Spin-Komplexe (1)
- HisGly (1)
- Hitting families (1)
- Homogeneous deformation (1)
- Homogenisieren (1)
- Homologische Algebra (1)
- Honduras (1)
- Horizontal gene transfer (1)
- Hub Location Problem (1)
- Human Liver Cell Models (1)
- Human-Computer Interaction (1)
- Humanism (1)
- Hydratation (1)
- Hydrostatischer Druck (1)
- Hyperelastizität (1)
- Hyperelliptische Kurve (1)
- Hyperflächensingularität (1)
- Hypergraph (1)
- Hyperspektraler Sensor (1)
- Hysterese (1)
- IEC 61508 (1)
- IEEE 802.15.4 (1)
- IP Address (1)
- IP Traffic Accounting (1)
- IP-XACT (1)
- ISO 26262 (1)
- ITSM (1)
- Idealklassengruppe (1)
- Ileostomy (1)
- Illiquidität (1)
- Image Processing (1)
- Image restoration (1)
- Imatinib mesilat (1)
- Immiscible lattice BGK (1)
- Immobilienaktie (1)
- Immobilisierung (1)
- Immunoblot (1)
- Implementierung (1)
- Incremental recomputation (1)
- Individual (1)
- Induction heating (1)
- Induktive logische Programmierung (1)
- Industrial air pollution (1)
- Inflation (1)
- Information Extraction (1)
- Information Visualization (1)
- Informationsübertragung (1)
- Infrared Multi Photon Dissociation (1)
- Infrared Multiphoton Dissociation Spectroscopy (IR-MPD) (1)
- Infrarotspek (1)
- Innovation (1)
- Intensity estimation (1)
- Intensität (1)
- Interaction (1)
- Interactive decision support systems (1)
- Interaktion (1)
- Interfaces (1)
- Interkulturelle Produktentwicklung (1)
- Intermediate Composition (1)
- Interpolation (1)
- Invariante (1)
- Invariante Momente (1)
- Inverse Problem (1)
- Inverse spin injection (1)
- Ionensolvatation (1)
- Irreduzibler Charakter (1)
- Isomerisierung von n-Decan (1)
- Isotopieeffekt (1)
- Jacobigruppe (1)
- Jitter (1)
- John L. (1)
- KCC2 (1)
- Kanalcodierung (1)
- Kanalschätzung (1)
- Kardiotoxizität (1)
- Karhunen-Loève expansion (1)
- Katalyse (1)
- Katalytische Hydrierung (1)
- Kategorientheorie (1)
- Kellerautomat (1)
- Kelvin Transformation (1)
- Kirchhoff-Love shell (1)
- Klassifikation (1)
- Knochenmetastase (1)
- Knowledge transfer (1)
- Knuth-Bendix completion (1)
- Knuth-Bendix-Vervollständigung (1)
- Kohäsive Grenzschichten (1)
- Kolonisierung (1)
- Kombinatorik (1)
- Kombinierte IR/UV-Spektroskopie (1)
- Kommutative Algebra (1)
- Kompetenz (1)
- Kompression (1)
- Konfigurationskräfte (1)
- Konfigurationsmechanik (1)
- Konjugierte Dualität (1)
- Konstruktion von Hyperflächen (1)
- Kontinuum <Mathematik> (1)
- Kontinuums-Atomistische Kopplung (1)
- Kontinuumsphysik (1)
- Konvergenz (1)
- Konvergenzrate (1)
- Konvergenzverhalten (1)
- Konvexe Optimierung (1)
- Kopplungsmethoden (1)
- Kopplungsproblem (1)
- Kopula <Mathematik> (1)
- Korrelationsanalyse (1)
- Kreditderivate (1)
- Kryptoanalyse (1)
- Kryptologie (1)
- Krümmung (1)
- Kullback-Leibler divergence (1)
- Kurve (1)
- Kurvenschar (1)
- LIBOR (1)
- LIDAR (1)
- LIR-Tree (1)
- Lagrangian relaxation (1)
- Laminare Grenzschicht (1)
- Land Use Planning (1)
- Laplace transform (1)
- Large Eddy Simulation (1)
- Large High-Resolution Displays (1)
- Large Synchronous Networks (1)
- Laser Wakefield Particle Accelerator (1)
- Lateral superior olive (1)
- Lattice Boltzmann (1)
- Lattice Boltzmann Method (1)
- Lattice-BGK (1)
- Lattice-Boltzmann (1)
- Leading-Order Optimality (1)
- Learning Analytics (1)
- Leber (1)
- Leichtbau (1)
- Leistungseffizienz (1)
- Leistungsmessung (1)
- Leitfähigkeit (1)
- Lennard-Jones (1)
- Lesen (1)
- Lesenlernen (1)
- Lesestörung (1)
- Leukämie (1)
- Level set methods (1)
- LiDAR (1)
- Lichtspeicherung (1)
- Lie-Typ-Gruppe (1)
- Light Storage (1)
- Linear-Quadratic-Regulator (1)
- Lineare partielle Differentialgleichung (1)
- Linked Data (1)
- Linking Data Analysis and Visualization (1)
- Lippmann-Schwinger equation (1)
- Liquid-Liquid Extraction (1)
- Liquid-liquid dispersion (1)
- Liquid-liquid extraction (1)
- Liquidität (1)
- Liver Toxicity (1)
- Local continuum (1)
- Locally Supported Zonal Kernels (1)
- Location (1)
- Logiksynthese (1)
- Lokalisierung (1)
- Low Jitter (1)
- Luftschnittstellen (1)
- Lungenkrebs (1)
- Lärmbelastung (1)
- Lärmimmission (1)
- MAC protocols (1)
- MBS (1)
- MIMO Systeme (1)
- MIMO-Antennen (1)
- MIP-Emissionsspektroskopie (1)
- MIP-Massenspektrometrie (1)
- MKS (1)
- MO-Theorie (1)
- Macaulay’s inverse system (1)
- Mach-Zehnder-Interferometer (1)
- Machine Learning (1)
- Magnetfeldbasierter Lokalisierung (1)
- Magnetfelder (1)
- Magnetoelastic coupling (1)
- Magnetoelasticity (1)
- Magnetostriction (1)
- Manufacturing (1)
- Manufacturing Control (1)
- Manufacturing System (1)
- MapReduce (1)
- Marangoni-Effekt (1)
- Markov Chain (1)
- Markov Kette (1)
- Markov-Ketten-Monte-Carlo-Verfahren (1)
- Marktmanipulation (1)
- Marktrisiko (1)
- Martingaloptimalitätsprinzip (1)
- Maschinelle Übersetzung (1)
- Mass transfer (1)
- Massenspektrometrie (1)
- Material Modelling (1)
- Material Properties under Extreme Conditions (1)
- Material-Force-Method (1)
- Materialermüdung (1)
- Materialmodellierung (1)
- Materialsysteme (1)
- Materielle Kräfte (1)
- Mathematical Finance (1)
- Mathematik (1)
- Mathematisches Modell (1)
- Matrixkompression (1)
- Matrizenfaktorisierung (1)
- Matrizenzerlegung (1)
- Maximal Cohen-Macaulay modules (1)
- Maximale Cohen-Macaulay Moduln (1)
- Maximum Intensity Projection (1)
- Maximum Likelihood Estimation (1)
- Maximum-Likelihood-Schätzung (1)
- Maxwell's equations (1)
- Meaningful Work (1)
- Mechanical (1)
- Mechanics (1)
- Mechanisch (1)
- Mehrdimensionale Bildverarbeitung (1)
- Mehrdimensionales Variationsproblem (1)
- Mehrkriterielle Optimierung (1)
- Mehrkörpersystem (1)
- Mehrphasenströmung (1)
- Mehrskalen (1)
- Mehrträgerübertragungsverfahren (1)
- Memory Architecture (1)
- Memory Consistency (1)
- Memory Theory (1)
- Mensch-Maschine-Kommunikation (1)
- Menschenmenge (1)
- Merkmalsraum (1)
- Mesh-Free (1)
- Metabolismus (1)
- Metabolomics (1)
- Metal-Free (1)
- Metallschicht (1)
- Metapopulation (1)
- Meter (1)
- Methode der finiten Elemente (1)
- Microsystem Technology (1)
- Mie- and Helmholtz-Representation (1)
- Mie- und Helmholtz-Darstellung (1)
- Mikrodrall (1)
- Mikroelektronik (1)
- Mikroklima (1)
- Mikromorphe Kontinua (1)
- Mikroskopie (1)
- Mikrostruktur (1)
- Mikrosystemtechnik (1)
- Mindfulness (1)
- Minimal Cut Set Visualization (1)
- Minimal training (1)
- Mitochondria (1)
- Mitochondrien (1)
- Mitochondrium (1)
- Mixed integer programming (1)
- Mobile Telekommunikation (1)
- Mobile system (1)
- Mobiler Roboter (1)
- Mobilfunksysteme (1)
- Model-driven Engineering (1)
- Modellbasierte Fehlerdiagnose (1)
- Modellbildung (1)
- Modes of learning (1)
- Modifiziertes Epoxidharz (1)
- Modularisierung (1)
- Modulationsübertragungsfunktion (1)
- Molecular Dynamics (1)
- Molekularbiologie (1)
- Molekulare Bioinformatik (1)
- Molekularstrahl (1)
- Molekülorbital (1)
- Moment Invariants (1)
- Momentum and Mass Transfer (1)
- Monte Carlo (1)
- Monte-Carlo Modelling (1)
- Mood-based Music Recommendations (1)
- Moreau-Yosida regularization (1)
- Morphismus (1)
- Morphologie (1)
- Mosco convergence (1)
- Multi Primary and One Second Particle Method (1)
- Multi-Asset Option (1)
- Multi-Edge Graph (1)
- Multi-Field (1)
- Multi-Variate Data (1)
- Multicore Resource Management (1)
- Multicore Scheduling (1)
- Multicriteria optimization (1)
- Multifield Data (1)
- Multileaf collimator (1)
- Multiperiod planning (1)
- Multiphase Flows (1)
- Multiple Jobholding (1)
- Multiresolution Analysis (1)
- Multiscale (1)
- Multiscale modelling (1)
- Multiskalen-Entrauschen (1)
- Multispektralaufnahme (1)
- Multispektralfotografie (1)
- Multivariate Analyse (1)
- Multivariate Wahrscheinlichkeitsverteilung (1)
- Multivariates Verfahren (1)
- Mutagenität (1)
- Mutation (1)
- NNK (1)
- Nachhaltigkeit (1)
- Nahrungsnetz (1)
- Namibia (1)
- Nanocomposite (1)
- Nanofaser (1)
- Nanopartikel (1)
- Natural Neighbor (1)
- Natural Neighbor Interpolation (1)
- Natürliche Nachbarn (1)
- Navigation (1)
- Nekrose (1)
- Network (1)
- Network Architecture (1)
- Network Calculus (1)
- Networks (1)
- Netzwerk (1)
- Netzwerksynthese (1)
- Neural ADC (1)
- Neuronales Netz (1)
- Nicht-Desarguessche Ebene (1)
- Nichtglatte Optimierung (1)
- Nichtkommutative Algebra (1)
- Nichtkonvexe Optimierung (1)
- Nichtkonvexes Variationsproblem (1)
- Nichtlineare Approximation (1)
- Nichtlineare Diffusion (1)
- Nichtlineare Dynamik (1)
- Nichtlineare Mechanik (1)
- Nichtlineare Optimierung (1)
- Nichtlineare Zeitreihenanalyse (1)
- Nichtlineare partielle Differentialgleichung (1)
- Nichtpositive Krümmung (1)
- Nichtrauchen (1)
- Niederschlag (1)
- Nonequilibrium Electron Kinetics (1)
- Nische (1)
- Nitrosamine (1)
- Nitsche's method (1)
- No-Arbitrage (1)
- Node-Link Diagram (1)
- Noise control (1)
- Non--local atomistic (1)
- Non-Newtonian (1)
- Non-commutative Computer Algebra (1)
- Nonlinear Optimization (1)
- Nonlinear time series analysis (1)
- Nonparametric time series (1)
- Nonspecific Adsorption (1)
- North Sea (1)
- Nostocales (1)
- Null Modell (1)
- Nulldimensionale Schemata (1)
- Numerical Flow Simulation (1)
- Numerical methods (1)
- Numerische Homogenisierung (1)
- Numerische Integration (1)
- Numerische Mathematik / Algorithmus (1)
- Numerische Simulat (1)
- Numerisches Verfahren (1)
- Nutzerorientierte Produktentwicklung (1)
- OCR (1)
- OFDM mobile radio systems (1)
- OFDM-Mobilfunksysteme (1)
- Oberflächenmaße (1)
- Oberflächenphysik (1)
- Oberflächenplasmonresonanz (1)
- Oberflächenspannung (1)
- Objekterkennung (1)
- Off-road Robotics (1)
- Off-road Robotik (1)
- Online chain partitioning (1)
- Optimal Control (1)
- Optimale Kontrolle (1)
- Optimierender Compiler (1)
- Optimization Algorithms (1)
- Option (1)
- Option Valuation (1)
- Optionsbewertung (1)
- Optische Abbildung (1)
- Optische Fernerkundung (1)
- Orchideen (1)
- Order (1)
- Order of printed copy (1)
- Osteoblast (1)
- Osteomimicry (1)
- Oxidant Evolution (1)
- PCDD/Fs (1)
- PCDD/Fs PCBs (1)
- PSPICE (1)
- Packed Columns (1)
- Panama (1)
- Papiermaschine (1)
- Parallel Algorithms (1)
- Paralleler Algorithmus (1)
- Paralleler Hybrid (1)
- Parameter identification (1)
- Parameteridentifikation (1)
- Pareto Optimality (1)
- Partially ordered sets (1)
- Participatory Sensing (1)
- Particle (1)
- Partielle Differentialgleichung (1)
- Partikel Methoden (1)
- Passivrauchen (1)
- Patchworking Methode (1)
- Patchworking method (1)
- Pathogen (1)
- Pathwise Optimality (1)
- Pedestrian Flow (1)
- Penicillin-resistance (1)
- Peptide synthesis (1)
- Perceptual grouping (1)
- Performance (1)
- Permutationsäquivalenz (1)
- Personalisation (1)
- Pervasive health (1)
- Pflanzenfressende Insekten (1)
- Phase Transition Effect (1)
- Phase Transition Effekt (1)
- Phasmatodea (1)
- Philosophy of Technology (1)
- Photoelektron (1)
- Photonische Kristalle (1)
- Phylogenie (1)
- Phylogeny (1)
- Physical activity monitoring (1)
- Piezoelectric Materials (1)
- Piezoelectricity (1)
- Piezokeramik (1)
- Planar Pressure (1)
- Planares Polynom (1)
- Planning Support Systems (1)
- Plasmon (1)
- Plastizitätstheorie (1)
- Pleurocapsales (1)
- Poisson noise (1)
- Poisson-Gleichung (1)
- Polariton (1)
- Policy implementation (1)
- PolyBoRi (1)
- Polymer (1)
- Polymer nanocomposites (1)
- Polymers (1)
- Polypropylen (1)
- Population Balance Equation (1)
- Population balance (1)
- Population balances (1)
- Populationsbilanzmodelle (1)
- Populationswachstum (1)
- Portfolio Optimierung (1)
- Portfoliooptimierung (1)
- Power Efficiency (1)
- Pragmatism (1)
- Preimage of an ideal under a morphism of algebras (1)
- Pressure Drop (1)
- Prichard (1)
- Primary human Hepatocytes (1)
- Process Data (1)
- Processor Architecture (1)
- Produktentwicklung (1)
- Professional development (1)
- Programmverifikation (1)
- Projektionsoperator (1)
- Projektive Fläche (1)
- Property checking (1)
- Prostatakrebs (1)
- Protein-Tyrosin-Kinasen (1)
- Protein/detergent complexes (1)
- Proteine (1)
- Proteintransport (1)
- Protocol Compliance (1)
- Protocol Composition (1)
- Protonentransf (1)
- Prototyp (1)
- Prototype (1)
- Prox-Regularisierung (1)
- Prozessvisualisierung (1)
- Pump Intake Flows (1)
- Punktdefekte (1)
- Punktprozess (1)
- QMC (1)
- QVIs (1)
- QoS (1)
- Quantencomputer (1)
- Quanteninformatik (1)
- Quantenwell (1)
- Quantile autoregression (1)
- Quantitative Bildanalyse (1)
- Quartz (1)
- Quasi-Variational Inequalities (1)
- Quicksort (1)
- RH795 (1)
- RKHS (1)
- RNS-Interferenz (1)
- ROS (1)
- RTL (1)
- Radial Basis Functions (1)
- Radiotherapy (1)
- Random testing (1)
- Randwertproblem (1)
- Randwertproblem / Schiefe Ableitung (1)
- Rank test (1)
- Rarefied gas (1)
- Ratenabhängigkeit (1)
- Rauchen (1)
- Raucherentwöhnung (1)
- Ray tracing (1)
- Reachability (1)
- Reactive extraction (1)
- Reaktive Sauerstoff Spezies (1)
- Reaktive Sauerstoffspezies (1)
- Reaktivextraktion (1)
- Real-Time (1)
- Real-Time Systems (1)
- Receptor design (1)
- Rechtecksgitter (1)
- Recognition (1)
- Rectilinear Grid (1)
- Red Sea (1)
- Redundanzvermeidung (1)
- Reflexionsspektroskopie (1)
- Regime Shifts (1)
- Regime-Shift Modell (1)
- Regressionsanalyse (1)
- Regularisierung / Stoppkriterium (1)
- Regularität (1)
- Regularization / Stop criterion (1)
- Regularization methods (1)
- Relative effect potencies (REPs) (1)
- Representation (1)
- Requirements engineering (1)
- Response Priming (1)
- Restricted Regions (1)
- Rhabdomyolyse (1)
- Riemannian manifolds (1)
- Riemannsche Mannigfaltigkeiten (1)
- Rigid Body Motion (1)
- Risikoanalyse (1)
- Risikobewertung (1)
- Risikomanagement (1)
- Risikomaße (1)
- Risikotheorie (1)
- Risk Assessment (1)
- Risk Measures (1)
- Rissausbreitung (1)
- Roboter (1)
- Robotics (1)
- Robust smoothing (1)
- Rohstoffhandel (1)
- Rohstoffindex (1)
- Rollreibung (1)
- Rollreibung und -verschleiß (1)
- Rombopak (1)
- Rust effector (1)
- Ruthenium (1)
- Ruthenium-Vinyliden (1)
- Rydberg molecule (1)
- SAHARA (1)
- SDL (1)
- SDL extensions (1)
- SM-SQMOM (1)
- SOEP (1)
- SPARQL (1)
- SPARQL query learning (1)
- SQMOM (1)
- SWARM (1)
- Safety Analysis (1)
- Sagnac-Effekt (1)
- Satellitenfernerkundung (1)
- Sauerstoffverbrauch (1)
- Scalar (1)
- Scale function (1)
- Scanning Electron Microscope (1)
- Schadensmechanik (1)
- Schaltwerk (1)
- Schaum (1)
- Schaumzerfall (1)
- Scheduler (1)
- Schema <Informatik> (1)
- Schematisation (1)
- Schematisierung (1)
- Schiefe Ableitung (1)
- Schlagfrequenz (1)
- Schnittstelle (1)
- Schwache Formulierung (1)
- Schwache Konvergenz (1)
- Schwache Lösu (1)
- Schädigung (1)
- Second Order Conditions (1)
- Self-X (1)
- Self-directed learning (1)
- Self-splitting objects (1)
- Semantic Wikis (1)
- Semantische Modellierung (1)
- Semantisches Datenmodell (1)
- Sendesignalverarbeitung (1)
- Sensing (1)
- Sequenzieller Algorithmus (1)
- Serre functor (1)
- Serumalbumine (1)
- Service-oriented Architecture (1)
- Settlement Appropriateness and Thresholds (1)
- Shallow Water Equations (1)
- Shape optimization (1)
- Shared Resource Modeling (1)
- Sicherheitsanalyse (1)
- Silanisierung (1)
- Silanization (1)
- Siliciumdioxid (1)
- Similarity Join (1)
- Similarity Joins (1)
- Simulation acceleration (1)
- Single Cell Analysis (1)
- Singly Occupied Molecular Orbital (SOMO) (1)
- Singular <Programm> (1)
- Singularity theory (1)
- Singularität (1)
- Singularitätentheorie (1)
- Skalar (1)
- Skelettmuskel (1)
- Slender body theory (1)
- Smart City (1)
- Smart Textile (1)
- Smartphone (1)
- Smartwatch (1)
- Sobolev spaces (1)
- Sobolev-Raum (1)
- Socio-Semantic Web (1)
- Soft Spaces (1)
- Software Comprehension (1)
- Software Dependencies (1)
- Software Evolution (1)
- Software Maintenance (1)
- Software Measurement (1)
- Software Testing (1)
- Software Visualization (1)
- Software engineering (1)
- Software transactional memory (1)
- Software-Architektur (1)
- Softwareentwicklung (1)
- Softwaremetrie (1)
- Softwarewartung (1)
- Sound Simulation (1)
- Spannungs-Dehn (1)
- Spatial Statistics (1)
- Spatial regression models (1)
- Spectral theory (1)
- Speech recognition (1)
- Speed (1)
- Speed management (1)
- Spektralanalyse <Stochastik> (1)
- Spherical Fast Wavelet Transform (1)
- Spherical Location Problem (1)
- Sphärische Approximation (1)
- Spiking Neural ADC (1)
- Spiritual leadership (1)
- Spirituality (1)
- Spline-Approximation (1)
- Split Operator (1)
- Splitoperator (1)
- Sprachdefinition (1)
- Sprachprofile (1)
- Spritzgusstechnologie (1)
- Sprödbru (1)
- Stabile Vektorbundle (1)
- Stable vector bundles (1)
- Standard basis (1)
- Standortprobleme (1)
- Static light scattering (1)
- Statine (1)
- Stationary Light (1)
- Stationäres Licht (1)
- Statistics (1)
- Statistisches Modell (1)
- Steady state (1)
- Steuer (1)
- Stimmungsbasierte Musikempfehlungen (1)
- Stochastic Impulse Control (1)
- Stochastic Processes (1)
- Stochastische Differentialgleichung (1)
- Stochastische Inhomogenitäten (1)
- Stochastische Prozesse (1)
- Stochastische optimale Kontrolle (1)
- Stochastischer Prozess (1)
- Stoffaustausch (1)
- Stokes Equations (1)
- Stokes-Gleichung (1)
- Stop- und Spieloperator (1)
- Stoßdämpfer (1)
- Strahlentherapie (1)
- Strahlungstransport (1)
- Streaming (1)
- Streptococcus (1)
- Streptococcus pneumoniae (1)
- Streptomyces (1)
- Stress management (1)
- Structure-property relationships (1)
- Strukturiertes Finanzprodukt (1)
- Strukturiertes Gitter (1)
- Strukturoptimierung (1)
- Strömung (1)
- Strömungsdynamik (1)
- Strömungsmechanik (1)
- Sublimation (1)
- Superior olivary complex (1)
- Supramolecular chemistry (1)
- Surface Reconstruction (1)
- Survival Analysis (1)
- Susceptor (1)
- Sustainability (1)
- Swakopmund (1)
- Swift Heavy Ion (1)
- Symbolic execution (1)
- Symmetrie (1)
- Symmetriebrechung (1)
- Symmetry (1)
- Synchronnetze (1)
- SystemC (1)
- Systemarchitektur (1)
- Systematics (1)
- Systematik (1)
- Systemdesign (1)
- Systemic Constructivist Approach (1)
- Systemidentifikation (1)
- Systemische Konstruktivistischen Ansatz (1)
- Systems Engineering (1)
- Sägezahneffekt (1)
- TCDD (1)
- TD-CDMA (1)
- TIPARP (1)
- TPC Bauteile (1)
- TTEthernet (1)
- TVET teachers’ education (1)
- Tail Dependence Koeffizient (1)
- Taktrückgewinnungsschaltungen (1)
- Task-based (1)
- Temporal Decoupling (1)
- Temporal data processing (1)
- Tensor (1)
- Tensorfeld (1)
- Tesselation (1)
- Test for Changepoint (1)
- Tetrachlordibenzodioxine (1)
- Tetraeder (1)
- Tetraedergi (1)
- Tetrahedral Grid (1)
- Tetrahedral Mesh (1)
- Texturrichtung (1)
- Themenbasierte Empfehlungen von Ressourcen (1)
- Thermodynamik (1)
- Thermomechanische Behandlung (1)
- Thermophoresis (1)
- Thermoplast (1)
- Thermoset (1)
- Thin film approximation (1)
- Thylakoid (1)
- Tichonov-Regularisierung (1)
- Time Series (1)
- Time-Series (1)
- Time-Triggered (1)
- Time-delay-Netz (1)
- Tire-soil interaction (1)
- ToF (1)
- Top-down (1)
- Topic-based Resource Recommendations (1)
- Topologie (1)
- Topologieoptimierung (1)
- Topology optimization (1)
- Topology visualization (1)
- Toxizität (1)
- Traffic flow (1)
- Trans-European Transport Networks (1)
- Transaction Level Modeling (TLM) (1)
- Transaction costs (1)
- Transaktionskosten (1)
- Transeuropäische Verkehrsnetze (1)
- Transfektion (1)
- Transferred proteins (1)
- Transformation (1)
- Transient state (1)
- Transkription (1)
- Transport Protocol (1)
- Traversability Analysis (1)
- Trennschärfe <Statistik> (1)
- Tribologie (1)
- Tropfenkoaleszenz (1)
- Tropfenzerfall (1)
- Tropical Grassmannian (1)
- Tropical Intersection Theory (1)
- Tube Drawing (1)
- Turnover <Ökologie> (1)
- Two-phase flow (1)
- UCP2 (1)
- UML (1)
- Ubiquitous system (1)
- Ultraviolettspektroskopie (1)
- Umweltgerechtigkeit (1)
- Unobtrusive instrumentations (1)
- Unorganized Data (1)
- Unreinheitsfunktion (1)
- Unspezifische Adsorption (1)
- Unstrukturiertes Gitter (1)
- Untermannigfaltigkeit (1)
- Upper bound (1)
- Upwind-Verfahren (1)
- Urban Flooding (1)
- Urban Water Supply (1)
- Urban sprawl (1)
- UrbanSim (1)
- Usability (1)
- Usage modeling (1)
- User Model (1)
- User-Centred Product Development (1)
- User-Experience (1)
- Utility (1)
- VALBM (1)
- VOF Model (1)
- VOF Modell (1)
- VSCPT (1)
- Validierung (1)
- Value at Risk (1)
- Value-at-Risk (1)
- Variationsrechnung (1)
- Vector (1)
- Vector Field (1)
- Vectorfield approximation (1)
- Vegetationsentwicklung (1)
- Vektor (1)
- Vektorfeldapproximation (1)
- Vektorfelder (1)
- Vektorkugelfunktionen (1)
- Verdampfung (1)
- Verification (1)
- Verkehrspolitik (1)
- Verkehrssicherheit (1)
- Verschwindungsatz (1)
- Verzerrungstensor (1)
- Verzweigung <Mathematik> (1)
- Virtual Reality (1)
- Virulence (1)
- Viscosity Adaptive Lattice Boltzmann Method (1)
- Viskoelastische Flüssigkeiten (1)
- Viskose Transportschemata (1)
- Viskosität (1)
- Visual Analytics (1)
- Visual Queries (1)
- Visualization Theory (1)
- Vitamin C (1)
- Vitamin C-Derivate (1)
- Vocational education and training (1)
- Volume rendering (1)
- Volumen-Rendering (1)
- Vorkonditionierer (1)
- Voronoi diagram (1)
- Voronoi-Diagramm (1)
- Vorverarbeitung (1)
- WCET (1)
- Waldfragmentierung (1)
- Waldökosystem (1)
- Wasserstoffbrückenbindungen (1)
- Water (1)
- Water resources (1)
- Wave Based Method (1)
- Wavelet-Theorie (1)
- Wavelet-Theory (1)
- Weak Memory Model (1)
- Wearable Computing (1)
- Weißes Rauschen (1)
- Wetland Conservation (1)
- Wetting (1)
- White Noise (1)
- White Noise Analysis (1)
- WiFi (1)
- Wide-column stores (1)
- Wirbelabtrennung (1)
- Wirbelströmung (1)
- Wirtsspezifität (1)
- Wissen (1)
- Wissenschaftliches Rechnen (1)
- Wissenserwerb (1)
- Worst-Case (1)
- Wärmeleitfähigkeit (1)
- Wärmeleitung (1)
- XDBMS (1)
- XFEM (1)
- XMCD (1)
- XML (1)
- XML query estimation (1)
- XML summary (1)
- Yaglom limits (1)
- Yaroslavskiy-Bentley-Bloch Quicksort (1)
- Zeitintegrale Modelle (1)
- Zeitreihe (1)
- Zentrenprobleme (1)
- Zeolite MCM-71 (1)
- Zeolite SSZ-53 (1)
- Zeolite UTD-1 (1)
- Zeolith (1)
- Zeolith ITQ-21 (1)
- Zeolith MCM-71 (1)
- Zeolith SSZ-53 (1)
- Zeolith UTD-1 (1)
- Zero-dimensional schemes (1)
- Zigarettenrauchen (1)
- Zigarrenrauchen (1)
- Zufälliges Feld (1)
- Zugesicherte Eigenschaft (1)
- Zustandsgleichung (1)
- Zweiphasenströmung (1)
- Zweiphotonenspektroskopie (1)
- acetate (1)
- acetylcholine receptor (1)
- acidification (1)
- acoustic modeling (1)
- actively steered implement (1)
- adhesion (1)
- adhesive joints in concrete (1)
- affective user interface (1)
- affine arithmetic (1)
- algebraic attack (1)
- algebraic correspondence (1)
- algebraic function fields (1)
- algebraic geometry (1)
- algebraic number fields (1)
- algebraic topology (1)
- algebraische Korrespondenzen (1)
- algebraische Topologie (1)
- algebroid curve (1)
- alkali (1)
- alkin (1)
- alkyne (1)
- alpha shape method (1)
- alternating minimization (1)
- alternating optimization (1)
- amid (1)
- amide (1)
- analoge Mikroelektronik (1)
- analysis of algorithms (1)
- angewandte Mathematik (1)
- angewandte Topologie (1)
- anharmonic CH modes (1)
- anharmonic vibrations (1)
- anionic receptors (1)
- anisotropen Viskositätsmodell (1)
- anisotropic viscosity (1)
- anoxia (1)
- anserine (1)
- anthropogenic effects (1)
- apoptosis (1)
- applied mathematics (1)
- apprehension (1)
- aquatic (1)
- arbitrary Lagrangian-Eulerian methods (ALE) (1)
- archimedean copula (1)
- artificial intelligence (1)
- artificial neural network (1)
- aryl hydrocarbon receptor (1)
- ascorbate (1)
- ascorbic acid (1)
- ascorbyl radical (1)
- asian option (1)
- aspartam (1)
- aspartame (1)
- assembly tasks (1)
- associations (1)
- asymmetric carboxylate stretch vibrations (1)
- automated theorem proving (1)
- autonomous networking (1)
- average-case analysis (1)
- axis orientation (1)
- basic carboxylates (1)
- basket option (1)
- beam refocusing (1)
- beating rate (1)
- behaviour-based system (1)
- benders decomposition (1)
- bending strip method (1)
- benzo[a]pyrene (1)
- benzol (1)
- bifurcation (1)
- binary countdown protocol (1)
- binomial tree (1)
- bioactive metabolites (1)
- bioavailability (1)
- biochemical characterisation (1)
- biodiversity (1)
- biology of knowledge (1)
- biomarker (1)
- biomechanics (1)
- biosensors (1)
- bitvector (1)
- black bursts (1)
- blackout period (1)
- bocses (1)
- boundary value problem (1)
- bounded model checking (1)
- brittle fracture (1)
- bursting disk (1)
- butterfly molecule (1)
- c-Abl (1)
- calving (1)
- canonical ideal (1)
- canonical module (1)
- carboxylate bridge (1)
- carboxylates (1)
- carnosine (1)
- carrier-grade point-to-point radio networks (1)
- catalysis (1)
- cells on chips (1)
- change detection (1)
- changing market coefficients (1)
- characterization of Structures (1)
- chemoprevention (1)
- chromium (1)
- climate change (1)
- closure approximation (1)
- clustering (1)
- clustering methods (1)
- coffee (1)
- cohesive cracks (1)
- cohesive elements (1)
- cohesive interface (1)
- collaborative information visualization (1)
- collaborative mobile sensing (1)
- collective intelligence (1)
- collision induced dissociation (1)
- colonization (1)
- combination band (1)
- combinatorics (1)
- community assembly (1)
- composite (1)
- composite materials (1)
- composites (1)
- computational biology (1)
- computational dynamics (1)
- computational finance (1)
- computer algebra (1)
- computer-based systems (1)
- computer-supported cooperative work (1)
- computeralgebra (1)
- concurrent (1)
- condition number (1)
- configurational mechanics (1)
- conserving time integration (1)
- consistent integration (1)
- constrained mechanical systems (1)
- content-and-structure summary (1)
- context awareness (1)
- context management (1)
- context-aware topology control (1)
- continuous master theorem (1)
- continuum damage (1)
- continuum damage mechanics (1)
- continuum fracture mechanics (1)
- controller (1)
- convergence behaviour (1)
- convex constraints (1)
- convex optimization (1)
- coordinated backhaul networks in rural areas (1)
- coordinative flexibility (1)
- correlated errors (1)
- coupled problems (1)
- coupling methods (1)
- crack path tracking (1)
- crash (1)
- crash application (1)
- crash hedging (1)
- crashworthiness (1)
- credit risk (1)
- cross section (1)
- crossphase modulation (1)
- crowd condition estimation (1)
- crowd density estimation (1)
- crowd scanning (1)
- crowd sensing (1)
- crowdsourcing (1)
- crystallization (1)
- cumulative IRMPD (1)
- curvature (1)
- curves and surfaces (1)
- cutting simulation (1)
- cyclic peptides (1)
- damage tolerance (1)
- data annotation (1)
- data sets (1)
- data-flow (1)
- dataset (1)
- decidability (1)
- decision support (1)
- decision support systems (1)
- decoding (1)
- default time (1)
- defect interaction (1)
- degenerations of an elliptic curve (1)
- dense univariate rational interpolation (1)
- density gradient theory (1)
- depth sensing (1)
- design (1)
- design automation (1)
- deterministic arbitration (1)
- deuterierung (1)
- development (1)
- dielectric elastomers (1)
- diffusion coefficient (1)
- diffusion measurement (1)
- diffusion model (1)
- diffusion models (1)
- dioxin-like compounds (1)
- directed graphs (1)
- dischargeable mass flow rate (1)
- discontinuous finite elements (1)
- discrepancy (1)
- distributed (1)
- distributed real-time systems (1)
- distributed tasks (1)
- disulfide bond transfer (1)
- diversification (1)
- diversity (1)
- domain parametrization (1)
- domain switching (1)
- double exponential distribution (1)
- downward continuation (1)
- driver assistance (1)
- driver status and intention prediction (1)
- drowsiness detection (1)
- dynamic (1)
- dynamic calibration (1)
- dynamic combinatorial chemistry (1)
- dynamic fracture mechanics (1)
- dynamic model (1)
- dysprosium (1)
- ecology (1)
- efficiency loss (1)
- elastoplasticity (1)
- electrical (1)
- electrical conductivity (1)
- electro-hydraulic systems (1)
- electronically excited states (1)
- elektronisch angeregte Zustände (1)
- elliptical distribution (1)
- embedded (1)
- embedding (1)
- emotion visualization (1)
- empirical review (1)
- enamid (1)
- end-to-end learning (1)
- endolithic (1)
- endomorphism ring (1)
- enrichment (1)
- entrepreneurial orientation (1)
- entrepreneurship (1)
- enumerative geometry (1)
- environment perception (1)
- environmental noise (1)
- epiphytes (1)
- epoxy (1)
- equation of state (1)
- equilibrium strategies (1)
- equisingular families (1)
- esterases (1)
- event segmentation (1)
- evolutionary algorithm (1)
- face value (1)
- fallible knowledge (1)
- fatigue (1)
- fault-tolerant control (1)
- fehlertolerante Regelung (1)
- fermi resonance (1)
- ferroelectric fatigue (1)
- ferroelektrische Ermüdung (1)
- ferroelektrischer Perowskit (1)
- fiber reinforced silicon carbide (1)
- fictitious configurations (1)
- filter (1)
- filtration (1)
- financial mathematics (1)
- finite Elasto-Plastizität (1)
- finite elasto-plasticity (1)
- firewall (1)
- first hitting time (1)
- flexible multibody dynamics (1)
- float glass (1)
- flood risk (1)
- flow cytometry (1)
- flow visualization (1)
- fluid interface (1)
- fluid structure (1)
- fluid structure interaction (1)
- fluid-structure interaction (FSI) (1)
- folding rocks (1)
- forest management (1)
- formate (1)
- forward-shooting grid (1)
- foundational translation validation (1)
- fracture mechanics (1)
- fragmentation channel (1)
- free surface (1)
- free-living (1)
- freie Oberfläche (1)
- front loader (1)
- functional safety (1)
- fuzzy Q-learning (1)
- fuzzy logic (1)
- gas phase reaction (1)
- gasphase (1)
- gaussian filter (1)
- gebietszerlegung (1)
- gelonin (1)
- generalized plasticity (1)
- generic self-x sensor systems (1)
- generic sensor interface (1)
- geographic information systems (1)
- geology (1)
- geometrically exact beams (1)
- gitter (1)
- global tracking (1)
- glycine neurotransmission (1)
- good semigroup (1)
- graph drawing algorithm (1)
- graph embedding (1)
- graph layout (1)
- graph p-Laplacian (1)
- gravitation (1)
- group action (1)
- großer Investor (1)
- hand pose, hand shape, depth image, convolutional neural networks (1)
- handover optimization (1)
- hedging (1)
- heterogeneous access management (1)
- heterogenous catalysis (1)
- heuristic (1)
- hexadiendiale (1)
- hierarchical matrix (1)
- higher education (1)
- higher order accurate conserving time integrators (1)
- higher-order continuum (1)
- historical documents (1)
- host preference (1)
- host-range (1)
- hybrid lightweight structures (1)
- hybrid material (1)
- hybrid materials (1)
- hybrid structure (1)
- hybride Leichtbaustrukturen (1)
- hydrogen bonds (1)
- hydrogenation (1)
- hyperbolic systems (1)
- hyperelliptic function field (1)
- hyperelliptische Funktionenkörper (1)
- hypergraph (1)
- hyperspectral unmixing (1)
- hypolithic (1)
- hypoxia (1)
- iB2C (1)
- idealclass group (1)
- image denoising (1)
- imaging (1)
- immobilization (1)
- immunotoxins (1)
- implement (1)
- implementation (1)
- impulse control (1)
- impurity functions (1)
- incompressible elasticity (1)
- inelastic multibody systems (1)
- inelastische Mehrkörpersysteme (1)
- inertial measurement unit (1)
- infinite-dimensional manifold (1)
- inflation-linked product (1)
- information systems (1)
- infrared spectroscopy (1)
- infrarot (1)
- inhibition (1)
- injection molding (1)
- integer programming (1)
- integral constitutive equations (1)
- intellectual disability (1)
- intensity (1)
- interaction networks (1)
- interface (1)
- interference (1)
- intermediate stops (1)
- interpretation (1)
- interval arithmetic (1)
- invariant (1)
- inverse coordination (1)
- inverse optimization (1)
- inverse problem (1)
- ion-sensitive field-effect transistor (1)
- ionization (1)
- isogeometric analysis (IGA) (1)
- jenseits der dritten Generation (1)
- joint channel estimation (1)
- jump-diffusion process (1)
- kalman (1)
- kinematic (1)
- kinematic model (1)
- kinetic isotope effect (1)
- kinetischer Isotopeneffekt (1)
- konsistente Integration (1)
- kontinuumsatomistischer Ansatz (1)
- landsat (1)
- language definition (1)
- language modeling (1)
- language profiles (1)
- lanthanide (1)
- large investor (1)
- large neighborhood search (1)
- large scale integer programming (1)
- lattice Boltzmann (1)
- layout (1)
- leaf-cutting ants (1)
- letter (1)
- leukemia (1)
- level K-algebras (1)
- level set method (1)
- life-history (1)
- life-strategy (1)
- limit theorems (1)
- linear code (1)
- linked data (1)
- lipases (1)
- liquid-liquid extraction (1)
- loader (1)
- localizing basis (1)
- logic synthesis (1)
- long short-term memory (1)
- long tail (1)
- longevity bonds (1)
- low-rank approximation (1)
- lung cancer (1)
- mHealth (1)
- machine-checkable proof (1)
- macro derivative (1)
- magnetic field based localization (1)
- magnetism (1)
- manganese (1)
- marine bacteria (1)
- market crash (1)
- market manipulation (1)
- martingale optimality principle (1)
- mass spectrometry (1)
- material characterisation (1)
- materielle Kräfte (1)
- mathematical modelling (1)
- mathematical morphology (1)
- matrix problems (1)
- matrix visualization (1)
- matroid flows (1)
- mehreren Übertragungszweigen (1)
- mesh deformation (1)
- mesoporous (1)
- message-passing (1)
- meta-analysis (1)
- metabolism (1)
- metadata (1)
- metaheuristics (1)
- metal fibre (1)
- metal organic frameworks (1)
- miRNA (1)
- micro lead (1)
- micromechanics (1)
- micromorphic continua (1)
- microstructures (1)
- mixed convection (1)
- mixed methods (1)
- mixed multiscale finite element methods (1)
- mixed-signal (1)
- mobile radio (1)
- mobile radio systems (1)
- mobile scale (1)
- mobility robustness optimization (1)
- modal derivatives (1)
- model (1)
- model order reduction (1)
- model-based fault diagnosis (1)
- modularisation (1)
- moduli space (1)
- molecular capsules (1)
- molecular simulation (1)
- molekulare Simulation (1)
- moment (1)
- monotone Konvergenz (1)
- monotropic programming (1)
- muconaldehyde (1)
- multi scale (1)
- multi-asset option (1)
- multi-carrier (1)
- multi-class image segmentation (1)
- multi-core processors (1)
- multi-domain modeling and evaluation methodology (1)
- multi-level Monte Carlo (1)
- multi-object tracking (1)
- multi-phase flow (1)
- multi-user (1)
- multicategory (1)
- multicore (1)
- multidimensional datasets (1)
- multifilament superconductor (1)
- multifunctionality (1)
- multigrid method (1)
- multileaf collimator (1)
- multinomial regression (1)
- multiobjective optimization (1)
- multipatch (1)
- multiplicative decomposition (1)
- multiplicative noise (1)
- multiplikative Zerlegung (1)
- multiscale analysis (1)
- multiscale denoising (1)
- multiscale methods (1)
- multitemporal (1)
- multithreading (1)
- multitype code coupling (1)
- multiuser detection (1)
- multiuser transmission (1)
- multivariate chi-square-test (1)
- multiway partitioning (1)
- myasthenia gravis (1)
- n-Decane hydroconversion (1)
- naive diversification (1)
- nanocomposites (1)
- nanofiber (1)
- nanoparticle (1)
- natural products (1)
- necrosis (1)
- negative refraction (1)
- neonatal rat ventricular cardiomyocytes (1)
- neonatale ventrikuläre Kardiomyozyten der Ratte (1)
- nestable tangibles (1)
- network flows (1)
- network synthesis (1)
- netzgenerierung (1)
- neural networks (1)
- neurotrophin 3 (1)
- nicht-newtonsche Strömungen (1)
- nichtlineare Druckkorrektor (1)
- nichtlineare Modellreduktion (1)
- nichtlineare Netzwerke (1)
- nickel (1)
- niob (1)
- non-conventional (1)
- non-desarguesian plane (1)
- non-newtonian flow (1)
- nonconvex optimization (1)
- nonlinear circuits (1)
- nonlinear diffusion filtering (1)
- nonlinear elasticity (1)
- nonlinear elastodynamics (1)
- nonlinear model reduction (1)
- nonlinear pressure correction (1)
- nonlinear term structure dependence (1)
- nonlinear vibration analysis (1)
- nonlocal filtering (1)
- nonnegative matrix factorization (1)
- nonwovens (1)
- normalization (1)
- nucleofection (1)
- null model (1)
- numerical irreducible decomposition (1)
- numerical methods (1)
- numerical time integration (1)
- numerics (1)
- numerische Dynamik (1)
- numerische Strömungssimulation (1)
- numerisches Verfahren (1)
- oblique derivative (1)
- optical code multiplex (1)
- optical imaging (1)
- optimal (1)
- optimal capital structure (1)
- optimal consumption and investment (1)
- optimal stopping (1)
- optimization (1)
- optimization correctness (1)
- option pricing (1)
- option valuation (1)
- orbit (1)
- oscillating magnetic fields (1)
- out-of-order (1)
- output feedback approximation (1)
- overtone (1)
- oxidative DNA Damage (1)
- oxidative DNA Schäden (1)
- oxo centered transition metal complexes (1)
- oxygen consumption (1)
- p300 (1)
- p53 (1)
- parallel (1)
- partial hydrolysis (1)
- partial information (1)
- participatory sensing (1)
- particle dynamics (1)
- particle finite element method (1)
- path (1)
- path cost models (1)
- path relinking (1)
- path tracking (1)
- path-dependent options (1)
- pattern (1)
- pattern recognition (1)
- penalty methods (1)
- penalty-free formulation (1)
- peripheral blood mononuclear cells (1)
- petroleum exploration (1)
- phase equilibrium (1)
- phase field modeling (1)
- phenothiazine (1)
- photonic crystals (1)
- photonic crystals filter (1)
- photonics (1)
- piezoelectricity (1)
- pivot sampling (1)
- planar polynomial (1)
- plant-herbivore interactions (1)
- plasticity (1)
- platin (1)
- platinum (1)
- point cloud (1)
- point defects (1)
- polymer blends (1)
- polymer morphology (1)
- polymer nanocomposites (1)
- polyphenol (1)
- population balance modelling (1)
- poroelasticity (1)
- porous media (1)
- portfolio (1)
- portfolio decision (1)
- portfolio-optimization (1)
- poröse Medien (1)
- position detection (1)
- potential (1)
- preconditioners (1)
- preprocessing (1)
- pressure correction (1)
- pressure drop (1)
- pressure relief (1)
- preventive maintenance (1)
- primal-dual algorithm (1)
- probabilistic modeling (1)
- probability distribution (1)
- probability of dangerous failure on demand (1)
- processing (1)
- projective surfaces (1)
- proof generating optimizer (1)
- propagating discontinuities (1)
- property checking (1)
- protein adducts (1)
- protein analysis (1)
- protein conjugate (1)
- proximation (1)
- pulsed and stirred columns (1)
- pulsierte und gerührte Kolonnen (1)
- quadrinomial tree (1)
- quantum gas (1)
- quasi-Monte Carlo (1)
- quasi-variational inequalities (1)
- quasihomogeneity (1)
- quasiregular group (1)
- quasireguläre Gruppe (1)
- radiation therapy (1)
- radiotherapy (1)
- rank-one convexity (1)
- rare disasters (1)
- rat liver cell systems (1)
- rate of convergence (1)
- raum-zeitliche Analyse (1)
- ray casting (1)
- ray tracing (1)
- reaction coordinate (1)
- reaction kinetics (1)
- reactive oxygen species (1)
- readout system (1)
- reaktionskinetik (1)
- real quadratic number fields (1)
- real-time (1)
- real-time scheduling (1)
- real-time systems (1)
- real-time tasks (1)
- receiver orientation (1)
- receptors for anions (1)
- redundant constraint (1)
- reflectionless boundary condition (1)
- reflexionslose Randbedingung (1)
- regime-shift model (1)
- regression analysis (1)
- regularity (1)
- regularization methods (1)
- reinforcement learning (1)
- relative effect potencies (1)
- relaxed memory models (1)
- remote sensing (1)
- respiratory chain (1)
- reverse logistics (1)
- rhabdomyolysis (1)
- rheology (1)
- ribosome-inactivating proteins (1)
- risk analysis (1)
- risk measures (1)
- risk reduction (1)
- robustness (1)
- rupture disk (1)
- ruthenium-vinylidene (1)
- safety-related systems (1)
- sampling (1)
- satisfiability (1)
- sawtooth effect (1)
- scalar and vectorial wavelets (1)
- scaled boundary isogeometric analysis (1)
- scaled boundary parametrizations (1)
- second class group (1)
- secondary structure prediction (1)
- seismic tomography (1)
- self calibration (1)
- self-optimizing networks (1)
- semantic web (1)
- semigroup of values (1)
- sensor fusion (1)
- sequential circuit (1)
- serum albumin (1)
- service area (1)
- sheaf theory (1)
- silica (1)
- silicon nanowire (1)
- similarity measures (1)
- singularities (1)
- skeletal muscle cells (1)
- sliding wear (1)
- small-multiples node-link visualization (1)
- software comprehension (1)
- software engineering task (1)
- solid interfaces (1)
- solvation (1)
- sparse interpolation of multivariate rational functions (1)
- sparse multivariate polynomial interpolation (1)
- sparsity (1)
- spatial planning (1)
- spatial statistics (1)
- spectroscopy (1)
- spherical approximation (1)
- spin (1)
- spin flip (1)
- sputtering process (1)
- srtm (1)
- stability (1)
- stabilization (1)
- star-shaped domain (1)
- static software structure (1)
- statin (1)
- stationary sensing (1)
- stationär (1)
- statistics (1)
- steel fibre (1)
- stochastic coefficient (1)
- stochastic optimal control (1)
- stochastic processes (1)
- stop- and play-operator (1)
- strain localization (1)
- structural summary (1)
- structural tensors (1)
- subgradient (1)
- subjective evaluation (1)
- subjectivity (1)
- sulfonic (1)
- superposed fluids (1)
- supramolecular chemistry (1)
- surface measures (1)
- surface tension (1)
- surrogate algorithm (1)
- suzuki coupling (1)
- symbolic simulation (1)
- symmetric carboxylate stretch vibrations (1)
- symmetry (1)
- synchronization (1)
- synchronous (1)
- system architecture (1)
- syzygies (1)
- tabletop (1)
- tail dependence coefficient (1)
- target sensitivity (1)
- task sequence (1)
- tax (1)
- technische und berufliche Aus- und Weiterbildung Lehrer lernen (1)
- technology mapping (1)
- tensions (1)
- tensor (1)
- tensorfield (1)
- terrain rendering (1)
- tetrachlorodibenzo-p-dioxin (1)
- texture orientation (1)
- thermal analysis (1)
- thermodynamic model (1)
- thermophysical properties (1)
- thermoplastische Verbundwerkstoffe (1)
- thiazolium (1)
- thiol-disulfide exchange (1)
- time delays (1)
- time utility functions (1)
- time-varying flow fields (1)
- timeliness (1)
- top-down (1)
- topological asymptotic expansion (1)
- topological insulator (1)
- toric geometry (1)
- torische Geometrie (1)
- total variation (1)
- total variation spatial regularization (1)
- touch surfaces (1)
- toxic equivalency factor (TEF) concept (1)
- toxicity (1)
- tracking (1)
- trade-off (1)
- traffic safety (1)
- transfer hydrogenation (1)
- transient (1)
- transition metal (1)
- transition metal complexes (1)
- transition metals (1)
- translation contract (1)
- translation invariant spaces (1)
- translinear circuits (1)
- translineare Schaltungen (1)
- transmission conditions (1)
- transport (1)
- tribology (1)
- tropical ecology (1)
- tropical geometry (1)
- tropical rainforest (1)
- tropischer Regenwald (1)
- urban planning (1)
- user-centered design (1)
- value semigroup (1)
- variable neighborhood search (1)
- variational methods (1)
- variational model (1)
- vector bundles (1)
- vector field visualization (1)
- vector spherical harmonics (1)
- vectorfield (1)
- vectorial wavelets (1)
- vehicle routing (1)
- vertical velocity (1)
- vertikale Geschwindigkeiten (1)
- virtual reality (1)
- virtual training (1)
- viscoelastic fluids (1)
- viscoelastic modeling (1)
- viscosity model (1)
- visual structure (1)
- voltage sensitive dye (1)
- vortex separation (1)
- waveguides (1)
- wavelength multiplex (1)
- wear (1)
- weighing (1)
- weighted finite-state transducers (1)
- well-posedness (1)
- wetting (1)
- wheel side-slip estimation (1)
- whole genome microarray analysis (1)
- wireless communications system (1)
- wireless networks (1)
- wireless sensor network (1)
- wireless signal (1)
- worker assistance (1)
- worst-case (1)
- worst-case scenario (1)
- zeitabhängige Strömungen (1)
- zinc (1)
- Ähnlichkeit (1)
- Äquisingularität (1)
- Ökologie (1)
- Überflutung (1)
- Überflutungsrisiko (1)
- Übergangsbedingungen (1)
- Übergangsmetall (1)
- Übersetzung (1)

#### Faculty / Organisational entity

- Fachbereich Mathematik (228)
- Fachbereich Informatik (153)
- Fachbereich Maschinenbau und Verfahrenstechnik (100)
- Fachbereich Chemie (59)
- Fachbereich Elektrotechnik und Informationstechnik (48)
- Fachbereich Biologie (31)
- Fachbereich Sozialwissenschaften (17)
- Fachbereich Wirtschaftswissenschaften (11)
- Fachbereich Physik (6)
- Fachbereich ARUBI (5)

3D hand pose and shape estimation from a single depth image is a challenging computer vision and graphics problem with many applications, such as human-computer interaction and the animation of a personalized hand shape in augmented reality (AR). The problem is challenging due to several factors, for instance high degrees of freedom, view-point variations, and varying hand shapes. Hybrid approaches based on deep learning followed by model fitting preserve the structure of the hand. However, a pre-calibrated hand model limits the generalization of these approaches. To address this limitation, we proposed a novel hybrid algorithm for the simultaneous estimation of the 3D hand pose and the bone lengths of a hand model, which allows training on datasets that contain varying hand shapes. Direct joint regression methods, on the other hand, achieve high accuracy but do not incorporate the structure of the hand in the learning process. Therefore, we introduced a novel structure-aware algorithm which learns to estimate 3D hand pose jointly with new structural constraints. These constraints include finger lengths, distances of joints along the kinematic chain, and inter-finger distances. Learning these constraints helps to maintain a structural relation between the estimated joint keypoints. Whereas previous methods addressed only 3D hand pose estimation, we opened a new research topic and proposed the first deep network which jointly estimates 3D hand shape and pose from a single depth image. Manually annotating real data for shape is laborious and sub-optimal. Hence, we created a million-scale synthetic dataset with accurate joint annotations and mesh files of depth maps. However, the performance of this deep network is restricted by the limited representation capacity of the hand model. Therefore, we proposed a novel regression-based approach in which the 3D dense hand mesh is recovered from the sparse 3D hand pose, with weak supervision provided by a depth image synthesizer. The above-mentioned approaches regressed 3D hand meshes from 2D depth images via 2D convolutional neural networks, which leads to artefacts in the estimations due to perspective distortions in the images. To overcome this limitation, we proposed a novel voxel-based deep network with 3D convolutions, trained in a weakly-supervised manner. Finally, an interesting application is presented: in-air signature acquisition and verification based on deep hand pose estimation. Experiments showed that depth itself is an important feature, which is sufficient for verification.
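The structural constraints described above (finger lengths, distances of joints along the kinematic chain) can be illustrated with a toy bone-length loss. The kinematic chain and the squared-error form below are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

# Hypothetical sketch: bone lengths computed from predicted 3D joint
# keypoints are penalised for deviating from reference lengths.
BONES = [(0, 1), (1, 2), (2, 3)]  # e.g. one finger: MCP -> PIP -> DIP -> TIP

def bone_lengths(joints):
    """joints: (J, 3) array of 3D keypoints -> per-bone Euclidean lengths."""
    return np.array([np.linalg.norm(joints[a] - joints[b]) for a, b in BONES])

def structure_loss(pred_joints, ref_lengths):
    """Mean squared deviation of predicted bone lengths from the reference."""
    return np.mean((bone_lengths(pred_joints) - ref_lengths) ** 2)

pred = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=float)
ref = np.array([1.0, 1.0, 1.0])
print(structure_loss(pred, ref))          # 0.0: prediction respects the bones
print(structure_loss(2 * pred, ref) > 0)  # True: a stretched hand is penalised
```

In a training loop such a term would be added to the keypoint regression loss, so that the network learns joint positions that remain mutually consistent.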

In the avionics domain, “ultra-reliability” refers to the practice of ensuring quantifiably negligible residual failure rates in the presence of transient and permanent hardware faults. If autonomous Cyber-Physical Systems (CPS) in other domains, e.g., autonomous vehicles, drones, and industrial automation systems, are to permeate our everyday life in the not-so-distant future, then they also need to become ultra-reliable. However, the rigorous reliability engineering and analysis practices used in the avionics domain are expensive and time-consuming, and cannot be transferred to most other CPS domains. The increasing adoption of faster and cheaper, but less reliable, Commercial Off-The-Shelf (COTS) hardware is also an impediment in this regard.
Motivated by the goal of ultra-reliable CPS, this dissertation shows how to soundly analyze the reliability of COTS-based implementations of actively replicated Networked Control Systems (NCSs), which are key building blocks of modern CPS, in the presence of transient hardware faults. When an NCS is deployed over field buses such as the Controller Area Network (CAN), transient faults are known to cause host crashes, network retransmissions, and incorrect computations. In addition, when an NCS is deployed over point-to-point networks such as Ethernet, even Byzantine errors (i.e., inconsistent broadcast transmissions) are possible. The analyses proposed in this dissertation account for NCS failures due to each of these error categories, and consider NCS failures in both the time and value domains. The analyses are also provably free of reliability anomalies. Such anomalies are problematic because they can result in unsound failure rate estimates, which might lead us to believe that a system is safer than it actually is.
Specifically, this dissertation makes four main contributions. (1) To reduce the failure rate of NCSs in the presence of Byzantine errors, we present a hard real-time design of a Byzantine Fault Tolerance (BFT) protocol for Ethernet-based systems. (2) We then propose a quantitative reliability analysis of the presented design in the presence of transient faults. (3) Next, we propose a similar analysis to upper-bound the failure probability of an actively replicated CAN-based NCS. (4) Finally, to upper-bound the long-term failure rate of the NCS more accurately, we propose analyses that take into account the temporal robustness properties of an NCS expressed as weakly-hard constraints.
By design, our analyses can be applied in the context of full-system analyses. For instance, to certify a system consisting of multiple actively replicated NCSs deployed over a BFT atomic broadcast layer, the upper bounds on the failure rates of each NCS and the atomic broadcast layer can be composed using the sum-of-failure-rates model.
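The sum-of-failure-rates composition mentioned above can be sketched in a few lines. The per-component bounds and the failure-rate budget below are invented purely for illustration; the 1e-9/h figure is a commonly cited avionics-style target, not necessarily the thesis's.

```python
# Illustrative sketch: upper-bound the system failure rate by summing
# per-component upper bounds, then compare against a certification budget.
def system_failure_rate_bound(component_bounds):
    """Upper bound on the system failure rate (failures per hour)."""
    return sum(component_bounds)

ncs_bounds = [1e-11, 1e-11, 2e-11]   # hypothetical per-NCS bounds
broadcast_bound = 5e-12              # hypothetical atomic broadcast layer bound
total = system_failure_rate_bound(ncs_bounds + [broadcast_bound])
print(total <= 1e-9)  # True: the composed bound meets the assumed budget
```

The point of the composition is that each bound can be derived independently, and soundness of the sum follows from the absence of reliability anomalies in the individual analyses.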

Learning From Networked-data: Methods and Models for Understanding Online Social Networks Dynamics
(2020)

Abstract
Nowadays, people and systems created by people are generating an unprecedented amount of
data. This data has brought us data-driven services with a variety of applications that affect
people’s behavior. One of these applications is the emergence of online social networks as a means
of communicating with each other, getting and sharing information, looking for jobs, and many
other things. However, the tremendous growth of these online social networks has also led to many
new challenges that need to be addressed. In this context, the goal of this thesis is to better understand
the dynamics between the members of online social networks from two perspectives. The
first perspective is to better understand the process and the motives underlying link formation in
online social networks. We utilize external information to predict whether two members of an online
social network are friends or not. Also, we contribute a framework for assessing the strength of
friendship ties. The second perspective is to better understand the decay dynamics of online social
networks resulting from the inactivity of their members. Hence, we contribute a model, methods,
and frameworks for understanding the decay mechanics among the members, for predicting members’
inactivity, and for understanding and analyzing inactivity cascades occurring during the decay.
The results of this thesis are: (1) The link formation process is at least partly driven by interactions
among members that take place outside the social network itself; (2) external interactions might
help reduce the noise in social networks and rank the strength of the ties in these networks;
(3) inactivity dynamics can be modeled, predicted, and controlled using the models contributed in
this thesis, which are based on network measures. The contributions and the results of this thesis
can be beneficial in many respects. For example, improving the quality of a social network by introducing
new meaningful links and removing noisy ones helps to improve the quality of the services
provided by the social network, which, e.g., enables better friend recommendations and helps to
eliminate fake accounts. Moreover, understanding the decay processes involved in the interaction
among the members of a social network can help to prolong the engagement of these members. This
is useful in designing more resilient social networks and can assist in finding influential members
whose inactivity may trigger an inactivity cascade resulting in a potential decay of a network.
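A toy sketch of the inactivity-cascade idea discussed above: a member becomes inactive once the fraction of its still-active neighbours drops below a threshold. The graph, seed set, and threshold rule are illustrative assumptions, not the thesis's actual (network-measure-based) models.

```python
def inactivity_cascade(neighbors, seed_inactive, threshold=0.5):
    """Propagate inactivity until the cascade settles and return all inactive nodes."""
    inactive = set(seed_inactive)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node in inactive or not nbrs:
                continue
            active_frac = sum(n not in inactive for n in nbrs) / len(nbrs)
            if active_frac < threshold:   # too few active friends left
                inactive.add(node)
                changed = True
    return inactive

g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(sorted(inactivity_cascade(g, {"a", "b"})))  # ['a', 'b', 'c', 'd']
```

Running the same cascade with only `{"a"}` as seed leaves the rest of the network active, which illustrates why identifying influential members (whose departure triggers a full cascade) matters.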

Operator semigroups and infinite dimensional analysis applied to problems from mathematical physics
(2020)

In this dissertation we treat several problems from mathematical physics via methods from functional analysis and probability theory and in particular operator semigroups. The thesis consists thematically of two parts.
In the first part we consider so-called generalized stochastic Hamiltonian systems. These are generalizations of Langevin dynamics which describe interacting particles moving in a surrounding medium. From a mathematical point of view these systems are stochastic differential equations with a degenerate diffusion coefficient. We construct weak solutions of these equations via the corresponding martingale problem. For this purpose, we prove essential m-dissipativity of the degenerate and non-sectorial Itô differential operator. Further, we apply results from analytic and probabilistic potential theory to obtain an associated Markov process. Afterwards we show our main result, the convergence in law of the positions of the particles in the overdamped regime, the so-called overdamped limit, to a distorted Brownian motion. To this end, we show convergence of the associated operator semigroups in the framework of Kuwae-Shioya. Further, we establish a tightness result for the approximations which, together with the convergence of the semigroups, proves weak convergence of the laws.
In the second part we deal with problems from infinite dimensional analysis. Three different issues are considered. The first one is an improvement of a characterization theorem for the so-called regular test functions and distributions of white noise analysis. As an application we analyze a stochastic transport equation in terms of the regularity of its solution in the space of regular distributions. The last two problems are from the field of relativistic quantum field theory. In the first one, the $\Phi^4_3$-model of quantum field theory is under consideration. We show that the Schwinger functions of this model have a representation as the moments of a positive Hida distribution from white noise analysis. In the last chapter we construct a non-trivial relativistic quantum field in arbitrary space-time dimension. The field is given via Schwinger functions, for which we establish all axioms of Osterwalder and Schrader. Via the reconstruction theorem of Osterwalder and Schrader, this yields a unique relativistic quantum field. The Schwinger functions are given as the moments of a non-Gaussian measure on the space of tempered distributions. We obtain the measure as a superposition of Gaussian measures. In particular, this measure is itself non-Gaussian, which implies that the field under consideration is not a generalized free field.
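As an illustration (an assumption here, not the thesis's exact class of systems), classical Langevin dynamics is the prototypical generalized stochastic Hamiltonian system: noise acts only on the momenta, so the diffusion coefficient is degenerate,

```latex
\begin{aligned}
  \mathrm{d}q_t &= p_t \,\mathrm{d}t,\\
  \mathrm{d}p_t &= -\nabla\Phi(q_t)\,\mathrm{d}t - \gamma p_t\,\mathrm{d}t
                   + \sqrt{2\gamma}\,\mathrm{d}W_t .
\end{aligned}
```

Here $\Phi$ is an interaction potential, $\gamma > 0$ a friction constant, and $W_t$ a Brownian motion. The overdamped limit mentioned above concerns the convergence in law of the positions $q_t$, after suitable rescaling, to a distorted Brownian motion.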

In today’s world, mobile communication has become one of the most widely used technologies, as corroborated by the growing number of mobile subscriptions and the extensive usage of mobile multimedia services. It is a key challenge for network operators to accommodate such a large number of users and such a high traffic volume. Further, several day-to-day scenarios, such as public transportation and public events, are now characterized by high mobile data usage. A large number of users avail themselves of cellular services in such situations, posing a high load to the respective base stations. This results in an increased number of dropped connections, blocking of new access attempts, and blocking of handovers (HO). The users in such a system will thus be subjected to poor Quality of Experience (QoE). Beforehand knowledge of the changing data traffic dynamics associated with such practical situations assists in designing radio resource management schemes aiming to ease forthcoming congestion situations. The key hypothesis of this thesis is that the consideration and utilization of additional context information regarding the user, the network, and the environment is valuable in designing such smart Radio Resource Management (RRM) schemes. Methods are developed to predict user cell transitions, considering the fact that the mobility of users is not purely random but rather direction-oriented. This is particularly used in the case of a traffic-dense moving network or a group of users moving jointly in the same vehicle (e.g., bus, train) to predict the propagation of a high load situation among cells well in advance. This enables a proactive triggering of load balancing (LB) in cells, anticipating the arrival of a high load situation and accommodating the incoming user group or moving network. The evaluated KPIs, such as blocked access attempts, dropped connections, and blocked HO, are reduced.
Further, the everyday scenario of dynamic crowd formation is considered as another potential case of a high load situation. In real-world scenarios such as open-air festivals, shopping malls, stadiums, or public events, several mobile users gather to form a crowd. This poses a high load to the respective serving base station at the site of crowd formation, thereby leading to congestion. As a consequence, mobile users are subjected to poor QoE due to high dropping and blocking rates. A framework to predict crowd formation in a cell is developed based on a coalition of user cell transition prediction, cluster detection, and trajectory prediction. This framework is used to prompt a context-aware load balancing mechanism and to activate a small cell at the probable site of crowd formation. Simulations show that proactive LB reduces the dropping of users (23%), blocking of users (10%) and blocked HO (15%). In addition, activation of a Small Cell (SC) at the site of frequent crowd formation leads to further reductions in dropping of users (60%), blocking of users (56%) and blocked HO (59%).
Similar to the framework for crowd formation prediction, a concept is developed for predicting vehicular traffic jams. Many vehicular users avail themselves of broadband cellular services on a daily basis while traveling. The density of such vehicular users changes dynamically in a cell, and at certain sites (e.g., traffic signals) traffic jams arise frequently, leading to a high load situation at the respective serving base station. A traffic prediction algorithm is developed from the cellular network perspective as a coalition strategy consisting of schemes to predict user cell transitions, detect vehicular clusters/moving networks, monitor user velocity, etc. The traffic status indication provided by the algorithm is then used to trigger LB and to activate/deactivate a small cell suitably. The evaluated KPIs, such as blocked access attempts, dropped connections, and blocked HO, are reduced by approximately 10%, 18% and 18%, respectively, due to LB. In addition, switching on the SC reduces blocked access attempts, dropped connections, and blocked HO by circa 42%, 82% and 81%, respectively.
Amidst an increasing number of connected devices and a growing traffic volume, another key issue for today’s networks is to provide uniform service quality despite high mobility. Further, urban scenarios are often characterized by coverage holes which hinder service continuity. A context-aware resource allocation scheme is proposed which uses enhanced mobility prediction to facilitate service continuity. Mobility prediction takes into account additional information about the user’s origin and possible destination to predict the next road segment. If a coverage hole is anticipated on the upcoming road, additional resources are allocated to the respective user and data is buffered suitably. The buffered data is used while the user is in the coverage hole to improve service continuity. Simulation shows an improvement in throughput (in the coverage hole) of circa 80%, and service interruption is reduced by around 90% for a non-real-time streaming service. Additionally, an investigation of context-aware procedures is carried out with a focus on user mobility to find commonalities among different procedures, and a general framework is proposed to support mobility context awareness. The new information and interfaces required from various entities (e.g., vehicular infrastructure) are discussed as well.
Device-to-Device (D2D) communication commonly refers to the technology that enables direct communication between devices, hence relieving the base station from traffic routing. Thus, D2D communication is a feasible solution in crowded situations, where users in proximity requesting to communicate with one another can be granted D2D links, thereby easing the traffic load on the serving base station. D2D links can potentially reuse the radio resources of cellular users (known as D2D underlay), leading to better spectral utilization. However, the mutual interference can hinder system performance. For instance, if D2D links reuse cellular uplink resources, D2D transmissions cause interference to the cellular uplink at the base station, while cellular transmissions cause interference to D2D receivers. To cope with such issues, location-aware resource allocation schemes are proposed for D2D communication. The key aim of such an RA scheme is to reuse resources with minimal interference. The RA scheme based on virtual sectoring of a cell leads to approximately 15% more established
links and 25% more capacity with respect to a random resource allocation. D2D transmissions cause significant interference to cellular links with
which they reuse physical resource blocks, thereby hindering cellular performance. Regulating D2D transmissions to mitigate the aforementioned problem would mean a sub-optimal exploitation of D2D communication. As a solution, post-resource-allocation power control at cellular users is proposed. Three schemes, namely interference-aware power control, blind power control, and threshold-based power control, are discussed. Simulation results show reductions in the dropping of cellular users due to interference from D2D transmissions and an improvement in throughput at the base station (uplink), while not hindering D2D performance.
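As a rough illustration of the threshold-based power control scheme named above (the rule, step size, and all numbers are assumptions, not the thesis's design), a cellular user could back off its transmit power whenever the measured interference at a co-channel D2D receiver exceeds a threshold:

```python
# Hypothetical threshold-based power control rule for a cellular user
# sharing uplink resources with a D2D link. All values are illustrative.
def adjust_power(p_tx_dbm, interference_dbm, threshold_dbm,
                 step_db=1.0, p_min_dbm=-10.0):
    """Reduce transmit power by one step while the interference threshold is exceeded."""
    if interference_dbm > threshold_dbm:
        return max(p_tx_dbm - step_db, p_min_dbm)  # never go below the floor
    return p_tx_dbm

p = 23.0                                   # initial UE transmit power in dBm
for measured in [-80.0, -70.0, -70.0]:     # interference at the D2D receiver
    p = adjust_power(p, measured, threshold_dbm=-75.0)
print(p)  # 21.0: power was reduced in the two over-threshold iterations
```

A blind variant would apply the back-off without measurements, while an interference-aware variant would size the step from the estimated interference level.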

The number of sequenced genomes is increasing rapidly due to the development of faster, better, and new technologies. Thus, there is great interest in the automation and standardization of the subsequent processing and analysis stages of the enormous amount of generated data. In the current work, genomes of clones, strains, and species of Streptococcus were compared, which had been sequenced, annotated, and analysed with several technologies and methods. For sequencing, the 454 and Illumina technologies were used. The assembly of the genomes was mainly performed with the gsAssembler (Newbler) of Roche; the annotation was performed with the annotation pipeline RAST, the transfer tool RATT, or manually. For the analysis, sets of deduced proteins of several genomes were compared to each other, and the common components, the so-called core genome, of the used genomes of one species or of closely related species were determined. A detailed comparative analysis was performed for the genomes of isolates of two clones to gather single nucleotide variants (SNVs) within genes.
This work focuses on the pathogenic organism Streptococcus pneumoniae. This species is a paradigm for transformability, virulence and pathogenicity, as well as for resistance mechanisms against antibiotics. Its close relatives S. mitis, S. pseudopneumoniae and S. oralis do not possess a pathogenicity potential as high as that of S. pneumoniae and are thus of high interest for understanding the evolution of S. pneumoniae. Strains of two S. pneumoniae clones were chosen. One is the ST10523 clone, which is associated with patients with cystic fibrosis and is characterized by long-term persistence. This clone lacks an active hyaluronidase, which is one of the main virulence factors; the lack of two phage clusters possibly also contributed to the long persistence in the human host. The clone ST226 shows a high penicillin resistance, but interestingly one strain is sensitive to penicillin. Here it could be seen that the penicillin resistance mainly arose from the presence of mosaic PBPs, while special alleles of MurM and CiaH (both genes are associated with penicillin resistance) were present in resistant and sensitive strains alike. Penicillin resistance of S. pneumoniae is the result of horizontal gene transfer, where DNA of closely related species, mainly S. mitis or S. oralis, served as donor. The transfer of DNA from the high-level penicillin-resistant strain S. oralis Uo5 to the sensitive strain S. pneumoniae R6 was intended to reveal the amount of transferred DNA and whether it is possible to reach the high resistance level of S. oralis Uo5. Altogether, about 19 kb of S. oralis DNA were transferred after three successive transformation steps, about 10-fold less than during transfer with S. mitis, which is more closely related to S. pneumoniae, as donor. MurE was identified as a new resistance determinant. Since the resistance level of the donor strain could not be reached, it is assumed that further, unknown factors contribute to penicillin resistance.
The comparison of S. pneumoniae and its close relatives was performed using deduced protein sequences. 1,041 homologous proteins are common to the four complete genomes of S. pneumoniae R6, S. pseudopneumoniae IS7493, S. mitis B6 and S. oralis Uo5. Most of the virulence and pathogenicity factors described for S. pneumoniae could also be found in the commensal species. These observations were confirmed by further investigations by Kilian et al. (Kilian, et al., 2019). After adding 26 complete S. pneumoniae genomes to the analysis, only 104 gene products could be identified as specific for this species. Investigations of a larger number of related streptococci, isolated from humans and several primates, confirmed the presence of most of the virulence factors of human pneumococci in S. oralis and S. mitis strains from primates. While NanBC is common among S. pneumoniae and is missing in all S. oralis strains, all S. oralis strains contain a β-N-acetyl-hexosaminidase which, vice versa, is missing in S. pneumoniae. The occurrence of S. oralis also in free-living chimpanzees supports the assumption that this species is part of the commensal flora of these primates, unlike S. pneumoniae, which has evolved with its human host. Compared to S. pneumoniae, S. oralis shows an amazing variability in factors important for the biosynthesis of peptidoglycan and teichoic acid (PBP, MurMN, lic cluster). Some streptococci contain a second PGP3 homologue. Additional analyses with further isolates, especially from wild animals, are necessary to determine host-specific components.
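The core-genome determination described above reduces, at its heart, to intersecting the sets of homologous protein families across genomes. The sketch below is a toy version: the gene names are real pneumococcal genes, but their presence/absence pattern is invented for illustration, and a real analysis must first cluster homologs rather than match identifiers.

```python
# Minimal core-genome sketch: the core is the set of protein families
# shared by all genomes under comparison.
def core_genome(genomes):
    """genomes: dict mapping genome name -> set of protein-family labels."""
    families = [set(g) for g in genomes.values()]
    return set.intersection(*families)

genomes = {
    "S. pneumoniae R6": {"ply", "lytA", "pbp2x", "murM"},
    "S. mitis B6":      {"lytA", "pbp2x", "murM", "nanA"},
    "S. oralis Uo5":    {"pbp2x", "murM", "lytA"},
}
print(sorted(core_genome(genomes)))  # ['lytA', 'murM', 'pbp2x']
```

Adding more genomes can only shrink the intersection, which is why the species-specific set dropped to 104 gene products once 26 further S. pneumoniae genomes were included.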

In recent years, thermoplastic composites (TPCs) have been increasingly used for aerospace and automotive applications. Other industrial sectors, such as medical technology, have also discovered the benefits of this material class. Compared to thermoset composites, TPCs can be recycled more easily, remelted, and welded. In addition, TPC parts can be produced economically and efficiently. As an example, short cycle times and high production rates of TPCs can be realised with the injection moulding processing technology. Injection moulded parts have the advantage that function integration is feasible with relatively little effort.
However, these parts are characterised by discontinuous fibre reinforcement. Fibres are randomly distributed within the part, and the fibre orientation can show significant local variations. Whereas the highest stiffness and strength values of the material are achieved parallel to the fibre orientation, the lowest values are present in the transverse direction. As a consequence, the structural mechanical properties of injection moulded discontinuous fibre reinforced parts are lower than those of their continuous fibre reinforced counterparts. Continuous fibre reinforced components show excellent specific mechanical properties; however, their freedom in geometrical product design is restricted.
The aim of this work is to extend the applicability of TPCs for structural mass products through the realisation of a high-strength interface between discontinuous and continuous fibre reinforced material. A hybrid structure with unique properties is produced by overmoulding a unidirectional endless carbon fibre (CF) reinforced polyether ether ketone (PEEK) insert with discontinuous short CF reinforced PEEK. This approach enables the manufacturing, in short cycle times, of structural mass products which require both superior structural mechanical properties and sufficient freedom in product design. However, it demands a sufficient interface strength between the discontinuous and the continuous component.
This research is based on the application case of a pedicle screw system which is
a spinal implant used for spine stabilisation and fusion. Since the 1990s, CF-PEEK
has been successfully used for spinal cages, and recently also for pedicle screws and
pedicle screw systems. Compared to metallic implants, CF-PEEK implants show several
advantages, such as the reduction of stress shielding, the prevention of artefacts in medical imaging (X-ray, computed tomography, or magnetic resonance imaging), and the avoidance of backscattering during radiotherapy. Pedicle screws,
which are used in the lumbar spine region, are subjected to high forces and moments.
Therefore, a hybrid composite pedicle screw was developed which is based on the
overmoulding process described before.
Different adhesion tests were conducted to characterise the interface strength between short and endless CF reinforced PEEK. Sufficient interface strength could only be achieved if a cohesive interface was formed. Cohesive interface formation, through melting of the surface of the endless CF reinforced PEEK insert on contact with the molten mass, required an insert pre-heating temperature of at least 260 °C prior to overmoulding. Because no standardised test method existed for the interface strength characterisation of overmoulded structures, a novel test body was developed. This cylinder pull-out specimen did not require any relevant rework steps after manufacturing, so that the interface strength could be tested directly after overmoulding. Pre-heating of the endless CF reinforced PEEK inserts resulted in a 73% increase in interface strength compared to non-pre-heated inserts.
In addition, a parametric finite element pedicle screw-bone model was developed. By parametric optimisation, the optimal hybrid composite pedicle screw design in terms of pull-out resistance was found. Within the underlying design space, the difference in screw stability between the worst and the best screw design was approximately 12%. The resulting design recommendations had to be weighed against the manufacturing requirements to define the final screw design. The moulds of the injection moulding machine were manufactured according to this design so that the hybrid composite pedicle screw could be produced.
The findings of the extensive material and interface characterisation were crucial for achieving a cohesive interface between insert and overmould, so that superior structural mechanical properties of the hybrid composite pedicle screw could be attained. For example, the bending strength of hybrid composite screws was approximately 48% higher than that of discontinuous short CF reinforced PEEK screws. Additionally, fatigue resistance was enhanced by the hybrid screw configuration, so that the risk of premature pedicle screw failure could be reduced. In the breaking torque test, hybrid composite screws showed an 11% reduction in their breaking torque values compared to their discontinuous fibre reinforced counterparts. However, not only in this test but also in the quasi-static and cyclic bending tests, the structural integrity of the hybrid composite screws could be maintained, which is important for implant components.

A building-block model reveals new insights into the biogenesis of yeast mitochondrial ribosomes
(2020)

Most of the mitochondrial proteins in yeast are encoded in the nuclear genome, are synthesized by cytosolic ribosomes, and are imported via TOM and TIM23 into the matrix or other subcompartments of mitochondria. The mitochondrial DNA in yeast, however, also encodes a small set of 8 proteins, most of which are hydrophobic membrane proteins and form core components of the OXPHOS complexes. They are synthesized by mitochondrial ribosomes, which are descendants of bacterial ribosomes and still share some similarities with them. On the other hand, mitochondrial ribosomes experienced various structural and functional changes during evolution that specialized them for the synthesis of the mitochondrially encoded membrane proteins. The mitoribosome contains mitochondria-specific ribosomal proteins and replaced the bacterial 5S rRNA with mitochondria-specific proteins and rRNA extensions. Furthermore, the mitoribosome is tethered to the inner mitochondrial membrane to facilitate the co-translational insertion of newly synthesized proteins. Thus, the assembly process of mitoribosomes also differs from that of bacteria and is to date not well understood.
This work therefore set out to investigate the biogenesis of mitochondrial ribosomes in yeast. To this end, a strain was generated in which the gene of the mitochondrial RNA polymerase, RPO41, is under the control of an inducible GAL10 promoter. Since the scaffold of ribosomes is built by ribosomal RNAs, the depletion of the RNA polymerase subsequently leads to a loss of mitochondrial ribosomes. Reinduction of Rpo41 initiates the assembly of new mitoribosomes, which makes this strain an attractive model to study mitoribosome biogenesis.
Initially, the effects of Rpo41 depletion on cellular and mitochondrial physiology were investigated. Upon Rpo41 depletion, growth on respiratory glycerol medium was inhibited. Furthermore, the mitochondrial ribosomal 21S and 15S rRNAs were diminished and mitochondrial translation was almost completely absent. Also, mitochondrial DNA was strongly reduced, owing to the fact that mtDNA replication requires RNA primers that are synthesized by Rpo41.
Next, the effect of reinduction of Rpo41 on mitochondria was tested. Time course experiments showed that mitochondrial translation can partially recover from 48h Rpo41 depletion within a timeframe of 4.5h. Sucrose gradient sedimentation experiments further showed that the mitoribosomal constitution was comparable to wildtype control samples during the time course of 4.5h of reinduction, suggesting that the ribosome assembly is not fundamentally altered in Gal-Rpo41 mitochondria. In addition, the depletion time was found to be critical for recovery of mitochondrial translation and mitochondrial RNA levels. It was observed that after 36h of Rpo41 depletion, the rRNA levels and mitochondrial translation recovered to almost 100%, but only within a time course of 10h.
Finally, mitochondria isolated from Gal-Rpo41 cells after different time points of reinduction were used to perform complexome profiling, and the assembly of mitochondrial protein complexes was investigated. First, the steady-state conditions and the assembly process of the mitochondrial respiratory chain complexes were monitored. The individual respiratory chain complexes and the super-complexes of complex III, complex IV and complex V were observed, and they recovered from Rpo41 depletion within 4.5h of reinduction. Complexome profiles of the mitoribosomal small and large subunits revealed subcomplexes of mitoribosomal proteins that were assumed to form prior to their incorporation into assembly intermediates. The complexome profiles after reinduction indeed showed the formation of these subcomplexes before the formation of the fully assembled subunit. In the mitochondrial LSU, one subcomplex builds the membrane-facing protuberance and a second subcomplex forms the central protuberance. In contrast to the preassembled subcomplexes, proteins that were involved in early assembly steps were exclusively found in the fully assembled subunit. Proteins that assemble at the periphery of the mitoribosome during intermediate and late assembly steps were found in soluble form, suggesting a pool of unassembled proteins that supplies assembly intermediates with proteins.
Taken together, the findings of this thesis suggest a so far unknown building-block model for mitoribosome assembly in which characteristic structures of the yeast mitochondrial ribosome form preassembled subcomplexes prior to their incorporation into the mitoribosome.

Properties of vapor-liquid interfaces play an important role in many processes, but corresponding data is scarce, especially for mixtures. Therefore, two independent routes were employed in the present work to study them: molecular dynamics (MD) simulations using classical force fields as well as density gradient theory (DGT) in combination with theoretically-based equations of state (EOS). The investigated interfacial properties include: interfacial tension, adsorption, and the enrichment of components, which
quantifies the interesting effect that in many systems the density of certain components in the interfacial region is much higher than in either of the bulk phases. As systematic investigations of the enrichment were lacking, it was comprehensively studied here by considering a large number of Lennard-Jones (LJ) mixtures with different phase behavior; also the dependence of the enrichment on temperature and concentration was elucidated, and a conformal solution theory for describing the interfacial properties of LJ mixtures was developed. Furthermore, general relations between interfacial properties and the phase behavior were revealed, and the relation between the enrichment and the wetting behavior of fluid interfaces was elucidated. All studies were carried out by both MD and DGT, which were found to agree well in most cases. The results were extended to real mixtures, which were studied not only by simulations but also in laboratory experiments. In connection with these investigations, three literature reviews were prepared which cover: a) simulation data on thermophysical properties of the LJ fluid; b) the performance of different EOS of the LJ fluid on that simulation data; c) data on the enrichment at vapor-liquid interfaces. Electronic databases were established for a) - c). Based on c), a short-cut method for the prediction of the enrichment from readily available vapor-liquid equilibrium data was developed. Last but not least, an MD method for studying the influence of mass transfer on interfacial properties was developed and applied to investigate the influence of the enrichment on the mass transfer.
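A minimal sketch of the enrichment measure discussed above, assuming the common definition as the maximum of a component's density profile across the interface divided by its larger bulk density (a value above 1 indicates accumulation at the interface); the density profile below is synthetic and purely illustrative.

```python
import numpy as np

def enrichment(rho, bulk_liq, bulk_vap):
    """Enrichment E = max interfacial density / larger bulk density."""
    return np.max(rho) / max(bulk_liq, bulk_vap)

z = np.linspace(-5.0, 5.0, 201)      # coordinate normal to the interface
bulk_liq, bulk_vap = 0.6, 0.05       # bulk densities of the component
# smooth liquid -> vapor transition plus a Gaussian interfacial peak
rho = bulk_vap + (bulk_liq - bulk_vap) / (1 + np.exp(z)) + 0.3 * np.exp(-z**2)
print(enrichment(rho, bulk_liq, bulk_vap) > 1.0)  # True: enrichment present
```

In practice the profile would come from an MD simulation or a DGT calculation rather than an analytic expression.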

Participatory urban planning approaches (PUPAs) are seen as key methodological tools to develop plans and strategies that can help in alleviating urban poverty and improving urban planning and governance. Yet, adopting PUPAs in restrictive political and institutional settings like those in the Arab region has proved ineffective, both in improving urban planning and governance and in including urban citizens in planning decision-making processes. There is therefore a need for a deeper understanding of the PUPAs adopted and implemented in Arab cities, in order to define the challenges that led to the weak impact of these approaches in the examined cities and to identify their potential for future improvement.
This research examines PUPAs in the City Development Strategies of two big cities in the Arab region between 2003 and 2010: Alexandria in Egypt and Aleppo in Syria. The research investigates whether PUPAs are adopted and supported by the institutional and legal framework in the cities under study and whether they are implemented successfully. For this purpose, the research identifies first the challenges and successes in implementing PUPAs in the two cities based on an in-depth analysis of the structures and actors of governance and planning. Second, it explores the effects of the PUPAs on the participatory process and vice versa.
The main findings of the research have shown that PUPAs can only be effective when the political, institutional, and social contexts are supporting participation, which is lacking in the two examined cities. Yet, the different PUPAs implemented in these cities indicate that local actors and planners have a great potential for developing innovative communication strategies and participatory mechanisms that could have positive impacts on urban planning, urban governance, and the society.

This thesis introduces a novel deformation method for computational meshes. It is based on the numerical path following for the equations of nonlinear elasticity. By employing a logarithmic variation of the neo-Hookean hyperelastic material law, the method guarantees that the mesh elements do not become inverted and remain well-shaped. In order to demonstrate the performance of the method, this thesis addresses two areas of active research in isogeometric analysis: volumetric domain parametrization and fluid-structure interaction. The former concerns itself with the construction of a parametrization for a given computational domain provided only a parametrization of the domain’s boundary. The proposed mesh deformation method gives rise to a novel solution approach to this problem. Within it, the domain parametrization is constructed as a deformed configuration of a simplified domain. In order to obtain the simplified domain, the boundary of the target domain is projected in the \(L^2\)-sense onto a coarse NURBS basis. Then, the Coons patch is applied to parametrize the simplified domain. As a range of 2D and 3D examples demonstrates, the mesh deformation approach is able to produce high-quality parametrizations for complex domains where many state-of-the-art methods either fail or become unstable and inefficient. In the context of fluid-structure interaction, the proposed mesh deformation method is applied to robustly update the computational mesh in situations when the fluid domain undergoes large deformations. In comparison to the state-of-the-art mesh update methods, it is able to handle larger deformations and does not result in an eventual reduction of mesh quality. The performance of the method is demonstrated on a classic 2D fluid-structure interaction benchmark reproduced by using an isogeometric partitioned solver with strong coupling.
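For context, a common compressible neo-Hookean stored energy with a logarithmic volumetric term (a standard textbook form; the exact variant used in the thesis may differ) is

\[
W(F) = \frac{\mu}{2}\left(\operatorname{tr}(F^{\mathsf{T}}F) - 3\right) - \mu \ln J + \frac{\lambda}{2}(\ln J)^2, \qquad J = \det F,
\]

where \(\mu\) and \(\lambda\) are the Lamé parameters. The \(\ln J\) terms diverge as \(J \to 0^+\), so configurations with inverted (zero- or negative-volume) elements have infinite energy, which is the mechanism that keeps mesh elements from inverting.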

Complex global sustainability challenges cannot be solved by governance and technology alone, but rather demand a broader cultural shift towards sustainability. Various authors postulate that a social change towards more sustainability can be brought about by a shift in human consciousness towards a more spiritual mindset. Similarly, the contemporary discourse in business literature increasingly emphasizes the importance of spirituality for business sustainability. This cumulative dissertation explores how the individual's spirituality may be connected to business sustainability. To this end, I carried out three studies on specific research gaps in the broad field of the connection between individual spirituality and business sustainability. Paper one (Chapter 2) addresses the general connection between the individual's spirituality and business sustainability. The goal of the applied systematic literature review was to gain an overview of the themes discussed in the related literature and to build a cohesive framework. This paper contributes to the literature stream of spirituality in business.
Papers two and three focus on the individual level of spirituality and business sustainability. In paper two (Chapter 3), co-authored by Katharina Spraul, we address mindfulness, a secularized Eastern spiritual practice that is gaining popularity in the Western (business) world. Mindfulness describes a nonjudgmental, nonevaluative process of paying attention to what is happening internally and externally. We connect mindfulness to business sustainability by hypothesizing that mindfulness moderates the intention-behavior relationship in the field of green employee behavior. Employee pro-environmental behavior has been found to be an important antecedent of ecological and economic business sustainability, for example through green procurement and ecological efficiency. In order to test this hypothesis, we applied a quantitative prospective design, assessing variables at two points in time. This paper enhances the theoretical strands of mindfulness research and employee green behavior.
Paper three (Chapter 4) was written in co-authorship with Katharina Spraul. In this study we focus on another spiritual practice and investigate the meaningfulness experience of multiple jobholders using the case of German part-time yoga teachers. Empirical research has linked meaningful work to job satisfaction and health (social sustainability) as well as to work engagement and performance (economic sustainability). We pose the questions: What were the motives to take up the secondary job as a yoga teacher? Which job is perceived as more meaningful and why? How does teaching yoga affect the meaningfulness of the primary, organizational job? To answer these questions, we applied a mixed-methods design: on the one hand, we conducted narrative interviews with part-time yoga teachers; on the other hand, we asked these interviewees to rank and rate Rosso et al.'s (2010) seven meaningfulness mechanisms for their jobs, from which we calculated meaningfulness values for each job. With this paper, we address gaps in research on meaningful work and multiple jobholding.
Considering the outlined theoretical strands, this cumulative dissertation contributes to sustainable development through a differentiated discussion of the relationship between the individual's spirituality and business sustainability.

In this thesis, we present the basic concepts of isogeometric analysis (IGA), taking Poisson's equation as model problem. Since in IGA the physical domain is parametrized via a geometry function that maps a parameter domain, e.g. the unit square or unit cube, to the physical one, we present a class of parametrizations that can be viewed as a generalization of polar coordinates, known as scaled boundary parametrizations (SB-parametrizations). These are easy to construct and are particularly attractive when only the boundary of a domain is available. We then present an IGA approach based on these parametrizations, which we call scaled boundary isogeometric analysis (SB-IGA). SB-IGA derives the weak form of partial differential equations in a different way than standard IGA. For the discretization projection onto a finite-dimensional space, we choose in both cases Galerkin's method. Thanks to this technique, we state an equivalence theorem for linear elliptic boundary value problems between standard IGA, when it makes use of an SB-parametrization, and SB-IGA. We solve Poisson's equation with Dirichlet boundary conditions on different geometries and with different SB-parametrizations.
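The core idea can be sketched in a few lines (a hypothetical toy illustration for a star-shaped 2D domain, not the construction used in the thesis): the map blends a scaling center with a boundary curve, and for the unit disk it reduces exactly to polar coordinates.

```python
import math

def scaled_boundary_map(xi, eta, boundary, center=(0.0, 0.0)):
    """Scaled boundary parametrization F(xi, eta) of a star-shaped domain:
    the radial coordinate xi in [0, 1] scales from the center towards the
    point boundary(eta) on the domain boundary."""
    bx, by = boundary(eta)
    cx, cy = center
    return (cx + xi * (bx - cx), cy + xi * (by - cy))

def unit_circle(eta):
    """Boundary of the unit disk, eta in [0, 1]; with this choice the map
    above is exactly the polar coordinate map."""
    angle = 2.0 * math.pi * eta
    return (math.cos(angle), math.sin(angle))

# xi = 1 recovers the boundary curve, xi = 0 collapses to the scaling center
x, y = scaled_boundary_map(1.0, 0.125, unit_circle)
```

Any star-shaped domain whose boundary is available, e.g. as a NURBS curve, can be plugged in as `boundary`, which is what makes this family of parametrizations attractive when only boundary data is given.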

Fibre reinforced polymers (FRPs) are among the newest modern materials. In an FRP, a light but weak polymer matrix is strengthened by glass or carbon fibres. The result is a material that is light and, relative to its weight, very strong.
The stiffness of the resulting material is governed by the direction and the length of the fibres. To better understand the behaviour of FRPs, we need to know the fibre length distribution in the resulting material. The classic method for this is ashing, where a sample of the material is burned and thereby destroyed. Instead, we look at CT images of the material. In the first part we assume a full fibre segmentation is available, so that a cylinder can be fitted to each individual fibre. In this setting we identified two problems: sampling bias and censoring.
Sampling bias occurs since a longer fibre has a higher probability of being visible in the observation window. To solve this problem we used a reweighted fibre length distribution, where the weight depends on the sampling rule used.
For the censoring we used an EM algorithm, which yields a maximum likelihood (ML) estimator in cases of missing or censored data.
For this setting we deduced conditions under which the EM algorithm converges to at least a stationary point of the underlying likelihood function. We further found conditions such that, if the EM algorithm converges to the correct ML estimator, the estimator is consistent and asymptotically normally distributed.
Since obtaining a full fibre segmentation is hard, we further looked at the fibre endpoint process, which can be modelled as a Neyman-Scott cluster process. Using this model we derived a formula for the reduced second moment measure of the process and used it to construct an estimator for the fibre length distribution.
We investigated all estimators in simulation studies, in particular their performance in the case of non-overlapping fibres.
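For intuition, the EM iteration for right-censored data can be sketched in the simplest case of exponentially distributed lengths (a hypothetical illustration; the thesis treats general fibre length distributions):

```python
def em_censored_exponential(observed, censored, iters=100):
    """EM estimate of the rate of an exponential length distribution when
    some lengths are right-censored (only a lower bound c_j is known)."""
    n = len(observed) + len(censored)
    lam = 1.0  # initial guess for the rate
    for _ in range(iters):
        # E-step: by memorylessness, E[X | X > c] = c + 1/lam
        completed_sum = sum(observed) + sum(c + 1.0 / lam for c in censored)
        # M-step: ML estimate of the rate from the completed data
        lam = n / completed_sum
    return lam
```

In this exponential case the iteration converges to the closed-form censored-data MLE (number of uncensored observations divided by the total observed length), which makes it a convenient sanity check for the general machinery.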

On the complexity and approximability of optimization problems with Minimum Quantity Constraints
(2020)

During the last couple of years, there has been a variety of publications on the topic of
minimum quantity constraints. In general, a minimum quantity constraint is a lower bound
constraint on an entity of an optimization problem that only has to be fulfilled if the entity is
“used” in the respective solution. For example, if a minimum quantity \(q_e\) is defined on an
edge \(e\) of a flow network, the edge flow on \(e\) may either be \(0\) or at least \(q_e\) units of flow.
Minimum quantity constraints have already been applied to problem classes such as flow, bin
packing, assignment, scheduling and matching problems. A result that is common to all these
problem classes is that in the majority of cases problems with minimum quantity constraints
are NP-hard, even if the problem without minimum quantity constraints but with fixed lower
bounds can be solved in polynomial time. For instance, the maximum flow problem is known
to be solvable in polynomial time, but becomes NP-hard once minimum quantity constraints
are added.
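The defining constraint is easy to state programmatically; the following sketch (an illustrative toy checker with assumed data structures, not taken from the thesis) tests a given flow against capacities, minimum quantity constraints, and flow conservation:

```python
def mqc_feasible(edges, flow, source, sink):
    """Check an edge flow against capacities, minimum quantity constraints
    (flow on edge e is either 0 or at least q_e), and flow conservation."""
    for e, f in flow.items():
        capacity, q = edges[e]
        if not (f == 0 or q <= f <= capacity):
            return False  # minimum quantity or capacity violated
    nodes = {u for u, v in edges} | {v for u, v in edges}
    for node in nodes - {source, sink}:
        inflow = sum(f for (u, v), f in flow.items() if v == node)
        outflow = sum(f for (u, v), f in flow.items() if u == node)
        if inflow != outflow:
            return False  # conservation violated at an inner node
    return True

# Each edge maps to (capacity, minimum quantity).
edges = {("s", "a"): (5, 3), ("a", "t"): (5, 3)}
```

On this tiny network, one unit of flow along s-a-t would be feasible for the classical maximum flow problem but violates the minimum quantity q_e = 3; three units (or zero) are feasible. This all-or-at-least-q structure is what makes the problems combinatorially hard.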
In this thesis we consider flow, bin packing, scheduling and matching problems with minimum
quantity constraints. For each of these problem classes we provide a summary of the
definitions and results that exist to date. In addition, we define new problems by applying
minimum quantity constraints to the maximum-weight b-matching problem and to open
shop scheduling problems. We contribute results to each of the four problem classes: We
show NP-hardness for a variety of problems with minimum quantity constraints that have
not been considered so far. If possible, we restrict NP-hard problems to special cases that
can be solved in polynomial time. In addition, we consider approximability of the problems:
For most problems it turns out that, unless P=NP, there cannot be any polynomial-time
approximation algorithm. Hence, we consider bicriteria approximation algorithms that allow
the constraints of the problem to be violated up to a certain degree. This approach proves to
be very helpful and we provide a polynomial-time bicriteria approximation algorithm for at
least one problem of each of the four problem classes we consider. For problems defined on
graphs, the class of series-parallel graphs supports this approach very well.
We end the thesis with a summary of the results and several suggestions for future research
on minimum quantity constraints.

To assess ergonomic aspects of a (future) workplace already in the design phase, where no physical prototypes exist, the use of digital human models (DHMs) becomes essential. Here, the prediction of human motions is a key aspect when simulating human work tasks. For ergonomic assessment, quantities such as the resulting postures, joint angles, duration of the motion, and muscle loads are important. From a physical point of view, there is an infinite number of possible ways for a human to fulfill a given goal (trajectories, velocities, ...), which makes human motions and behavior hard to predict. A common approach used in state-of-the-art commercial DHMs is the manual definition of joint angles by the user, which requires expert knowledge and is limited to postural assessments. Another way is to make use of pre-recorded motions of a real human operating on a physical prototype, which limits assessments to scenarios that have been measured before. Both approaches need further post-processing and inverse dynamics calculations with other software tools to obtain information about inner loads and muscle data, which leads to further uncertainties concerning the validity of the simulated data.
In this thesis work, a DHM control and validation framework is developed which allows one to investigate to what extent the implemented human-like actuation and control principles directly lead to human-like motions and muscle actuations. In experiments performed in the motion laboratory, motion data is captured and muscle activations are measured using surface electromyography (EMG). From the EMG data, time-invariant muscle synergies are extracted by means of a non-negative matrix factorization (NMF) algorithm. Muscle synergies are one hypothesis from neuroscience to explain how the human central nervous system might reduce control complexity: instead of activating each muscle separately, muscles are grouped into functional units, where each muscle is present in each unit with a fixed amplitude. The measured experiment is then simulated in an optimal control (OC) framework. This framework allows one to build up DHMs as multibody systems (MBS): bones are modeled as rigid bodies connected via joints, actuated by joint torques or by Hill-type muscle models (1D string elements transferring fundamental characteristics of muscle force generation in humans). The OC code calculates the actuation signals for the modeled DHM such that a certain goal is fulfilled (e.g. reach for an object) while minimizing some cost function (e.g. minimizing time), subject to the side constraint that the equations of motion of the MBS are fulfilled. Three different actuation modes (AM) can be used: joint torques (AM-T), direct muscle actuation (AM-M), and muscle synergy actuation (AM-S), the latter using the previously extracted synergies as control parameters. Simulation results are then compared with measured data to investigate the influence of the different actuation modes and the chosen OC cost function.
The approach is applied to three different experiments, a basic reaching test, a weight lifting test, and a box lifting task, where a human arm model actuated by 29 Hill muscles is used for simulation. It is shown that, in contrast to joint torque actuation (AM-T), using muscles as actuators (AM-M & AM-S) leads to very human-like motion trajectories. Muscle synergies as control parameters resulted in smoother velocity profiles that were closer to those measured, and the underlying muscle activation signals appeared to be more robust (compared to AM-M). In combination with a developed biomechanical cost function (a mix of different OC cost functions), the approach showed promising results concerning the predictive simulation of valid, human-like motions.
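The synergy extraction step can be illustrated with a minimal pure-Python version of the classic Lee-Seung multiplicative updates for non-negative matrix factorization (an illustrative sketch, not the implementation used in this work): an EMG-like matrix V (muscles x time samples) is factored as V ≈ W·H, where the columns of W act as synergies and the rows of H as their activations.

```python
import random

def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=500, eps=1e-12):
    """Lee-Seung multiplicative updates for V ~ W @ H (Frobenius norm).
    Nonnegativity of W and H is preserved by construction."""
    m, n = len(V), len(V[0])
    rng = random.Random(0)
    W = [[rng.random() + 0.5 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.5 for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(k)]
        VHt = matmul(V, transpose(H))
        WHHt = matmul(matmul(W, H), transpose(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H
```

With k much smaller than the number of muscles, the factorization compresses the activation patterns into a few functional units, which is exactly the dimensionality reduction the synergy hypothesis postulates.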

In recent years, the Internet has become a major medium of visual information exchange. Popular social platforms have reported an average of 80 million photo uploads a day. These images are often accompanied by a user-provided one-line text, called an image caption. Deep Learning techniques have made significant advances towards the automatic generation of factual image captions. However, captions generated by humans are much more than mere factual image descriptions. This work takes a step towards enhancing a machine's ability to generate image captions with human-like properties. We name this field Affective Image Captioning, to differentiate it from other areas of research focused on generating factual descriptions.
To deepen our understanding of human-generated captions, we first perform a large-scale crowd-sourcing study on a subset of the Yahoo Flickr Creative Commons 100 Million Dataset (YFCC100M). Three thousand random image-caption pairs were evaluated by native English speakers with respect to different dimensions such as focus, intent, emotion, meaning, and visibility. Our findings indicate three important underlying properties of human captions: subjectivity, sentiment, and variability. Based on these results, we develop Deep Learning models to address each of these dimensions.
To address the subjectivity dimension, we propose the Focus-Aspect-Value (FAV) model (along with a new task of aspect-detection) to structure the process of capturing subjectivity. We also introduce a novel dataset, aspects-DB, following this way of modeling. To implement the model, we propose a novel architecture called Tensor Fusion. Our experiments show that Tensor Fusion outperforms the state-of-the-art cross residual networks (XResNet) in aspect-detection.
Towards the sentiment dimension, we propose two models: the Concept & Syntax Transition Network (CAST) and Show & Tell with Emotions (STEM). The CAST model uses a graphical structure to generate sentiment-laden captions. The STEM model uses a neural network to inject adjectives into a neutral caption. Achieving a high score of 93% in human evaluation, these models were selected among the top three at the ACMMM Grand Challenge 2016.
To address the last dimension, variability, we take a generative approach based on Generative Adversarial Networks (GANs) along with multimodal fusion. Our modified GAN, with two discriminators, is trained using Reinforcement Learning. We also show that it is possible to control the properties of the generated caption variations with an external signal. Using sentiment as the external signal, we show that we can easily outperform state-of-the-art sentiment caption models.

B-spline surfaces are a well-established tool to analytically describe objects. They are commonly used in various fields, e.g., mechanical and aerospace engineering, computer aided design, and computer graphics. Obtaining and using B-spline surface models of real-life objects is an intricate process. Initial virtual representations are usually obtained via scanning technologies in the form of discrete data, e.g., point clouds, surface meshes, or volume images. This data often requires pre-processing to remove noise and artifacts. Even with high quality data, obtaining models of complex or very large structures needs specialized solutions that are viable for the available hardware. Once B-spline models are constructed, their properties can be utilized and combined with application-specific knowledge to provide efficient solutions for practical problems.
This thesis contributes to various aspects of the processing pipeline. It addresses pre-processing, creating B-spline models of large and topologically challenging data, and the use of such models within the context of visual surface inspection. Proposed methods improve existing solutions in terms of efficiency, hardware restrictions, and quality of the results. The following contributions are presented:
Fast and memory-efficient quantile filter: Quantile filters are widely used operations in image processing. The most common instance is the median filter which is a standard solution to treat noise while preserving the shape of an object. Various implementations of such filters are available offering either high performance or low memory complexity, but not both. This thesis proposes a generalization of two existing algorithms: one that favors speed and one that favors low memory usage. An adaptable hybrid algorithm is introduced. It can be tuned for optimal performance on the available hardware. Results show that it outperforms both state-of-the-art reference methods for most practical filter sizes.
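The behaviour of a quantile filter can be demonstrated with a naive 1D sliding-window implementation (a baseline sketch; the hybrid algorithm proposed here is far more efficient):

```python
def quantile_filter(signal, radius, q=0.5):
    """Naive sliding-window quantile filter; q = 0.5 gives the median
    filter. Runs in O(n * w log w) for window width w, which is exactly
    the cost that specialized algorithms improve upon."""
    n = len(signal)
    out = []
    for i in range(n):
        window = sorted(signal[max(0, i - radius):min(n, i + radius + 1)])
        out.append(window[int(q * (len(window) - 1))])  # nearest-rank quantile
    return out

# A median filter removes an isolated spike while preserving edges.
filtered = quantile_filter([0, 0, 0, 100, 0, 0, 0], radius=1)
```

The sketch makes the trade-off concrete: sorting every window from scratch is simple and memory-light, whereas the fast algorithms maintain window statistics incrementally at the cost of extra memory, and the hybrid approach interpolates between the two.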
Robust B-spline reconstructions of isosurfaces in volume images: The micro-structure of wood-based thermal insulation materials is analyzed to research heat conductivity properties. High-quality scans reveal a complex system of cellulose fibers. B-spline models of individual fibers are highly desirable to conduct simulations. Due to the physical processing of the material, the surfaces of those fibers consist of challenging elements like loose filaments, holes, and tunnels. Standard solutions fail to partition the data into a small number of quadrilateral cells which is required for the B-spline construction step. A novel approach is presented that splits up the data processing into separate topology and geometry pipelines. This robust method is demonstrated by constructing B-spline models with 236 to 676 surfaces from triangulated isosurfaces with 423628 to 1203844 triangles.
Local method for smooth B-spline surface approximations: Constructing smooth B-spline models to approximate discrete data is a challenging task. Various standard solutions exist, often imposing restrictions to knot vectors, spline order, or available degrees of freedom for the data approximation. This thesis presents a local approach with less restrictions aiming for approximate \(G^1\)-continuity. Nonlinear terms are added to standard minimization problems. The local design of the algorithm compensates for the higher computational complexity. Results are shown and evaluated for objects of varying complexity. A comparison with an exact \(G^1\)-continuous method shows that the novel method improves approximation accuracy on average by a factor of 10 at the cost of having small discontinuities in normal vectors of less than 1 degree.
Model-based viewpoint generation for surface inspection: Within modern and flexible factories, surface inspection of products is still a very rigid process. An automated inspection system requires the definition of viewpoints from which a robot then takes pictures during the inspection process. Setting up such a system is a time-intensive process which is primarily done manually by experts. This work presents a purely virtual approach for the generation of viewpoints. Based on an intuitive definition of analytic feature functionals, a non-uniform sampling with respect to inspection-specific criteria is performed on given B-spline models. This leads to the definition of a low number of viewpoint candidates. Results of applying this method to several test objects with varying parameters indicate that good viewpoints can be obtained through a fast process that can be performed fully automatically or interactively through the use of meaningful parameters.

Ethernet has become an established communication technology in industrial automation. This was possible thanks to tremendous technological advances and enhancements of Ethernet such as increased link speed, full-duplex transmission, and the use of switches. However, these enhancements were still not enough for certain highly deterministic industrial applications such as motion control, which requires cycle times below one millisecond and a jitter or delay deviation below one microsecond. To meet these strict timing requirements, machine and plant manufacturers had to extend standard Ethernet with real-time capabilities. As a result, vendor-specific and non-IEEE-standard-compliant "Industrial Ethernet" (IE) solutions have emerged.
The IEEE Time-Sensitive Networking (TSN) Task Group specifies new IEEE-conformant functionalities and mechanisms to provide the determinism missing from Ethernet. Standard-compliant systems are very attractive to the industry because they guarantee investment security and sustainable solutions. TSN is therefore considered an opportunity to increase the performance of established Industrial Ethernet systems and to move towards Industry 4.0, which requires standard mechanisms.
The challenge remains, however, for the Industrial Ethernet organizations to combine their protocols with the TSN standards without running the risk of creating incompatible technologies. TSN specifies nine standards and enhancements that handle multiple communication aspects. In this thesis, the evaluation of the use of TSN in industrial real-time communication is restricted to four deterministic standards: IEEE802.1AS-Rev, IEEE802.1Qbu, IEEE802.3br and IEEE802.1Qbv. The specification of these TSN sub-standards was finished at an early research stage of the thesis, and hardware prototypes were available.
Integrating TSN into the Industrial Ethernet protocols is considered a substantial strategic challenge for the industry. The benefits, limits, and risks are too complex to estimate without a thorough investigation. The large number of standard enhancements makes it hard to select the appropriate functionalities.
In order to cover all real-time classes in automation [9], four established Industrial Ethernet protocols have been selected for evaluation and combination with TSN, as well as with other performance-relevant communication features.
The objectives of this thesis are to
(1) Provide theoretical, simulation, and experimental evaluation methodologies for the timing performance analysis of the deterministic TSN standards mentioned above. Multiple test plans are specified to evaluate the performance and compatibility of early-version TSN prototypes from different providers.
(2) Investigate multiple approaches and deduce migration strategies to integrate these features into the established Industrial Ethernet protocols: Sercos III, Profinet IRT, Profinet RT, and Ethernet/IP. A scenario of coexistence of time-critical traffic with other traffic in a TSN network proves that the timing performance for highly deterministic applications, e.g. motion control, can only be guaranteed by the TSN scheduling algorithm IEEE802.1Qbv.
Based on a requirements survey of highly deterministic industrial applications, multiple network scenarios and experiments are presented. The results are summarized into two case studies. The first case study shows that TSN alone is not enough to meet these requirements. The second case study investigates the benefits of additional mechanisms (Gigabit link speed, minimum cycle time modeling, frame forwarding mechanisms, frame structure, topology migration, etc.) in combination with the TSN features. An implementation prototype of the proposed system and a simulation case study are used for the evaluation of the approach. The prototype is used for the evaluation and validation of the simulation model. Due to the given scalability constraints of the prototype (no cut-through functionality, limited number of TSN prototypes, etc.), a realistic simulation model is developed using the network simulation tool OMNEST / OMNeT++.
The obtained evaluation results show that a minimum cycle time ≤1 ms and a maximum jitter ≤1 μs can be achieved with the presented approaches.
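To put such numbers into perspective, the wire time of a single Ethernet frame can be estimated with a back-of-the-envelope calculation (an illustrative sketch; the cycle time models in this thesis account for many more effects):

```python
def frame_time_us(payload_bytes, link_mbps):
    """Transmission time of one Ethernet frame in microseconds, including
    the fixed per-frame overhead: 7 B preamble + 1 B SFD + 14 B header +
    4 B FCS + 12 B interframe gap = 38 B; payloads below 46 B are padded."""
    payload = max(payload_bytes, 46)
    bits = (payload + 38) * 8
    return bits / link_mbps  # bits / (Mbit/s) conveniently yields microseconds

# A minimum-size frame occupies the wire for 6.72 us at Fast Ethernet
# speed, but only a tenth of that at Gigabit speed.
t_fast = frame_time_us(46, 100)    # Fast Ethernet, 100 Mbit/s
t_gig = frame_time_us(46, 1000)    # Gigabit Ethernet, 1000 Mbit/s
```

Such per-frame wire times, accumulated over all frames and hops of a cycle, are one reason why the Gigabit link-speed migration studied in the second case study contributes so directly to shorter minimum cycle times.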

The Directive 97/23/EC of the European Parliament and of the Council of 29 May 1997 on the approximation of the laws of the Member States concerning pressure equipment (European Commission, 1997) is the basis of the legal framework for the protection of pressure equipment within the European Union. Codes and standards help to comply with the legal and regulatory responsibilities stipulated in the PED regarding the protection of pressure equipment against overpressure and the sizing and selection of safety relief devices.
Rupture disk devices are primary relief devices used to protect vessels, pipes, and equipment against overpressure. A rupture disk bursts once the so-called burst pressure is reached in the protected system, thereby discharging flow and preventing a further increase in pressure. Currently, rupture disks are sized with standards and codes assuming the worst-case scenario at burst pressure. There is, however, no standardized procedure for sizing rupture disks with two-phase flow, and suitable test facilities, test sections, and reliable experimental data for model validation are lacking. Sizing rupture disk vent-line systems with current characteristic numbers comes with significant uncertainties, especially for high-velocity compressible flows (Schmidt, 2015).
Zero-Emission and Green Safety are current trends for organizations that seek to attain innovative protection concepts beyond regulatory compliance. A procedure to size a rupture disk vent-line should accurately determine the discharge rate and pressure-drop across a rupture disk, from the point of rupture disk activation to the point when the system depressurizes fully. This procedure is critical for further safety considerations, such as modeling the dispersion of toxic gases released during emergency relief and calculating the emissions to the environment over time.
Over-dimensioning is one measure taken today to mitigate the uncertainties encountered when sizing with current methods. This is not always an option, as over-dimensioning the rupture disk vent-line system leads to unnecessary financial costs. It may also cause malfunction of the downstream collecting systems when the discharged fluids exceed the design limits. Emissions to the environment are thereby potentially higher than necessary, causing excessive harm to the environment. Under-dimensioning, on the other hand, may lead to hazardous incidents with loss of human life and equipment. This work has therefore focused on the investigation of the mass flow rate and pressure-drop through rupture disk devices with compressible gas and two-phase flow.
The experimental focus was on the design, construction, and commissioning of a high-capacity, high-pressure, industry-scale test facility for testing small- to large-diameter rupture disks and other fittings with gas flow. The resulting test facility is suited to testing safety devices and pipe fittings at near-realistic flow conditions at pressures up to 150 bar. This work also presents the design of a pilot plant for testing rupture disks with air/water two-phase flow. These test facilities open up new frontiers for capacity testing because they feature precise, state-of-the-art measurement and instrumentation. Experimental results from these facilities deliver reliable data to validate proposed sizing procedures for rupture disk devices.
The theoretical focus was on the development of a reliable rupture disk sizing procedure for compressible gas and two-phase flow. This required phenomenological studies of the flow through rupture disks with both experiments and CFD studies. Better-suited rupture disk characteristic numbers and model parameters for determining the mass flow rate and pressure-drop across rupture disks are identified. The proposed sizing procedure for compressible gas and two-phase flow predicts the dischargeable mass flow rate and pressure-drop across a rupture disk within ±4 % of the measured value. Experimental validation has been undertaken with different types of rupture disks. The procedure is suited to determining the mass flow rate and pressure-drop through rupture disks seamlessly, from the point of rupture disk activation (worst-case scenario) to the point when the system fully depressurizes, thus going beyond regulatory compliance.
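As a point of reference for the quantities involved, the textbook relation for critical (choked) ideal-gas flow through an orifice can be evaluated as follows (an illustrative sketch with an assumed discharge coefficient, not the sizing procedure developed in this work):

```python
import math

def choked_mass_flow(p0, T0, area, kappa=1.4, R_s=287.0, cd=0.86):
    """Critical (choked) ideal-gas mass flow rate through an orifice:
    m_dot = Cd * A * p0 * sqrt(kappa / (R_s * T0))
                 * (2 / (kappa + 1)) ** ((kappa + 1) / (2 * (kappa - 1)))
    p0 [Pa], T0 [K]: stagnation state; area [m^2]: flow cross-section;
    kappa: isentropic exponent; R_s [J/(kg K)]: specific gas constant;
    cd: discharge coefficient (the value 0.86 here is an assumption)."""
    psi = (2.0 / (kappa + 1.0)) ** ((kappa + 1.0) / (2.0 * (kappa - 1.0)))
    return cd * area * p0 * math.sqrt(kappa / (R_s * T0)) * psi

# Air at 10 bar and 293 K through a 1 cm^2 opening
m_dot = choked_mass_flow(10e5, 293.0, 1e-4)
```

The linear dependence of the choked rate on stagnation pressure is the basic behaviour that a sizing procedure must reproduce at burst pressure; the real difficulty addressed in this work lies in the disk-specific flow resistance and in two-phase conditions.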

Decentralization is the norm of future smart production as it assists in contextual dynamic decision-making and thereby increases the flexibility required to produce highly customized products. When manufacturing business software is operated as a cloud-based solution, it experiences network latency and connectivity issues. To overcome these problems, production control should be delegated to the manufacturing edge layer and hence, the argument for decentralization is even more applicable to this narrative. Semantic technologies, on the other hand, assist in discerning meaning, reasoning about, and drawing inferences from the data. There are several specifications and frameworks to automate the discovery, orchestration, and invocation of web services; the most prominent are OWL-S and SAWSDL. This thesis adapts these frameworks for OPC UA, and consequently, the proposed semantically enriched OPC UA concept enables the edge layer to create flexible production orchestration plans in a manufacturing scenario controlled by a cloud MES.

With the technological advancement in the field of robotics, it is now quite reasonable to expect social robots to become part of humans' daily lives in the coming decades. Concerning HRI, the basic expectations of a social robot are to perceive words, emotions, and behaviours in order to draw conclusions and adapt its behaviour to realize natural HRI. Hence, the assessment of human personality traits is essential to bring a sense of appeal and acceptance towards the robot during interaction.
Knowledge of human personality is highly relevant as far as natural and efficient HRI is concerned. The idea is taken from human behaviourism: humans behave differently based on the personality traits of their communication partners. This thesis contributes to the development of a personality trait assessment system for intelligent human-robot interaction.
The personality trait assessment system is organized in three separate levels. The first level, known as the perceptual level, is responsible for enabling the robot to perceive, recognize, and understand human actions in the surrounding environment in order to make sense of the situation. Using psychological concepts and theories, several percepts have been extracted. A study has been conducted to validate the significance of these percepts for personality traits.
The second level, known as the affective level, helps the robot connect the knowledge acquired at the first level to make higher-order evaluations such as the assessment of human personality traits. The affective system of the robot is responsible for analysing human personality traits. To the best of our knowledge, this thesis is the first work in the field of human-robot interaction that presents an automatic assessment of human personality traits in real time using visual information. Drawing on psychology and cognitive studies, many theories have been studied. Two have been used to build the personality trait assessment system: the Big Five personality traits assessment and the temperament framework for personality traits assessment.
By using the information from the perceptual and affective levels, the last level, known as the behavioural level, enables the robot to synthesize an appropriate behaviour adapted to human personality traits. Multiple experiments have been conducted with different scenarios. It has been shown that the robot, ROBIN, assesses personality traits correctly during interaction and uses the similarity-attraction principle to behave with a similar personality type. For example, if the person is found to be an extrovert, the robot also behaves like an extrovert. However, it also uses the complementary-attraction theory to adapt its behaviour and complement the personality of the interaction partner. For example, if the person is found to be self-centred, the robot behaves agreeably in order to let the human-robot interaction flourish.

Interconnection networks enable fast data communication between the components of a digital system. The selection of an appropriate interconnection network and its architecture plays an important role in the development process of the system. A poorly chosen network architecture may significantly delay the communication between components and decrease the overall system performance.
There are various interconnection networks available. Most of them are blocking networks. Blocking means that even though a pair of source and target components may be free, a connection between them might still not be possible due to the limited capabilities of the network. Moreover, routing algorithms of blocking networks have to avoid deadlocks and livelocks, which typically allows only poor real-time guarantees for delivering a message. Nonblocking networks can always manage all requests coming from their input components and can therefore deliver all messages in guaranteed time, i.e., with strong real-time guarantees. However, only a few networks are nonblocking and easy to implement. The simplest one is the crossbar network, a comparably simple circuit with an equally simple routing algorithm. However, while its circuit depth of O(log(n)) is optimal, its size increases with O(n^2) and quickly becomes infeasible for large networks. Therefore, the construction of nonblocking networks with a quasipolynomial size O(n log(n)^a) and polylogarithmic depth O(log(n)^b) emerged as a research problem.
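The size gap between the two families can be made concrete with a small sketch. The comparator count used below is an approximate closed form for Batcher's odd-even merge sorter (assumed here for n a power of two), intended only to illustrate the O(n^2) versus O(n log(n)^2) growth rates:

```python
import math

def crossbar_crosspoints(n):
    # A full crossbar needs one crosspoint per input/output pair: n^2.
    return n * n

def batcher_comparators(n):
    # Approximate comparator count ~ (n/4) * log2(n) * (log2(n) + 1)
    # for Batcher's odd-even merge sorter, n a power of two.
    k = int(math.log2(n))
    return (n * k * (k + 1)) // 4

# The O(n log(n)^2) sorting network quickly dwarfs the crossbar in scalability.
for n in [64, 1024, 65536]:
    print(n, crossbar_crosspoints(n), batcher_comparators(n))
```

For n = 65536 the crossbar already needs over four billion crosspoints, while the sorting network stays in the low millions, which is why such constructions matter for large networks.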
Benes [Clos53; Bene65] networks were the first nonblocking networks having an optimal size of O(n log(n)) and an optimal depth of O(log(n)), but their routing algorithms are quite complicated and require circuits of depth O(log(n)^2) [NaSa82].
Other nonblocking interconnection networks are derived from sorting networks. Essentially, there are merge-based (MBS) and radix-based (RBS) sorting networks. Both MBS and RBS networks can be implemented in a pipelined fashion, which is a big advantage for their circuit implementation. While these networks are nonblocking and can implement all n! permutations, they cannot directly handle the partial permutations that frequently occur in practice, since not every input component communicates at every point of time with an output component. For merge-based sorting networks, there is a well-known general solution called the Batcher-Banyan network. However, for the larger class of radix-based sorting networks this does not work, and a solution is known for only one particular permutation network.
In this thesis, new nonblocking radix-based interconnection networks are presented. In particular, for a certain permutation network, three routing algorithms are developed and their circuit implementations are evaluated concerning size, depth, and power consumption. A special extension of these networks allows them to also route partial permutations. Moreover, three general constructions to convert any binary sorter into a ternary split module are presented, which is the key to constructing a radix-based interconnection network that can cope with partial permutations. The thesis also compares chip designs of these networks with other radix-based sorting networks as well as with the Batcher-Banyan networks as competitors. As a result, it turns out that the proposed radix-based networks are superior and could form the basis of larger manycore architectures.

Although today's bipeds are capable of demonstrating impressive locomotion skills, in many aspects there is still a big gap compared to the capabilities observed in humans. Partially, this is due to the deployed control paradigms that are mostly based on analytical approaches. The analytical nature of those approaches entails strong model dependencies – regarding the robotic platform as well as the environment – which makes them prone to unknown disturbances. Recently, an increasing number of biologically-inspired control approaches have been presented from which a human-like bipedal gait emerges. Although the control structures rely only on proprioceptive sensory information, the smoothness of the motions and the robustness against external disturbances are impressive. Due to the lack of suitable robotic platforms, until today these controllers have mostly been applied in simulation.
Therefore, as the first step towards a suitable platform, this thesis presents the Compliant Robotic Leg (CARL), which features mono- as well as biarticular actuation. The design is driven by a set of core requirements that is primarily derived from the biologically-inspired behavior-based bipedal locomotion control (B4LC) and complemented by further functional aspects from biomechanical research. Throughout the design process, CARL is understood as a unified dynamic system that emerges from the interplay of the mechanics, the electronics, and the control. Thus, having an explicit control approach and the respective gait in mind, the influence of each subsystem on the characteristics of the overall system is considered carefully.
The result is a planar robotic leg whose three joints are driven by five highly integrated linear SEAs – three mono- and two biarticular actuators – with minimized reflected inertia. The SEAs are encapsulated by FPGA-based embedded nodes that are designed to meet the hard application requirements while enabling the deployment of a full-featured robotic framework. CARL's foot is implemented using a COTS prosthetic foot; the sensor information is obtained from the deformation of its main structure. Both subsystems are integrated into a leg structure that matches the proportions of a human with a height of 1.7 m.
The functionality of the subsystems, as well as of the overall system, is validated experimentally. In particular, the final experiment demonstrates a coordinated walking motion and thereby confirms that CARL can produce the desired behavior – a natural-looking, human-like gait emerges from the interplay of the behavior-based walking control and the mechatronic system. CARL is robust against impacts, the redundant actuation system can render the desired joint torques/impedances, and the foot system supports walking structurally while providing the necessary sensory information. Considering that there is no movement of the upper trunk, the angle and torque profiles are comparable to those found in humans.

Neural networks have been extensively used for tasks based on image sensors. In the past decade, these models have consistently performed better than other machine learning methods on computer vision tasks. It is understood that methods for transfer learning from neural networks trained on large datasets can reduce the total data requirement when training new neural network models. These methods tend not to perform well when the data-recording sensor or the recording environment differs from the existing large datasets. The machine learning literature provides various methods for including prior information in a learning model, such as designing biases into the data representation vectors or enforcing priors or physical constraints on the models. Including such information in neural networks for image-frame and image-sequence classification is hard because of the very high-dimensional neural network mapping function and the little information available about the relation between the neural network parameters. In this thesis, we introduce methods for evaluating the statistically learned data representation and for combining these information descriptors. We have introduced methods for including information in neural networks. In a series of experiments, we have demonstrated methods for adding existing model or task information to neural networks. This is done by 1) adding architectural constraints based on the physical shape information of the input data, 2) including weight priors on neural networks by training them to mimic statistical and physical properties of the data (hand shapes), and 3) including knowledge about the classes involved in the classification tasks to modify the neural network outputs. These methods are demonstrated, and their positive influence on hand shape and hand gesture classification tasks is reported.
This thesis also proposes methods for combining statistical and physical models with parametrized learning models and shows improved performance with constant data size. Eventually, these proposals are tied together to develop an in-car hand-shape and hand-gesture classifier based on a Time-of-Flight sensor.

This thesis investigates how smart sensors can quantify the process of learning. Traditionally, human beings have obtained various skills by inventing technologies. Those who integrate technologies into daily life and enhance their capabilities are called augmented humans. While most existing human augmentation technologies focus on directly assisting specific skills, the objective of this thesis is to assist learning -- the meta-skill for mastering new skills -- with the aim of long-term augmentation.
Learning consists of cognitive activities such as reading, writing, and watching. Tracking them with motion sensors (in the same way as recognizing physical activities) has been considered a challenging task because dynamic body movements cannot be observed during cognitive activities. I have solved this problem with smart sensors monitoring eye movements and physiological signals.
I propose activity recognition methods using sensors built into eyewear computers. Head movements and eye blinks measured by an infrared proximity sensor on Google Glass could classify five activities, including reading, with 82% accuracy. Head and eye movements measured by electrooculography on JINS MEME could classify four activities with 70% accuracy. In an in-the-wild experiment involving seven participants who wore JINS MEME for more than two weeks, deep neural networks could detect natural reading activities with 74% accuracy. I demonstrate Wordometer 2.0, an application that estimates the number of read words on JINS MEME, which was evaluated on a dataset involving five readers with an 11% error rate.
Smart sensors can recognize not only activities but also internal states during those activities. I present an expertise recognition method using an eye tracker which achieves 70% classification accuracy over three classes using one minute of data from reading a textbook; a positive correlation between interest and pupil diameter (p < 0.01); a negative correlation between mental workload and nose temperature measured by an infrared thermal camera (p < 0.05); an interest detection method for newspaper articles; and effective gaze and physiological features for estimating self-confidence while solving multiple-choice questions and spelling tests of English vocabulary.
The quantified learning process can be utilized for feedback to each learner on the basis of context. I present HyperMind, an interactive intelligent digital textbook. It can be developed with HyperMind Builder, which may be employed to augment any electronic text with multimedia aspects activated via gaze.
Applications mentioned above have already been deployed at several laboratories including Immersive Quantified Learning Lab (iQL-Lab) at the German Research Center for Artificial Intelligence (DFKI).

As visualization matures as a field, the discussion about the development of a
theory of the field becomes increasingly lively. Despite some voices claiming that
visualization applications are too different from each other to generalize,
there is a significant push towards a better understanding of the principles underlying
visual data analysis. As of today, visualization is primarily data-driven.
Years of experience in the visualization of all kinds of different data accumulated
a vast reservoir of implicit knowledge in the community of how to best represent
data according to its shape, its format, and what it is meant to express.
This knowledge is complemented by knowledge imported to visualization from
a variety of other fields, for example psychology, vision science, color theory,
and information theory. Yet, a theory of visualization is still only nascent. One
major reason for this is the field's overly strong focus on the quantitative aspects
of data analysis. Although major design decisions in visualization design
also consider perception and other human factors, the overall appearance
of visualizations as of now is determined primarily by the type and format of
the data to be visualized and its quantitative attributes like scale, range, or
density. This is also reflected by the current approaches in theoretical work on
visualization. The models developed in this regard also concentrate primarily
on perceptual and quantitative aspects of visual data analysis. Qualitative considerations
like the interpretations made by viewers and the conclusions drawn
by analysts currently only play a minor role in the literature. This Thesis contributes
to the nascent theory of visualization by investigating approaches to
the explicit integration of qualitative considerations into visual data analysis.
To this end, it promotes qualitative visual analysis, the explicit discussion of
the interpretation of artifacts and structures in the visualization, of efficient
workflows designed to optimally support an analyst's reasoning strategy and
capturing information about insight provenance, and of design methodology
tailoring visualizations towards the insights they are meant to provide rather
than to the data they show. Towards this aim, three central qualitative principles
of visual information encodings are identified during the development of
a model for the visual data analysis process that explicitly includes the anticipated
reasoning structure into consideration. This model can be applied
throughout the whole life cycle of a visualization application, from the early
design phase to the documentation of insight provenance during analysis using
the developed visualization application. The three principles identified inspire
novel visual data analysis workflows aiming for an insight-driven data analysis
process. Moreover, two case studies demonstrate the benefit of following the qualitative
principles of visual information encodings for the design of visualization
applications. The formalism applied to the development of the presented theoretical
framework is founded in formal logics, mathematical set theory, and the
theory of formal languages and automata. The models discussed in this Thesis
and the findings derived from them are therefore based on a mathematically
well-founded theoretical underpinning. This Thesis establishes a sound theoretical
framework for the design and description of visualization applications and
the prediction of the conclusions an analyst is capable of drawing from working
with the visualization. Thereby, it contributes an important piece to the yet
unsolved puzzle of developing a visualization theory.

In a recent paper, G. Malle and G. Robinson proposed a modular analogue to Brauer's famous \( k(B) \)-conjecture. If \( B \) is a \( p \)-block of a finite group with defect group \( D \), then they conjecture that \( l(B) \leq p^r \), where \( r \) is the sectional \( p \)-rank of \( D \). Since this conjecture is relatively new, there is obviously still a lot of work to do. This thesis is concerned with proving their conjecture for the finite groups of exceptional Lie type.

This thesis aims to examine various determinants of perceived team diversity on the one hand and, on the other hand, the individual consequences of perceived team diversity. To ensure a strong theoretical foundation, I integrate and discuss different conceptualizations of and theoretical approaches to team diversity, which are empirically examined in three independent studies. The first study investigates the relationship between objective team diversity and perceived team diversity, with individual attitudes toward diversity and the perception of one’s own work team’s diversity as moderators. The second study answers the questions of why and when dirty-task frequency impairs employees’ work relations, and the third study examines how different cognitive mechanisms mediate the relationships between employees’ perceptions of different types of subgroups and their elaboration of information and perspectives. Taken together, the study results provide support for the selection-extraction-application model of people perception and the assumption that individuals can integrate objective team characteristics into their mental representation of teams, using them to judge the team. Moreover, results show that a fit between perceived supervisor support and perceived organizational value of diversity can buffer the effects of dirty-task frequency on the perception of identity-based subgroups, as well as on perceived relationship conflict and surface acting, through employees’ perceptions of identity-based subgroups. Also, perceived social-identity threat and perceived procedural fairness, but not perceived distributive fairness and perceived transactive memory systems, serve as cognitive mechanisms of the relationships between employees’ perceptions of different types of subgroups and their elaboration of information and perspectives. These results contribute to the diversity literature, such as the theory of subgroups in work teams and the categorization-elaboration model.
In addition, I propose the input-mediator-output-input model of perceived team diversity based on the study results and recommend that practitioners develop diversity mindsets in teams.

The iterative development and evaluation of the gamified stress management app “Stress-Mentor”
(2020)

The gamification of mHealth applications is a critically discussed topic. On the one hand, studies show that gamification can have a positive impact on an app’s usability and user experience. Furthermore, evidence is growing that gamification can positively influence the regular usage of health apps. On the other hand, it is questioned whether gamification is useful for health apps in all contexts, especially regarding stress management. To this point, however, few studies have investigated the gamification of stress management apps.
This thesis describes the iterative development of the gamified stress management app “Stress-Mentor” and examines whether the implemented gamification concept results in changes in the app’s usage behavior, as well as in usability and user experience ratings.
The results outline how the users’ involvement in “Stress-Mentor’s” development through different studies influenced the app’s design and helped to identify necessary improvements. The thesis also shows that users who received a gamified app version used the app more frequently than users in a non-gamified control group.
While the gamification of stress management is critically discussed, it was positively received by the users of “Stress-Mentor” throughout the app’s development. The results also show that gamification can have positive effects on the usage behavior of a stress management app and therefore results in increased exposure to the app’s content. Moreover, an expert study outlined the applicability of “Stress-Mentor’s” concept to other health contexts.

In an overall effort to contribute to the steadily expanding EO literature, this cumulative dissertation aims to help the field advance with greater clarity, more comprehensive modeling, and more robust research designs. To achieve this, the first paper of this dissertation focuses on consistency and coherence in variable choices and modeling considerations by conducting a systematic quantitative review of the EO-performance literature. Drawing on the plethora of previous EO studies, the second paper employs a comprehensive meta-analytic structural equation modeling (MASEM) approach to explore the potential for unique component-level relationships among EO’s three core dimensions in antecedent-outcome relationships. The third paper draws on these component-level insights and performs a finer-grained replication of the seminal MASEM of Rosenbusch, Rauch, and Bausch (2013), which proposes EO as a full mediator between the task environment and firm performance. The fourth and final paper of this cumulative dissertation illustrates exigent endogeneity concerns inherent in observational EO-performance research and provides guidance on how researchers can move towards establishing causal relationships.

Diversification is one of the main pillars of investment strategies. The prominent 1/N portfolio, which puts equal weight on each asset, is, apart from its simplicity, a method which is hard to outperform in realistic settings, as many studies have shown. However, depending on the number of considered assets, this method can lead to very large portfolios. On the other hand, optimization methods like the mean-variance portfolio suffer from estimation errors, which often destroy the theoretical benefits. We investigate the performance of the equal-weight portfolio when using fewer assets. For this we explore different naive portfolios, from selecting the assets with the best Sharpe ratios to exploiting knowledge about correlation structures using clustering methods. The clustering techniques separate the possible assets into non-overlapping clusters, and the assets within a cluster are ordered by their Sharpe ratio. Then the best asset of each cluster is chosen to be a member of the new equal-weight portfolio, the cluster portfolio. We show that this portfolio inherits the advantages of the 1/N portfolio and can even outperform it empirically. For this we use real data and several simulation models. We prove these findings from a statistical point of view using the framework of DeMiguel, Garlappi and Uppal (2009). Moreover, we show the superiority regarding the Sharpe ratio in a setting where the assets in each cluster are comonotonic. In addition, we recommend the consideration of a diversification-risk ratio to evaluate the performance of different portfolios.
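The cluster-portfolio construction described above can be sketched in a few lines. This is an illustrative implementation only, assuming precomputed cluster labels and a risk-free rate of zero (the thesis itself studies this with real data and several simulation models):

```python
import numpy as np

def cluster_portfolio(returns, labels):
    """Pick the best-Sharpe asset from each cluster; equal-weight the picks.

    returns: (T, N) array of asset returns; labels: length-N cluster ids.
    """
    mu = returns.mean(axis=0)
    sigma = returns.std(axis=0, ddof=1)
    sharpe = mu / sigma                       # risk-free rate assumed 0
    picks = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        picks.append(members[np.argmax(sharpe[members])])
    weights = np.zeros(returns.shape[1])
    weights[picks] = 1.0 / len(picks)         # 1/N over the selected assets
    return weights

# Toy example: 250 daily returns for 6 assets in 3 clusters of 2.
rng = np.random.default_rng(0)
R = rng.normal(0.001, 0.02, size=(250, 6))
labels = np.array([0, 0, 1, 1, 2, 2])
w = cluster_portfolio(R, labels)
```

The resulting portfolio holds one asset per cluster with equal weights, so it keeps the simplicity of 1/N while shrinking the portfolio to the number of clusters.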

This work describes the development of a continuum phase field model that can describe static as well as dynamic wetting scenarios on the nano- and microscale.
The model reaches this goal by directly integrating an equation of state as well as the dissipative properties of a specific fluid, both obtained from molecular simulations. The presented approach leads to good agreement between the predictions of the phase field model and the physical properties of the regarded fluid.
The implementation of the model employs a mixed finite element formulation, a newly developed semi-implicit time integration scheme, as well as the concept of hyper-dual numbers. This ensures a straightforward and robust exchangeability of the constitutive equation for the regarded fluid.
The presented simulations show good agreement between the results of the present phase field model and results from molecular dynamics simulations. Furthermore, the results show that the model enables the investigation of wetting scenarios on the microscale. The continuum phase field model of this work bridges the gap between the molecular models on the nanoscale and the phenomenologically motivated continuum models on the macroscale.

The advent of heterogeneous many-core systems has increased the spectrum
of achievable performance from multi-threaded programming. As the processor components become more distributed, the cost of synchronization and
communication needed to access the shared resources increases. Concurrent
linearizable access to shared objects can be prohibitively expensive in a high
contention workload. Though there are various mechanisms (e.g., lock-free
data structures) to circumvent the synchronization overhead in linearizable
objects, linearizability still incurs a performance overhead for many concurrent data types.
Moreover, many applications do not require linearizable objects and apply
ad-hoc techniques to eliminate synchronous atomic updates.
In this thesis, we propose the Global-Local View Model. This programming model exploits the heterogeneous access latencies in many-core systems.
In this model, each thread maintains different views on the shared object: a
thread-local view and a global view. As the thread-local view is not shared,
it can be updated without incurring synchronization costs. The local updates
become visible to other threads only after the thread-local view is merged
with the global view. This scheme improves the performance at the expense
of linearizability.
Besides the weak operations on the local view, the model also allows strong
operations on the global view. Combining operations on the global and the
local views, we can build data types with customizable consistency semantics
on the spectrum between sequential and purely mergeable data types. Thus
the model provides a framework that captures the semantics of Multi-View
Data Types. We discuss a formal operational semantics of the model. We
also introduce a verification method to check the correctness of the implementations of several multi-view data types.
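The thread-local/global split can be illustrated with a toy mergeable counter. This is my own minimal sketch in Python (not the thesis's implementation, which is in Haskell), assuming counters merge by adding the pending local increments to the global view:

```python
import threading

class MergeableCounter:
    """Toy multi-view counter: weak local increments, explicit merge."""

    def __init__(self):
        self._global = 0
        self._lock = threading.Lock()
        self._local = threading.local()   # per-thread local view

    def inc(self, delta=1):
        # Weak operation: updates only the thread-local view, no lock taken.
        self._local.value = getattr(self._local, "value", 0) + delta

    def merge(self):
        # Publish pending local updates into the global view under the lock.
        pending = getattr(self._local, "value", 0)
        self._local.value = 0
        with self._lock:
            self._global += pending       # counters merge by addition

    def read_global(self):
        # Strong operation: reads the synchronized global view.
        with self._lock:
            return self._global

c = MergeableCounter()
for _ in range(5):
    c.inc()        # cheap: no synchronization
c.merge()          # one synchronized merge instead of five atomic updates
```

Local increments stay invisible to other threads until `merge` is called, which is exactly the trade of linearizability for performance described above.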
Frequently, applications require updating shared objects in an “all-or-nothing” manner. Therefore, the mechanisms to synchronize access to individual objects are not sufficient. Software Transactional Memory (STM)
is a mechanism that helps the programmer to correctly synchronize access to
multiple mutable shared data by serializing the transactional reads and writes.
But under high contention, serializable transactions incur frequent aborts and
limit parallelism, which can lead to severe performance degradation.
Mergeable Transactional Memory (MTM), proposed in this thesis, allows accessing multi-view data types within a transaction. Instead of aborting
and re-executing the transaction, MTM merges its changes using the data-type
specific merge semantics. Thus it provides a consistency semantics that allows
for more scalability even under contention. The evaluation of our prototype
implementation in Haskell shows that mergeable transactions outperform serializable transactions even under low contention while providing a structured
and type-safe interface.

Nowadays a large part of communication is taking place on social media platforms such as Twitter, Facebook, Instagram, or YouTube, where messages often include multimedia contents (e.g., images, GIFs or videos). Since such messages are in digital form, computers can in principle process them in order to make our lives more convenient and help us overcome arising issues. However, these goals require the ability to capture what these messages mean to us, that is, how we interpret them from our own subjective points of view. Thus, the main goal of this dissertation is to advance a machine's ability to interpret social media contents in a more natural, subjective way.
To this end, three research questions are addressed. The first question aims at answering "How to model human interpretation for machine learning?" We describe a way of modeling interpretation which allows for analyzing single or multiple ways of interpretation of both humans and computer models within the same theoretic framework. In a comprehensive survey we collect various possibilities for such a computational analysis. Particularly interesting are machine learning approaches where a single neural network learns multiple ways of interpretation. For example, a neural network can be trained to predict user-specific movie ratings from movie features and user ID, and can then be analyzed to understand how users rate movies. This is a promising direction, as neural networks are capable of learning complex patterns. However, how analysis results depend on network architecture is a largely unexplored topic. For the example of movie ratings, we show that the way of combining information for prediction can affect both prediction performance and what the network learns about the various ways of interpretation (corresponding to users).
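The movie-rating example can be sketched as follows. Everything here (the embedding sizes, the two combination schemes, the random weights) is hypothetical and only illustrates how the way of combining user and movie information changes the model; it is not the actual network analyzed in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy dimensions: 3 users, movies with 4 features, width 8.
n_users, n_feat, dim = 3, 4, 8
user_emb = rng.normal(size=(n_users, dim))   # per-user embedding (learned)
W_movie = rng.normal(size=(n_feat, dim))     # movie feature projection
V_concat = rng.normal(size=(2 * dim,))       # head for the concat variant
v_mult = rng.normal(size=(dim,))             # head for the multiplicative variant

def predict_concat(user_id, movie_x):
    # Combine by concatenation: [user ; projected movie] -> linear head.
    h = np.concatenate([user_emb[user_id], movie_x @ W_movie])
    return float(h @ V_concat)

def predict_mult(user_id, movie_x):
    # Combine multiplicatively: elementwise product -> linear head.
    h = user_emb[user_id] * (movie_x @ W_movie)
    return float(h @ v_mult)

movie = rng.normal(size=(n_feat,))
ratings = [predict_concat(u, movie) for u in range(n_users)]
```

Both variants condition the prediction on the user ID, but inspecting `user_emb` tells a different story in each case: in the multiplicative variant the embedding directly gates which movie features matter to a user, which is the kind of architecture-dependent difference in what the network learns that the text refers to.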
Since some application-specific details of dealing with human interpretation only become visible when going deeper into particular use cases, the other two research questions of this dissertation are concerned with two selected application domains: subjective visual interpretation and gang violence prevention. The first application study deals with subjectivity that comes from personal attitudes and aims at answering "How can we predict the subjective image interpretation one would expect from the general public on photo-sharing platforms such as Flickr?" The predictions in this case take the form of subjective concepts or phrases. Our study on gang violence prevention is more community-centered and considers the question "How can we automatically detect tweets of gang members which could potentially lead to violence?" There, the psychosocial codes aggression, loss and substance use serve as proxies to estimate the subjective implications of online messages.
In these two distinct application domains, we develop novel machine learning models for predicting subjective interpretations of images or tweets with images, respectively. In the process of building these detection tools, we also create three different datasets which we share with the research community. Furthermore, we see that some domains such as Chicago gangs require special care due to high vulnerability of involved users. This motivated us to establish and describe an in-depth collaboration between social work researchers and computer scientists. As machine learning is incorporating more and more subjective components and gaining societal impact, we have good reason to believe that similar collaborations between the humanities and computer science will become increasingly necessary to advance the field in an ethical way.

More than ten years ago, ER-ANT1 was shown to act as an ATP/ADP antiporter and to exist in the endoplasmic reticulum (ER) of higher plants. Because structurally different transporters generally mediate energy provision to the ER, the physiological function of ER-ANT1 was not directly evident.
Interestingly, mutant plants lacking ER-ANT1 exhibit a photorespiratory phenotype. Although many research efforts were undertaken, the possible connection between the transporter and photorespiration also remained elusive. Here, a forward genetic approach was used to decipher the role of ER-ANT1 in the plant context and its association to photorespiration.
This strategy identified that additional absence of a putative HAD-type phosphatase partially restored the photorespiratory phenotype. Localisation studies revealed that the corresponding protein is targeted to the chloroplast. Moreover, biochemical analyses demonstrate that the HAD-type phosphatase is specific for pyridoxal phosphate. These observations, together with transcriptional and metabolic data of corresponding single (ER-ANT1) and double (ER-ANT1, phosphatase) loss-of-function mutant plants revealed an unexpected connection of ER-ANT1 to vitamin B6 metabolism.
Finally, a scenario is proposed that explains how ER-ANT1 may influence B6 vitamer phosphorylation and how this, in turn, affects photorespiration and causes the several other physiological alterations observed in the corresponding loss-of-function mutant plants.

Spin-crossover and valence tautomeric complexes are of tremendous interest in the fields of molecular electronics, electronic storage devices and information processing. Herein, the synthesis and characterization of spin-crossover and valence tautomeric cobalt dioxolene complexes are reported. All synthesized complexes contain N,N'-di-tert-butyl-2,11-diaza[3.3](2,6)pyridinophane (L-N4tBu2) as the ancillary ligand; only the co-ligands, which are different dioxolene ligands, vary. The mononuclear cobalt dioxolene complexes have been synthesized using the doubly deprotonated form of the dioxolene ligand 4,5-dichlorocatechol (H2DCCat) as co-ligand, and the cobalt bis(dioxolene) complexes have been synthesized using the doubly deprotonated form of 3,3'-dihydroxy-diphenoquinone-(4,4') (H2(SQ-SQ)) as co-ligand.
Analytically pure samples of the complexes [Co(L-N4tBu2)(DCCat)] (1), [Co(L-N4tBu2)(DCCat)](BPh4) (2b), [Co2(L-N4tBu2)2(SQ-SQ)](BPh4)2.4 DMF (3b) and [Co2(L-N4tBu2)2(Cat-SQ)](BF4)2.Et2O (3d) have been synthesized and characterized by X-ray crystallography as well as magnetic and electrochemical measurements. The complexes have also been investigated by UV/Vis/NIR, IR and NMR spectroscopy.
The complex [Co(L-N4tBu2)(DCCat)] (1) shows a temperature-invariant high-spin cobalt(II) catecholate state. One-electron oxidation of 1 yielded the complex [Co(L-N4tBu2)(DCCat)](BPh4) (2b). The solid-state properties of 2b are best described by a low-spin cobalt(III) catecholate state, whereas its solution-state properties are best described by a valence tautomeric transition from the low-spin cobalt(III) catecholate to the low-spin cobalt(II) semiquinonate state.
For the cobalt bis(dioxolene) complexes, it is found that spin-crossover at the two cobalt(II) centers in complex 3b is accompanied by a change in the electronic state of the coordinated bis(dioxolene) unit from a singlet open-shell biradicaloid to a singlet closed-shell quinonoid form. Following a synthetic route similar to that for 3b, but performing the metathesis reaction with sodium tetrafluoroborate rather than sodium tetraphenylborate, resulted in the formation of the complex [Co2(L-N4tBu2)2(Cat-SQ)](BF4)2.Et2O (3d). The solid-state properties of this complex are best described by a temperature-induced valence tautomeric transition at the low-spin cobalt(III) center, accompanied by a spin-crossover process at the cobalt(II) center. Thus, the electronic state of complex 3d changes from LS-CoIII-Cat-SQ-CoII-LS to HS-CoII-(SQ-SQ)CS-CoII-HS upon a change in temperature.
Temperature-induced electronic configuration changes of the (SQ-SQ)CS2- ligands from open-shell biradicaloid to closed-shell quinonoid configurations are not observed for the nickel-, copper- and zinc bis(dioxolene) complexes 4a, 5a and 6b, respectively. For these complexes, the metal ions are bridged by (SQ-SQ)CS2- ligand and the paramagnetic metal ions are very weakly antiferromagnetically coupled.

Biological clocks exist across all life forms and serve to coordinate organismal physiology with periodic environmental changes. The underlying mechanism of these clocks is predominantly based on cellular transcription-translation feedback loops in which clock proteins mediate the periodic expression of numerous genes. However, recent studies point to the existence of a conserved timekeeping mechanism independent of cellular transcription and translation, but based on cellular metabolism. The existence of these metabolic clocks was inferred from the observation of circadian and ultradian oscillations in the level of hyperoxidized peroxiredoxin proteins. Peroxiredoxins are enzymes found almost ubiquitously throughout life. Originally identified as H2O2 scavengers, peroxiredoxins have recently been shown to transfer oxidation to, and thereby regulate, a wide range of cellular proteins. Thus, it is conceivable that peroxiredoxins, using H2O2 as the primary signaling molecule, have the potential to integrate and coordinate much of cellular physiology and behavior with metabolic changes. Nonetheless, it remained unclear if peroxiredoxins are passive reporters of metabolic clock activity or active determinants of cellular timekeeping. Budding yeast possess an ultradian metabolic clock termed the Yeast Metabolic Cycle (YMC). The most obvious feature of the YMC is a high amplitude oscillation in oxygen consumption. Like circadian clocks, the YMC temporally compartmentalizes cellular processes (e.g. metabolism) and coordinates cellular programs such as gene expression and cell division. The YMC also exhibits oscillations in the level of hyperoxidized peroxiredoxin proteins.
In this study, I used the YMC clock model to investigate the role of peroxiredoxins in cellular timekeeping, as well as the coordination of cell division with the metabolic clock. I observed that cytosolic 2-Cys peroxiredoxins are essential for robust metabolic clock function. I provide direct evidence for oscillations in cytosolic H2O2 levels, as well as cyclical changes in oxidation state of a peroxiredoxin and a model peroxiredoxin target protein during the YMC. I noted two distinct metabolic states during the YMC: low oxygen consumption (LOC) and high oxygen consumption (HOC). I demonstrate that thiol-disulfide oxidation and reduction are necessary for switching between LOC and HOC. Specifically, a thiol reductant promotes switching to HOC, whilst a thiol oxidant prevents switching to HOC, forcing cells to remain in LOC. Transient peroxiredoxin inactivation triggered rapid and premature switching from LOC to HOC. Furthermore, I show that cell division is normally synchronized with the YMC and that deletion of typical 2-Cys peroxiredoxins leads to complete uncoupling of cell division from metabolic cycling. Moreover, metabolic oscillations are crucial for regulating cell cycle entry and exit. Intriguingly, switching to HOC is crucial for initiating cell cycle entry whilst switching to LOC is crucial for cell cycle completion and exit. Consequently, forcing cells to remain in HOC by application of a thiol reductant leads to multiple rounds of cell cycle entry despite failure to complete the preceding cell cycle. On the other hand, forcing cells to remain in LOC by treating with a thiol oxidant prevents initiation of cell cycle entry.
In conclusion, I propose that peroxiredoxins – by controlling metabolic cycles, which are in turn crucial for regulating the progression through cell cycle – play a central role in the coordination of cellular metabolism with cell division. This proposition, thus, positions peroxiredoxins as active players in the cellular timekeeping mechanism.

Activity recognition has remained a large field in computer science over the last two decades. Research questions from 15 years ago have led to solutions that today support our daily lives. Specifically, the success of smartphones and of more recent smart devices (e.g., smartwatches) is rooted in applications that leverage activity analysis and location tracking (fitness applications and maps). Today we can track our physical health and fitness and support our physical needs by merely owning (and using) a smartphone. Still, the quality of our lives does not rely solely on fitness and physical health but increasingly also on our mental well-being. Since we have learned how practical and easy it is to have many functions, including health support, on just one device, it would be especially helpful if we could also use the smartphone to support our mental and cognitive health when needed.
The ultimate goal of this work is to use sensor-assisted location and motion analysis to support various aspects of medically valid cognitive assessments.
In this regard, this thesis builds on Hypothesis 3: Sensors in our ubiquitous environment can collect information about our cognitive state, and it is possible to extract that information. In addition, these data can be used to derive complex cognitive states and to predict possible pathological changes in humans. Ultimately, sensors make it possible not only to determine the cognitive state but also to assist people in difficult situations.
Thus, in the first part, this thesis focuses on the detection of mental state and state changes.
The primary purpose is to evaluate possible starting points for sensor systems that enable a clinically accurate assessment of mental states. These assessments must work under the constraint that the developed system is able to function within the given limits of a real clinical environment.
Despite the limitations and challenges of real-life deployments, it was possible to develop methods for determining the cognitive state and well-being of the residents. The analysis of the location data provides a correct classification of cognitive state with an average accuracy of 70% to 90%.
Methods to determine the state of bipolar patients achieve an accuracy of 70–80% for the detection of different cognitive states (seven classes in total) using single sensors, and 76% when merging data from different sensors. Methods for detecting the occurrence of state changes, a highlight of this work, even achieve a precision and recall of 95%.
The comparison of these results with the standard methods currently used in psychiatric care shows a clear advantage of the sensor-based approach: its accuracy is 60% higher than that of the currently used methods.
The second part of this thesis introduces methods to support people’s actions in stressful situations on the one hand and analyzes the interaction between people during high-pressure activities on the other.
A simple, acceleration-based smartwatch application giving instant feedback was used to help laypeople learn, on the fly, to perform CPR (cardiopulmonary resuscitation) in an emergency.
The evaluation of this application in a study with 43 laypersons showed an instant improvement of 50% in CPR performance. An investigation of whether training with such an instant feedback device can support learning and lead to more lasting skill gains confirmed this hypothesis.
Last but not least, with the main interest shifting from the individual to a group of people at the end of this work, the question of how to determine the interaction between individuals within a group was answered by developing a methodology to detect unvoiced collaboration in random ad-hoc groups. An evaluation with data retrieved from video footage yields an accuracy of more than 95%, and even with artificially introduced error rates of 20%, a precision of 70% and a recall of 90% can still be achieved.
All scenarios in this thesis address different practical issues of today’s health care. The methods developed are based on real-life datasets and real-world studies.

In this thesis we study a variant of the quadrature problem for stochastic differential equations (SDEs), namely the approximation of expectations \(\mathrm{E}(f(X))\), where \(X = (X(t))_{t \in [0,1]}\) is the solution of an SDE and \(f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R}\) is a functional, mapping each realization of \(X\) into the real numbers. The distinctive feature in this work is that we consider randomized (Monte Carlo) algorithms with random bits as their only source of randomness, whereas the algorithms commonly studied in the literature are allowed to sample from the uniform distribution on the unit interval, i.e., they do have access to random numbers from \([0,1]\).
By assumption, all further operations like, e.g., arithmetic operations, evaluations of elementary functions, and oracle calls to evaluate \(f\) are considered within the real number model of computation, i.e., they are carried out exactly.
In the following, we provide a detailed description of the quadrature problem, namely we are interested in the approximation of
\begin{align*}
S(f) = \mathrm{E}(f(X))
\end{align*}
for \(X\) being the \(r\)-dimensional solution of an autonomous SDE of the form
\begin{align*}
\mathrm{d}X(t) = a(X(t)) \, \mathrm{d}t + b(X(t)) \, \mathrm{d}W(t), \quad t \in [0,1],
\end{align*}
with deterministic initial value
\begin{align*}
X(0) = x_0 \in \mathbb{R}^r,
\end{align*}
and driven by a \(d\)-dimensional standard Brownian motion \(W\). Furthermore, the drift coefficient \(a \colon \mathbb{R}^r \to \mathbb{R}^r\) and the diffusion coefficient \(b \colon \mathbb{R}^r \to \mathbb{R}^{r \times d}\) are assumed to be globally Lipschitz continuous.
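For orientation, a scalar Euler scheme for such an SDE can be sketched as follows. This is a minimal illustration drawing ordinary pseudo-random numbers rather than random bits; the drift and diffusion of a geometric Brownian motion are chosen as an example:

```python
import numpy as np

def euler_maruyama(a, b, x0, n, rng):
    # One Euler path of dX = a(X) dt + b(X) dW on [0,1] with n steps
    # (scalar case, i.e., r = d = 1).
    h = 1.0 / n
    x = x0
    path = [x0]
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(h))  # Brownian increment over one step
        x = x + a(x) * h + b(x) * dw
        path.append(x)
    return np.array(path)

rng = np.random.default_rng(0)
# Geometric Brownian motion: a(x) = 0.05 x, b(x) = 0.2 x, X(0) = 1
path = euler_maruyama(lambda x: 0.05 * x, lambda x: 0.2 * x, 1.0, 100, rng)
```

Both coefficients here are globally Lipschitz, matching the standing assumptions above.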
For the function classes
\begin{align*}
F_{\infty} = \bigl\{f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R} \colon |f(x) - f(y)| \leq \|x-y\|_{\sup}\bigr\}
\end{align*}
and
\begin{align*}
F_p = \bigl\{f \colon C([0,1],\mathbb{R}^r) \to \mathbb{R} \colon |f(x) - f(y)| \leq \|x-y\|_{L_p}\bigr\}, \quad 1 \leq p < \infty,
\end{align*}
we have established the following.
\(\textit{Theorem 1.}\)
There exists a random bit multilevel Monte Carlo (MLMC) algorithm \(M\) using
\[
L = L(\varepsilon,F) = \begin{cases}\lceil{\log_2(\varepsilon^{-2})}\rceil, &\text{if} \ F = F_p,\\
\lceil{\log_2(\varepsilon^{-2}) + \log_2(\log_2(\varepsilon^{-1}))}\rceil, &\text{if} \ F = F_\infty
\end{cases}
\]
and replication numbers
\[
N_\ell = N_\ell(\varepsilon,F) = \begin{cases}
\lceil{(L+1) \cdot 2^{-\ell} \cdot \varepsilon^{-2}}\rceil, & \text{if} \ F = F_p,\\
\lceil{(L+1) \cdot 2^{-\ell} \cdot \max(\ell,1) \cdot \varepsilon^{-2}}\rceil, & \text{if} \ F=F_\infty
\end{cases}
\]
for \(\ell = 0,\ldots,L\), for which there exists a positive constant \(c\) such that
\begin{align*}
\mathrm{error}(M,F) = \sup_{f \in F} \bigl(\mathrm{E}(S(f) - M(f))^2\bigr)^{1/2} \leq c \cdot \varepsilon
\end{align*}
and
\begin{align*}
\mathrm{cost}(M,F) = \sup_{f \in F} \mathrm{E}(\mathrm{cost}(M,f)) \leq c \cdot \varepsilon^{-2} \cdot \begin{cases}
(\ln(\varepsilon^{-1}))^2, &\text{if} \ F=F_p,\\
(\ln(\varepsilon^{-1}))^3, &\text{if} \ F=F_\infty
\end{cases}
\end{align*}
for every \(\varepsilon \in {]0,1/2[}\).
Hence, in terms of the \(\varepsilon\)-complexity
\begin{align*}
\mathrm{comp}(\varepsilon,F) = \inf\bigl\{\mathrm{cost}(M,F) \colon M \ \text{is a random bit MC algorithm}, \mathrm{error}(M,F) \leq \varepsilon\bigr\}
\end{align*}
we have established the upper bound
\begin{align*}
\mathrm{comp}(\varepsilon,F) \leq c \cdot \varepsilon^{-2} \cdot \begin{cases}
(\ln(\varepsilon^{-1}))^2, &\text{if} \ F=F_p,\\
(\ln(\varepsilon^{-1}))^3, &\text{if} \ F=F_\infty
\end{cases}
\end{align*}
for some positive constant \(c\). That is, we have shown the same weak asymptotic upper bound as in the case of random numbers from \([0,1]\). Hence, in this sense, random bits are almost as powerful as random numbers for our computational problem.
Moreover, we present numerical results for a non-analyzed adaptive random bit MLMC Euler algorithm, in the particular cases of the Brownian motion, the geometric Brownian motion, the Ornstein-Uhlenbeck SDE and the Cox-Ingersoll-Ross SDE. We also provide a numerical comparison to the corresponding adaptive random number MLMC Euler method.
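To make the multilevel construction concrete, the following is a sketch of a standard MLMC Euler estimator with coupled coarse/fine paths. For simplicity it uses ordinary pseudo-random numbers (not random bits), applies the functional to the terminal value only, and uses illustrative level and replication choices rather than those of Theorem 1:

```python
import numpy as np

def mlmc(payoff, a, b, x0, L, N, rng):
    # Multilevel Monte Carlo with coupled Euler paths: level l uses 2**l
    # time steps; fine and coarse paths share the Brownian increments.
    est = 0.0
    for l in range(L + 1):
        nf = 2 ** l
        h = 1.0 / nf
        s = 0.0
        for _ in range(N[l]):
            dw = rng.normal(0.0, np.sqrt(h), size=nf)
            xf = x0
            for k in range(nf):
                xf += a(xf) * h + b(xf) * dw[k]
            if l == 0:
                s += payoff(xf)
            else:
                # Coarse path with 2**(l-1) steps, reusing pairwise sums
                # of the fine increments.
                xc = x0
                for k in range(nf // 2):
                    xc += a(xc) * (2 * h) + b(xc) * (dw[2 * k] + dw[2 * k + 1])
                s += payoff(xf) - payoff(xc)
        est += s / N[l]
    return est

rng = np.random.default_rng(0)
# E(X(1)) for geometric Brownian motion dX = 0.05 X dt + 0.2 X dW, X(0) = 1
est = mlmc(lambda x: x, lambda x: 0.05 * x, lambda x: 0.2 * x,
           1.0, 3, [400, 200, 100, 50], rng)
```

The telescoping sum over levels is what allows the bulk of the samples to be drawn on cheap coarse grids, which is the source of the near-\(\varepsilon^{-2}\) cost bounds above.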
A key challenge in the analysis of the algorithm in Theorem 1 is the approximation of probability distributions by means of random bits. This problem is very closely related to the quantization problem, i.e., the optimal approximation of a given probability measure (on a separable Hilbert space) by means of a probability measure with finite support size.
Though we have shown that the random bit approximation of the standard normal distribution is 'harder' than the corresponding quantization problem (lower weak rate of convergence), we have been able to establish the same weak rate of convergence as for the corresponding quantization problem in the case of the distribution of a Brownian bridge on \(L_2([0,1])\), the distribution of the solution of a scalar SDE on \(L_2([0,1])\), and the distribution of a centered Gaussian random element in a separable Hilbert space.

Private data analytics systems should ideally provide the required analytic accuracy to analysts and the specified privacy to the individuals whose data is analyzed. Devising a general system that works for a broad range of datasets and analytic scenarios has proven to be difficult.
Despite the advent of differentially private systems with proven formal privacy guarantees, industry still uses inferior ad-hoc mechanisms that provide better analytic accuracy. Differentially private mechanisms often need to add large amounts of noise to statistical results, which impairs their usability.
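As a concrete example of this accuracy cost, the classic Laplace mechanism (a textbook mechanism, not one proposed in this thesis) releases a statistic with ε-differential privacy by adding noise scaled to the query's sensitivity divided by ε:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    # epsilon-differentially private release: add Laplace noise with
    # scale sensitivity / epsilon; smaller epsilon means more noise.
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(0)
# A counting query has sensitivity 1 (one individual changes the count by 1).
noisy_count = laplace_mechanism(100.0, 1.0, 0.1, rng)
```

For a counting query at ε = 0.1 the noise has standard deviation √2 · 10 ≈ 14 counts, which illustrates why strict guarantees can render small counts unusable.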
In my thesis I follow two approaches to improve the usability of private data analytics systems in general and differentially private systems in particular. First, I revisit ad-hoc mechanisms and explore the possibilities of systems that do not provide Differential Privacy or only a weak version thereof. Based on an attack analysis I devise a set of new protection mechanisms including Query Based Bookkeeping (QBB). In contrast to previous systems QBB only requires the history of analysts’ queries in order to provide privacy protection. In particular, QBB does not require knowledge about the protected individuals’ data.
In my second approach I use the insights gained with QBB to propose UniTraX, the first differentially private analytics system that allows analysts to analyze part of a protected dataset without affecting the other parts and without giving up accuracy. I show UniTraX's usability by way of multiple case studies on real-world datasets across different domains. UniTraX allows more queries than previous differentially private data analytics systems at moderate runtime overheads.

Model uncertainty is a challenge that is inherent in many applications of mathematical models in various areas, for instance in mathematical finance and stochastic control. Optimization procedures in general take place under a particular model. This model, however, might be misspecified due to statistical estimation errors and incomplete information. In that sense, any specified model must be understood as an approximation of the unknown "true" model. Difficulties arise since a strategy which is optimal under the approximating model might perform rather poorly in the true model. A natural way to deal with model uncertainty is to consider worst-case optimization.
The optimization problems that we are interested in are utility maximization problems in continuous-time financial markets. It is well known that drift parameters in such markets are notoriously difficult to estimate. To obtain strategies that are robust with respect to a possible misspecification of the drift we consider a worst-case utility maximization problem with ellipsoidal uncertainty sets for the drift parameter and with a constraint on the strategies that prevents a pure bond investment.
By a dual approach we derive an explicit representation of the optimal strategy and prove a minimax theorem. This enables us to show that the optimal strategy converges to a generalized uniform diversification strategy as uncertainty increases.
To come up with a reasonable uncertainty set, investors can use filtering techniques to estimate the drift of asset returns based on return observations as well as external sources of information, so-called expert opinions. In a Black-Scholes type financial market with a Gaussian drift process we investigate the asymptotic behavior of the filter as the frequency of expert opinions tends to infinity. We derive limit theorems stating that the information obtained from observing the discrete-time expert opinions is asymptotically the same as that from observing a certain diffusion process which can be interpreted as a continuous-time expert. Our convergence results carry over to convergence of the value function in a portfolio optimization problem with logarithmic utility.
Lastly, we use our observations about how expert opinions improve drift estimates for our robust utility maximization problem. We show that our duality approach carries over to a financial market with non-constant drift and time-dependence in the uncertainty set. A time-dependent uncertainty set can then be defined based on a generic filter. We apply this to various investor filtrations and investigate which effect expert opinions have on the robust strategies.

In this dissertation we apply financial mathematical modelling to electricity markets. Electricity is different from any other underlying of financial contracts: it is not storable. This means that electrical energy in one time point cannot be transferred to another. As a consequence, power contracts with disjoint delivery time spans basically have a different underlying. The main idea throughout this thesis is exactly this two-dimensionality of time: every electricity contract is not only characterized by its trading time but also by its delivery time.
This dissertation is based on four scientific papers, corresponding to Chapters 3 to 6, two of which have already been published in peer-reviewed journals. Throughout this thesis two model classes play a significant role: factor models and structural models. All ideas are applied to or supported by these two model classes. All empirical studies in this dissertation are conducted on electricity price data from the German market, and Chapter 4 in particular studies an intraday derivative unique to the German market. Therefore, electricity market design is introduced using the example of Germany in Chapter 1. Subsequently, Chapter 2 introduces the general mathematical theory necessary for modelling electricity prices, such as Lévy processes and the Esscher transform. This chapter is the mathematical basis of Chapters 3 to 6.
Chapter 3 studies factor models applied to German day-ahead spot prices. We introduce a qualitative measure for seasonality functions based on three requirements. Furthermore, we establish a relation between factor models and ARMA processes, which induces a new method to estimate the mean reversion speed.
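One textbook version of the factor-model/ARMA connection (the thesis' own estimator may differ): an Ornstein-Uhlenbeck factor sampled on a grid with spacing Δt is an AR(1) process with coefficient φ = e^{-κΔt}, so an estimate of φ yields the mean reversion speed κ. A minimal sketch with invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, dt, n = 5.0, 1.0 / 365, 10000   # true mean reversion speed, daily grid
phi = np.exp(-kappa * dt)              # AR(1) coefficient of the sampled factor

# Simulate the sampled factor: X_{t+1} = phi * X_t + noise
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0.0, 0.1)

# Least-squares AR(1) estimate, then invert phi = exp(-kappa * dt)
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
kappa_hat = -np.log(phi_hat) / dt
```

Because φ is close to one on fine grids, small errors in φ translate into large errors in κ, which is one reason dedicated estimation methods are worthwhile.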
Chapter 4 conducts a theoretical and empirical study of a pricing method for a new electricity derivative: the German intraday cap and floor futures. We introduce the general theory of derivative pricing and propose a method based on the Hull-White model of interest rate modelling, which is a one-factor model. We include week futures prices to generate a price forward curve (PFC), which is then used instead of a fixed deterministic seasonality function. The idea that we can combine all market prices, and in particular futures prices, to improve the model quality also plays the major role in Chapter 5 and Chapter 6.
In Chapter 5 we develop a Heath-Jarrow-Morton (HJM) framework that models intraday, day-ahead, and futures prices. This approach is based on two stochastic processes motivated by economic interpretations and separates the stochastic dynamics in trading and delivery time. Furthermore, this framework allows for the use of classical day-ahead spot price models, such as those of Schwartz and Smith (2000) and Lucia and Schwartz (2002), and includes many model classes such as structural models and factor models.
Chapter 6 unifies the classical theory of storage and the concept of a risk premium through the introduction of an unobservable intrinsic electricity price. Since all tradable electricity contracts are derivatives of this actual intrinsic price, their prices should all be derived as conditional expectation under the risk-neutral measure. Through the intrinsic electricity price we develop a framework, which also includes many existing modelling approaches, such as the HJM framework of Chapter 5.

The main focus of this research lies in the interpretation and application of results and correlations of soil properties from in situ testing and their subsequent use in terramechanical applications. The empirical correlations and current procedures were mainly developed for medium to large depths; they were therefore re-evaluated and adjusted here to reflect the current state of knowledge for the assessment of near-surface soil. To test the technologies, a field investigation at a moon-analogue site was carried out, with focus placed on the assessment of near-surface soil properties. Samples were collected for subsequent analysis under laboratory conditions. Further laboratory experiments on extraterrestrial soil simulants and other terrestrial soils were conducted, and correlations with relative density and shear strength parameters were attempted. The correlations from the small-scale laboratory experiments and the newly re-evaluated correlation for relative density were checked against the data from the field investigation. Additionally, single tire-soil tests were carried out, which enable the investigation of the localized soil response in order to advance current wheel designs and, subsequently, the vehicle's mobility. Furthermore, numerical simulations were performed to aid the investigation of the tire-soil interaction. Summing up, current relationships for estimating the relative density of near-surface soil were re-evaluated and subsequently correlated to shear strength parameters, which are the main input for modelling soil in numerical analyses. Single tire-soil tests served as a reference to calibrate the tire-soil interaction and were subsequently utilized to model rolling scenarios, which enable the assessment of soil trafficability and vehicle mobility.

Planar force or pressure is a fundamental physical aspect of any people-vs-people and people-vs-environment activity and interaction. It is as significant as the more established linear and angular acceleration (usually acquired by inertial measurement units). There have been several studies involving planar pressure in the discipline of activity recognition, as reviewed in the first chapter. These studies have shown that planar pressure is a promising sensing modality for activity recognition. However, it still occupies a niche in the discipline, with ad hoc systems and data analysis methods, and most of these studies were not followed by further elaboration. The situation calls for a general framework that can help push planar pressure sensing into the mainstream.
This dissertation systematically investigates using planar pressure distribution sensing technology for ubiquitous and wearable activity recognition purposes. We propose a generic Textile Pressure Mapping (TPM) Framework, which encapsulates (1) design knowledge and guidelines, (2) a multi-layered tool including hardware, software and algorithms, and (3) an ensemble of empirical study examples. Through validation with various empirical studies, the unified TPM framework covers the full scope of activity recognition, including the ambient, object, and wearable subspaces.
The hardware part constructs a general architecture and implementations in the large-scale and mobile directions separately. The software toolkit consists of four heterogeneous tiers: driver, data processing, machine learning, visualization/feedback. The algorithm chapter describes generic data processing techniques and a unified TPM feature set. The TPM framework offers a universal solution for other researchers and developers to evaluate TPM sensing modality in their application scenarios.
The significant findings from the empirical studies have shown that TPM is a versatile sensing modality. Specifically, in the ambient subspace, a sports mat or carpet with TPM sensors embedded underneath can distinguish different sports activities or different people's gait based on the dynamic change of body-print; a pressure sensitive tablecloth can detect various dining actions by the force propagated from the cutlery through the plates to the tabletop. In the object subspace, swirl office chairs with TPM sensors under the cover can be used to detect the seater's real-time posture; TPM can be used to detect emotion-related touch interactions for smart objects, toys or robots. In the wearable subspace, TPM sensors can be used to perform pressure-based mechanomyography to detect muscle and body movement; it can also be tailored to cover the surface of a soccer shoe to distinguish different kicking angles and intensities.
All the empirical evaluations resulted in accuracies well above the chance level for the corresponding number of classes; e.g., the `swirl chair' study achieves a classification accuracy of 79.5% across 10 posture classes, and the `soccer shoe' study reaches 98.8% among 17 combinations of angle and intensity.