Controller design for continuous dynamical systems is a core algorithmic problem in the design of cyber-physical systems (CPS). When the CPS application is safety-critical, we additionally require the controller to have strong correctness guarantees. One approach to this design problem is to use a simpler discrete abstraction of the original continuous system, on which known reactive synthesis methods can be used to design the controller. This approach is known as the abstraction-based controller design (ABCD) paradigm.
In this thesis, we build ABCD procedures which are faster and more modular than the state of the art, and which can handle problems beyond the scope of existing techniques.
Existing ABCD approaches usually compute the abstractions by discretizing the state space, a procedure that does not scale well to larger systems. Our first contribution is a multi-layered ABCD algorithm, in which we combine coarse abstractions and lazily computed fine abstractions to improve scalability. So far, we only address reach-avoid and safety specifications, for which our prototype tool (called Mascot) showed up to an order of magnitude speedup on standard benchmark examples.
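For intuition about the synthesis step that runs on such a finite abstraction, the following sketch solves a plain, single-layer safety game by the standard greatest-fixpoint iteration. The transition encoding and names are illustrative assumptions, and the multi-layered, lazy machinery of Mascot is deliberately not modeled here.

```python
# Minimal sketch: solving a safety game on a finite abstraction.
# Assumption (not from the thesis): the abstraction is a dict mapping
# (state, input) -> set of successor states; non-determinism captures
# the abstraction error. `safe` is the set of safe abstract states.

def controllable_predecessor(transitions, states, target):
    """States from which some input keeps ALL successors inside `target`."""
    return {
        s for s in states
        if any(succs <= target
               for (t, _), succs in transitions.items() if t == s)
    }

def solve_safety_game(transitions, states, safe):
    """Greatest fixpoint: shrink the candidate winning set until stable."""
    win = set(safe)
    while True:
        new_win = win & controllable_predecessor(transitions, states, win)
        if new_win == win:
            return win
        win = new_win

# Toy example: states 0..2 with inputs 'a'/'b'; state 2 is unsafe.
T = {
    (0, 'a'): {0},    (0, 'b'): {1, 2},
    (1, 'a'): {0, 1}, (1, 'b'): {2},
    (2, 'a'): {2},    (2, 'b'): {2},
}
print(solve_safety_game(T, {0, 1, 2}, {0, 1}))  # {0, 1}
```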
Second, we consider the problem of modular design of sound local controllers for a network of local discrete abstractions communicating via discrete/boolean variables and having local specifications. We propose a sound algorithm in which the systems negotiate a pair of local assume-guarantee contracts in order to synchronize on a set of non-conflicting and correct behaviors. As a by-product, we also obtain a set of local controllers for the systems which ensure simultaneous satisfaction of the local specifications. We show the effectiveness of our algorithm using a prototype tool (called Agnes) on a set of discrete benchmark examples.
Our third contribution is a novel ABCD algorithm for a more expressive model of nonlinear dynamical systems with stochastic disturbances and ω-regular specifications. This part has two subparts, each of which is of significant merit in its own right. First, we present an abstraction algorithm for nonlinear stochastic systems using 2.5-player games (turn-based stochastic graph games). We show that an almost sure winning strategy in this abstract 2.5-player game gives us a sound controller for the original system that satisfies the specification with probability one. Second, we present symbolic algorithms for a seemingly different class of 2-player games with certain environmental fairness assumptions, which can also be used to efficiently compute winning strategies in the aforementioned abstract 2.5-player game. Using our prototype tool (Mascot-SDS), we show that our algorithm significantly outperforms the state-of-the-art implementation on standard benchmark examples from the literature.
Comparative Uncertainty Visualization for High-Level Analysis of Scalar- and Vector-Valued Ensembles
(2022)
With this thesis, I contribute to the research field of uncertainty visualization, considering parameter dependencies in multi-valued fields and the uncertainty of automated data analysis. Like uncertainty visualization in general, both of these fields are becoming more and more important due to increasing computational power, the growing importance and availability of complex models and collected data, and progress in artificial intelligence. I contribute in the following application areas:
Uncertain Topology of Scalar Field Ensembles.
The generalization of topology-based visualizations to multi-valued data involves many challenges. An example is the comparative visualization of multiple contour trees, complicated by the random nature of prevalent contour tree layout algorithms. I present a novel approach for the comparative visualization of contour trees - the Fuzzy Contour Tree.
Uncertain Topological Features in Time-Dependent Scalar Fields.
Tracking features in time-dependent scalar fields is an active field of research, where most approaches rely on the comparison of consecutive time steps. I created a more holistic visualization for time-varying scalar field topology by adapting Fuzzy Contour Trees to the time-dependent setting.
Uncertain Trajectories in Vector Field Ensembles.
Visitation maps are an intuitive and well-known visualization of uncertain trajectories in vector field ensembles. For large ensembles, visitation maps are either not applicable or require extensive computation time. I developed Visitation Graphs, a new representation and data reduction method for vector field ensembles that can be calculated in situ and is an optimal basis for the efficient generation of visitation maps. This is accomplished by shifting the calculation effort to a pre-processing step.
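As a rough illustration of the visitation-map idea (not of the Visitation Graph data structure itself, nor of its in-situ computation), the following sketch counts, per grid cell, the fraction of ensemble trajectories that pass through it; the trajectory encoding is an assumption made for the example.

```python
import numpy as np

def visitation_map(trajectories, bounds, resolution):
    """Fraction of ensemble trajectories that visit each grid cell.

    trajectories: iterable of (n_i, 2) arrays of x/y positions
    bounds:       ((xmin, xmax), (ymin, ymax))
    resolution:   (nx, ny) number of cells per axis
    """
    (xmin, xmax), (ymin, ymax) = bounds
    nx, ny = resolution
    counts = np.zeros((nx, ny))
    for traj in trajectories:
        ix = ((traj[:, 0] - xmin) / (xmax - xmin) * nx).astype(int).clip(0, nx - 1)
        iy = ((traj[:, 1] - ymin) / (ymax - ymin) * ny).astype(int).clip(0, ny - 1)
        visited = np.zeros((nx, ny), dtype=bool)
        visited[ix, iy] = True          # each trajectory counts once per cell
        counts += visited
    return counts / len(trajectories)

# Toy ensemble of two straight-line trajectories.
t = np.linspace(0, 1, 50)[:, None]
ensemble = [np.hstack([t, t]), np.hstack([t, 1 - t])]
print(visitation_map(ensemble, ((0, 1), (0, 1)), (4, 4)))
```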
Visually Supported Anomaly Detection in Cyber Security.
Numerous cyber attacks and the increasing complexity of networks and their protection necessitate the application of automated data analysis in cyber security. Due to uncertainty in automated anomaly detection, the results need to be communicated to analysts to ensure appropriate reactions. I introduce a visualization system combining device readings and anomaly detection results: the Security in Process System. To further support analysts, I developed an application-agnostic framework that supports the integration of knowledge assistance and applied it to the Security in Process System. I present this Knowledge Rocks Framework, its application, and the results of evaluations for both the original and the knowledge-assisted Security in Process System. For all presented systems, I provide implementation details, illustrations, and applications.
Robotic systems are entering the stage. Enabled by advances in both hardware components and software techniques, robots are increasingly able to operate outside of factories, assist humans, and work alongside them. The limiting factor of robots’ expansion remains the programming of robotic systems. Due to the many diverse skills necessary to build a multi-robot system, only the biggest organizations are able to innovate in the space of services provided by robots.
To make developing new robotic services easier, in this dissertation I propose a programming model in which users (programmers) give a declarative specification of what needs to be accomplished, and then a backend system makes sure that the specification is safely and reliably executed. I present Antlab, one such backend system. Antlab accepts Linear Temporal Logic (LTL) specifications from multiple users and executes them using a set of robots of different capabilities.
Building on the experience acquired implementing Antlab, I identify problems arising from the proposed programming model. These problems fall into two broad categories: specification and planning.
In the category of specification problems, I solve the problem of inferring an LTL formula from sets of positive and negative example traces, as well as from a set of positive examples only. Building on top of these solutions, I develop a method to help users transfer their intent into a formal specification. The approach taken in this dissertation is combining the intent signals from a single demonstration and a natural language description given by a user. A set of candidate specifications is inferred by encoding the problem as a satisfiability problem for propositional logic. This set is narrowed down to a single specification through interaction with the user; the user approves or declines generated simulations of the robot’s behavior in different situations.
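As a toy stand-in for the SAT-based inference described above, the sketch below enumerates small LTL formulas over finite traces and returns the first one consistent with the positive and negative examples. The grammar, the finite-trace semantics, and the trace encoding are illustrative assumptions, not the thesis' encoding.

```python
# Finite-trace LTL semantics (an illustrative assumption): a trace is a
# list of sets of atomic propositions, e.g. [{'a'}, {'a', 'b'}].
def holds(f, trace, i):
    op = f[0]
    if op == 'ap':  return f[1] in trace[i]
    if op == 'not': return not holds(f[1], trace, i)
    if op == 'and': return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == 'X':   return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == 'F':   return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == 'G':   return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(op)

def consistent(f, positives, negatives):
    return (all(holds(f, t, 0) for t in positives)
            and not any(holds(f, t, 0) for t in negatives))

def formulas(aps, depth):
    """Enumerate formulas up to a nesting depth (brute force, where the
    thesis uses a propositional SAT encoding instead)."""
    if depth == 0:
        for a in sorted(aps):
            yield ('ap', a)
        return
    for g in formulas(aps, depth - 1):
        yield g
        for op in ('not', 'X', 'F', 'G'):
            yield (op, g)
    for g in formulas(aps, depth - 1):
        for h in formulas(aps, depth - 1):
            yield ('and', g, h)

positives = [[{'a'}, {'a'}, {'a', 'b'}]]
negatives = [[{'a'}, set(), {'b'}]]
print(next(f for f in formulas({'a', 'b'}, 1)
           if consistent(f, positives, negatives)))
# -> first consistent formula in enumeration order, e.g. ('X', ('ap', 'a'))
```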
In the category of planning problems, I first solve the problem of planning for robots that are currently executing their tasks. In such a situation, it is unclear what to take as the initial state for planning. I solve the problem by considering multiple, speculative initial states. The paths from those states are explored based on a quality function that repeatedly estimates the planning time. The second problem is reinforcement learning when the reward function is non-Markovian. The proposed solution consists of iteratively learning an automaton representing the reward function and using it to guide the exploration.
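In the spirit of the learned reward automaton described above, one common way to handle a non-Markovian reward is to run ordinary Q-learning on the product of the environment state and the automaton state. The toy environment, labels, and automaton below are invented for illustration, and the automaton-learning loop itself is not shown.

```python
import random

# Toy line world: states 0..3; actions move left/right. The (invented)
# reward automaton q pays 1.0 only on its 0 -> 1 transition, i.e. the
# FIRST time state 3 is reached, so the reward is non-Markovian in the
# environment state alone; the product (s, q) makes it Markovian again.
actions = (-1, +1)
Q = {}
qval = lambda s, q, a: Q.get((s, q, a), 0.0)
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(3000):
    s, q = 0, 0                      # environment state, automaton state
    for _ in range(30):              # fixed-length episodes
        a = (random.choice(actions) if random.random() < eps
             else max(actions, key=lambda a: qval(s, q, a)))
        s2 = max(0, min(3, s + a))
        q2 = 1 if (q == 1 or s2 == 3) else 0
        r = 1.0 if (q == 0 and q2 == 1) else 0.0
        best = max(qval(s2, q2, b) for b in actions)
        Q[(s, q, a)] = qval(s, q, a) + alpha * (r + gamma * best - qval(s, q, a))
        s, q = s2, q2

print(max(actions, key=lambda a: qval(0, 0, a)))   # typically 1: head right
```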
Data-driven and Sparse-to-Dense Concepts in Scene Flow Estimation for Automotive Applications
(2022)
Highly assisted driving and autonomous vehicles require a detailed and accurate perception of the environment. This includes the perception of the 3D geometry of the scene and the 3D motion of other road users. The estimation of both based on images is known as the scene flow problem in computer vision. This thesis deals with a solution to the scene flow problem that is suitable for application in autonomous vehicles. This application imposes strict requirements on accuracy, robustness, and speed. Previous work was lagging behind in at least one of these metrics. To work towards the fulfillment of those requirements, the sparse-to-dense concept for scene flow estimation is introduced in this thesis. The idea can be summarized as follows: First, scene flow is estimated for some points of the scene for which this can be done comparatively easily and reliably. Then, an interpolation is performed to obtain a dense estimate for the entire scene. Because of the separation into two steps, each part can be optimized individually. In a series of experiments, it is shown that the proposed methods achieve competitive results and are preferable to previous techniques in some aspects. As a second contribution, individual components in the sparse-to-dense pipeline are replaced by deep learning modules. These are a highly localized and highly accurate feature descriptor to represent pixels for dense matching, and a network for robust and generic sparse-to-dense interpolation. Compared to end-to-end architectures, the advantage of deep modules is that they can be trained more efficiently with data from different domains. The recombination approach applies a similar concept as the sparse-to-dense approach by solving and combining less difficult, auxiliary sub-problems. 3D geometry and 2D motion are estimated separately, the individual results are combined, and then also interpolated into a dense scene flow. As a final contribution, the thesis proposes a set of monolithic end-to-end networks for scene flow estimation.
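A generic rendering of the sparse-to-dense step: given flow estimates at scattered pixels, interpolate them to a dense field. The sketch below uses off-the-shelf linear interpolation with nearest-neighbor fill, a deliberately simple stand-in for both the model-based and the learned interpolation discussed in the thesis.

```python
import numpy as np
from scipy.interpolate import griddata

def sparse_to_dense_flow(coords, flow, shape):
    """Interpolate sparse flow vectors to a dense (H, W, 2) field.

    coords: (n, 2) pixel coordinates (row, col) of sparse estimates
    flow:   (n, 2) flow vectors at those pixels
    """
    H, W = shape
    grid = tuple(np.mgrid[0:H, 0:W])
    channels = []
    for k in range(flow.shape[1]):
        lin = griddata(coords, flow[:, k], grid, method='linear')
        near = griddata(coords, flow[:, k], grid, method='nearest')
        mask = np.isnan(lin)
        lin[mask] = near[mask]          # fill outside the convex hull
        channels.append(lin)
    return np.stack(channels, axis=-1)

# Toy usage: four corner estimates of a constant flow.
pts = np.array([[0.0, 0.0], [0.0, 9.0], [9.0, 0.0], [9.0, 9.0]])
vecs = np.tile([1.0, 0.5], (4, 1))
print(sparse_to_dense_flow(pts, vecs, (10, 10)).shape)  # (10, 10, 2)
```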
Today's digital world would be unthinkable without complex data sets. Whether in private, business or industrial environments, complex data provide the basis for important and critical decisions and determine many processes, some of which are automated. This is often associated with Big Data. However, often a single aspect of the usual Big Data definitions is sufficient for a human observer to no longer be able to capture the data completely and correctly. In this thesis, different approaches are presented in order to master selected challenges in a more effective, efficient and user-friendly way. The approaches range from easier pre-processing of data sets for later analysis and the identification of design guidelines for such assistants, new visualization techniques for presenting uncertainty, extensions of existing visualizations for categorical data, concepts for time-saving selection methods for subsets of data points, and faster navigation and zoom interaction (especially in the web-based area with enormous amounts of data) to new and innovative orientation-based interaction metaphors for mobile devices as well as stationary working environments. Evaluations and appropriate use cases of the individual approaches demonstrate their usability, also in comparison with state-of-the-art techniques.
Industrial manufacturing companies have different IT control functions that can be represented with a so-called hierarchical automation pyramid. While these conventional software systems especially support mass production with consistent demand, the future project "Industry 4.0" focuses on customer-oriented and adaptable production processes. In order to move from conventional production systems to a factory of the future, the control levels must be redistributed. With the help of cyber-physical production systems, an interoperable architecture must be implemented which removes the hierarchical connection of the former control levels. The accompanying digitalisation of industrial companies makes the transition to modular production possible. At the same time, the requirements for production planning and control are increasing, which can be addressed with approaches such as multi-agent systems (MASs). These software solutions are autonomous and intelligent objects with a distinct collaborative ability. There are different modelling methods, communication and interaction structures, as well as different development frameworks for these new systems. Since multi-agent systems have not yet been established as an industrial standard due to their high complexity, they are usually only tested in simulations. In this bachelor thesis, a detailed literature review on the topic of MASs in the field of production planning and control is presented. In addition, selected multi-agent approaches are evaluated and compared using specific classification criteria. Finally, the applicability of these systems in digital and modular production is assessed.
Sequence learning describes the process of understanding the spatio-temporal relations in a sequence in order to classify it, label its elements or generate new sequences. Due to the prevalence of structured sequences in nature and everyday life, it has many practical applications, including any language-related processing task. One particular such task that has seen recent success using sequence learning techniques is the optical recognition of characters (OCR).
State-of-the-art sequence learning solutions for OCR achieve high performance through supervised training, which requires large amounts of transcribed training data. On the other hand, few solutions have been proposed on how to apply sequence learning in the absence of such data, which is especially common for hard-to-transcribe historical documents. Rather than solving the unsupervised training problem, research has focused on creating efficient methods for collecting training data through smart annotation tools or generating synthetic training data. These solutions come with various limitations and do not solve all of the related problems.
In this work, first the use of erroneous transcriptions for supervised sequence learning is introduced, and it is described how this concept can be applied in unsupervised training scenarios by collecting or generating such transcriptions. The proposed OCR pipeline reduces the need for domain-specific expertise to apply OCR, with the goal of making it more accessible. Furthermore, an approach for evaluating sequence learning OCR models in the absence of reference transcriptions is presented, and its properties in comparison to the standard method are discussed. In a second approach, unsupervised OCR is treated as an alignment problem between the latent features of the different language modalities. The outlined solution is to extract language properties from both the text and image domain through adversarial training and to learn to align them by adding a cycle consistency constraint. The proposed approach has some strict limitations on the input data, but the results encourage future research into more widespread applications.
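The cycle consistency constraint mentioned above can be written down compactly. The sketch below shows only the shape of such a loss between two latent modalities; the mapping networks, names, and dimensions are made up for illustration and are not the thesis' architecture.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(z_img, z_txt, img2txt, txt2img):
    """L_cyc: mapping a latent to the other modality and back should
    reproduce it. img2txt / txt2img are hypothetical mapping networks
    between the image and text latent spaces."""
    img_cycle = txt2img(img2txt(z_img))
    txt_cycle = img2txt(txt2img(z_txt))
    return F.l1_loss(img_cycle, z_img) + F.l1_loss(txt_cycle, z_txt)

# Toy usage with linear mappings between 32-dimensional latent spaces.
img2txt = torch.nn.Linear(32, 32)
txt2img = torch.nn.Linear(32, 32)
loss = cycle_consistency_loss(torch.randn(8, 32), torch.randn(8, 32),
                              img2txt, txt2img)
loss.backward()
print(float(loss))
```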
Recommender systems recommend items (e.g., movies, products, books) to users. In this thesis, we propose two comprehensive, cluster-induced recommendation methods: Orthogonal Inductive Matrix Completion (OMIC) and Burst-induced Multi-armed Bandit (BMAB). Given the presence of side information, the first method is categorized as context-aware. OMIC is the first matrix completion method to approach the problem of incorporating biases, side information terms and a pure low-rank term into a single flexible framework with a well-principled optimization procedure. The second method, BMAB, is context-free; that is, it does not require any side data about users or items. Unlike previous context-free multi-armed bandit approaches, our method considers the temporal dynamics of human communication on the web and treats the problem in a continuous time setting. We built our models' assumptions on solid theoretical foundations. For OMIC, we provided theoretical guarantees in the form of generalization bounds by considering the distribution-free case: no assumptions about the sampling distribution are made. Additionally, we conducted a theoretical analysis of community side information when the sampling distribution is known and an adjusted nuclear norm regularization is applied. We showed that our method requires just a few entries to accurately recover the ratings matrix if the structure of the ground truth closely matches the cluster side information. For BMAB, we provided regret guarantees under mild conditions that demonstrate how the system's stability affects the expected reward. Furthermore, we conducted extensive experiments to validate our proposed methodologies. In a controlled environment, we implemented synthetic data generation techniques capable of replicating the domains for which OMIC and BMAB were designed. As a result, we were able to analyze our algorithms' performance across a broad spectrum of ground truth regimes. Finally, we replicated a real-world scenario by utilizing well-established recommender datasets. After comparing our approaches to several baselines, we observed that they achieved state-of-the-art results in terms of accuracy. Apart from being highly accurate, these methods improve interpretability by describing and quantifying features of the datasets they characterize.
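As a baseline illustration of the low-rank part of matrix completion (only loosely related to OMIC, which additionally handles biases and side information), here is a minimal SoftImpute-style sketch: iteratively fill in missing entries with the current estimate and shrink singular values, which acts like nuclear-norm regularization.

```python
import numpy as np

def soft_impute(M, mask, lam=0.1, iters=100):
    """Low-rank matrix completion by iterative soft-thresholded SVD.

    M:    observed ratings (values where mask == 0 are ignored)
    mask: 1.0 where M is observed, 0.0 where missing
    lam:  soft threshold on singular values
    """
    X = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        filled = mask * M + (1 - mask) * X   # keep observed, impute missing
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)         # shrink singular values
        X = (U * s) @ Vt
    return X

# Toy usage: rank-1 ground truth with roughly half the entries observed.
rng = np.random.default_rng(0)
truth = np.outer(rng.normal(size=20), rng.normal(size=15))
mask = (rng.random(truth.shape) < 0.5).astype(float)
est = soft_impute(truth, mask)
print(np.abs(est - truth).mean())            # typically a small error
```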
In the past, information and knowledge dissemination was relegated to brick-and-mortar classrooms, newspapers, radio, and television. As these processes were simple and centralized, the models behind them were well understood, and so were the empirical methods for optimizing them. In today's world, the internet and social media have become powerful tools for information and knowledge dissemination: Wikipedia gets more than 1 million edits per day, Stack Overflow has more than 17 million questions, 25% of the US population visits Yahoo! News for articles and discussions, Twitter has more than 60 million active monthly users, and Duolingo has 25 million users learning languages online. These developments have introduced a paradigm shift in the process of dissemination. Not only has the nature of the task moved from being centralized to decentralized, but the developments have also blurred the boundary between the creator and the consumer of the content, i.e., information and knowledge. These changes have made it necessary to develop new models, which are better suited to understanding and analysing the dissemination, and to develop new methods to optimize them.
At a broad level, we can view the participation of users in the process of dissemination as falling in one of two settings: collaborative or competitive. In the collaborative setting, the participants work together in crafting knowledge online, e.g., by asking questions and contributing answers, or by discussing news or opinion pieces. In contrast, as competitors, they vie for the attention of their followers on social media. This thesis investigates both these settings.
The first part of the thesis focuses on the understanding and analysis of content being created online collaboratively. To this end, I propose models for understanding the complexity of the content of collaborative online discussions by looking exclusively at the signals of agreement and disagreement expressed by the crowd. This leads to a formal notion of complexity of opinions and online discussions. Next, I turn my attention to the participants of the crowd, i.e., the creators and consumers themselves, and propose an intuitive model for both the evolution of their expertise and the value of the content they collaboratively contribute and learn from on online Q&A-based forums.
The second part of the thesis explores the competitive setting. It provides methods to help the creators gain more attention from their followers on social media. In particular, I consider the problem of controlling the timing of the posts of users with the aim of maximizing the attention that their posts receive, under the idealized setting of full knowledge of the timing of others' posts. To solve it, I develop a general reinforcement learning based method which is shown to have good performance on the when-to-post problem and which can be employed in many other settings as well, e.g., determining the reviewing times for spaced repetition which lead to optimal learning.
The last part of the thesis looks at methods for relaxing the idealized assumption of full knowledge. The basic question of determining the visibility of one's posts in the followers' feeds becomes difficult to answer on the internet, where constantly observing the feeds of all the followers is unscalable. I explore the links of this problem to the well-studied problem of web crawling to update a search engine's index, and provide algorithms with performance guarantees for feed observation policies which minimize the error in the estimate of the visibility of one's posts.
Data is the new gold and serves as a key to answering the five W's (Who, What, Where, When, Why) and the How's of any business. Companies are now mining data more than ever, and one of the most important aspects of analyzing this data is detecting anomalous patterns to identify critical points. To tackle the vital aspects of time-series analysis, this thesis presents a novel hybrid framework that stands on three pillars: Anomaly Detection, Uncertainty Estimation, and Interpretability and Explainability.
The first pillar comprises contributions in the area of time-series anomaly detection. Deep Anomaly Detection for Time-series (DeepAnT), a novel deep learning-based anomaly detection method, lies at the foundation of the proposed hybrid framework and addresses the inadequacy of traditional anomaly detection methods. To the best of the author's knowledge, DeepAnT is the first method to use a Convolutional Neural Network (CNN) to robustly detect multiple types of anomalies in tricky and continuously changing time-series data. To further improve the anomaly detection performance, a fusion-based method, Fusion of Statistical and Deep Learning for Anomaly Detection (FuseAD), is proposed. This method aims to combine the strengths of existing well-founded statistical methods and powerful data-driven methods.
In the second pillar of this framework, a hybrid approach that combines the high accuracy of the deterministic models with the posterior distribution approximation of Bayesian neural networks is proposed.
In the third pillar of the proposed framework, mechanisms that address both the HOW and the WHY parts are presented.
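DeepAnT's core recipe, as described in the literature, is to forecast the next value from a history window and score each point by the distance between forecast and observation. The sketch below keeps that recipe but replaces the CNN forecaster with a least-squares linear one to stay dependency-free; it is an illustration, not the thesis' model.

```python
import numpy as np

def forecast_anomaly_scores(series, window=20):
    """Forecast-based anomaly scores for a 1D time series.

    Fits a linear predictor from each length-`window` history to the
    next value (DeepAnT uses a CNN forecaster instead), then scores
    each point by its absolute prediction error.
    """
    X = np.stack([series[i:i + window]
                  for i in range(len(series) - window)])
    y = series[window:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.abs(X @ w - y)      # one score per predicted point

# Toy usage: a sine wave with one injected spike.
t = np.linspace(0, 20 * np.pi, 2000)
series = np.sin(t)
series[1500] += 3.0
scores = forecast_anomaly_scores(series)
print(int(np.argmax(scores)) + 20)   # an index in the spike region (~1500)
```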
In order to improve performance or conserve energy, modern hardware implementations have adopted weak memory models; that is, models of concurrency that allow more outcomes than the classic sequentially consistent (SC) model of execution. Modern programming languages similarly provide their own language-level memory models, which strive to allow all the behaviors allowed by the various hardware-level memory models, as well as those that can occur as a result of desired compiler optimizations.
As these weak memory models are often rather intricate, it can be difficult for programmers to keep track of all the possible behaviors of their programs. It is therefore very useful to have an abstraction layer over the model that can be used to ensure program correctness without reasoning about the underlying memory model. Program logics are a way of constructing such an abstraction—one can use their syntactic rules to reason about programs, without needing to understand the messy details of the memory model for which the logic has been proven sound.
Unfortunately, most of the work on formal verification in general, and program logics in particular, has so far assumed the SC model of execution. This means that new logics for weak memory have to be developed.
This thesis presents two such logics—fenced separation logic (FSL) and weak separation logic (Weasel)—which are sound for reasoning under two different weak memory models.
FSL considers the C/C++ concurrency memory model, supporting several of its advanced features. The soundness of FSL depends crucially on a specific strengthening of the model which eliminates a certain class of undesired behaviors (so-called out-of-thin-air behaviors) that were inadvertently allowed by the original C/C++ model.
Weasel works under weaker assumptions than FSL, considering a model which takes a more fine-grained approach to the out-of-thin-air problem. Weasel focuses on exploring the programming constructs directly related to out-of-thin-air behaviors and is therefore significantly less feature-rich than FSL.
Using FSL and Weasel, the thesis explores the key challenges in reasoning under weak memory models, and what effect different solutions to the out-of-thin-air problem have on such reasoning. It explains which reasoning principles are preserved when moving from a stronger to a weaker model, and develops novel proof techniques to establish soundness of logics under weaker models.
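To fix intuition for the out-of-thin-air problem, here is the classic load-buffering-with-dependencies litmus shape, rendered as runnable Python purely to show the program shape; real weak-memory effects require relaxed C/C++ atomics on weak hardware, and CPython will only ever show the sequentially consistent outcome.

```python
import threading

# T1: r1 = x; y = r1        T2: r2 = y; x = r2
# Under SC, r1 == r2 == 42 is impossible: 42 is never written anywhere.
# The original C/C++ relaxed-atomics model failed to rule such outcomes
# out ("out of thin air"); the strengthening discussed above forbids them.

def run_once():
    state = {'x': 0, 'y': 0, 'r1': None, 'r2': None}

    def t1():
        state['r1'] = state['x']
        state['y'] = state['r1']

    def t2():
        state['r2'] = state['y']
        state['x'] = state['r2']

    a, b = threading.Thread(target=t1), threading.Thread(target=t2)
    a.start(); b.start(); a.join(); b.join()
    return state['r1'], state['r2']

outcomes = {run_once() for _ in range(1000)}
print(outcomes)   # {(0, 0)}: no value ever appears "out of thin air"
```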
Using Enhanced Logic Programming Semantics for Extending and Optimizing Synchronous System Design
(2021)
The semantics of programming languages assign a meaning to the written program syntax.
Currently, the meaning of synchronous programming languages, which are specifically designed for developing programs for reactive and embedded systems, is based on a formal semantics similar to Fitting's fixpoint semantics for logic programs.
Nevertheless, it is possible to write synchronous program code that does not evaluate to concrete values under the current semantics; such programs are currently considered not constructive.
In the last decades, the theoretical knowledge and representation of semantics for logic programming have advanced considerably, but not all theoretical results have found their way into the practice of system design.
The first part of this thesis focuses on extending the semantics of synchronous programming languages to an evaluation similar to the well-founded semantics defined for logic programs by Van Gelder, Ross, and Schlipf, and to the stable model semantics defined by Gelfond and Lifschitz. In particular, this allows the evaluation of some currently non-constructive programs for which the semantics based on Fitting's fixpoint fails.
It is shown that the extension to well-founded semantics is a conservative extension of Fitting's semantics, so that the meaning of programs that were already constructive does not change. Finally, it is shown how one can still generate circuits that implement the considered synchronous programs under the well-founded semantics. Again, this is a conservative approach that does not modify the circuits generated by the synthesis procedures used so far.
Answer set programming and the underlying stable model semantics describe problems by constraints, and the related answer set solvers give all solutions to such a problem as so-called answer sets. This allows the formulation of search and planning problems, as well as their efficient solution, without the need to develop special and possibly error-prone algorithms for every single application.
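As a small illustration of stable models (assuming the clingo solver and its Python API are installed; the program is a standard textbook example, not taken from the thesis):

```python
# Enumerating the stable models of a tiny answer set program with clingo.
import clingo

ctl = clingo.Control(["0"])      # "0" = enumerate all answer sets
ctl.add("base", [], """
a :- not b.
b :- not a.
""")
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("answer set:", m))
# Prints the two stable models, {a} and {b}: each is a self-consistent
# solution in which every atom is justified by the program's rules.
```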
The semantics of the synchronous programming language Quartz is also extended to the stable model semantics. For this extension, two alternatives are discussed: first, a direct extension analogous to the extension to well-founded semantics; second, a transformation of synchronous programs into available answer set programming languages, which allows answer set solvers to be used directly for the synthesis and optimization of synchronous systems.
The second part of the thesis contains further examples of the use of answer set programming in system design to emphasize its benefits for system design in general. The first example is the generation of optimal/minimal interconnection networks that allow non-blocking connections between n sources and n targets in parallel. As a second example, the stable model semantics is used to build a complete compiler chain which transforms a given program into optimal assembly code (called move code) for the new SCAD processor architecture developed at the University of Kaiserslautern. Finally, the lessons learned from the two examples are distilled into enhancement ideas for the synchronous programming paradigm.
Deep learning has achieved significant improvements in a variety of computer vision tasks thanks to open image datasets containing large amounts of data. However, acquiring large datasets is a challenge in real-world applications, especially in domains that are new to deep learning. Furthermore, the class distribution in such datasets is often imbalanced, and this data imbalance is frequently a bottleneck for neural network classification performance. Recently, the potential of generative adversarial networks (GAN) as a data augmentation method for minority data has been studied.
This dissertation investigates the use of GANs and transfer learning to improve classification performance under imbalanced data conditions. We first propose classification enhancement generative adversarial networks (CEGAN) to enhance the quality of generated synthetic minority data and, more importantly, to improve prediction accuracy under data imbalance. Our experiments show that approximating the real data distribution using CEGAN improves classification performance significantly under imbalanced data conditions, compared with various standard data augmentation methods.
To further improve classification performance, we propose a novel supervised discriminative feature generation method (DFG) for minority-class data. DFG is based on a modified generative adversarial network structure consisting of four independent networks: a generator, a discriminator, a feature extractor, and a classifier. To augment selected discriminative features of the minority-class data via an attention mechanism, the generator for the class-imbalanced target task is trained while the feature extractor and classifier are regularized with counterparts pre-trained on large source data. The experimental results show that the DFG generator enhances the augmentation of label-preserving and diverse features, and that classification results on the target task are significantly improved.
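The following is a minimal sketch of GAN-based minority-class augmentation in general; the architectures, losses, and hyper-parameters are placeholders and do not reproduce CEGAN's or DFG's exact designs.

```python
# Sketch: train a GAN on minority-class samples, then oversample from it.
import torch
import torch.nn as nn

latent, feat = 16, 32
G = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, feat))
D = nn.Sequential(nn.Linear(feat, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_minority = torch.randn(64, feat)   # stand-in for real minority data

for _ in range(100):
    # Discriminator step: real minority samples vs. generated ones.
    fake = G(torch.randn(64, latent)).detach()
    d_loss = bce(D(real_minority), torch.ones(64, 1)) + \
             bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: try to fool the discriminator.
    fake = G(torch.randn(64, latent))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    synthetic = G(torch.randn(256, latent))  # oversampled minority class
```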
In this thesis, these proposals are applied to bearing fault detection and diagnosis for induction motors, and to shipping label recognition and validation for logistics. The experimental results for bearing fault detection and diagnosis show that the proposed GAN-based framework performs well on the imbalanced fault diagnosis of rotating machinery. The experimental results for shipping label recognition and validation likewise show that the proposed method outperforms many classical and state-of-the-art algorithms.
Medical cyber-physical systems (MCPS) emerged as an evolution of the relations between connected health systems, healthcare providers, and modern medical devices. Such systems combine independent medical devices at runtime in order to render new patient monitoring/control functionalities, such as physiological closed loops for controlling drug infusion or the optimization of alarms. Despite advances in alarm precision, healthcare providers still struggle with alarm flooding caused by limited risk assessment models. Furthermore, these limitations also impose severe barriers to the adoption of automated supervision through autonomous actions, such as safety interlocks for avoiding overdosage. The literature has focused on the verification of safety parameters to assure the safety of treatment at runtime and thus optimize alarms and automated actions. Such solutions have relied on the definition of actuation ranges based on thresholds for a few monitored parameters. Given the very dynamic nature of the relevant context conditions (e.g., the patient's condition, treatment details, system configurations, etc.), fixed thresholds are a weak means for assessing the current risk. This thesis presents an approach for enabling dynamic risk assessment for cooperative MCPS based on an adaptive Bayesian network (BN) model. The main aim of the approach is to support continuous runtime risk assessment of the current situation based on relevant context and system information. The presented approach comprises (i) a dynamic risk analysis constituent, which corresponds to the elicitation of relevant risk parameters, risk metric building, and risk metric management; and (ii) a runtime risk classification constituent, which aims to analyze the risk of the current situation, establish risk classes, and identify and deploy mitigation measures. The proposed approach was evaluated, and its feasibility demonstrated, by means of simulated experiments guided by an international team of medical experts, with a focus on the requirements of efficacy, efficiency, and availability of patient treatment.
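As an illustration of runtime risk classification with a Bayesian network, the sketch below uses the pgmpy library; the variables, structure, and probabilities are invented for the example and are not the thesis's model.

```python
# Toy BN: risk depends on oxygen saturation and drug infusion rate.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("SpO2", "Risk"), ("InfusionRate", "Risk")])
model.add_cpds(
    TabularCPD("SpO2", 2, [[0.9], [0.1]]),          # states: normal / low
    TabularCPD("InfusionRate", 2, [[0.8], [0.2]]),  # states: normal / high
    TabularCPD("Risk", 2,                            # states: low / high
               [[0.99, 0.7, 0.6, 0.05],              # columns = evidence combos
                [0.01, 0.3, 0.4, 0.95]],
               evidence=["SpO2", "InfusionRate"], evidence_card=[2, 2]),
)
# Runtime query: given the current context, how risky is the situation?
posterior = VariableElimination(model).query(
    ["Risk"], evidence={"SpO2": 1, "InfusionRate": 1})  # low SpO2, high rate
print(posterior)   # risk class probabilities for the current situation
```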
Dataflow process networks (DPNs) consist of statically defined process nodes with First-In-First-Out (FIFO) buffered point-to-point connections. DPNs are intrinsically data-driven, i.e., node actions are not synchronized with each other and may fire whenever sufficient input operands have arrived at a node. In this original form, DPNs are therefore a suitable model of computation (MoC) for asynchronous and distributed systems. For DPNs whose nodes have only static consumption/production rates, however, one can easily derive an optimal schedule that can then be used to implement the DPN in a time-driven (clock-driven) way, where each node fires according to the schedule.
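For static rates, such a schedule follows from the classic balance equations of synchronous dataflow; a minimal sketch with invented rates:

```python
# Two-node DPN: node A produces 2 tokens per firing, node B consumes 3.
# Balancing 2*qA == 3*qB gives the repetition vector (qA, qB) = (3, 2),
# i.e. the periodic schedule A A A B B with bounded FIFO usage.
from math import gcd

produce_A, consume_B = 2, 3
g = gcd(produce_A, consume_B)
qA, qB = consume_B // g, produce_A // g
schedule = ["A"] * qA + ["B"] * qB   # one iteration of the static schedule
print(schedule)                      # ['A', 'A', 'A', 'B', 'B']
```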
Both data-driven and time-driven MoCs have their own advantages and disadvantages. For this reason, desynchronization techniques are used to convert clock-driven models into data-driven ones in order to more efficiently support distributed implementations. These techniques preserve the functional specification of the synchronous models and moreover preserve properties like deadlock-freedom and bounded memory usage that are otherwise difficult to ensure in DPNs. These desynchronized models are the starting point of this thesis.
While the general MoC of DPNs does not impose further restrictions, many different subclasses of DPNs representing different dataflow MoCs have been considered over time, like Kahn process networks and cyclo-static and synchronous DPNs. These classes differ mainly in the kinds of process behaviors, which affect the expressiveness of the DPN class on the one hand and the methods for its analysis (predictability) and synthesis (efficiency) on the other. A DPN may be heterogeneous in the sense that different processes in the network may exhibit different kinds of behaviors. A heterogeneous DPN can therefore be effectively used to model and implement different components of a system with different kinds of processes and thus different dataflow MoCs.
Design tools like Ptolemy and FERAL are used to model and design parallel embedded systems using well-defined and precise MoCs, including different dataflow MoCs. However, there is a lack of automatic synthesis methods for analyzing and evaluating the artifacts exhibited by particular MoCs. Moreover, existing design tools for synthesis are usually restricted to the weakest classes of DPNs, i.e., cyclo-static and synchronous DPNs, and each tool supports only one specific dataflow MoC.
This thesis presents a model-based design approach covering different dataflow MoCs, including their heterogeneous combinations. This approach covers in particular the automatic software synthesis of systems from DPN models. The main objective is to validate, evaluate, and compare the artifacts exhibited by different dataflow MoCs at the implementation level of embedded systems under the supervision of a common design tool. We are mainly concerned with how these different dataflow MoCs affect the synthesis, in particular the code generation and the final implementation on the target hardware. Moreover, this thesis aims at offering an efficient synthesis method that targets and exploits heterogeneity in DPNs by generating implementations based on the kinds of behaviors of the processes.
The proposed synthesis design flow therefore generally starts from the desynchronized dataflow models and automatically synthesizes them for cross-vendor target hardware. In particular, it provides a synthesis tool chain, including different specialized code generators for specific dataflow MoCs, and a runtime system that finally maps models using a combination of different dataflow MoCs on the target hardware. Moreover, the tool chain offers a platform-independent code synthesis method based on the open computing language (OpenCL) that enables a more generalized synthesis targeting cross-vendor commercial off-the-shelf (COTS) heterogeneous platforms.
DeepKAF: A Knowledge Intensive Framework for Heterogeneous Case-Based Reasoning in Textual Domains
(2021)
Business-relevant domain knowledge can be found in plain text across customer support tickets, employee message exchanges, and other business transactions. Decoding text-based domain knowledge can be a very demanding task, since traditional methods focus on a comprehensive representation of the business and its relevant paths. Such a process can be highly complex, time-consuming, and maintenance-heavy, especially in environments that change dynamically.
In this thesis, a novel approach is presented for developing hybrid case-based reasoning (CBR) systems that bring together the benefits of deep learning approaches and the advantages of CBR. The Deep Knowledge Acquisition Framework (DeepKAF) is a domain-independent framework that uses deep neural networks and big data technologies to decode domain knowledge with minimal involvement from domain experts. While this thesis focuses primarily on textual data because of the availability of suitable datasets, CBR systems based on DeepKAF can deal with heterogeneous data, where a case may be represented by different attribute types, and automatically extract the necessary domain knowledge while maintaining an adequate level of explainability. The main focus of this thesis is automatic knowledge acquisition, the construction of similarity measures, and case retrieval.
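As a sketch of neural similarity-based case retrieval (the encoder, vector size, and case base below are invented for illustration; DeepKAF's actual networks and attribute handling differ):

```python
# Retrieve the k most similar past cases by cosine similarity over
# dense case embeddings, e.g. produced by a pre-trained text encoder.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Stand-in for encoded cases: id -> embedding vector.
case_base = {f"case-{i}": rng.normal(size=128) for i in range(1000)}

def retrieve(query_vec, k=5):
    scored = sorted(case_base.items(),
                    key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return scored[:k]   # the k most similar past cases

print([cid for cid, _ in retrieve(rng.normal(size=128))])
```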
Throughout this research, several sets of experiments have been conducted and validated by domain experts. Historical textual data produced over roughly 15 years were used for the experiments: a mixture of English and German texts describing specific domain problems, containing many abbreviations. Based on these, the necessary knowledge repositories were built and subsequently used to evaluate the suggested approach for the effective monitoring and diagnosis of business workflows. A further public dataset, the CaseLaw dataset, comprising around 22 million cases from different US states, was used to validate DeepKAF on longer texts and on cases with more attributes.
Future work motivated by this thesis could investigate how different deep learning models can be used within the CBR paradigm to address some of CBR's long-standing challenges and to benefit large-scale, multi-dimensional enterprises.
Today, information systems are often distributed to achieve high availability and low latency.
These systems can be realized by building on a highly available database to manage the distribution of data.
However, it is well known that high availability and low latency are not compatible with strong consistency guarantees.
For application developers, the lack of strong consistency on the database layer can make it difficult to reason about their programs and ensure that applications work as intended.
We address this problem from the perspective of formal verification.
We present a specification technique, which allows specifying functional properties of the application.
In addition to data invariants, we support history properties.
These let us express relations between events, including invocations of the application API and operations on the database.
To address the verification problem, we have developed a proof technique that handles concurrency using invariants and thereby reduces the problem to sequential verification.
The underlying system semantics, technique and its soundness proof are all formalized in the interactive theorem prover Isabelle/HOL.
Additionally, we have developed a tool named Repliss which uses the proof technique to enable partially automated verification and testing of applications.
For verification, Repliss generates verification conditions via symbolic execution and then uses an SMT solver to discharge them.
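As a small illustration of discharging a verification condition with an SMT solver (the invariant and operation below are invented; Repliss generates far richer conditions):

```python
# A VC is valid iff its negation is unsatisfiable; we check that with Z3.
from z3 import Int, And, Implies, Not, Solver, sat

x, x2 = Int("x"), Int("x2")
inv = x >= 0                              # data invariant before the step
step = x2 == x + 1                        # effect of one API operation
vc = Implies(And(inv, step), x2 >= 0)     # the invariant is preserved

s = Solver()
s.add(Not(vc))
result = s.check()
print("VC holds" if result != sat else f"counterexample: {s.model()}")
```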
This work presents a visual analytics-driven workflow for an interpretable and understandable machine learning model. The model is motivated by a reverse engineering task in automotive assembly processes: predicting the assembly parameters that lead to a given displacement field on a geometry's surface. The derived model works on both measurement and simulation data. The proposed approach is driven by scientific goals from visual analytics and interpretable artificial intelligence alike. First, a concept for systematic uncertainty monitoring, an object-oriented virtual reference scheme (OOVRS), is developed. Afterward, the prediction task is solved via a regressive machine learning model using adversarial neural networks. A thorough model parameter study is conducted, assisted by an interactive visual analytics pipeline. Furthermore, the effects of the learned variance in displacement fields are analyzed in detail. To this end, a visual analytics pipeline is developed, resulting in a sensitivity benchmarking tool that allows testing various segmentation approaches to reduce the machine learning input dimensions. The effects of the assembly parameters are investigated in domain space to find a suitable segmentation of the training data set's geometry, and a sensitivity matrix visualization is developed for this purpose. It is further shown how this concept can directly compare results from various segmentation methods, e.g., topological segmentation, with respect to the assembly parameters and their impact on the displacement field variance. The resulting databases are still of substantial size for complex simulations with large, high-dimensional parameter spaces. Finally, the applicability of video compression techniques to compressing visualization image databases is studied.
In the increasingly competitive public-cloud marketplace, improving the efficiency of data centers is a major concern. One way to improve efficiency is to consolidate as many VMs onto as few physical cores as possible, provided that performance expectations are not violated. However, as a prerequisite for increased VM densities, the hypervisor’s VM scheduler must allocate processor time efficiently and in a timely fashion. As we show in this thesis, contemporary VM schedulers leave substantial room for improvements in both regards when facing challenging high-VM-density workloads that frequently trigger the VM scheduler. As root causes, we identify (i) high runtime overheads and (ii) unpredictable scheduling heuristics.
To better support high VM densities, we propose Tableau, a VM scheduler that guarantees a minimum processor share and a maximum bound on scheduling delay for every VM in the system. Tableau combines a low-overhead, core-local, table-driven dispatcher with a fast on-demand table-generation procedure (triggered on VM creation/teardown) that employs scheduling techniques typically used in hard real-time systems. Further, we show that, owing to its focus on efficiency and scalability, Tableau provides comparable or better throughput than existing Xen schedulers in dedicated-core scenarios as are commonly employed in public clouds today.
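The table-driven dispatch idea can be sketched as follows; the slot length, shares, and table below are invented, whereas Tableau's real tables come from hard real-time scheduling analysis.

```python
# A core-local dispatcher cycles through a precomputed table. Each VM's
# processor share is its fraction of slots; its scheduling delay is
# bounded by the largest gap between consecutive occurrences.
from itertools import cycle

SLOT_US = 250                          # fixed dispatch slot length (made up)
table = ["vm1", "vm2", "vm1", "idle"]  # vm1: 50% share, vm2: 25% share

def dispatcher():
    for vm in cycle(table):            # O(1) work per slot, no heuristics
        yield vm, SLOT_US              # run this VM for one slot

d = dispatcher()
print([next(d) for _ in range(6)])
```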
Tableau also extends this design by providing the ability to use idle cycles in the system to perform low-priority background work, without affecting the performance of primary VMs, a common requirement in public clouds.
Finally, VM churn and workload variations in multi-tenant public clouds result in changing interference patterns at runtime, resulting in performance variation. In particular, variation in last-level cache (LLC) interference has been shown to have a significant impact on virtualized application performance in cloud environments. Tableau employs a novel technique for dealing with dynamically changing interference, which involves periodically regenerating tables with the same guarantees on utilization and scheduling latency for all VMs in the system, but having different LLC interference characteristics. We present two strategies to mitigate LLC interference: a randomized approach, and one that uses performance counters to detect VMs running cache-intensive workloads and selectively mitigate interference.
In recent decades, there has been increasing interest in analyzing the behavior of complex systems. A popular approach for analyzing such systems is a network analytic approach where the system is represented by a graph structure (Wasserman & Faust 1994, Boccaletti et al. 2006, Brandes & Erlebach 2005, Vespignani 2018): nodes represent the system's entities, edges their interactions. A large toolbox of network analytic methods, such as measures for structural properties (Newman 2010), centrality measures (Koschützki et al. 2005), or methods for identifying communities (Fortunato 2010), is readily available to be applied to any network structure. However, it is often overlooked that a network representation of a system and the (technically applicable) methods contain assumptions that need to be met; otherwise, the results are not interpretable or even misleading. The most important assumption of a network representation is the presence of indirect effects: if A has an impact on B, and B has an impact on C, then A has an impact on C (Zweig 2016, Brandes et al. 2013). The presence of indirect effects can be explained by "something" flowing through the network by moving from node to node. Such network flows (or network processes) may be the propagation of information in social networks, the spread of infections, or entities using the network as infrastructure, such as in transportation networks. Several network measures, particularly most centrality measures, also assume the presence of such a network process, but additionally assume specific properties of the network process (Borgatti 2005). A centrality value then indicates a node's importance with respect to a process with these properties.
While this has been known for several years, only recently have datasets containing real-world network flows become accessible. In this context, the goal of this dissertation is to provide a better understanding of the actual behavior of real-world network processes, with a particular focus on centrality measures: if real-world network processes turn out to show different properties than those assumed by classic centrality measures, these measures might considerably under- or overestimate the importance of nodes for the actual network flow. To the best of our knowledge, only a few works address this topic.
The contributions of this thesis are therefore as follows: (i) We investigate in which aspects real-world network flows meet the assumptions contained about them in centrality measures. (ii) Since we find that the real-world flows show considerably different properties than assumed, we test to what extent the observed properties can be explained by models, such as models based on shortest paths or random walks. (iii) We study whether the deviations from the assumed behavior have an impact on the results of centrality measures.
To this end, we introduce flow-based variants of centrality measures which are either based on the assumed behavior or on the actual behavior of the real-world network flow. This enables systematic evaluation of the impact of each assumption on the resulting rankings of centrality measures.
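A minimal sketch of such a flow-based variant (graph and trajectories invented; networkx assumed): count how often the observed trajectories actually pass through a node and compare with classic shortest-path betweenness.

```python
# Contrast assumed behavior (shortest-path betweenness) with a flow-based
# score counted from observed trajectories on the same graph.
import networkx as nx
from collections import Counter

G = nx.karate_club_graph()
classic = nx.betweenness_centrality(G)       # assumes shortest-path flows

trajectories = [[0, 1, 2, 3], [0, 2, 3], [5, 0, 2]]   # observed walks
visits = Counter(n for t in trajectories for n in t[1:-1])  # pass-throughs
flow_based = {n: visits.get(n, 0) / len(trajectories) for n in G}

# Nodes rated highly by the classic measure but rarely visited by the
# real flow (or vice versa) reveal where the assumptions matter.
top = max(classic, key=classic.get)
print(top, classic[top], flow_based[top])
```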
While we observe, on a large scale, a surprisingly high robustness of the measures against deviations from their assumptions, there are nodes whose importance is rated very differently when the real-world network flow is taken into account. (iv) As a technical contribution, we provide a method for efficiently handling large sets of flow trajectories by summarizing them into groups of similar trajectories. (v) We furthermore present the results of an interdisciplinary research project in which the trajectories of humans in a network were analyzed in detail. In general, we are convinced that a process-driven perspective on network analysis, in which the network process is considered in addition to the network representation, can help to better understand the behavior of complex systems.