Roughly sketched, the system should be able to derive, from a given engineering drawing of a turned part, a plan for the machine production of that part. Following the case-based reasoning approach, the task of the system is to find, among a set of known turned parts for which a production plan has already been created, the part whose representation is most similar to that of the given part. The plan of this most similar part is then to be modified and adapted so that the given part can be manufactured with it. A central problem here is the definition of the notion of similarity, which must in any case take the manufacturing aspect into account.
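As a rough illustration of the retrieve-and-adapt loop described above, the following sketch shows weighted nearest-neighbour retrieval over a case base of turned parts. All feature names, weights, and cases are hypothetical; the actual system relies on a manufacturing-oriented similarity measure.

```python
# Minimal sketch of similarity-based case retrieval for turned parts.
# Feature names, weights, and cases are hypothetical illustrations.

def similarity(query, case, weights):
    """Weighted similarity over normalized numeric part features in [0, 1]."""
    score = 0.0
    for feature, weight in weights.items():
        # Distance of normalized feature values, turned into a similarity.
        score += weight * (1.0 - abs(query[feature] - case[feature]))
    return score / sum(weights.values())

def retrieve_most_similar(query, case_base, weights):
    """Return the known part (with its process plan) most similar to the query."""
    return max(case_base, key=lambda c: similarity(query, c["features"], weights))

# Hypothetical case base: parts with normalized features and stored plans.
case_base = [
    {"features": {"length": 0.8, "diameter": 0.3, "tolerance": 0.5}, "plan": "plan_A"},
    {"features": {"length": 0.4, "diameter": 0.6, "tolerance": 0.2}, "plan": "plan_B"},
]
weights = {"length": 1.0, "diameter": 2.0, "tolerance": 3.0}  # tolerance matters most

query = {"length": 0.7, "diameter": 0.35, "tolerance": 0.45}
best = retrieve_most_similar(query, case_base, weights)
print(best["plan"])  # plan to be adapted to the new part -> plan_A
```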
Based on experiences from an autonomous mobile robot project called MOBOT-III, we identified hard realtime constraints for the operating-system design. ALBATROSS is "A flexible multi-tasking and realtime network-operating-system kernel"; it is not limited to mobile-robot projects, but may be useful wherever a high reliability of a realtime system has to be guaranteed. The focus of this article is on a communication scheme that fulfils the demanded (hard realtime) assurances without imposing time delays or jitter on the critical information channels. The central chapters discuss a locking-free shared buffer management that needs no interrupts, and a way to arrange the communication architecture so as to produce minimal protocol overhead and short cycle times. Most of the remaining communication capacity (if there is any) is used for redundant transfers, increasing the reliability of the whole system. ALBATROSS is currently implemented on a multi-processor VMEbus system.
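The locking-free buffer management itself is not spelled out in the abstract; as one plausible shape of such a scheme, the sketch below shows a sequence-counter ("seqlock"-style) single-writer buffer in which readers validate their snapshot instead of blocking the writer. Python is used only for readability, and all names and structure are assumptions, not the ALBATROSS implementation.

```python
# Sketch of a lock-free single-writer shared buffer using a sequence counter.
# A real kernel would use shared memory and atomic operations; Python is
# used here purely to illustrate the validation logic.

class SeqBuffer:
    def __init__(self):
        self.seq = 0          # even: snapshot stable, odd: write in progress
        self.data = None

    def write(self, value):
        self.seq += 1         # becomes odd: readers will retry
        self.data = value
        self.seq += 1         # becomes even: snapshot is stable again

    def read(self):
        while True:
            s1 = self.seq
            if s1 % 2:        # writer active, retry without blocking it
                continue
            value = self.data
            if self.seq == s1:   # no write happened in between
                return value

buf = SeqBuffer()
buf.write({"sensor": 42})
print(buf.read())  # {'sensor': 42}
```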
This paper addresses the problem of adaptability over an infinite period of time in dynamic networks. A never-ending flow of examples has to be clustered, based on a distance measure. The developed model is based on the self-organizing feature maps of Kohonen [6], [7] and some adaptations by Fritzke [3]. The problem of dynamic surface classification is embedded in the SPIN project, where sub-symbolic abstraction based on a 3-D scanned environment is performed.
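A minimal sketch of the kind of online, Kohonen-style update such a model builds on: each example from the stream pulls the best-matching unit towards itself. The neighbourhood function and the growth steps of Fritzke's adaptations are omitted, and network size and learning rate are assumptions.

```python
import math
import random

# Online Kohonen-style update: each incoming example pulls the best-matching
# unit towards itself (simplified: no neighbourhood, fixed network size).

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def adapt(units, example, rate=0.1):
    """Move the best-matching unit towards the example."""
    winner = min(units, key=lambda u: dist(u, example))
    i = units.index(winner)
    units[i] = [u + rate * (x - u) for u, x in zip(winner, example)]

random.seed(0)
units = [[random.random(), random.random()] for _ in range(4)]
for _ in range(1000):                       # an endless stream in practice
    example = [random.random(), random.random()]
    adapt(units, example)
print(units)  # units have spread over the input distribution
```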
The problem discussed here is the use of neural network clustering techniques on a mobile robot in order to build qualitative topological environment maps. This has to be done in realtime, i.e. the internal world model has to be adapted to the flow of sensor samples without the possibility of stopping this data flow. Our experiments are done in a simulation environment as well as on a robot called ALICE.
Based on the experiences from an autonomous mobile robot project called MOBOT-III, we identified hard realtime constraints for the operating-system design. ALBATROSS is "A flexible multi-tasking and realtime network-operating-system kernel". The focus of this article is on a communication scheme fulfilling the previously demanded assurances. The central chapters discuss the shared buffer management and the way to design the communication architecture. Some further aspects beyond the strict realtime requirements, such as the possibilities to control and watch a running system, are mentioned. ALBATROSS is currently implemented on a multi-processor VMEbus system.
Based on the idea of using topological feature maps instead of geometric environment maps in practical mobile robot tasks, we show an applicable way to navigate on such topological maps. The main features of this kind of navigation are: handling of very inaccurate position (and orientation) information as well as implicit modelling of complex kinematics during an adaptation phase. Due to the lack of proper a-priori knowledge, a reinforcement-based model is used for the translation of navigator commands into motor actions. Instead of employing a backpropagation network for the central associative memory module (attaching action probabilities to sensor situations resp. navigator commands), a much faster dynamic cell structure system based on dynamic feature maps is shown. Standard graph-search heuristics like A* are applied in the planning phase.
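Since the abstract names A* as the planning-phase heuristic, here is a small self-contained A* over a hypothetical topological map (nodes with weighted edges and an admissible heuristic); the graph and costs are made up for illustration.

```python
import heapq

# A* search over a topological map, as used in the planning phase.
# The graph, edge costs, and heuristic values are hypothetical.

def a_star(graph, h, start, goal):
    """graph: node -> [(neighbour, cost)], h: admissible heuristic per node."""
    frontier = [(h[start], 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
h = {"A": 2, "B": 1, "C": 1, "D": 0}
print(a_star(graph, h, "A", "D"))  # (['A', 'B', 'C', 'D'], 3)
```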
SPIN-NFDS Learning and Preset Knowledge for Surface Fusion - A Neural Fuzzy Decision System -
(1993)
The problem to be discussed in this paper may be characterized in short by the question: "Do these two surface fragments belong together (i.e. to the same surface)?" The presented techniques try to benefit from some predefined knowledge as well as from the possibility to refine and adapt this knowledge according to a (changing) real environment, resulting in a combination of fuzzy decision systems and neural networks. The results are encouraging (fast convergence speed, high accuracy), and the model might be used for a wide range of applications. The general frame surrounding the work in this paper is the SPIN project, where the emphasis is on sub-symbolic abstractions based on a 3-D scanned environment.
World models for mobile robots, as introduced in many projects, are mostly redundant with respect to similar situations detected in different places. The present paper proposes a method for the dynamic generation of a minimal world model based on these redundancies. The technique is an extension of qualitative topological world-modelling methods. As a central aspect, reliability regarding error tolerance and stability is emphasized. The proposed technique places very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard realtime constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
This article discusses a qualitative, topological, and robust world-modelling technique with special regard to navigation tasks for mobile robots operating in unknown environments. As a central aspect, reliability regarding error tolerance and stability is emphasized. Benefits and problems involved in exploration as well as in navigation tasks are discussed. The proposed method places very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
Self-localization in unknown environments, or the correlation of current and former impressions of the world, is an essential ability for most mobile robots. The method proposed in this article is the construction of a qualitative, topological world model as a basis for self-localization. As a central aspect, reliability regarding error tolerance and stability is emphasized. The proposed techniques place very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
Visual search has been investigated by many researchers, inspired by the biological fact that the sensory elements on the mammalian retina are not equably distributed. The focus of attention (the area of the retina with the highest density of sensory elements) therefore has to be directed so as to efficiently gather data according to certain criteria. The work discussed in this article concentrates on applying a laser range finder instead of a silicon retina. The laser range finder is maximally focused at any time, but a low-resolution total-scene image, available from the start with camera-like devices, is therefore not available here. By adapting a couple of algorithms, the edge-scanning module steering the laser range finder is able to trace a detected edge. Based on the data scanned so far, two questions have to be answered. First: "Should the current (edge-) scanning be interrupted in order to give another area of interest a chance of being investigated?" And second: "Where should a new edge-scanning start after an interruption?" These two decision problems might be solved by a range of decision systems. The correctness of the decisions depends widely on the actual environment, and the underlying rules may not be well initialized with a-priori knowledge. We therefore present a version of a reinforcement decision system together with an overall scheme for efficiently controlling highly focused devices.
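One simple instance of such a reinforcement decision system is an epsilon-greedy value learner over the two actions "continue" and "interrupt"; the rewards and probabilities below are purely hypothetical.

```python
import random

# Reinforcement-style decision rule for "continue the current edge-scan or
# interrupt in favour of another area of interest". Action values are
# learned from rewards; all numbers here are hypothetical.

values = {"continue": 0.0, "interrupt": 0.0}
counts = {"continue": 0, "interrupt": 0}

def choose(epsilon=0.1):
    if random.random() < epsilon:            # explore occasionally
        return random.choice(list(values))
    return max(values, key=values.get)       # otherwise exploit

def update(action, reward):
    counts[action] += 1
    # Incremental mean: running estimate of the action's value.
    values[action] += (reward - values[action]) / counts[action]

random.seed(1)
for _ in range(200):
    action = choose()
    # Hypothetical environment: continuing a traced edge pays off more often.
    reward = 1.0 if (action == "continue" and random.random() < 0.7) else 0.0
    update(action, reward)
print(values)  # 'continue' ends up with the higher estimated value
```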
ALICE
(1994)
The MARMOT method, developed at the Fraunhofer Institute for Experimental Software Engineering (IESE), describes an approach for the component-based development of embedded systems. It builds on the KobrA method, also developed at IESE, and extends it with specific requirements of embedded systems. The idea is to model, implement, and test individual components so that, later on, existing quality-assured components can be reused and composed into applications without having to be developed and tested again each time. In this project, an anti-collision system for a model car was to be developed using the MARMOT method. After selecting suitable hardware, a basic concept for the sensors was developed first. The signals delivered by the RADAR sensor used have to be conditioned before a microcontroller can process them; prior to the actual system modelling, a sensor board therefore had to be developed for this purpose. This was followed by modelling the anti-collision system in UML 2.0 and implementing it in C. Finally, the interplay of hardware and software was tested.
Planar force or pressure is a fundamental physical aspect of any people-vs-people and people-vs-environment activities and interactions. It is as significant as the more established linear and angular acceleration (usually acquired by inertial measurement units). There have been several studies involving planar pressure in the discipline of activity recognition, as reviewed in the first chapter. These studies have shown that planar pressure is a promising sensing modality for activity recognition. However, it still plays a niche part in the discipline as a whole, relying on ad hoc systems and data analysis methods, and most of these studies were not followed by further elaborative work. The situation calls for a general framework that can help push planar pressure sensing into the mainstream.
This dissertation systematically investigates using planar pressure distribution sensing technology for ubiquitous and wearable activity recognition purposes. We propose a generic Textile Pressure Mapping (TPM) Framework, which encapsulates (1) design knowledge and guidelines, (2) a multi-layered tool including hardware, software, and algorithms, and (3) an ensemble of empirical study examples. Through validation with various empirical studies, the unified TPM framework covers the full scope of activity recognition, including the ambient, object, and wearable subspaces.
The hardware part constructs a general architecture, with separate implementations in the large-scale and mobile directions. The software toolkit consists of four heterogeneous tiers: driver, data processing, machine learning, and visualization/feedback. The algorithm chapter describes generic data-processing techniques and a unified TPM feature set. The TPM framework offers a universal solution for other researchers and developers to evaluate the TPM sensing modality in their application scenarios.
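As a minimal sketch of the data-processing and machine-learning tiers, the following reduces one pressure frame to a tiny feature vector (total force and centre of pressure) and classifies it with a nearest-centroid rule. The features, classes, and centroids are illustrative assumptions, not the thesis' actual TPM feature set.

```python
# A pressure frame (2-D matrix) is reduced to a small feature vector,
# which a classifier maps to an activity. Features and classes are
# illustrative assumptions.

def frame_features(frame):
    """Simple features of one pressure frame: total force and centre of pressure."""
    total = sum(sum(row) for row in frame) or 1e-9
    cx = sum(x * v for row in frame for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(frame) for v in row) / total
    return [total, cx, cy]

def nearest_centroid(features, centroids):
    """Classify by the closest class centroid in feature space."""
    def d(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: d(features, centroids[label]))

# Hypothetical centroids learned from labelled recordings.
centroids = {"standing": [50.0, 1.5, 1.5], "walking": [20.0, 2.5, 0.5]}
frame = [[0, 5, 0], [5, 30, 5], [0, 5, 0]]
print(nearest_centroid(frame_features(frame), centroids))  # "standing"
```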
The significant findings from the empirical studies show that TPM is a versatile sensing modality. Specifically, in the ambient subspace, a sports mat or carpet with TPM sensors embedded underneath can distinguish different sports activities or different people's gait based on the dynamic change of the body-print; a pressure-sensitive tablecloth can detect various dining actions by the force propagated from the cutlery through the plates to the tabletop. In the object subspace, swirl office chairs with TPM sensors under the cover can be used to detect the sitter's posture in real time; TPM can be used to detect emotion-related touch interactions for smart objects, toys, or robots. In the wearable subspace, TPM sensors can be used to perform pressure-based mechanomyography to detect muscle and body movement; they can also be tailored to cover the surface of a soccer shoe to distinguish different kicking angles and intensities.
All the empirical evaluations have resulted in accuracies well above the chance level for the corresponding number of classes; e.g., the 'swirl chair' study has a classification accuracy of 79.5% across 10 posture classes, and in the 'soccer shoe' study the accuracy is 98.8% among 17 combinations of angle and intensity.
Nowadays, accounting, charging, and billing of users' network resource consumption are commonly used to facilitate reasonable network usage, control congestion, allocate cost, gain revenue, etc. In traditional IP traffic accounting systems, IP addresses are used to identify the corresponding consumers of the network resources. However, there are situations in which IP addresses cannot be used to identify users uniquely, for example in multi-user systems. In these cases, network resource consumption can only be ascribed to the owners of these hosts instead of the real users who consumed the network resources. Accurate accountability in these systems is therefore practically impossible. This is a flaw of the traditional IP address based IP traffic accounting technique. This dissertation proposes a user based IP traffic accounting model which facilitates collecting network resource usage information on the basis of users. With user based IP traffic accounting, IP traffic can be distinguished not only by IP addresses but also by users. Three different schemes that realize the user based IP traffic accounting mechanism are discussed in detail. The in-band scheme utilizes the IP header to convey the user information of the corresponding IP packet. The Accounting Agent residing in the measured host intercepts IP packets passing through it, identifies the users of these packets, and inserts user information into them. With this mechanism, a meter located at a key position in the network can intercept the IP packets tagged with user information and extract not only statistical information but also IP addresses and user information from the packets to generate accounting records with user information. The out-of-band scheme is the counterpart of the in-band scheme. It also uses an Accounting Agent to intercept IP packets and identify the users of IP traffic; however, the user information is transferred through a separate channel, independent of the transmission of the corresponding IP packets. The Multi-IP scheme provides a different solution for identifying the users of IP traffic: it assigns each user in a measured host a unique IP address, so that an IP address identifies a user uniquely and without ambiguity. This way, traditional IP address based accounting techniques can be applied to achieve the goal of user based IP traffic accounting. This dissertation also introduces a user based IP traffic accounting prototype system developed according to the out-of-band scheme, and discusses the application of the user based IP traffic accounting model in distributed computing environments.
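The gain of the model can be shown in a few lines: once packets carry user information (in-band, or matched via the separate channel), the meter can aggregate records by (IP address, user) instead of by IP address alone. The packet fields below are hypothetical.

```python
from collections import defaultdict

# Meter side of user based IP traffic accounting: accounting records are
# keyed by (IP address, user) instead of IP address alone. Packet fields
# here are hypothetical.

def account(packets):
    usage = defaultdict(int)
    for pkt in packets:
        usage[(pkt["src_ip"], pkt["user"])] += pkt["bytes"]
    return dict(usage)

packets = [
    {"src_ip": "10.0.0.5", "user": "alice", "bytes": 1200},
    {"src_ip": "10.0.0.5", "user": "bob",   "bytes": 400},
    {"src_ip": "10.0.0.5", "user": "alice", "bytes": 800},
]
# Two users on the same multi-user host are now distinguishable:
print(account(packets))  # {('10.0.0.5', 'alice'): 2000, ('10.0.0.5', 'bob'): 400}
```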
Conditional Compilation (CC) is frequently used as a variation mechanism in software product lines (SPLs). However, as an SPL evolves, the variable code realized by CC erodes in the sense that it becomes overly complex and difficult to understand and maintain. As a result, SPL productivity goes down, putting the expected advantages more and more at risk. To investigate this variability erosion and keep productivity above a sufficiently good level, in this paper we 1) investigate several erosion symptoms in an industrial SPL, and 2) present a variability improvement process that includes two major improvement strategies. While one strategy is to optimize variable code within the scope of CC, the other is to transition CC to a new variation mechanism called Parameterized Inclusion. Both improvement strategies can be conducted automatically, and the result of the CC optimization is provided. Related issues such as the applicability and cost of the improvement are also discussed.
As a Software Product Line (SPL) evolves with an increasing number of features and feature values, the feature correlations become extremely intricate, and the specifications of these correlations tend to be either incomplete or inconsistent with their realizations, causing misconfigurations in practice. In order to guide product configuration processes, we present a solution framework to recover complex feature correlations from existing product configurations. These correlations are further pruned automatically and validated by domain experts. For the implementation, we use association mining techniques to automatically extract strong association rules as potential feature correlations. The approach is evaluated on a large-scale industrial SPL in the embedded system domain, where it identifies a large number of complex feature correlations.
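A toy version of the extraction step might look as follows: enumerate pairwise rules over observed configurations and keep those whose support and confidence exceed thresholds. Feature names and thresholds are invented; the paper applies full association mining at industrial scale.

```python
from itertools import combinations

# Recover candidate feature correlations from existing product
# configurations via simple association rules (support / confidence).

def strong_rules(configs, min_support=0.5, min_confidence=0.9):
    n = len(configs)
    features = sorted({f for c in configs for f in c})
    rules = []
    for a, b in combinations(features, 2):
        for lhs, rhs in ((a, b), (b, a)):
            with_lhs = [c for c in configs if lhs in c]
            both = [c for c in with_lhs if rhs in c]
            support = len(both) / n
            confidence = len(both) / len(with_lhs) if with_lhs else 0.0
            if support >= min_support and confidence >= min_confidence:
                rules.append((lhs, rhs))  # candidate correlation: lhs -> rhs
    return rules

configs = [
    {"ABS", "ESP", "Radio"},
    {"ABS", "ESP"},
    {"ABS", "ESP", "Navi"},
    {"Radio"},
]
print(strong_rules(configs))  # [('ABS', 'ESP'), ('ESP', 'ABS')]
```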
Automata theory has given rise to a variety of automata models that consist of a finite-state control and an infinite-state storage mechanism. The aim of this work is to provide insights into how the structure of the storage mechanism influences the expressiveness and the analyzability of the resulting model. To this end, it presents generalizations of results about individual storage mechanisms to larger classes. These generalizations characterize those storage mechanisms for which the given result remains true and those for which it fails.

In order to speak of classes of storage mechanisms, we need an overarching framework that accommodates each of the concrete storage mechanisms we wish to address. Such a framework is provided by the model of valence automata, in which the storage mechanism is represented by a monoid. Since the monoid serves as a parameter for specifying the storage mechanism, our aim translates into the question: For which monoids does the given (automata-theoretic) result hold?

As a first result, we present an algebraic characterization of those monoids over which valence automata accept only regular languages. In addition, it turns out that for each monoid, this is the case if and only if valence grammars, an analogous grammar model, can generate only context-free languages.

Furthermore, we are concerned with closure properties: We study which monoids result in a Boolean closed language class. For every language class that is closed under rational transductions (in particular, those induced by valence automata), we show: If the class is Boolean closed and contains any non-regular language, then it already includes the whole arithmetical hierarchy.

This work also introduces the class of graph monoids, which are defined by finite graphs. By choosing appropriate graphs, one can realize a number of prominent storage mechanisms, but also combinations and variants thereof. Examples are pushdowns, counters, and Turing tapes. We can therefore relate the structure of the graphs to computational properties of the resulting storage mechanisms.

In the case of graph monoids, we study (i) the decidability of the emptiness problem, (ii) which storage mechanisms guarantee semilinear Parikh images, (iii) when silent transitions (i.e. those that read no input) can be avoided, and (iv) which storage mechanisms permit the computation of downward closures.
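To make the monoid parameter concrete, the sketch below implements a valence automaton over the monoid (Z, +): each transition adds an integer to the storage value, and a run accepts in a final state with the identity element 0. This instance is a blind one-counter automaton and accepts the non-regular language a^n b^n; the states and transitions are illustrative.

```python
# Valence automaton over the monoid (Z, +): transitions compose monoid
# elements (here: integer addition) onto the storage value; a run accepts
# in a final state with the identity 0. This instance realizes a blind
# one-counter automaton for the non-regular language a^n b^n.

def accepts(word, transitions, start, finals):
    """transitions: (state, symbol) -> (next_state, monoid_element)."""
    state, value = start, 0            # 0 is the identity of (Z, +)
    for symbol in word:
        if (state, symbol) not in transitions:
            return False
        state, delta = transitions[(state, symbol)]
        value += delta                  # compose storage with the new element
    return state in finals and value == 0

# a^n b^n: add +1 on 'a' in state q0, add -1 on 'b' in state q1.
transitions = {
    ("q0", "a"): ("q0", +1),
    ("q0", "b"): ("q1", -1),
    ("q1", "b"): ("q1", -1),
}
for w in ["aabb", "aab", "abab", ""]:
    print(repr(w), accepts(w, transitions, "q0", {"q0", "q1"}))
# 'aabb' True, 'aab' False, 'abab' False, '' True
```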
Today, information systems are often distributed to achieve high availability and low latency. These systems can be realized by building on a highly available database to manage the distribution of data. However, it is well known that high availability and low latency are not compatible with strong consistency guarantees. For application developers, the lack of strong consistency on the database layer can make it difficult to reason about their programs and ensure that applications work as intended.

We address this problem from the perspective of formal verification. We present a specification technique which allows specifying functional properties of the application. In addition to data invariants, we support history properties. These let us express relations between events, including invocations of the application API and operations on the database.

To address the verification problem, we have developed a proof technique that handles concurrency using invariants and thereby reduces the problem to sequential verification. The underlying system semantics, the technique, and its soundness proof are all formalized in the interactive theorem prover Isabelle/HOL.

Additionally, we have developed a tool named Repliss which uses the proof technique to enable partially automated verification and testing of applications. For verification, Repliss generates verification conditions via symbolic execution and then uses an SMT solver to discharge them.
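The general discharge pattern (not Repliss' actual output) can be shown with the z3 Python bindings: a verification condition is valid exactly when its negation is unsatisfiable. The condition below is a toy example.

```python
from z3 import Int, Implies, Not, Solver, unsat

# Discharging a verification condition (VC) with an SMT solver via the
# z3 Python bindings. Toy VC: if x >= 0 then x + 1 > 0.
x = Int("x")
vc = Implies(x >= 0, x + 1 > 0)

s = Solver()
s.add(Not(vc))            # the VC is valid iff its negation is unsatisfiable
if s.check() == unsat:
    print("verification condition holds")
else:
    print("counterexample:", s.model())
```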
Wireless LANs operating within unlicensed frequency bands require random access schemes such as CSMA/CA, so that wireless networks from different administrative domains (for example wireless community networks) may co-exist without central coordination, even when they happen to operate on the same radio channel. Yet, it is evident that this lack of coordination leads to an inevitable loss in efficiency due to contention on the MAC layer. The interesting question is which efficiency may be gained by adding coordination to existing, unrelated wireless networks, for example by self-organization. In this paper, we present a methodology based on a mathematical programming formulation to determine the parameters (assignment of stations to access points, signal strengths, and channel assignment of both access points and stations) for a scenario of co-existing CSMA/CA-based wireless networks, such that the contention between these networks is minimized. We demonstrate how it is possible to solve this discrete, non-linear optimization problem exactly for small problems. For larger scenarios, we present a genetic algorithm specifically tuned for finding near-optimal solutions, and compare its results to theoretical lower bounds. Overall, we provide a benchmark on the minimum contention problem for coordination mechanisms in CSMA/CA-based wireless networks.
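The sketch below shows a genetic algorithm for one slice of this problem: assigning channels to access points so that APs within interference range avoid sharing a channel. The full formulation also covers station assignment and signal strengths; the topology and GA parameters are hypothetical.

```python
import random

# Genetic algorithm for channel assignment minimizing MAC-layer contention.
# Topology and parameters are hypothetical.

NEIGHBOURS = [(0, 1), (1, 2), (2, 3), (0, 2)]   # AP pairs within interference range
CHANNELS = [1, 6, 11]
N_APS = 4

def contention(assignment):
    """Number of neighbouring AP pairs forced onto the same channel."""
    return sum(assignment[a] == assignment[b] for a, b in NEIGHBOURS)

def evolve(pop_size=30, generations=100, mutation=0.2):
    pop = [[random.choice(CHANNELS) for _ in range(N_APS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=contention)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, N_APS)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if random.random() < mutation:        # random channel flip
                child[random.randrange(N_APS)] = random.choice(CHANNELS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=contention)

random.seed(0)
best = evolve()
print(best, "contention:", contention(best))  # typically contention: 0
```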
With the technological advancement in the field of robotics, it is now quite reasonable to expect social robots to be a part of humans' daily life in the next decades. Concerning HRI, the basic expectation of a social robot is to perceive words, emotions, and behaviours, in order to draw conclusions and adapt its behaviour to realize natural HRI. Hence, the assessment of human personality traits is essential to create a sense of appeal and acceptance towards the robot during interaction.

Knowledge of human personality is highly relevant for natural and efficient HRI. The idea is taken from human behaviourism: humans behave differently based on the personality traits of their communication partners. This thesis contributes to the development of a personality trait assessment system for intelligent human-robot interaction.

The personality trait assessment system is organized in three separate levels. The first level, known as the perceptual level, is responsible for enabling the robot to perceive, recognize, and understand human actions in the surrounding environment in order to make sense of the situation. Using psychological concepts and theories, several percepts have been extracted, and a study has been conducted to validate the significance of these percepts for personality traits.

The second level, known as the affective level, helps the robot connect the knowledge acquired at the first level to make higher-order evaluations such as the assessment of human personality traits. The affective system of the robot is responsible for analysing human personality traits. To the best of our knowledge, this thesis is the first work in the field of human-robot interaction that presents an automatic assessment of human personality traits in real time using visual information. Drawing on psychology and cognitive studies, many theories have been examined, and two have been used to build the personality trait assessment system: Big Five personality trait assessment and the temperament framework for personality trait assessment.

By using the information from the perceptual and affective levels, the last level, known as the behavioural level, enables the robot to synthesize an appropriate behaviour adapted to human personality traits. Multiple experiments have been conducted with different scenarios. It has been shown that the robot, ROBIN, assesses personality traits correctly during interaction and uses the similarity-attraction principle to behave with a similar personality type. For example, if the person turns out to be an extrovert, the robot also behaves like an extrovert. However, it also uses the complementary-attraction theory to adapt its behaviour and complement the personality of the interaction partner. For example, if the person turns out to be self-centred, the robot behaves agreeably in order to let the human-robot interaction flourish.
The paper focuses on the problem of trajectory planning of flexible redundant robot manipulators (FRM) in joint space. Compared to irredundant flexible manipulators, FRMs present additional possibilities in trajectory planning due to their kinematics redundancy. A trajectory planning method to minimize vibration of FRMs is presented based on Genetic Algorithms (GAs). Kinematics redundancy is integrated into the presented method as a planning variable. Quadrinomial and quintic polynomials are used to describe the segments which connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as a problem of optimization with constraints. A planar FRM with three flexible links is used in simulation. A case study shows that the method is applicable.
Point-to-Point Trajectory Planning of Flexible Redundant Robot Manipulators Using Genetic Algorithms
(2001)
The paper focuses on the problem of point-to-point trajectory planning for flexible redundant robot manipulators (FRM) in joint space. Compared with irredundant flexible manipulators, an FRM possesses additional possibilities during point-to-point trajectory planning due to its kinematics redundancy. A trajectory planning method to minimize vibration and/or execution time of a point-to-point motion is presented for FRMs based on Genetic Algorithms (GAs). Kinematics redundancy is integrated into the presented method as planning variables. Quadrinomial and quintic polynomials are used to describe the segments that connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as a problem of optimization with constraints. A planar FRM with three flexible links is used in simulation. Case studies show that the method is applicable.
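As a small illustration of how such polynomial segments can be evaluated, the sketch below implements a generic rest-to-rest quintic in normalized time; the symbols and boundary conditions are generic, not the paper's exact parametrization:

```python
import numpy as np

def quintic_rest_to_rest(q0, qf, T, samples=100):
    """Quintic joint-space segment with zero velocity and acceleration
    at both ends: q(t) = q0 + (qf - q0) * (10 tau^3 - 15 tau^4 + 6 tau^5),
    where tau = t / T is normalized time."""
    t = np.linspace(0.0, T, samples)
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return t, q0 + (qf - q0) * s

t, q = quintic_rest_to_rest(q0=0.0, qf=1.2, T=2.0)
print(q[0], q[-1])   # 0.0 and 1.2: both endpoints are hit exactly
```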
The vibration induced in a deformable object during automatic handling by robot manipulators can often be bothersome. This paper presents a force/torque sensor-based method for handling deformable linear objects (DLOs) in a manner suited to eliminating acute vibration. An adjustment-motion that can be attached to the end of an arbitrary end-effector trajectory is employed to eliminate vibration of deformable objects. Unlike model-based methods, the presented sensor-based method does not use any information from previous motions. The adjustment-motion is generated automatically by analyzing data from a force/torque sensor mounted on the robot wrist. A template matching technique is used to find the matching point between the vibrational signal of the DLO and a template. Experiments are conducted to test the new method under various conditions. Results demonstrate the effectiveness of the sensor-based adjustment-motion.
Manipulating Deformable Linear Objects: Attachable Adjustment-Motions for Vibration Reduction
(2001)
This paper addresses the problem of handling deformable linear objects (DLOs) in a suitable way to avoid acute vibration. Different types of adjustment-motions that eliminate vibration of deformable objects and can be attached to the end of an arbitrary end-effector trajectory are presented. For describing the dynamics of deformable linear objects, the finite element method is used to derive the dynamic differential equations. A genetic algorithm is used to find the optimal adjustment-motion for each simulation example. Experiments are conducted to verify the presented manipulation method.
Manipulating Deformable Linear Objects: Model-Based Adjustment-Motion for Vibration Reduction
(2001)
This paper addresses the problem of handling deformable linear objects (DLOs) in a suitable way to avoid acute vibration. An adjustment-motion that eliminates vibration of DLOs and can be attached to the end of an arbitrary end-effector trajectory is presented, based on the concept of open-loop control. The presented adjustment-motion is a kind of agile end-effector motion with limited scope. To describe the dynamics of deformable linear objects, the finite element method is used to derive the dynamic differential equations. A genetic algorithm is used to find the optimal adjustment-motion for each simulation example. In contrast to previous approaches, the presented method can be treated as one of the manipulation skills and can be applied to different cases without major changes to the method.
It is difficult for robots to handle a vibrating deformable object. Even for human beings it is a high-risk operation, for example, to insert a vibrating linear object into a small hole. However, fast manipulation using a robot arm is not just a dream; it may be achieved if some important features of the vibration are detected online. In this paper, we present an approach to fast manipulation using a force/torque sensor mounted on the robot's wrist. A template matching method is employed to recognize the vibrational phase of the deformable object. Fast manipulation can thus be performed with a high success rate, even under acute vibration. Experiments inserting a deformable object into a hole are conducted to test the presented method. Results demonstrate that the presented sensor-based online fast manipulation is feasible.
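A minimal sketch of the template matching step on a sampled force/torque signal (normalized cross-correlation; the data layout and normalization details are assumptions, not the papers' implementation):

```python
import numpy as np

def match_phase(signal, template):
    """Return the offset where the template best matches the sampled
    force/torque signal, using normalized cross-correlation.
    Assumes len(signal) >= len(template)."""
    s = signal - signal.mean()
    tpl = template - template.mean()
    corr = np.correlate(s, tpl, mode="valid")
    # windowed signal energy, aligned with the valid correlation lags
    energy = np.convolve(s**2, np.ones(len(tpl)), mode="valid")
    norm = np.sqrt(np.sum(tpl**2)) * np.sqrt(energy)
    return int(np.argmax(corr / (norm + 1e-12)))

t = np.linspace(0, 1, 500)
sig = np.sin(40 * t) * np.exp(-t)        # decaying oscillation, as from a DLO
print(match_phase(sig, sig[100:150]))    # ~100: the template's true offset
```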
The safety of embedded systems is becoming more and more important. Fault Tree Analysis (FTA) is a widely used technique for analyzing the safety of embedded systems. A standardized tree-like structure called a Fault Tree (FT) models the failures of the system. The Component Fault Tree (CFT) provides an advanced modeling concept that adapts traditional FTs to the hierarchical architecture model used in system design. Minimal Cut Set (MCS) analysis is a method for qualitative analysis based on FTs. Each MCS represents a minimal combination of component failures, called basic events, which may together cause the top-level system failure. Ordinary representations of MCSs consist of plain text and data tables with little additional supporting visual and interactive information. Importance analysis based on FTs or CFTs estimates the contribution of each potential basic event to a top-level system failure. The resulting importance values of basic events are typically presented in summary views, e.g., data tables and histograms, with little visual integration between these forms and the FT (or CFT) structure. The safety of a system can be improved in an iterative process, called the safety improvement process, based on FTs and taking relevant constraints, e.g., cost, into account. Typically, the data relevant to the safety improvement process are spread across multiple views with few interactive associations. In short, the ordinary representation concepts cannot effectively support these analyses.
We propose a set of visualization approaches to address the issues mentioned above and facilitate those analyses in terms of representation.
Contributions:
1. To support the MCS analysis, we propose a matrix-based visualization that allows detailed data on the MCSs of interest to be viewed while maintaining a satisfactory overview of a large number of MCSs, supporting effective navigation and pattern analysis. Engineers can also intuitively analyze the influence of the MCSs of a CFT.
2. To facilitate the importance analysis based on the CFT, we propose a hybrid visualization approach that combines icicle-layout-style architectural views with the CFT structure. This approach helps to identify vulnerable components while taking the hierarchies of the system architecture into account, and to investigate the logical failure propagation of the important basic events.
3. We propose a visual safety improvement process that integrates an enhanced decision tree with a scatter plot. This approach allows one to visually investigate the detailed data related to individual steps of the process while maintaining an overview of the whole process, and it facilitates constructing and analyzing solutions for improving the safety of a system.
Using our visualization approaches, the MCS analysis, the importance analysis, and the safety improvement process based on the CFT can all be carried out more effectively.
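For readers unfamiliar with MCS analysis itself (the object these visualizations present), a toy computation is sketched below; the tree, the event names, and the tuple encoding are invented for the example:

```python
from itertools import product

def cut_sets(gate):
    """Expand a fault tree into cut sets. A node is either a basic
    event (a string) or a tuple ('AND'|'OR', child, child, ...)."""
    if isinstance(gate, str):
        return [{gate}]
    op, *children = gate
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":
        return [cs for sets in child_sets for cs in sets]
    # AND: every combination of one cut set per child, merged
    return [set().union(*combo) for combo in product(*child_sets)]

def minimize(sets):
    """Keep only minimal cut sets: drop duplicates and supersets."""
    uniq = [set(s) for s in {frozenset(s) for s in sets}]
    return [s for s in uniq if not any(o < s for o in uniq)]

# Hypothetical top event: (pump fails AND valve fails) OR power fails
ft = ("OR", ("AND", "pump", "valve"), "power")
for cs in minimize(cut_sets(ft)):
    print(sorted(cs))    # ['pump', 'valve'] and ['power']
```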
This paper discusses the problem of automatic off-line programming and motion planning for industrial robots. First, a new concept consisting of three steps is proposed. In the first step, a new method for on-line motion planning is introduced. The motion planning method is based on the A*-search algorithm and works in the implicit configuration space. During the search, collisions are detected in the explicitly represented Cartesian workspace by hierarchical distance computation. In the second step, the trajectory planner transforms the path into a time- and energy-optimal robot program. The practical application of these two steps strongly depends on a method for high-accuracy robot calibration, mapping the virtual world onto the real world, which is discussed in the third step.
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discretized configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretization, the new approach usually shows linear speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
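A minimal sketch of the cyclic mapping idea (the counts are invented; the real planner distributes hypercubes of the 6D configuration space over the cluster nodes):

```python
from collections import Counter

def cyclic_map(num_hypercubes, num_workers):
    """Cyclically map hypercube indices onto processing units.
    Neighboring cubes, which tend to be explored together during the
    search, land on different workers, spreading the load evenly."""
    return {cube: cube % num_workers for cube in range(num_hypercubes)}

mapping = cyclic_map(1000, 9)
print(Counter(mapping.values()))   # near-equal share per worker
```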
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application programming interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture which can easily be adapted to any industrial kinematics and application. Based on a client/server principle, the control unit establishes an open user interface for including application-specific programs. At the bottom level, we propose a flexible and modular concept for integrating the distributed motion control units based on the CAN bus. The concept allows on-line adaptation of the control parameters according to the robot's configuration. This yields high accuracy in path execution and improves the overall system performance.
A new problem in the automated off-line programming of industrial robot applications is investigated. Multi-Goal Path Planning is the task of finding a collision-free path connecting a set of goal poses while minimizing, e.g., the total path length. Our solution is based on an earlier reported path planner for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. To control the path planner, four different goal selection methods are introduced and compared. While the Random and Nearest Pair selection methods can be used with any path planner, the Nearest Goal and Adaptive Pair selection methods are favorable for our planner. With the latter two goal selection methods, the Multi-Goal Path Planning task can be significantly accelerated, because they automatically solve the simplest path planning problems first. In summary, compared to Random or Nearest Pair selection, the new Multi-Goal Path Planning approach further reduces the cost of the programming phase.
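As a rough illustration of the Nearest Goal idea, the greedy ordering below repeatedly plans to the closest remaining goal; the real method is interleaved with the path planner and works on robot configurations rather than 2D points:

```python
import math

def nearest_goal_order(start, goals):
    """Greedy Nearest Goal selection: repeatedly pick the goal closest
    (here: Euclidean distance) to the current pose."""
    order, current, remaining = [], start, list(goals)
    while remaining:
        nxt = min(remaining, key=lambda g: math.dist(current, g))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order

goals = [(2.0, 1.0), (0.5, 0.5), (3.0, 3.0)]
print(nearest_goal_order((0.0, 0.0), goals))
```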
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discretized configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretization, the new approach usually shows linear, and sometimes even superlinear, speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
Motion planning for industrial robots is a necessary prerequisite for autonomous systems to move through their environment without collisions. Taking dynamic obstacles into account at runtime, however, requires powerful algorithms to solve this task in real time. One way to accelerate these algorithms is the efficient use of scalable parallel processing. The software implementation can only succeed, though, if a parallel computer is available that offers high data throughput at low latency. Moreover, such a parallel computer must be operable with reasonable effort and offer a good price-performance ratio so that parallel processing gains wider acceptance in industry. This article presents a workstation cluster based on nine standard PCs interconnected via a special communication card. The individual sections describe the experience gathered during setup, system administration, and application. As an example of an application on this cluster, a parallel motion planner for industrial robots is described.
This article presents contributions in the field of path planning for industrial robots with 6 degrees of freedom. It summarizes the results of our research over the last 4 years at the Institute for Process Control and Robotics at the University of Karlsruhe. The presented path planning approach works in an implicit and discretized C-space. Collisions are detected in the Cartesian workspace by hierarchical distance computation. The method is based on the A* search algorithm and needs no essential off-line computation. A new optimal discretization method leads to smaller search spaces, thus speeding up planning. For further acceleration, the search was parallelized. With a static load distribution, good speedups can be achieved. By extending the algorithm to a bidirectional search, the planner is able to automatically select the easier search direction. The new dynamic switching of start and goal finally leads to multi-goal path planning, which computes a collision-free path between a set of goal poses (e.g., spot welding points) while minimizing the total path length.
Applications of Efficient Methods in Automation - Universität Karlsruhe at the SPS97 in Nürnberg -
(1998)
On the Complexity of the Uncapacitated Single Allocation p-Hub Median Problem with Equal Weights
(2007)
The Super-Peer Selection Problem is an optimization problem in network topology construction. It may be cast as a special case of a Hub Location Problem, more exactly an Uncapacitated Single Allocation p-Hub Median Problem with equal weights. We show that this problem is still NP-hard by reduction from Max Clique.
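For orientation, the objective of the uncapacitated single allocation p-hub median problem with equal (unit) weights can be stated as below; the notation is generic (c: distance matrix, a(i): the hub allocated to node i, alpha: inter-hub discount factor), and collection/distribution discount factors are omitted, i.e., set to 1:

```latex
\[
\min_{\substack{H \subseteq V,\ |H| = p \\ a : V \to H}}
\;\sum_{i \in V} \sum_{j \in V}
\bigl( c_{i,a(i)} + \alpha\, c_{a(i),a(j)} + c_{a(j),j} \bigr)
\]
```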
We present a convenient notation for positive/negative-conditional equations. The idea is to merge rules specifying the same function by using case-, if-, match-, and let-expressions. Based on the presented macro-rule-construct, positive/negative-conditional equational specifications can be written on a higher level. A rewrite system translates the macro-rule-constructs into positive/negative-conditional equations.
We present an inference system for clausal theorem proving w.r.t. various kinds of inductive validity in theories specified by constructor-based positive/negative-conditional equations. The reduction relation defined by such equations has to be (ground) confluent, but need not be terminating. Our constructor-based approach is well-suited for inductive theorem proving in the presence of partially defined functions. The proposed inference system provides explicit induction hypotheses and can be instantiated with various wellfounded induction orderings. While emphasizing a well-structured, clear design of the inference system, our fundamental design goal is user-orientation and practical usefulness rather than theoretical elegance. The resulting inference system is comprehensive and relatively powerful, but requires a sophisticated concept of proof guidance, which is not treated in this paper. This research was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (Project D4).
We study the combination of the following already known ideas for showing confluence of unconditional or conditional term rewriting systems into practically more useful confluence criteria for conditional systems: our syntactic separation into constructor and non-constructor symbols; Huet's introduction and Toyama's generalization of parallel closedness for non-noetherian unconditional systems; the use of shallow confluence for proving confluence of noetherian and non-noetherian conditional systems; the idea that certain kinds of limited confluence can be assumed for checking the fulfilledness or infeasibility of the conditions of conditional critical pairs; and the idea that (when termination is given) only prime superpositions have to be considered and certain normalization restrictions can be applied for the substitutions fulfilling the conditions of conditional critical pairs. Besides combining and improving already known methods, we present the following new ideas and results: we strengthen the criterion for overlay joinable noetherian systems, and, by using the expressiveness of our syntactic separation into constructor and non-constructor symbols, we are able to present criteria for level confluence that are not actually criteria for shallow confluence, and also to weaken the severe requirement of normality (stiffened with left-linearity) in the criteria for shallow confluence of noetherian and non-noetherian conditional systems to the easily satisfied requirement of quasi-normality. Finally, the whole paper also gives a practically useful overview of the syntactic means for showing confluence of conditional term rewriting systems.
Modern digital imaging technologies, such as digital microscopy or micro-computed tomography, deliver such large amounts of 2D and 3D-image data that manual processing becomes infeasible. This leads to a need for robust, flexible and automatic image analysis tools in areas such as histology or materials science, where microstructures are being investigated (e.g. cells, fiber systems). General-purpose image processing methods can be used to analyze such microstructures. These methods usually rely on segmentation, i.e., a separation of areas of interest in digital images. As image segmentation algorithms rarely adapt well to changes in the imaging system or to different analysis problems, there is a demand for solutions that can easily be modified to analyze different microstructures, and that are more accurate than existing ones. To address these challenges, this thesis contributes a novel statistical model for objects in images and novel algorithms for the image-based analysis of microstructures.

The first contribution is a novel statistical model for the locations of objects (e.g. tumor cells) in images. This model is fully trainable and can therefore be easily adapted to many different image analysis tasks, which is demonstrated by examples from histology and materials science. Using algorithms for fitting this statistical model to images results in a method for locating multiple objects in images that is more accurate and more robust to noise and background clutter than standard methods. On simulated data at high noise levels (peak signal-to-noise ratio below 10 dB), this method achieves detection rates up to 10% above those of a watershed-based alternative algorithm.

While objects like tumor cells can be described well by their coordinates in the plane, the analysis of fiber systems in composite materials, for instance, requires a fully three-dimensional treatment. Therefore, the second contribution of this thesis is a novel algorithm to determine the local fiber orientation in micro-tomographic reconstructions of fiber-reinforced polymers and other fibrous materials. Using simulated data, it will be demonstrated that the local orientations obtained from this novel method are more robust to noise and fiber overlap than those computed using an established alternative gradient-based algorithm, both in 2D and 3D. The property of robustness to noise of the proposed algorithm can be explained by the fact that a low-pass filter is used to detect local orientations. But even in the absence of noise, depending on fiber curvature and density, the average local 3D-orientation estimate can be about 9° more accurate compared to that alternative gradient-based method.

Implementations of that novel orientation estimation method require repeated image filtering using anisotropic Gaussian convolution filters. These filter operations, which other authors have used for adaptive image smoothing, are computationally expensive when using standard implementations. Therefore, the third contribution of this thesis is a novel optimal non-orthogonal separation of the anisotropic Gaussian convolution kernel. This result generalizes a previous one reported elsewhere, and allows for efficient implementations of the corresponding convolution operation in any dimension. In 2D and 3D, these implementations achieve an average performance gain by factors of 3.8 and 3.5, respectively, compared to a fast Fourier transform-based implementation.
The contributions made by this thesis represent improvements over state-of-the-art methods, especially in the 2D-analysis of cells in histological resections, and in the 2D and 3D-analysis of fibrous materials.
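To make the filtering step concrete: the sketch below approximates an oriented anisotropic Gaussian by rotating the image, applying an axis-aligned separable Gaussian, and rotating back. This is only a naive reference implementation for comparison; the thesis's non-orthogonal separation achieves oriented smoothing without the costly resampling, and the function names and parameters here are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def oriented_gaussian_response(img, sigma_long, sigma_short, angle_deg):
    """Approximate an anisotropic Gaussian filter oriented at angle_deg:
    rotate the image, smooth with an axis-aligned separable Gaussian
    (different sigma per axis), and rotate back."""
    r = rotate(img, angle_deg, reshape=False, mode="nearest")
    f = gaussian_filter(r, sigma=(sigma_short, sigma_long))
    return rotate(f, -angle_deg, reshape=False, mode="nearest")

img = np.random.rand(128, 128)
resp = oriented_gaussian_response(img, sigma_long=6.0,
                                  sigma_short=1.5, angle_deg=30.0)
print(resp.shape)
```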
W-Lisp Language Description
(1993)
W-Lisp [Wippermann 91] is a language used in the implementation of higher-level programming languages. Its application is not restricted to this area. Good readability of the W-Lisp notation is achieved through numerous borrowings from the well-known imperative languages. W-Lisp programs can be executed within a Common Lisp system. All Lisp functions (incl. MCS) can be used in W-Lisp notation, so that the power of Common Lisp [Steele 90] is, in this respect, also available in W-Lisp.
In this article, we discuss requirements from credit scoring and how they can be met using case-based reasoning techniques. Within a general approach to case-based system development, a learning method for optimizing decision costs is described in detail. This method is evaluated empirically, on the basis of real customer data, with the case-based development tool INRECA. Finally, we present the prerequisites for deploying case-based systems for credit scoring and discuss their usefulness.
INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis, and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for its intended kinds of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.
One way to realize planning in planning systems is case-based planning. Simplified, this means solving new planning problems on the basis of plans already known from the planning domain. To this end, plans that solved a planning problem in the past are collected and, when a new planning problem arises, modified so that they solve the current problem. To achieve greater reusability of the known plans, a concrete problem and its solution can be transformed from the concrete planning world into a more abstract planning world by means of abstraction.
Case-based reasoning has gained increasing importance for practical use in real application domains in recent years. This paper first presents the general methodology and the various subtasks of case-based reasoning. It then discusses the characteristic properties of an application domain and describes, using the concrete task of credit scoring, the realization of a case-based approach in the financial world.
Plan abstraction is one way to reduce the search effort for finding a plan that solves a concrete problem. A concrete world with a problem statement is mapped onto an abstract world, and the abstract problem is then solved in the abstract world. Mapping the abstract solution back onto a concrete one yields a solution for the concrete problem. Since fewer operations are needed to solve the abstract problem and the abstract states and operators have a less complex description, the effort for finding a concrete solution is reduced.
Most of today's wireless communication devices operate on unlicensed bands with uncoordinated spectrum access, with the consequence that RF interference and collisions impair the overall performance of wireless networks. In the classical design of network protocols, both packets in a collision are considered lost, so channel access mechanisms attempt to avoid collisions proactively. However, with the current proliferation of wireless applications, e.g., WLANs, car-to-car networks, or the Internet of Things, this conservative approach increasingly limits the achievable network performance in practice. Instead of shunning interference, this thesis questions the notion of "harmful" interference and argues that interference, when generated in a controlled manner, can be used to increase the performance and security of wireless systems. Using results from information theory and communications engineering, we identify the causes for reception or loss of packets and apply these insights to design system architectures that benefit from interference. Because the effects of signal propagation and channel fading, receiver design and implementation, and higher layer interactions on reception performance are complex and hard to reproduce in simulations, we design and implement an experimental platform for controlled interference generation to strengthen our theoretical findings with experimental results. Following this philosophy, we introduce and evaluate system architectures that leverage interference.
First, we identify the conditions for successful reception of concurrent transmissions in wireless networks. We focus on the inherent ability of angular modulation receivers to reject interference when the power difference of the colliding signals is sufficiently large, the so-called capture effect. Because signal power fades over distance, the capture effect enables two or more sender–receiver pairs to transmit concurrently if they are positioned appropriately, in turn boosting network performance. Second, we show how to increase the security of wireless networks with a centralized network access control system (called WiFire) that selectively interferes with packets that violate a local security policy, thus effectively protecting legitimate devices from receiving such packets. WiFire's working principle is as follows: a small number of specialized infrastructure devices, the guardians, are distributed alongside a network and continuously monitor all packet transmissions in the proximity, demodulating them iteratively. This enables the guardians to access a packet's content before the packet fully arrives at the receiver. Using this knowledge, the guardians classify the packet according to a programmable security policy. If a packet is deemed malicious, e.g., because its header fields indicate an unknown client, one or more guardians emit a limited burst of interference targeting the end of the packet, with the objective of introducing bit errors into it. Established communication standards use frame check sequences to ensure that packets are received correctly; WiFire leverages this built-in behavior to prevent a receiver from processing a harmful packet at all. This paradigm of "over-the-air" protection without requiring any prior modification of client devices enables novel security services, such as the protection of devices that cannot defend themselves because their performance limitations prohibit the use of complex cryptographic protocols, or of devices that cannot be altered after deployment.
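The frame-check-sequence mechanism WiFire exploits is easy to see in miniature. The sketch below uses CRC-32 as a stand-in for a standard's FCS (IEEE 802.15.4 actually uses a 16-bit CRC): a single flipped bit near the packet end makes the checksum mismatch, so the receiver discards the frame:

```python
import binascii

payload = b"wireless-packet-payload"
fcs = binascii.crc32(payload)        # transmitter appends this checksum

# An interference burst near the end of the packet flips one bit.
corrupted = bytearray(payload)
corrupted[-1] ^= 0x01

print(binascii.crc32(payload) == fcs)           # True: frame accepted
print(binascii.crc32(bytes(corrupted)) == fcs)  # False: frame dropped
```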
This thesis makes several contributions. We introduce the first software-defined radio based experimental platform that is able to generate selective interference with the timing precision needed to evaluate the novel architectures developed in this thesis. It implements a real-time receiver for IEEE 802.15.4, giving it the ability to react to packets in a channel-aware way. Extending this system design and implementation, we introduce a security architecture that enables a remote protection of wireless clients, the wireless firewall. We augment our system with a rule checker (similar in design to Netfilter) to enable rule-based selective interference. We analyze the security properties of this architecture using physical layer modeling and validate our analysis with experiments in diverse environmental settings. Finally, we perform an analysis of concurrent transmissions. We introduce a new model that captures the physical properties correctly and show its validity with experiments, improving the state of the art in the design and analysis of cross-layer protocols for wireless networks.
Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting
method for Oracle's Java 7 runtime library. The decision for the change was based on
empirical studies showing that on average, the new algorithm is faster than the formerly
used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot
approach — an idea that was considered not promising by several theoretical studies in the
past. In this thesis, I try to find the reason for this unexpected success.
My focus is on the precise and detailed average case analysis, aiming at the flavor of
Knuth's series “The Art of Computer Programming”. In particular, I go beyond abstract
measures like counting key comparisons, and try to understand the efficiency of the
algorithms at different levels of abstraction. Whenever possible, precise expected values are
preferred to asymptotic approximations. This rigor ensures that (a) the sorting methods
discussed here are actually usable in practice and (b) that the analysis results contribute to
a sound comparison of the Quicksort variants.
Dual-Pivot Quicksort and Beyond: Analysis of Multiway Partitioning and Its Practical Potential
(2016)
Multiway Quicksort, i.e., partitioning the input in one step around several pivots, has received much attention since Java 7’s runtime library uses a new dual-pivot method that outperforms by far the old Quicksort implementation. The success of dual-pivot Quicksort is most likely due to more efficient usage of the memory hierarchy, which gives reason to believe that further improvements are possible with multiway Quicksort.
In this dissertation, I conduct a mathematical average-case analysis of multiway Quicksort including the important optimization to choose pivots from a sample of the input. I propose a parametric template algorithm that covers all practically relevant partitioning methods as special cases, and analyze this method in full generality. This allows me to analytically investigate in depth what effect the parameters of the generic Quicksort have on its performance. To model the memory-hierarchy costs, I also analyze the expected number of scanned elements, a measure for the amount of data transferred from memory that is known to also approximate the number of cache misses very well. The analysis unifies previous analyses of particular Quicksort variants under particular cost measures in one generic framework.
A main result is that multiway partitioning can reduce the number of scanned elements significantly, while it does not save many key comparisons; this explains why the earlier studies of multiway Quicksort did not find it promising. A highlight of this dissertation is the extension of the analysis to inputs with equal keys. I give the first analysis of Quicksort with pivot sampling and multiway partitioning on an input model with equal keys.
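For reference, here is a compact sketch of Yaroslavskiy-style dual-pivot partitioning, the scheme whose analysis this work generalizes (simplified: no pivot sampling, plain recursion, and Python chosen for readability rather than performance):

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """Dual-pivot Quicksort: partition around two pivots p <= q into
    three parts (< p, between, > q), then recurse on each part."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]              # the two pivots
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:                 # belongs to the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
        elif a[i] > q:               # belongs to the right part
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
            if a[i] < p:
                a[i], a[lt] = a[lt], a[i]
                lt += 1
        i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]      # move pivots to final positions
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)

data = [9, 3, 7, 1, 8, 2, 5]
dual_pivot_quicksort(data)
print(data)   # [1, 2, 3, 5, 7, 8, 9]
```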
It has been observed that for understanding the biological function of certain RNA molecules, one has to study joint secondary structures of interacting pairs of RNA. In this thesis, a new approach for predicting the joint structure is proposed and implemented. For this, we introduce the class of m-dimensional context-free grammars --- an extension of stochastic context-free grammars to multiple dimensions --- and present an Earley-style semiring parser for this class. Additionally, we develop and thoroughly discuss an implementation variant of Earley parsers tailored to efficiently handle dense grammars, which embraces the grammars used for structure prediction. A currently proposed partitioning scheme for joint secondary structures is transferred into a two-dimensional context-free grammar, which in turn is used as a stochastic model for RNA-RNA interaction. This model is trained on actual data and then used for predicting most likely joint structures for given RNA molecules. While this technique has been widely used for secondary structure prediction of single molecules, RNA-RNA interaction was hardly approached this way in the past. Although our parser has O(n^3 m^3) time complexity and O(n^2 m^2) space complexity for two RNA molecules of sizes n and m, it remains practically applicable for typical sizes if enough memory is available. Experiments show that our parser is much more efficient for this application than classical Earley parsers. Moreover the predictions of joint structures are comparable in quality to current energy minimization approaches.
In the presented work, I evaluate if and how Virtual Reality (VR) technologies can be used to support researchers working in the geosciences by providing immersive, collaborative visualization systems as well as virtual tools for data analysis. Technical challenges encountered in the development of these systems are identified and solutions are provided.
To enable geologists to explore large digital terrain models (DTMs) in an immersive, explorative fashion within a VR environment, a suitable terrain rendering algorithm is required. For realistic perception of planetary curvature at large viewer altitudes, spherical rendering of the surface is necessary. Furthermore, rendering must sustain interactive frame rates of about 30 frames per second to avoid sensory confusion of the user. At the same time, the data structures used for visualization should also be suitable for efficiently computing spatial properties such as height profiles or volumes in order to implement virtual analysis tools. To address these requirements, I have developed a novel terrain rendering algorithm based on tiled quadtree hierarchies using the HEALPix parametrization of a sphere. For evaluation purposes, the system is applied to a 500 GiB dataset representing the surface of Mars.
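A toy illustration of HEALPix-based tile addressing follows, assuming the healpy library; the resolution, the coordinates, and the quadtree-parent step are illustrative, not the thesis's actual tiling code:

```python
import numpy as np
import healpy as hp

nside = 2**10                       # resolution level of the tile grid
lat, lon = -4.6, 137.4              # a location on the Mars DTM (degrees)
theta = np.radians(90.0 - lat)      # colatitude, as healpy expects
phi = np.radians(lon)

# NESTED ordering: pixel indices follow the quadtree used for tiling.
pix = hp.ang2pix(nside, theta, phi, nest=True)
parent = pix // 4                   # quadtree parent, one level coarser
print(pix, parent)
```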
Considering the current development of inexpensive remote surveillance equipment such as quadcopters, it seems inevitable that these devices will play a major role in future disaster management applications. Virtual reality installations in disaster management headquarters which provide an immersive visualization of near-live, three-dimensional situational data could then be a valuable asset for rapid, collaborative decision making. Most terrain visualization algorithms, however, require a computationally expensive pre-processing step to construct a terrain database.
To address this problem, I present an on-the-fly pre-processing system for cartographic data. The system consists of a frontend for rendering and interaction as well as a distributed processing backend executing on a small cluster which produces tiled data in the format required by the frontend on demand. The backend employs a CUDA based algorithm on graphics cards to perform efficient conversion from cartographic standard projections to the HEALPix-based grid used by the frontend.
Measurement of spatial properties is an important step in quantifying geological phenomena. When performing these tasks in a VR environment, a suitable input device and abstraction for the interaction (a “virtual tool”) must be provided. This tool should enable the user to precisely select the location of the measurement even under a perspective projection. Furthermore, the measurement process should be accurate to the resolution of the data available and should not have a large impact on the frame rate in order to not violate interactivity requirements.
I have implemented virtual tools based on the HEALPix data structure for measurement of height profiles as well as volumes. For interaction, a ray-based picking metaphor was employed, using a virtual selection ray extending from the user’s hand holding a VR interaction device. To provide maximum accuracy, the algorithms access the quad-tree terrain database at the highest available resolution level while at the same time maintaining interactivity in rendering.
Geological faults are cracks in the earth’s crust along which a differential movement of rock volumes can be observed. Quantifying the direction and magnitude of such translations is an essential requirement in understanding earth’s geological history. For this purpose, geologists traditionally use maps in top-down projection which are cut (e.g. using image editing software) along the suspected fault trace. The two resulting pieces of the map are then translated in parallel against each other until surface features which have been cut by the fault motion come back into alignment. The amount of translation applied is then used as a hypothesis for the magnitude of the fault action. In the scope of this work it is shown, however, that performing this study in a top-down perspective can lead to the acceptance of faulty reconstructions, since the three-dimensional structure of topography is not considered.
To address this problem, I present a novel terrain deformation algorithm which allows the user to trace a fault line directly within a 3D terrain visualization system and interactively deform the terrain model while inspecting the resulting reconstruction from arbitrary perspectives. I demonstrate that the application of 3D visualization allows for a more informed interpretation of fault reconstruction hypotheses. The algorithm is implemented on graphics cards and performs real-time geometric deformation of the terrain model, guaranteeing interactivity with respect to all parameters.
Paleoceanography is the study of the prehistoric evolution of the ocean. One of the key data sources used in this research are coring experiments which provide point samples of layered sediment depositions at the ocean floor. The samples obtained in these experiments document the time-varying sediment concentrations within the ocean water at the point of measurement. The task of recovering the ocean flow patterns based on these deposition records is a challenging inverse numerical problem, however.
To support domain scientists working on this problem, I have developed a VR visualization tool to aid in the verification of model parameters by providing simultaneous visualization of experimental data from coring as well as the resulting predicted flow field obtained from numerical simulation. Earth is visualized as a globe in the VR environment, with coring data presented using a billboard rendering technique while the time-variant flow field is indicated using Line Integral Convolution (LIC). To study individual sediment transport pathways and their correlation with the depositional record, interactive particle injection and real-time advection are supported.
Using existing planning approaches to solve real application problems usually leads quickly to the insight that, while a given problem is solvable in principle, the exponentially growing search space only allows relatively small tasks to be handled. Human planning experts, however, are able to decisively reduce the search space for complex problems through abstraction and by using known examples as heuristics, and thus arrive at an acceptable solution even for difficult tasks. In this paper, using process planning as an example, we present a system that employs abstraction and case-based techniques to control the inference process of a nonlinear, hierarchical planning system, thereby reducing the complexity of the overall task.
We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions to be made by special-purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.
We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain-dependent reasoners, a feature-based CAD system, and domain-independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem-solving knowledge for both optimizing plan execution costs and minimizing search, the case-based control component CAPlan/CbC has been realized, which allows incremental acquisition and reuse of strategic problem-solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms based on the hierarchical domain model and problem representation.
While most approaches to similarity assessment are oblivious of knowledge and goals, there is ample evidence that these elements of problem solving play an important role in similarity judgements. This paper is concerned with an approach for integrating assessment of similarity into a framework of problem solving that embodies central notions of problem solving like goals, knowledge and learning.
Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e., by a measure of similarity sim and a set CB of cases. This poses the question whether there are any differences concerning the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
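To make the (CB, sim) representation concrete, here is a minimal sketch; the similarity measure and cases are invented, and this is the generic nearest-neighbor reading of an implicit concept, not the paper's version-space construction:

```python
def sim(x, y):
    """A simple similarity measure: fraction of matching attributes."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

def classify(case_base, query):
    """The concept is represented implicitly by (CB, sim): a query is
    labeled like the most similar stored case."""
    return max(case_base, key=lambda c: sim(c[0], query))[1]

CB = [((1, 0, 1), "positive"), ((0, 0, 0), "negative")]
print(classify(CB, (1, 1, 1)))   # -> "positive"
```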
Problem solving based on known cases is currently a very active topic in the area of expert systems. The case-based approach is also gaining importance for diagnosis tasks. This paper presents the case-based problem solver Patdex/2, developed within the Moltke project at the University of Kaiserslautern. A first prototype, Patdex/1, was developed as early as 1988.
Research projects on case-based reasoning in the USA, the availability of commercial case-based shells, and first research results of initial German projects have triggered increased activity in the field of case-based reasoning in Germany as well. This article therefore briefly introduces, to a wider audience, projects that deal with case-based reasoning either as their focus or as one aspect.
Patdex is an expert system which carries out case-based reasoning for the fault diagnosis of complex machines. It is integrated in the Moltke workbench for technical diagnosis, which was developed at the University of Kaiserslautern over the past years. Moltke contains other parts as well, in particular a model-based approach; Patdex is where essentially the heuristic features are located. The use of cases also plays an important role for knowledge acquisition. In this paper we describe Patdex from a principal point of view and embed its main concepts into a theoretical framework.
Crowd condition monitoring concerns crowd safety as well as business performance metrics. The research problem to be solved is a crowd condition estimation approach that enables and supports the supervision of mass events by first responders and marketing experts, but is also targeted at supporting social scientists, journalists, historians, public relations experts, community leaders, and political researchers. Real-time insights into the crowd condition are desired for quick reactions, and historic crowd condition measurements are desired for profound post-event crowd condition analysis.
This thesis aims to provide a systematic understanding of different approaches to crowd condition estimation relying on 2.4 GHz signals and their variation in crowds of people; it proposes and categorizes possible sensing approaches, applies supervised machine learning algorithms, and presents experimental evaluation results. I categorize four sensing approaches. Firstly, stationary sensors sensing crowd-centric signal sources. Secondly, stationary sensors sensing other stationary signal sources (either opportunistic or special-purpose signal sources). Thirdly, a few volunteers within the crowd equipped with sensors sensing surrounding crowd-centric device signals (either individually, in a single group, or collaboratively) within a small region. Fourthly, a small subset of participants within the crowd equipped with sensors and roaming throughout a whole city to sense wireless crowd-centric signals.
I present and evaluate an approach with meshed stationary sensors sensing crowd-centric devices. This was demonstrated and empirically evaluated within an industrial project during three of the world's largest automotive exhibitions. With over 30 meshed stationary sensors in an optimized setup across 6,400 m², I achieved a mean absolute error of the crowd density of just 0.0115 people per square meter, which equals an average of below 6% mean relative error from the ground truth. I validate the contextual crowd condition anomaly detection method during the visit of Chancellor Merkel and during a large press conference at the exhibition. I present the approach of opportunistically sensing stationary wireless signal variations and validate it during the Hannover CeBIT exhibition with 80 opportunistic sources, achieving a crowd condition estimation relative error below 12% while relying only on surrounding signals influenced by humans. Pursuing this approach, I present an approach with dedicated signal sources and sensors to estimate the condition of shared office environments. I demonstrate methods viable to detect even low-density static crowds, such as people sitting at their desks, and evaluate this in an eight-person office scenario. I present the approach of mobile crowd density estimation by a group of sensors detecting other crowd-centric devices in their proximity, with a crowd density classification accuracy of 66% (an improvement of over 22% over an individual sensor) during the crowded Oktoberfest event. I propose a collaborative mobile sensing approach which makes the system more robust against variations that may result from the background of the people rather than the crowd condition, using differential features that take into account information about the link structure between actively scanning devices, the ratio between values observed by different devices, the ratio of discovered crowd devices over time, the team-wise diversity of discovered devices, the number of semi-continuous device visibility periods, and device visibility durations. I validate the approach in multiple experiments, including the Kaiserslautern European soccer championship public viewing event, and evaluate the collaborative mobile sensing approach with a crowd condition estimation accuracy of 77%, outperforming previous methods by 21%. I demonstrate the feasibility of deploying the wireless crowd condition sensing approach at a citywide scale during an event in Zurich with 971 actively sensing participants, outperforming the reference method by 24% on average.
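Schematically, the supervised estimation step maps signal-derived features to a density estimate. A minimal sketch with invented numbers follows (assuming scikit-learn; the real systems use richer differential features and classifiers):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: per-interval counts of discovered
# crowd-centric devices (feature) vs. ground-truth crowd density.
X = np.array([[12], [35], [60], [90], [140]])   # devices seen per scan
y = np.array([0.05, 0.15, 0.30, 0.45, 0.70])    # people per square meter

model = LinearRegression().fit(X, y)
print(model.predict([[75]]))    # estimated density for 75 devices
```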
Neural networks are a currently (once again) popular topic. Despite the often rather buzzword-like use of this term, it covers a multitude of ideas, diverse methodological approaches, and concrete application possibilities. The basic concepts are not new but have a sometimes long tradition in neighboring disciplines such as biology, cybernetics, mathematics, and physics. Promising research results of recent years have moved this topic back into the center of interest and revealed numerous new connections to computer science and neurobiology as well as to other, at first glance distant, fields. The subject of the research field of neural networks is the study and construction of information-processing systems that are composed of many, sometimes very primitive, uniform units and whose essential processing principle is the communication between these units, i.e., the transmission of messages or signals. A further characteristic of these systems is the highly parallel processing of information within the system. Besides the modeling of cognitive processes and the interest in how the human brain accomplishes complex cognitive feats, the concrete use of neural networks in various technical application areas is increasingly visible beyond purely scientific interest. This report contains the written elaborations of the participants of the seminar "Theory and Practice of Neural Networks", held by the Richter group at the University of Kaiserslautern in the summer semester of 1993. Particular emphasis was placed not only on treating the theoretical foundations of neural networks but also on discussing their use in practice. The choice of topics reflects part of the wide spectrum of work in this field; no claim to completeness can be made. In particular, for an intensive, deeper study of a topic, the respective original publications should be consulted. This report would not have been possible without the participants of the seminar. We therefore thank Frank Hauptmann, Peter Conrad, Christoph Keller, Martin Buch, Philip Ziegler, Frank Leidermann, Martin Kronenburg, Michael Dieterich, Ulrike Becker, Christoph Krome, Susanne Meyfarth, Markus Schmitz, Kenan Çarki, Oliver Schweikart, Michael Schick, and Ralf Comes.
Backward compatibility of class libraries ensures that an old implementation of a library can safely be replaced by a new implementation without breaking existing clients.
Formal reasoning about backward compatibility requires an adequate semantic model to compare the behavior of two library implementations.
In the object-oriented setting with inheritance and callbacks, finding such models is difficult, as the interfaces between library implementations and clients are complex.
Furthermore, handling these models in a way to support practical reasoning requires appropriate verification tools.
This thesis proposes a formal model for library implementations and a reasoning approach for backward compatibility that is implemented using an automatic verifier. The first part of the thesis develops a fully abstract trace-based semantics for class libraries of a core sequential object-oriented language. Traces abstract from the control flow (stack) and data representation (heap) of the library implementations. The construction of a most general context is given that abstracts exactly from all possible clients of the library implementation.
Soundness and completeness of the trace semantics as well as the most general context are proven using specialized simulation relations on the operational semantics. The simulation relations also provide a proof method for reasoning about backward compatibility.
The second part of the thesis presents the implementation of the simulation-based proof method for an automatic verifier to check backward compatibility of class libraries written in Java. The approach works for complex library implementations, with recursion and loops, in the setting of unknown program contexts. The verification process relies on a coupling invariant that describes a relation between programs that use the old library implementation and programs that use the new library implementation. The thesis presents a specification language to formulate such coupling invariants. Finally, an application of the developed theory and tool to typical examples from the literature validates the reasoning and verification approach.
This report presents a generalization of tensor-product B-spline surfaces. The new scheme permits knots whose endpoints lie in the interior of the domain rectangle of a surface. This allows local refinement of the knot structure for approximation purposes as well as modeling surfaces with local tangent or curvature discontinuities. The surfaces are represented in terms of B-spline basis functions, ensuring affine invariance, local control, the convex hull property, and evaluation by de Boor's algorithm. A dimension formula for a class of generalized tensor-product spline spaces is developed.
One of the problems of autonomous mobile systems is the continuous tracking of position and orientation. In most cases, this problem is solved by dead reckoning, based on measurements of wheel rotations or step counts and step width. Unfortunately, dead reckoning leads to an accumulation of drift errors and is very sensitive to slippage. In this paper, an algorithm for tracking position and orientation is presented that is nearly independent of odometry and its problems with slippage. To achieve this, a rotating range finder is used, delivering scans of the environmental structure. The properties of this structure are used to match scans from different locations in order to find their translational and rotational displacement. For this purpose, derivatives of range finder scans are calculated, which can be used to find position and orientation by cross-correlation.
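A minimal sketch of the cross-correlation step (FFT-based circular cross-correlation of scan derivatives; the sign convention and preprocessing are simplified compared to the paper):

```python
import numpy as np

def rotation_offset(scan_a, scan_b):
    """Estimate the rotational displacement between two range-finder
    scans with equal angular sampling, via circular cross-correlation
    of their derivatives."""
    da = np.gradient(scan_a)
    db = np.gradient(scan_b)
    # FFT-based circular cross-correlation
    corr = np.fft.ifft(np.fft.fft(da) * np.conj(np.fft.fft(db))).real
    return int(np.argmax(corr))   # shift in samples; sign depends on
                                  # the scan direction convention
```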
A map for an autonomous mobile robot (AMR) in an indoor environment for the purpose of continuous position and orientation estimation is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown in which the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans into statistical representations of the environment, so that a cross-correlation of an actual converted scan with this representation yields the actual position and orientation in a global coordinate system. The map itself is built of representative scans for the positions where the AMR has been, so that the robot is able to find its position and orientation by comparing the actual scan with a scan stored in the map.
We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity; the orientation must therefore be calculated by integrating the angular velocity over time. The device's output is a voltage proportional to the angular velocity and relative to a reference. The tests were done to find out under which conditions it is possible to use this device for estimating orientation.
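The integration step is straightforward; the sketch below (ours, not from the report) shows it with assumed calibration values vRef (reference voltage) and scale (degrees per second per volt), and also shows why a bias error in vRef makes the estimate drift.

final class GyroIntegration {
    // Rectangle-rule integration of angular velocity samples taken every
    // dt seconds. A small error in vRef integrates into unbounded drift,
    // which is exactly what the reported tests probe.
    static double integrateOrientation(double[] voltages, double dt,
                                       double vRef, double scale) {
        double heading = 0.0;                      // degrees
        for (double v : voltages) {
            double omega = (v - vRef) * scale;     // angular velocity, deg/s
            heading += omega * dt;
        }
        return heading;
    }
}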
In this article we present a method to generate random objects from a large variety of combinatorial classes according to a given distribution. Given a description of the combinatorial class and a set of sample data, our method will provide an algorithm that generates objects of size n in worst-case runtime O(n^2) (O(n log(n)) can be achieved at the cost of a higher average-case runtime), with the generated objects following a distribution that closely matches the distribution of the sample data.
For many decades, the search for language classes that extend the context-free languages enough to include various languages that arise in practice, while still keeping as many of the useful properties that context-free grammars have (most notably cubic parsing time), has been one of the major areas of research in formal language theory. In this thesis we add a new family of classes to this field, namely position-and-length-dependent context-free grammars. Our classes use the approach of regulated rewriting, where derivations in a context-free base grammar are allowed or forbidden based on, e.g., the sequence of rules used in a derivation or the sentential forms each rule is applied to. For our new classes we look at the yield of each rule application, i.e., the subword of the final word that is eventually derived from the symbols introduced by the rule application. The position and length of the yield in the final word define the position and length of the rule application, and each rule is associated with a set of positions and lengths at which it is allowed to be applied.
We show that, unless the sets of allowed positions and lengths are very complex, the languages in our classes can be parsed in the same asymptotic time as context-free grammars, using slight adaptations of well-known parsing algorithms. We also show that the new classes form a proper hierarchy above the context-free languages, and we examine their relation to language classes defined by other types of regulated rewriting.
We complete the treatment of the language classes by introducing pushdown automata with position counter, an extension of traditional pushdown automata that recognizes the languages generated by position-and-length-dependent context-free grammars, and we examine various closure and decidability properties of our classes. Additionally, we gather the corresponding results for the subclasses that use right-linear and left-linear base grammars, respectively, together with the corresponding class of automata, finite automata with position counter.
Finally, as an application of our idea, we introduce length-dependent stochastic context-free grammars and show how they can be employed to improve the quality of predictions for RNA secondary structures.
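The parsing claim above can be made concrete with a small sketch (our illustration, not code from the thesis): a CYK-style recognizer for a grammar in Chomsky normal form in which every rule additionally carries its set of allowed yield positions and lengths, abstracted here as a predicate. With constant-time predicate tests, the familiar cubic running time is preserved.

import java.util.function.BiPredicate;

class PLRule {
    final int lhs, left, right;                   // rule A -> B C, nonterminals as ints
    final BiPredicate<Integer, Integer> allowed;  // allowed (position, length) pairs
    PLRule(int lhs, int left, int right, BiPredicate<Integer, Integer> allowed) {
        this.lhs = lhs; this.left = left; this.right = right; this.allowed = allowed;
    }
}

final class PLParser {
    // Recognizes word (terminals as ints) with start symbol 0; terminalRule[a][t]
    // says whether A -> t exists, termAllowed[a] restricts its yield.
    static boolean recognize(int[] word, int numNT, PLRule[] binary,
                             boolean[][] terminalRule,
                             BiPredicate<Integer, Integer>[] termAllowed) {
        int n = word.length;
        boolean[][][] chart = new boolean[n][n + 1][numNT]; // [pos][len][nonterminal]
        for (int i = 0; i < n; i++)                          // length-1 spans
            for (int a = 0; a < numNT; a++)
                if (terminalRule[a][word[i]] && termAllowed[a].test(i, 1))
                    chart[i][1][a] = true;
        for (int len = 2; len <= n; len++)                   // longer spans
            for (int i = 0; i + len <= n; i++)
                for (int split = 1; split < len; split++)
                    for (PLRule r : binary)
                        if (chart[i][split][r.left]
                                && chart[i + split][len - split][r.right]
                                && r.allowed.test(i, len))   // position/length filter
                            chart[i][len][r.lhs] = true;
        return chart[0][n][0];
    }
}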
Kinetic models of human motion rely on boundary conditions which are defined by the interaction of the body with its environment. In the simplest case, this interaction is limited to the foot contact with the ground and is given by the so-called ground reaction force (GRF). A major challenge in the reconstruction of GRF from kinematic data is the double support phase, referring to the state with multiple ground contacts; in this case, the GRF prediction is not well defined. In this work we present an approach to reconstruct vertical GRF (vGRF) and distribute it to each foot separately, using only kinematic data. We propose the biomechanically inspired force shadow method (FSM) to obtain a unique solution for any contact phase, including double support, of an arbitrary motion. We create a kinematics-based function, model an anatomical foot shape, and mimic the effect of hip muscle activations. We compare our estimations with the measurements of a Zebris pressure plate and obtain correlations of 0.39 ≤ r ≤ 0.94 for double support motions and 0.83 ≤ r ≤ 0.87 for a walking motion. The presented data is based on inertial human motion capture, showing the applicability for scenarios outside the laboratory. The proposed approach has low computational complexity and allows for online vGRF estimation.
In this work we describe the essential features of the CAPlan architecture that enable the interactive handling of planning problems. Using the SNLP algorithm underlying the architecture, we characterize the decision points that arise during a planning process. The behavior at these decision points can be specified with freely definable control components, which permits flexible control of the planning process. Planning goals and decisions are managed in a directed acyclic graph that reflects their causal dependencies. In contrast to a stack, which is typically used to manage decisions, the graph-based representation allows a decision to be retracted flexibly without also having to retract all decisions made after it.
Problem specifications for classical planners based on a STRIPS-like representation typically consist of an initial situation and a partially defined goal state. Hierarchical planning approaches, e.g., Hierarchical Task Network (HTN) planning, have richer representations not only for actions but also for planning problems; the latter are defined by giving an initial state and an initial task network in which the goals can be ordered with respect to each other. However, studies with a specification of the process planning domain for the plan-space planner CAPlan (an extension of SNLP) have shown that even without a hierarchical domain representation, typical properties called goal orderings can be identified in this domain that allow more efficient and correct case retrieval strategies for the case-based planner CAPlan/CbC. Motivated by this, this report describes an extension of the classical problem specifications for plan-space planners like SNLP and its descendants. These extended problem specifications allow defining a partial order on the planning goals, which can be interpreted as an order in which the solution plan should achieve the goals. These goal orderings can be shown, theoretically and empirically, to improve planning performance not only for case-based but also for generative planning. As a second and different contribution, we show how goal orderings can be used to address the control problem of partial-order planners. These improvements can best be understood with a refinement of Barrett's and Weld's extended taxonomy of subgoal collections.
Real-world planning tasks like manufacturing process planning often do not allow formalizing all of the relevant knowledge. In particular, preferences between alternatives are hard to acquire but have a high influence on the efficiency of the planning process and the quality of the solution. We describe the essential features of the CAPlan planning architecture, which supports cooperative problem solving to narrow the gap caused by absent preference and control knowledge. The architecture combines an SNLP-like base planner with mechanisms for the explicit representation and maintenance of dependencies between planning decisions. The flexible control interface of CAPlan allows a combination of autonomous and interactive planning in which a user can participate in the problem-solving process. In particular, CAPlan supports the rejection of arbitrary decisions by a user as well as dependency-directed backtracking.
Planning is a widely studied area of artificial intelligence. The work presented here belongs to this area: it concerns a planning system that aims to support process planning in computer-integrated manufacturing. Computer-integrated manufacturing, however, should be seen only as one particular application domain for the system.
Today, polygonal models occur everywhere in graphical applications, since they are easy to render and to process, and a huge set of tools exists for the generation and manipulation of polygonal data. But modern scanning devices that allow a high-quality and large-scale acquisition of complex real-world models often deliver a large set of points as the resulting data structure of the scanned surface. A direct triangulation of those point clouds does not always result in good models: they often contain problems like holes, self-intersections, and non-manifold structures. One also often loses important surface structures like sharp corners and edges during a usual surface reconstruction. It is therefore advisable to stay a little longer in the point-based world, to analyze the point cloud data with respect to such features, and to afterwards apply a surface reconstruction method that is known to construct continuous and smooth surfaces, extended so that it reconstructs sharp features.
This diploma thesis investigates concepts for supporting database-oriented software product lines with domain-specific languages, using versioning systems as an example. The goal of this work is to determine the time costs incurred by the use of a domain-specific language. Different database schemas are used in order to investigate the relationship between the complexity of the database schema and the translation of a domain-specific statement into a series of conventional SQL statements. To determine the time costs of this reduction, performance measurements are carried out. These measurements are based on domain-specific statements produced by a generator developed specifically for this purpose; the generated statements are executed with the different database drivers on the matching database schema.
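To make the reduction concrete, here is a hypothetical illustration (all table and statement names are invented, not taken from the thesis) of how a single domain-specific versioning statement might expand into a sequence of ordinary SQL statements whose number and shape depend on the schema.

final class DslReduction {
    // Hypothetical DSL statement: CHECKOUT docId VERSION version.
    // String concatenation is used only for brevity of the sketch.
    static java.util.List<String> reduceCheckout(String docId, int version) {
        java.util.List<String> sql = new java.util.ArrayList<>();
        sql.add("SELECT id FROM documents WHERE id = '" + docId + "'");
        sql.add("SELECT content FROM revisions WHERE doc_id = '" + docId +
                "' AND version = " + version);
        sql.add("INSERT INTO workspaces (doc_id, version, checked_out_at) " +
                "VALUES ('" + docId + "', " + version + ", CURRENT_TIMESTAMP)");
        return sql;  // executed against the schema under test to measure cost
    }
}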
This work is part of the META-AKAD project. Its goal is the development of a software component that automatically assigns classifications according to the Regensburger Verbundklassifikation (RVK) and subject headings from the German subject headings authority file (SWD) to documents that have been classified as teaching or learning material. The automatic indexing is performed on the basis of a support vector machine. The implementation was done in the Java programming language.
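As a rough illustration of the classification step (our sketch, not the thesis's implementation), a trained linear SVM assigns a class or keyword by the sign of a weighted sum over the document's term vector; weights and bias would come from training, and the term indexing is assumed.

final class SvmScore {
    // One trained model per RVK class / SWD keyword; returns whether the
    // document (given as a term-frequency vector) should receive the label.
    static boolean assignLabel(double[] termFrequencies, double[] weights,
                               double bias) {
        double score = bias;
        for (int t = 0; t < weights.length; t++)
            score += weights[t] * termFrequencies[t];  // dot product w·x + b
        return score > 0;  // positive margin: assign the label
    }
}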
A tailor-made communication system for a mobile application with quality-of-service requirements
(2004)
This paper presents the tailoring of an ad-hoc communication system for the remote control of an airship over WLAN. The focus is on service support for the transmission of multiple data streams. Various quality-of-service mechanisms are explained, and their development and integration into a communication protocol by means of a component-based approach is described in detail.
We present a methodology to augment system safety step-by-step and illustrate the approach by the definition of reusable solutions for the detection of fail-silent nodes - a watchdog and a heartbeat. These solutions can be added to real-time system designs, to protect against certain types of system failures. We use SDL as a system design language for the development of distributed systems, including real-time systems.
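The heartbeat solution mentioned above follows a well-known pattern; the sketch below (ours, in Java rather than SDL, with invented names) shows the essential logic: a node is suspected fail-silent when its periodic heartbeat message has not arrived within a timeout window.

final class HeartbeatMonitor {
    private final long timeoutMillis;                 // detection window
    private volatile long lastBeat = System.currentTimeMillis();
    HeartbeatMonitor(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }
    void onHeartbeat() {                              // called on each message
        lastBeat = System.currentTimeMillis();
    }
    boolean nodeSuspectedFailSilent() {               // polled by the watchdog
        return System.currentTimeMillis() - lastBeat > timeoutMillis;
    }
}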
Software is becoming increasingly concurrent: parallelization, decentralization, and reactivity necessitate asynchronous programming in which processes communicate by posting messages/tasks to others’ message/task buffers. Asynchronous programming has been widely used to build fast servers and routers, embedded systems and sensor networks, and is the basis of Web programming using Javascript. Languages such as Erlang and Scala have adopted asynchronous programming as a fundamental concept with which highly scalable and highly reliable distributed systems are built.
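The programming model can be sketched in a few lines (a minimal illustration of the style, not code from the dissertation): processes communicate by posting tasks to each other's task buffers, and a worker loop drains its buffer asynchronously.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

final class AsyncProcess implements Runnable {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    public void post(Runnable task) { mailbox.add(task); }  // non-blocking send
    public void run() {
        try {
            while (true) mailbox.take().run();  // handle one task at a time
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
// Usage: start a thread per process and post tasks with p.post(() -> ...);
// the control and data dependencies between posted tasks are exactly what
// makes such programs hard to analyze.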
Asynchronous programs are challenging to implement correctly: the loose coupling between asynchronously executed tasks makes the control and data dependencies difficult to follow. Even subtle design and programming mistakes can introduce erroneous or divergent behaviors. As asynchronous programs are typically written to provide a reliable, high-performance infrastructure, there is a critical need for analysis techniques that guarantee their correctness.
In this dissertation, I provide scalable verification and testing tools that make asynchronous programs more reliable. I show that the combination of counter abstraction and partial-order reduction is an effective approach for the verification of asynchronous systems by presenting PROVKEEPER and KUAI, two scalable verifiers for two types of asynchronous systems. I also prove that a counter-abstraction-based algorithm called expand-enlarge-check is an asymptotically optimal algorithm for the coverability problem of branching vector addition systems, as which many asynchronous programs can be modeled. In addition, I present BBS and LLSPLAT, two testing tools for asynchronous programs that efficiently uncover many subtle memory-violation bugs.
Interactive graphics has so far been limited to simple direct illumination, which commonly results in an artificial appearance. A more realistic appearance, obtained by simulating global illumination effects, has been too costly to compute at interactive rates. In this paper we describe a new Monte Carlo-based global illumination algorithm. It achieves performance of up to 10 frames per second while arbitrary changes to the scene may be applied interactively. The performance is obtained through the effective use of a fast, distributed ray-tracing engine as well as a new interleaved sampling technique for parallel Monte Carlo simulation. A new filtering step in combination with correlated sampling avoids the disturbing noise artifacts common to Monte Carlo methods.
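The interleaving idea can be sketched as follows (our own minimal reconstruction, not the paper's renderer): a small number of precomputed sample sets is assigned to pixels in a regular pattern, so adjacent pixels use different sets and the structured noise can later be removed by a filtering step. The Scene callback and all names are assumptions of this sketch.

import java.util.Random;

final class InterleavedSampling {
    interface Scene { double radiance(double px, double py); }  // assumed callback
    static double[][] render(int w, int h, int setsX, int setsY,
                             int samplesPerSet, Scene scene) {
        Random rng = new Random(42);
        // Precompute setsX * setsY independent sample sets in [0,1)^2.
        double[][][] sets = new double[setsX * setsY][samplesPerSet][2];
        for (double[][] set : sets)
            for (double[] s : set) { s[0] = rng.nextDouble(); s[1] = rng.nextDouble(); }
        double[][] image = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double[][] set = sets[(y % setsY) * setsX + (x % setsX)]; // interleave
                double sum = 0;
                for (double[] s : set) sum += scene.radiance(x + s[0], y + s[1]);
                image[y][x] = sum / samplesPerSet;   // Monte Carlo average
            }
        return image;
    }
}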
Due to tremendous improvements of high-performance computing resources as well as numerical advances, computational simulation has become a common tool for modern engineers. Nowadays, the simulation of complex physics increasingly substitutes a large share of physical experiments. While the vast compute power of large-scale high-performance systems enables simulating ever more complex numerical equations, handling the growing amount of data at high spatial and temporal resolution poses new challenges to scientists. Huge hardware and energy costs call for efficient utilization of high-performance systems. At the same time, the increasing complexity of simulations raises the risk of failing runs, so that a single simulation may have to be restarted multiple times. Computational steering is a promising approach for interacting with running simulations that can prevent such crashes. Moreover, the gap between the amount of data that can be computed and the amount that can be processed keeps widening; extreme-scale simulations produce more data than can even be stored. In this thesis, I propose several methods that enhance the process of steering, exploring, visualizing, and analyzing ongoing numerical simulations.
This project report describes the requirements, the design, and the implementation of the query processing component (query engine). The first chapter discusses the goals of the Meta-Akad project and the implementation options offered by the Java 2 Enterprise Edition framework, and shows how query processing fits into the overall system. The second chapter outlines the requirements and the flow of query processing and presents the implementation concept. The following chapters then examine the individual processing phases and the problems that arise in more detail. Finally, the outlook chapter presents possibilities for extending and improving the query processing of the Meta-Akad search service.
About the approach: The approach of TOPO was originally developed in the FABEL project [1] to support architects in designing buildings with complex installations. Supplementing knowledge-based design tools, which are available only for selected subtasks, TOPO aims to cover the whole design process. To that end, it relies almost exclusively on archived plans. Input to TOPO is a partial plan, and output is an elaborated plan. The input plan constitutes the query case, and the archived plans form the case base with the source cases. A plan is a set of design objects. Each design object is defined by some semantic attributes and by its bounding box in a 3-dimensional coordinate system. TOPO supports the elaboration of plans by adding design objects.
Structure and tools of the experiment-specific data area of the SFB 501 experience database
(1999)
Software development artifacts must be captured systematically during the execution of a software project so that they can be prepared for reuse. In Collaborative Research Center 501 (SFB 501), the methodological basis for this is the concept of the experience database. For every development project, its experiment-specific data area stores all software development artifacts that accrue during the life cycle of the project. Its overarching data area collects all those artifacts from the experiment-specific area that are candidates for reuse in subsequent projects. Experience has shown that even using the data in the experiment-specific area of the experience database requires systematic access, and systematic access presupposes a standardized structure. The experiment-specific area distinguishes two types of experiments: "controlled experiments" and "case studies". This report describes the storage and access structure for the experiment type "case studies". The structure was developed and evaluated on the basis of the experience gained in first case studies.
Version and configuration management are central instruments for intellectually mastering complex software developments. In strongly reuse-oriented software development approaches, such as the one provided by the SFB, the notion of configuration must be extended from traditionally product-oriented artifacts to processes and other development experience. This publication presents such an extended configuration model. In addition, it discusses an extension of traditional project planning information that enables tailor-made version and configuration management mechanisms to be derived before the start of a project.
Software development is becoming a more and more distributed process, which urgently needs supporting tools in the fields of configuration management, software process/workflow management, communication, and problem tracking. In this paper we present a new distributed software configuration management framework, COMAND. It offers high availability through replication and a mechanism to easily change and adapt the project structure to new business needs. To better understand and formally prove some properties of COMAND, we have modeled it in a formal technique based on distributed graph transformations. This formalism provides an intuitive rule-based description technique, mainly for the dynamic behavior of the system on an abstract level. We use it here to model the replication subsystem.
The views of project members on the processes of a software development effort are to be formulated in the process modeling language MVP-L and subsequently integrated into a comprehensive process model. In doing so, the identification of similar information in different views is of particular importance. In this work we report on the adaptation and synthesis of several approaches to similarity from different domains (schema integration in database design, analogical and case-based reasoning, reuse, and system specification). The result, the similarity function vsim, is illustrated with a reference example. In particular, we discuss the properties of the function vsim and report on experience in using this function to compute the similarity between process models.
In the increasingly competitive public-cloud marketplace, improving the efficiency of data centers is a major concern. One way to improve efficiency is to consolidate as many VMs onto as few physical cores as possible, provided that performance expectations are not violated. However, as a prerequisite for increased VM densities, the hypervisor's VM scheduler must allocate processor time efficiently and in a timely fashion. As we show in this thesis, contemporary VM schedulers leave substantial room for improvement in both regards when facing challenging high-VM-density workloads that frequently trigger the VM scheduler. As root causes, we identify (i) high runtime overheads and (ii) unpredictable scheduling heuristics.
To better support high VM densities, we propose Tableau, a VM scheduler that guarantees a minimum processor share and a maximum bound on scheduling delay for every VM in the system. Tableau combines a low-overhead, core-local, table-driven dispatcher with a fast on-demand table-generation procedure (triggered on VM creation/teardown) that employs scheduling techniques typically used in hard real-time systems. Further, we show that, owing to its focus on efficiency and scalability, Tableau provides comparable or better throughput than existing Xen schedulers in dedicated-core scenarios as are commonly employed in public clouds today.
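As a rough illustration of the table-driven idea (our sketch, far simpler than Tableau's actual scheduler), an offline generator assigns each slot of a fixed-length hyperperiod to a VM, and the core-local dispatcher at runtime only indexes the table, which keeps overhead low and delay bounds predictable.

final class TableDispatcher {
    private final int[] slotToVm;   // regenerated on VM creation/teardown
    private final long slotNanos;   // length of one time slot
    TableDispatcher(int[] slotToVm, long slotNanos) {
        this.slotToVm = slotToVm; this.slotNanos = slotNanos;
    }
    // Constant-time dispatch decision for this core at time nowNanos.
    int vmToRun(long nowNanos) {
        int slot = (int) ((nowNanos / slotNanos) % slotToVm.length);
        return slotToVm[slot];      // convention: -1 means idle/background work
    }
}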
Tableau also extends this design by providing the ability to use idle cycles in the system to perform low-priority background work, without affecting the performance of primary VMs, a common requirement in public clouds.
Finally, VM churn and workload variations in multi-tenant public clouds result in changing interference patterns at runtime, resulting in performance variation. In particular, variation in last-level cache (LLC) interference has been shown to have a significant impact on virtualized application performance in cloud environments. Tableau employs a novel technique for dealing with dynamically changing interference, which involves periodically regenerating tables with the same guarantees on utilization and scheduling latency for all VMs in the system, but having different LLC interference characteristics. We present two strategies to mitigate LLC interference: a randomized approach, and one that uses performance counters to detect VMs running cache-intensive workloads and selectively mitigate interference.
In the past, information and knowledge dissemination was relegated to brick-and-mortar classrooms, newspapers, radio, and television. As these processes were simple and centralized, the models behind them were well understood, and so were the empirical methods for optimizing them. In today's world, the internet and social media have become a powerful tool for information and knowledge dissemination: Wikipedia gets more than 1 million edits per day, Stack Overflow has more than 17 million questions, 25% of the US population visits Yahoo! News for articles and discussions, Twitter has more than 60 million active monthly users, and Duolingo has 25 million users learning languages online. These developments have introduced a paradigm shift in the process of dissemination. Not only has the nature of the task moved from being centralized to decentralized, but the developments have also blurred the boundary between the creator and the consumer of the content, i.e., information and knowledge. These changes have made it necessary to develop new models, which are better suited to understanding and analysing the dissemination, and to develop new methods to optimize it.
At a broad level, we can view the participation of users in the process of dissemination as falling into one of two settings: collaborative or competitive. In the collaborative setting, the participants work together in crafting knowledge online, e.g., by asking questions and contributing answers, or by discussing news or opinion pieces. In contrast, as competitors, they vie for the attention of their followers on social media. This thesis investigates both settings.
The first part of the thesis focuses on the understanding and analysis of content created online collaboratively. To this end, I propose models for understanding the complexity of the content of collaborative online discussions by looking exclusively at the signals of agreement and disagreement expressed by the crowd. This leads to a formal notion of the complexity of opinions and online discussions. Next, I turn my attention to the participants of the crowd, i.e., the creators and consumers themselves, and propose an intuitive model for both the evolution of their expertise and the value of the content they collaboratively contribute to and learn from on online Q&A forums. The second part of the thesis explores the competitive setting. It provides methods to help creators gain more attention from their followers on social media. In particular, I consider the problem of controlling the timing of users' posts with the aim of maximizing the attention their posts receive, under the idealized setting of full knowledge of the timing of others' posts. To solve it, I develop a general reinforcement learning based method which is shown to perform well on the when-to-post problem and which can be employed in many other settings as well, e.g., determining the reviewing times for spaced repetition that lead to optimal learning. The last part of the thesis looks at methods for relaxing the idealized assumption of full knowledge. The underlying question of determining the visibility of one's posts in the followers' feeds becomes difficult to answer when constantly observing the feeds of all followers does not scale. I explore the links of this problem to the well-studied problem of web crawling to update a search engine's index, and I provide algorithms with performance guarantees for feed observation policies that minimize the error in the estimate of the visibility of one's posts.
The task of printed Optical Character Recognition (OCR), though considered ``solved'' by many, still poses several challenges. The complex grapheme structure of many scripts, such as Devanagari and Urdu Nastaleeq, greatly lowers the performance of state-of-the-art OCR systems.
Moreover, the digitization of historical and multilingual documents still requires much probing. The lack of benchmark datasets further complicates the development of reliable OCR systems. This thesis aims to find answers to some of these challenges using contemporary machine learning technologies. Specifically, Long Short-Term Memory (LSTM) networks have been employed to OCR modern as well as historical monolingual documents. The excellent OCR results obtained on these have led us to extend their application to multilingual documents.
The first major contribution of this thesis is to demonstrate the usability of LSTM networks for monolingual documents. The LSTM networks yield very good OCR results on various modern and historical scripts, without using sophisticated features or post-processing techniques. The set of modern scripts includes modern English, Urdu Nastaleeq, and Devanagari. To address the challenge of OCR of historical documents, this thesis focuses on Old German Fraktur script, the medieval Latin script of the 15th century, and Polytonic Greek script. LSTM-based systems outperform contemporary OCR systems on all of these scripts. To cater for the lack of ground-truth data, this thesis proposes a new methodology, combining segmentation-based and segmentation-free OCR approaches, to OCR scripts for which no transcribed training data is available.
Another major contribution of this thesis is the development of a novel multilingual OCR system. A unified framework for dealing with different types of multilingual documents has been proposed. The core motivation behind this generalized framework is the human reading ability to process multilingual documents, where no script identification takes place.
In this design, the LSTM networks recognize multiple scripts simultaneously without the need to identify different scripts. The first step in building this framework is the realization of a language-independent OCR system which recognizes multilingual text in a single step. This language-independent approach is then extended to script-independent OCR that can recognize multiscript documents using a single OCR model. The proposed generalized approach yields a low error rate (1.2%) on a test corpus of English-Greek bilingual documents.
In summary, this thesis aims to extend the research in document recognition from modern Latin scripts to Old Latin, to Greek, and to other ``under-privileged'' scripts such as Devanagari and Urdu Nastaleeq. It also attempts to add a different perspective on dealing with multilingual documents.
There is a well-known relationship between alternating automata on finite words and symbolically represented nondeterministic automata on finite words. This relationship is of practical relevance because it makes it possible to combine the advantages of alternating and symbolically represented nondeterministic automata on finite words. For infinite words, however, the situation is unclear. Therefore, this work investigates the relationship between alternating omega-automata and symbolically represented nondeterministic omega-automata. We identify classes of alternating omega-automata that are as expressive as safety, liveness, and deterministic prefix automata, respectively. Moreover, some very simple symbolic nondeterminization procedures are developed for the classes corresponding to safety and liveness properties.
The goal of the Meta-Akad project is to give learners and teachers access to teaching material that is as simple, comprehensive, and fast as possible. Several aspects beyond those of a pure internet search engine are taken into account: in addition to building a large and representative collection of learning documents, the documents are to be catalogued using library methods and enriched with extensive metadata, such as a content classification. To address the questionable quality of documents from the internet, Meta-Akad offers the possibility of ensuring quality through review procedures. Because of this added value, the project sees itself as a virtual library accessible via the internet. Access to the collected documents is realized through a web-based interface, which supports both searching by keywords and browsing the document collection. A keyword search is to cover not only the metadata but also the entire textual content of the documents. The integration of full-text search into the existing metadata search process is the core topic of this project report.
Interconnected, autonomously driving cars shall realize the vision of zero-accident, low-energy mobility despite rapidly increasing traffic volume. Tightly interconnected medical devices and health-care systems shall ensure the health of an aging society. And interconnected virtual power plants based on renewable energy sources shall ensure a clean energy supply in a society that consumes more energy than ever before. Such open systems of systems will play an essential role in economy and society.
Open systems of systems dynamically connect to each other in order to collectively provide a superordinate functionality that could not be provided by any single system alone. The structure as well as the behavior of an open system of systems emerge dynamically at runtime, leading to very flexible solutions that work under various environmental conditions. This flexibility and adaptivity of systems of systems are a key to realizing the scenarios mentioned above.
On the other hand, however, this leads to uncertainties, since the emerging structure and behavior of a system of systems can hardly be anticipated at design time. This impedes the indispensable safety assessment of such systems in safety-critical application domains. Existing safety assurance approaches presume that a system is completely specified and configured prior to a safety assessment; therefore, they cannot be applied to open systems of systems. In consequence, safety assurance could easily become a bottleneck impeding or even preventing the success of this promising new generation of embedded systems.
For this reason, this thesis introduces an approach for the safety assurance of open systems of systems. To this end, we shift parts of the safety assurance lifecycle to runtime in order to assess the safety of the emerging system of systems dynamically. We use so-called safety models at runtime to enable the systems themselves to assess the safety of an emerging system of systems. This leads to a very flexible runtime safety assurance framework.
To this end, this thesis describes the fundamental knowledge on safety assurance and model-driven development, which are the indispensable prerequisites for defining safety models at runtime. Based on these fundamentals, we illustrate how we modularized and formalized conventional safety assurance techniques using model-based representations and analyses. Finally, we explain how we advanced these design-time safety models to safety models that can be used by the systems themselves at runtime, and how we use these runtime safety models to create an efficient and flexible runtime safety assurance framework for open systems of systems.