Roughly sketched, the system should be able to generate a plan for the machine production of a turned part from a given engineering drawing of that part. Following the case-based reasoning approach, the system's task is to find, among a set of known turned parts for which a production plan has already been created, the part whose representation is most similar to that of the input part. The plan of this most similar part is then to be modified and adapted so that the given part can be manufactured with it. A central problem here is the definition of the similarity measure, which must in any case take the manufacturing aspect into account.
Based on experiences from an autonomous mobile robot project called MOBOT-III, we found hard real-time constraints for the operating-system design. ALBATROSS is "a flexible multi-tasking and real-time network operating-system kernel", not limited to mobile robot projects, but useful wherever a high reliability of a real-time system has to be guaranteed. The focus of this article is on a communication scheme that fulfils the demanded (hard real-time) assurances without introducing time delays or jitter on the critical information channels. The central chapters discuss a locking-free shared buffer management that works without the need for interrupts, and a way to arrange the communication architecture so as to produce minimal protocol overhead and short cycle times. Most of the remaining communication capacity (if there is any) is used for redundant transfers, increasing the reliability of the whole system. ALBATROSS is currently implemented on a multi-processor VMEbus system.
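The abstract gives no implementation details of the locking-free buffer management; purely as an illustration of one classical interrupt-free scheme (not the ALBATROSS design), a single-producer/single-consumer ring buffer can be managed with two indices, each written by only one side. Real kernels additionally rely on atomic word-sized stores and memory barriers, which this Python sketch cannot express.

```python
# Conceptual single-producer / single-consumer ring buffer: the producer only
# writes `head`, the consumer only writes `tail`, so no lock is required.
class SpscRingBuffer:
    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)  # one slot stays empty to tell full from empty
        self.head = 0                       # written only by the producer
        self.tail = 0                       # written only by the consumer

    def put(self, item):
        nxt = (self.head + 1) % len(self.buf)
        if nxt == self.tail:                # buffer full: caller drops or retries
            return False
        self.buf[self.head] = item
        self.head = nxt                     # publish only after the data is in place
        return True

    def get(self):
        if self.tail == self.head:          # buffer empty
            return None
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        return item
```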
This paper addresses the problem of adaptability over an infinite period of time with regard to dynamic networks. A never-ending flow of examples has to be clustered, based on a distance measure. The developed model is based on the self-organizing feature maps of Kohonen [6], [7] and some adaptations by Fritzke [3]. The problem of dynamic surface classification is embedded in the SPIN project, where sub-symbolic abstractions based on a 3-D scanned environment are performed.
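For readers unfamiliar with the underlying model, the following hedged sketch shows the core Kohonen-style update step for clustering a stream of samples; the growing/adaptive behaviour of Fritzke's variants (unit insertion and removal) is omitted, and all parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One online update: weights is an (n_units, dim) array of reference
    vectors, x is a (dim,) input sample from the never-ending stream."""
    dists = np.linalg.norm(weights - x, axis=1)
    bmu = int(np.argmin(dists))                       # best-matching unit
    # neighbourhood on a 1-D chain of units; a 2-D grid works analogously
    grid_dist = np.abs(np.arange(len(weights)) - bmu)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))  # neighbourhood kernel
    weights += lr * h[:, None] * (x - weights)        # pull units towards x
    return bmu
```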
The problem to be discussed here is the use of neural network clustering techniques on a mobile robot in order to build qualitative topological environment maps. This has to be done in real time, i.e. the internal world model has to be adapted to the flow of sensor samples without the possibility of stopping this data flow. Our experiments are carried out in a simulation environment as well as on a robot called ALICE.
Based on the experiences from an autonomous mobile robot project called MOBOT-III, we found hard real-time constraints for the operating-system design. ALBATROSS is "a flexible multi-tasking and real-time network operating-system kernel". The focus of this article is on a communication scheme fulfilling the previously demanded assurances. The central chapters discuss the shared buffer management and the way to design the communication architecture. Some further aspects beyond the strict real-time requirements, such as the possibilities to control and monitor a running system, are mentioned. ALBATROSS is currently implemented on a multi-processor VMEbus system.
Based on the idea of using topological feature maps instead of geometric environment maps in practical mobile robot tasks, we show an applicable way to navigate on such topological maps. The main features of this kind of navigation are the handling of very inaccurate position (and orientation) information and the implicit modelling of complex kinematics during an adaptation phase. Due to the lack of proper a-priori knowledge, a reinforcement-based model is used for the translation of navigator commands into motor actions. Instead of employing a backpropagation network for the central associative memory module (attaching action probabilities to sensor situations and navigator commands, respectively), a much faster dynamic cell structure system based on dynamic feature maps is shown. Standard graph-search heuristics like A* are applied in the planning phase.
SPIN-NFDS Learning and Preset Knowledge for Surface Fusion - A Neural Fuzzy Decision System -
(1993)
The problem to be discussed in this paper may be characterized in short by the question: "Do these two surface fragments belong together (i.e. belong to the same surface)?" The presented techniques try to benefit from predefined knowledge as well as from the possibility to refine and adapt this knowledge according to a (changing) real environment, resulting in a combination of fuzzy decision systems and neural networks. The results are encouraging (fast convergence speed, high accuracy), and the model might be used for a wide range of applications. The general frame surrounding the work in this paper is the SPIN project, where the emphasis is on sub-symbolic abstractions based on a 3-D scanned environment.
World models for mobile robots, as introduced in many projects, are mostly redundant with respect to similar situations detected at different places. The present paper proposes a method for the dynamic generation of a minimal world model based on these redundancies. The technique is an extension of qualitative topological world-modelling methods. As a central aspect, the reliability regarding error tolerance and stability is emphasized. The proposed technique places very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
This article discusses a qualitative, topological and robust world-modelling technique with special regard to navigation tasks for mobile robots operating in unknown environments. As a central aspect, the reliability regarding error tolerance and stability is emphasized. Benefits and problems involved in exploration as well as in navigation tasks are discussed. The proposed method places very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
Self-localization in unknown environments, or, respectively, the correlation of current and former impressions of the world, is an essential ability for most mobile robots. The method proposed in this article is the construction of a qualitative, topological world model as a basis for self-localization. As a central aspect, the reliability regarding error tolerance and stability is emphasized. The proposed techniques place very low demands on the kind and quality of the employed sensors as well as on the kinematic precision of the utilized mobile platform. Hard real-time constraints can be handled due to the low computational complexity. The principal discussions are supported by real-world experiments with the mobile robot.
Visual search has been investigated by many researchers, inspired by the biological fact that the sensory elements on the mammalian retina are not equally distributed. Therefore the focus of attention (the area of the retina with the highest density of sensory elements) has to be directed in a way that efficiently gathers data according to certain criteria. The work discussed in this article concentrates on applying a laser range finder instead of a silicon retina. The laser range finder is maximally focused at any time, but for that reason a low-resolution total-scene image, which camera-like devices provide from the start, cannot be used here. By adapting a couple of algorithms, the edge-scanning module steering the laser range finder is able to trace a detected edge. Based on the data scanned so far, two questions have to be answered. First: "Should the current (edge-) scanning be interrupted in order to give another area of interest a chance of being investigated?" And second: "Where should a new edge-scanning start after being interrupted?" These two decision problems might be solved by a range of decision systems. The correctness of the decisions depends widely on the actual environment, and the underlying rules may not be well initialized with a-priori knowledge. We therefore present a version of a reinforcement decision system together with an overall scheme for efficiently controlling highly focused devices.
ALICE
(1994)
The MARMOT method, developed at the Fraunhofer Institute for Experimental Software Engineering (IESE), describes an approach for the component-based development of embedded systems. It builds on the KobrA method, also developed at IESE, and extends it with specific requirements for embedded systems. The underlying idea is to model, implement, and test individual components so that quality-assured components can later be reused and composed into applications without having to be developed and tested again and again. Within this project work, an anti-collision system for a model car was to be developed using the MARMOT method. After selecting suitable hardware, a basic concept for the sensor system was developed first. The signals delivered by the RADAR sensor used must be conditioned before they can be processed further by a microcontroller. For this purpose, a sensor board had to be developed before the actual system modelling. This was followed by the modelling of the anti-collision system in UML 2.0 and its implementation in C. Finally, the interaction of the hardware and software was tested.
Nowadays, accounting, charging, and billing for users' network resource consumption are commonly used to facilitate reasonable network usage, control congestion, allocate cost, gain revenue, etc. In traditional IP traffic accounting systems, IP addresses are used to identify the corresponding consumers of the network resources. However, there are situations in which IP addresses cannot be used to identify users uniquely, for example in multi-user systems. In these cases, network resource consumption can only be ascribed to the owners of the hosts instead of the real users who have consumed the network resources. Therefore, accurate accountability in these systems is practically impossible. This is a flaw of the traditional IP-address-based IP traffic accounting technique. This dissertation proposes a user-based IP traffic accounting model which facilitates collecting network resource usage information on a per-user basis. With user-based IP traffic accounting, IP traffic can be distinguished not only by IP addresses but also by users. In this dissertation, three different schemes that realize the user-based IP traffic accounting mechanism are discussed in detail. The in-band scheme utilizes the IP header to convey the user information of the corresponding IP packet. The Accounting Agent residing in the measured host intercepts IP packets passing through it; it then identifies the users of these IP packets and inserts user information into them. With this mechanism, a meter located at a key position in the network can intercept the IP packets tagged with user information and extract not only statistical information but also IP addresses and user information from the packets in order to generate accounting records with user information. The out-of-band scheme is the counterpart of the in-band scheme. It also uses an Accounting Agent to intercept IP packets and identify the users of IP traffic. However, the user information is transferred through a separate channel, independent of the transmission of the corresponding IP packets. The Multi-IP scheme provides a different solution for identifying users of IP traffic: it assigns each user in a measured host a unique IP address. In this way, an IP address can be used to identify a user uniquely and without ambiguity, so that traditional IP-address-based accounting techniques can be applied to achieve the goal of user-based IP traffic accounting. In this dissertation, a user-based IP traffic accounting prototype system developed according to the out-of-band scheme is also introduced. The application of the user-based IP traffic accounting model in a distributed computing environment is also discussed.
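To make the idea concrete, the following illustrative sketch (not taken from the dissertation; all field names are invented) aggregates accounting records by the pair of IP address and user, which is exactly what IP-address-only metering cannot do on a multi-user host.

```python
from collections import defaultdict

# Accounting records keyed by (source IP, user), assuming the Accounting Agent
# has already tagged each packet with its user (in-band or out-of-band).
records = defaultdict(lambda: {"packets": 0, "bytes": 0})

def account(packet):
    """packet: dict with hypothetical fields 'src_ip', 'user', 'length'."""
    key = (packet["src_ip"], packet["user"])
    records[key]["packets"] += 1
    records[key]["bytes"] += packet["length"]

account({"src_ip": "10.0.0.5", "user": "alice", "length": 1500})
account({"src_ip": "10.0.0.5", "user": "bob",   "length": 400})
# Traffic from the same multi-user host is now attributable to individual users.
```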
Conditional Compilation (CC) is frequently used as a variation mechanism in software product lines (SPLs). However, as an SPL evolves, the variable code realized by CC erodes in the sense that it becomes overly complex and difficult to understand and maintain. As a result, SPL productivity goes down and puts the expected advantages more and more at risk. To investigate this variability erosion and keep productivity above a sufficiently good level, in this paper we 1) investigate several erosion symptoms in an industrial SPL and 2) present a variability improvement process that includes two major improvement strategies. While one strategy is to optimize variable code within the scope of CC, the other is to transition from CC to a new variation mechanism called Parameterized Inclusion. Both improvement strategies can be conducted automatically, and the result of the CC optimization is provided. Related issues such as the applicability and cost of the improvement are also discussed.
As a Software Product Line (SPL) evolves with an increasing number of features and feature values, the feature correlations become extremely intricate, and the specifications of these correlations tend to be either incomplete or inconsistent with their realizations, causing misconfigurations in practice. In order to guide product configuration processes, we present a solution framework to recover complex feature correlations from existing product configurations. These correlations are further pruned automatically and validated by domain experts. In the implementation, we use association mining techniques to automatically extract strong association rules as potential feature correlations. The approach is evaluated on a large-scale industrial SPL in the embedded systems domain, and we identify a large number of complex feature correlations.
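As a toy illustration of the association-mining step (not the paper's actual tooling; feature names and thresholds are invented), pairwise rules can be ranked by support and confidence computed directly from existing configurations.

```python
from itertools import combinations
from collections import Counter

# Hypothetical product configurations, each a set of selected features.
configs = [
    {"ABS", "ESP", "RainSensor"},
    {"ABS", "ESP"},
    {"ABS", "RainSensor"},
    {"ABS", "ESP", "ParkAssist"},
]
min_support, min_confidence = 0.5, 0.9

item_count = Counter(f for c in configs for f in c)
pair_count = Counter(p for c in configs for p in combinations(sorted(c), 2))

n = len(configs)
for (a, b), cnt in pair_count.items():
    for lhs, rhs in ((a, b), (b, a)):
        support = cnt / n                    # fraction of configs containing both
        confidence = cnt / item_count[lhs]   # P(rhs selected | lhs selected)
        if support >= min_support and confidence >= min_confidence:
            print(f"{lhs} => {rhs}  (support={support:.2f}, confidence={confidence:.2f})")
```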
Automata theory has given rise to a variety of automata models that consist
of a finite-state control and an infinite-state storage mechanism. The aim
of this work is to provide insights into how the structure of the storage
mechanism influences the expressiveness and the analyzability of the
resulting model. To this end, it presents generalizations of results about
individual storage mechanisms to larger classes. These generalizations
characterize the storage mechanisms for which the given result remains
true and those for which it fails.
In order to speak of classes of storage mechanisms, we need an overarching
framework that accommodates each of the concrete storage mechanisms we wish
to address. Such a framework is provided by the model of valence automata,
in which the storage mechanism is represented by a monoid. Since the monoid
serves as a parameter specifying the storage mechanism, our aim
translates into the question: For which monoids does the given
(automata-theoretic) result hold?
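As a compact reminder of the model (the standard definition, stated here in our own notation rather than quoted from the thesis): a valence automaton labels each transition with an input symbol and a monoid element, and it accepts a word exactly if some run from the initial state to a final state multiplies its monoid elements to the identity.

```latex
% Standard shape of the valence automaton model over a monoid M.
\[
  A = (Q, \Sigma, M, \delta, q_0, F), \qquad
  \delta \subseteq Q \times (\Sigma \cup \{\varepsilon\}) \times M \times Q,
\]
\[
  L(A) = \{\, w \in \Sigma^* \mid
    q_0 \xrightarrow{\,(w,\,m)\,} q_f \text{ for some } q_f \in F
    \text{ with } m = 1_M \,\}.
\]
```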
As a first result, we present an algebraic characterization of those monoids
over which valence automata accept only regular languages. In addition, it
turns out that for each monoid, this is the case if and only if valence
grammars, an analogous grammar model, can generate only context-free
languages.
Furthermore, we are concerned with closure properties: We study which
monoids result in a Boolean closed language class. For every language class
that is closed under rational transductions (in particular, those induced by
valence automata), we show: If the class is Boolean closed and contains any
non-regular language, then it already includes the whole arithmetical
hierarchy.
This work also introduces the class of graph monoids, which are defined by
finite graphs. By choosing appropriate graphs, one can realize a number of
prominent storage mechanisms, but also combinations and variants thereof.
Examples are pushdowns, counters, and Turing tapes. We can therefore relate
the structure of the graphs to computational properties of the resulting
storage mechanisms.
In the case of graph monoids, we study (i) the decidability of the emptiness
problem, (ii) which storage mechanisms guarantee semilinear Parikh images,
(iii) when silent transitions (i.e. those that read no input) can be
avoided, and (iv) which storage mechanisms permit the computation of
downward closures.
Wireless LANs operating within unlicensed frequency bands require random access schemes such as CSMA/CA, so that wireless networks from different administrative domains (for example, wireless community networks) may co-exist without central coordination, even when they happen to operate on the same radio channel. Yet, it is evident that this lack of coordination leads to an inevitable loss in efficiency due to contention on the MAC layer. The interesting question is which efficiency gain may be obtained by adding coordination to existing, unrelated wireless networks, for example by self-organization. In this paper, we present a methodology based on a mathematical programming formulation to determine the parameters (assignment of stations to access points, signal strengths, and channel assignment of both access points and stations) for a scenario of co-existing CSMA/CA-based wireless networks, such that the contention between these networks is minimized. We demonstrate how this discrete, non-linear optimization problem can be solved exactly for small problems. For larger scenarios, we present a genetic algorithm specifically tuned for finding near-optimal solutions, and compare its results to theoretical lower bounds. Overall, we provide a benchmark on the minimum contention problem for coordination mechanisms in CSMA/CA-based wireless networks.
The paper focuses on the problem of trajectory planning for flexible redundant robot manipulators (FRMs) in joint space. Compared to irredundant flexible manipulators, FRMs present additional possibilities in trajectory planning due to their kinematic redundancy. A trajectory planning method to minimize the vibration of FRMs, based on Genetic Algorithms (GAs), is presented. Kinematic redundancy is integrated into the presented method as a planning variable. Quadrinomial and quintic polynomials are used to describe the segments which connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as a constrained optimization problem. A planar FRM with three flexible links is used in simulation. A case study shows that the method is applicable.
Point-to-Point Trajectory Planning of Flexible Redundant Robot Manipulators Using Genetic Algorithms
(2001)
The paper focuses on the problem of point-to-point trajectory planning for flexible redundant robot manipulators (FRMs) in joint space. Compared with irredundant flexible manipulators, an FRM possesses additional possibilities in point-to-point trajectory planning due to its kinematic redundancy. A trajectory planning method to minimize vibration and/or execution time of a point-to-point motion, based on Genetic Algorithms (GAs), is presented for FRMs. Kinematic redundancy is integrated into the presented method as a planning variable. Quadrinomial and quintic polynomials are used to describe the segments that connect the initial, intermediate, and final points in joint space. The trajectory planning of FRMs is formulated as a constrained optimization problem. A planar FRM with three flexible links is used in simulation. Case studies show that the method is applicable.
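For illustration, a single quintic joint-space segment of the kind referred to above can be written with six coefficients fixed by position, velocity, and acceleration constraints at both segment ends; the concrete boundary values and the quadrinomial segments used in the paper are not reproduced here.

```latex
% One joint-space segment q(t) on [0, T] with fully specified boundary state.
\[
  q(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5,
\]
\[
  q(0) = q_s,\quad \dot q(0) = v_s,\quad \ddot q(0) = \alpha_s,\qquad
  q(T) = q_g,\quad \dot q(T) = v_g,\quad \ddot q(T) = \alpha_g .
\]
```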
The vibration induced in a deformable object during automatic handling by robot manipulators can often be bothersome. This paper presents a force/torque-sensor-based method for handling deformable linear objects (DLOs) in a manner suitable for eliminating acute vibration. An adjustment-motion that can be attached to the end of an arbitrary end-effector trajectory is employed to eliminate the vibration of deformable objects. In contrast to model-based methods, the presented sensor-based method does not employ any information from previous motions. The adjustment-motion is generated automatically by analyzing data from a force/torque sensor mounted on the robot wrist. A template matching technique is used to find the matching point between the vibrational signal of the DLO and a template. Experiments are conducted to test the new method under various conditions. The results demonstrate the effectiveness of the sensor-based adjustment-motion.
Manipulating Deformable Linear Objects: Attachable Adjustment-Motions for Vibration Reduction
(2001)
This paper addresses the problem of handling deformable linear objects (DLOs) in a way that avoids acute vibration. Different types of adjustment-motions that eliminate the vibration of deformable objects and can be attached to the end of an arbitrary end-effector trajectory are presented. To describe the dynamics of deformable linear objects, the finite element method is used to derive the dynamic differential equations. A genetic algorithm is used to find the optimal adjustment-motion for each simulation example. Experiments are conducted to verify the presented manipulation method.
Manipulating Deformable Linear Objects: Model-Based Adjustment-Motion for Vibration Reduction
(2001)
This paper addresses the problem of handling deformable linear objects (DLOs) in a way that avoids acute vibration. An adjustment-motion that eliminates the vibration of DLOs and can be attached to the end of an arbitrary end-effector trajectory is presented, based on the concept of open-loop control. The presented adjustment-motion is a kind of agile end-effector motion with limited scope. To describe the dynamics of deformable linear objects, the finite element method is used to derive the dynamic differential equations. A genetic algorithm is used to find the optimal adjustment-motion for each simulation example. In contrast to previous approaches, the presented method can be treated as one of the manipulation skills and can be applied to different cases without major changes to the method.
It is difficult for robots to handle a vibrating deformable object. Even for human beings it is a high-risk operation to, for example, insert a vibrating linear object into a small hole. However, fast manipulation using a robot arm is not just a dream; it may be achieved if some important features of the vibration are detected online. In this paper, we present an approach for fast manipulation using a force/torque sensor mounted on the robot's wrist. A template matching method is employed to recognize the vibrational phase of the deformable object. Therefore, fast manipulation can be performed with a high success rate, even if there is acute vibration. Experiments in which a deformable object is inserted into a hole are conducted to test the presented method. The results demonstrate that the presented sensor-based online fast manipulation is feasible.
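The following hedged sketch illustrates the general idea of locating a vibration template in a 1-D force/torque signal by normalized cross-correlation; it is not the authors' implementation, and identical sampling rates for signal and template are an assumption.

```python
import numpy as np

def best_match(signal, template):
    """Slide the template over the signal and return the offset with the
    highest normalized correlation (a value in [-1, 1])."""
    signal = np.asarray(signal, dtype=float)
    t = np.asarray(template, dtype=float)
    t = (t - t.mean()) / (t.std() + 1e-12)            # zero-mean, unit-variance template
    best_offset, best_score = -1, -np.inf
    for i in range(len(signal) - len(t) + 1):
        w = signal[i:i + len(t)]
        w = (w - w.mean()) / (w.std() + 1e-12)        # normalize the current window
        score = float(np.dot(w, t)) / len(t)
        if score > best_score:
            best_offset, best_score = i, score
    return best_offset, best_score
```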
The safety of embedded systems is becoming more and more important nowadays. Fault Tree Analysis (FTA) is a widely used technique for analyzing the safety of embedded systems. A standardized tree-like structure called a Fault Tree (FT) models the failures of the system. The Component Fault Tree (CFT) provides an advanced modeling concept for adapting traditional FTs to the hierarchical architecture model used in system design. Minimal Cut Set (MCS) analysis is a method for qualitative analysis based on FTs. Each MCS represents a minimal combination of component failures of a system, called basic events, which may together cause the top-level system failure. The ordinary representations of MCSs consist of plain text and data tables with little additional supporting visual and interactive information. Importance analysis based on FTs or CFTs estimates the contribution of each potential basic event to a top-level system failure. The resulting importance values of basic events are typically presented in summary views, e.g., data tables and histograms. There is little visual integration between these forms and the FT (or CFT) structure. The safety of a system can be improved in an iterative process, called the safety improvement process, based on FTs and taking relevant constraints, e.g., cost, into account. Typically, the relevant data regarding the safety improvement process are presented across multiple views with few interactive associations. In short, the ordinary representation concepts cannot effectively support these analyses.
We propose a set of visualization approaches to address the issues mentioned above and to facilitate those analyses in terms of their representations.
Contributions:
1. To support the MCS analysis, we propose a matrix-based visualization that allows detailed data of the MCSs of interest to be viewed while maintaining a satisfactory overview of a large number of MCSs for effective navigation and pattern analysis. Engineers can also intuitively analyze the influence of the MCSs of a CFT.
2. To facilitate the importance analysis based on the CFT, we propose a hybrid visualization approach that combines icicle-layout-style architectural views with the CFT structure. This approach helps to identify vulnerable components, taking the hierarchies of the system architecture into account, and to investigate the logical failure propagation of the important basic events.
3. We propose a visual safety improvement process that integrates an enhanced decision tree with a scatter plot. This approach allows one to visually investigate the detailed data related to individual steps of the process while maintaining an overview of the process, and it facilitates constructing and analyzing solutions for improving the safety of a system.
Using our visualization approaches, the MCS analysis, the importance analysis, and the safety improvement process based on the CFT can be facilitated.
This paper discusses the problem of automatic off-line programming and motion planning for industrial robots. First, a new concept consisting of three steps is proposed. In the first step, a new method for on-line motion planning is introduced. The motion planning method is based on the A* search algorithm and works in the implicit configuration space. During the search, collisions are detected in the explicitly represented Cartesian workspace by hierarchical distance computation. In the second step, the trajectory planner has to transform the path into a time- and energy-optimal robot program. The practical application of these two steps strongly depends on a method for robot calibration with high accuracy, thus mapping the virtual world onto the real world, which is discussed in the third step.
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A* search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discretized configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretization, the new approach usually shows linear speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application programming interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture which can easily be adapted to any industrial kinematics and application. Based on a client/server principle, the control unit establishes an open user interface for including application-specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This implies high accuracy of the path execution and improves the overall system performance.
A new problem in the automated off-line programming of industrial robot applications is investigated: Multi-Goal Path Planning consists in finding a collision-free path connecting a set of goal poses while minimizing, e.g., the total path length. Our solution is based on an earlier reported path planner for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. To control the path planner, four different goal selection methods are introduced and compared. While the Random and the Nearest Pair selection methods can be used with any path planner, the Nearest Goal and the Adaptive Pair selection methods are favorable for our planner. With the latter two goal selection methods, the Multi-Goal Path Planning task can be significantly accelerated, because they are able to automatically solve the simplest path planning problems first. In summary, compared to Random or Nearest Pair selection, this new Multi-Goal Path Planning approach results in a further cost reduction of the programming phase.
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A* search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discretized configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretization, the new approach usually shows linear, and sometimes even superlinear, speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
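As a conceptual illustration of the search underlying these planners (not the parallel implementation itself), an A*-style expansion over an implicitly discretized configuration space only needs a neighbour generator, a collision test, and an admissible heuristic; all three callbacks below are assumptions of this sketch.

```python
import heapq
import itertools

def a_star(start, goal, neighbors, collision_free, h):
    """neighbors(q) yields (next_configuration, step_cost) pairs on the grid,
    collision_free(q) stands in for the hierarchical distance computation in
    the Cartesian workspace, h(q) is an admissible heuristic."""
    tie = itertools.count()                      # tie-breaker for equal f-values
    open_list = [(h(start), next(tie), 0.0, start, None)]
    g_cost, parent = {start: 0.0}, {}
    while open_list:
        _, _, g, q, par = heapq.heappop(open_list)
        if q in parent:                          # already expanded with a better cost
            continue
        parent[q] = par
        if q == goal:                            # reconstruct the path back to start
            path = [q]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path))
        for nq, step in neighbors(q):
            ng = g + step
            if collision_free(nq) and ng < g_cost.get(nq, float("inf")):
                g_cost[nq] = ng
                heapq.heappush(open_list, (ng + h(nq), next(tie), ng, nq, q))
    return None                                  # no collision-free path found
```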
Applications of Efficient Methods in Automation - Universität Karlsruhe at the SPS97 in Nürnberg -
(1998)
Motion planning for industrial robots is a necessary prerequisite for autonomous systems to move through their environment without collisions. Taking dynamic obstacles into account at runtime, however, requires powerful algorithms in order to solve this task in real time. One way to speed up these algorithms is the efficient use of scalable parallel processing. The software implementation can only be successful, however, if a parallel computer is available that offers high data throughput with low latency. In addition, this parallel computer must be operable with reasonable effort and offer a good price/performance ratio so that parallel processing is adopted more widely in industry. This article presents a workstation cluster based on nine standard PCs that are interconnected by a special communication card. The individual sections describe the experience gathered during commissioning, system administration, and application. As an example of an application on this cluster, a parallel motion planner for industrial robots is described.
This article presents contributions in the field of path planning for industrial robots with 6 degrees of freedom, summarizing the results of our research over the last 4 years at the Institute for Process Control and Robotics at the University of Karlsruhe. The presented path planning approach works in an implicit and discretized C-space. Collisions are detected in the Cartesian workspace by a hierarchical distance computation. The method is based on the A* search algorithm and needs no essential off-line computation. A new optimal discretization method leads to smaller search spaces, thus speeding up the planning. For further acceleration, the search was parallelized; with a static load distribution, good speedups can be achieved. By extending the algorithm to a bidirectional search, the planner is able to automatically select the easier search direction. The new dynamic switching of start and goal finally leads to multi-goal path planning, which is able to compute a collision-free path between a set of goal poses (e.g., spot welding points) while minimizing the total path length.
On the Complexity of the Uncapacitated Single Allocation p-Hub Median Problem with Equal Weights
(2007)
The Super-Peer Selection Problem is an optimization problem in network topology construction. It can be cast as a special case of a Hub Location Problem, more precisely an Uncapacitated Single Allocation p-Hub Median Problem with equal weights. We show that this problem is still NP-hard by reduction from Max Clique.
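For reference, one common way to state the objective of the Uncapacitated Single Allocation p-Hub Median Problem (here with equal, i.e. unit, weights) is the following; this is a standard textbook formulation rather than a quotation from the paper, with χ, α, and δ denoting the usual collection, transfer, and distribution factors.

```latex
% Every node i is allocated to exactly one hub k(i) drawn from a hub set H of
% size p; d denotes the distance matrix.
\[
  \min_{\substack{H \subseteq V,\ |H| = p\\ k\colon V \to H}}
  \ \sum_{i \in V} \sum_{j \in V}
    \bigl( \chi\, d_{i,k(i)} + \alpha\, d_{k(i),k(j)} + \delta\, d_{k(j),j} \bigr).
\]
```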
We present a convenient notation for positive/negative-conditional equations. The idea is to merge rules specifying the same function by using case-, if-, match-, and let-expressions. Based on the presented macro-rule-construct, positive/negative-conditional equational specifications can be written on a higher level. A rewrite system translates the macro-rule-constructs into positive/negative-conditional equations.
We present an inference system for clausal theorem proving w.r.t. various kinds of inductive validity in theories specified by constructor-based positive/negative-conditional equations. The reduction relation defined by such equations has to be (ground) confluent, but need not be terminating. Our constructor-based approach is well suited for inductive theorem proving in the presence of partially defined functions. The proposed inference system provides explicit induction hypotheses and can be instantiated with various well-founded induction orderings. While emphasizing a well-structured, clear design of the inference system, our fundamental design goal is user-orientation and practical usefulness rather than theoretical elegance. The resulting inference system is comprehensive and relatively powerful, but requires a sophisticated concept of proof guidance, which is not treated in this paper. This research was supported by the Deutsche Forschungsgemeinschaft, SFB 314 (D4 project).
We study the combination of the following already known ideas for showing confluence of unconditional or conditional term rewriting systems into practically more useful confluence criteria for conditional systems: our syntactic separation into constructor and non-constructor symbols; Huet's introduction and Toyama's generalization of parallel closedness for non-noetherian unconditional systems; the use of shallow confluence for proving confluence of noetherian and non-noetherian conditional systems; the idea that certain kinds of limited confluence can be assumed for checking the fulfilledness or infeasibility of the conditions of conditional critical pairs; and the idea that (when termination is given) only prime superpositions have to be considered and certain normalization restrictions can be applied to the substitutions fulfilling the conditions of conditional critical pairs. Besides combining and improving already known methods, we present the following new ideas and results: we strengthen the criterion for overlay joinable noetherian systems, and, by using the expressiveness of our syntactic separation into constructor and non-constructor symbols, we are able to present criteria for level confluence that are not actually criteria for shallow confluence, and also to weaken the severe requirement of normality (stiffened with left-linearity) in the criteria for shallow confluence of noetherian and non-noetherian conditional systems to the easily satisfied requirement of quasi-normality. Finally, the whole paper also gives a practically useful overview of the syntactic means for showing confluence of conditional term rewriting systems.
Modern digital imaging technologies, such as digital microscopy or micro-computed tomography, deliver such large amounts of 2D and 3D-image data that manual processing becomes infeasible. This leads to a need for robust, flexible and automatic image analysis tools in areas such as histology or materials science, where microstructures are being investigated (e.g. cells, fiber systems). General-purpose image processing methods can be used to analyze such microstructures. These methods usually rely on segmentation, i.e., a separation of areas of interest in digital images. As image segmentation algorithms rarely adapt well to changes in the imaging system or to different analysis problems, there is a demand for solutions that can easily be modified to analyze different microstructures, and that are more accurate than existing ones. To address these challenges, this thesis contributes a novel statistical model for objects in images and novel algorithms for the image-based analysis of microstructures. The first contribution is a novel statistical model for the locations of objects (e.g. tumor cells) in images. This model is fully trainable and can therefore be easily adapted to many different image analysis tasks, which is demonstrated by examples from histology and materials science. Using algorithms for fitting this statistical model to images results in a method for locating multiple objects in images that is more accurate and more robust to noise and background clutter than standard methods. On simulated data at high noise levels (peak signal-to-noise ratio below 10 dB), this method achieves detection rates up to 10% above those of a watershed-based alternative algorithm. While objects like tumor cells can be described well by their coordinates in the plane, the analysis of fiber systems in composite materials, for instance, requires a fully three dimensional treatment. Therefore, the second contribution of this thesis is a novel algorithm to determine the local fiber orientation in micro-tomographic reconstructions of fiber-reinforced polymers and other fibrous materials. Using simulated data, it will be demonstrated that the local orientations obtained from this novel method are more robust to noise and fiber overlap than those computed using an established alternative gradient-based algorithm, both in 2D and 3D. The property of robustness to noise of the proposed algorithm can be explained by the fact that a low-pass filter is used to detect local orientations. But even in the absence of noise, depending on fiber curvature and density, the average local 3D-orientation estimate can be about 9° more accurate compared to that alternative gradient-based method. Implementations of that novel orientation estimation method require repeated image filtering using anisotropic Gaussian convolution filters. These filter operations, which other authors have used for adaptive image smoothing, are computationally expensive when using standard implementations. Therefore, the third contribution of this thesis is a novel optimal non-orthogonal separation of the anisotropic Gaussian convolution kernel. This result generalizes a previous one reported elsewhere, and allows for efficient implementations of the corresponding convolution operation in any dimension. In 2D and 3D, these implementations achieve an average performance gain by factors of 3.8 and 3.5, respectively, compared to a fast Fourier transform-based implementation. 
The contributions made by this thesis represent improvements over state-of-the-art methods, especially in the 2D-analysis of cells in histological resections, and in the 2D and 3D-analysis of fibrous materials.
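For context, the anisotropic Gaussian convolution kernel referred to above has the standard form below; the separation result rewrites convolution with this kernel as a sequence of d one-dimensional Gaussian convolutions along generally non-orthogonal directions. The optimal choice of those directions is the thesis' contribution and is not reproduced here.

```latex
% d-dimensional anisotropic Gaussian kernel with covariance matrix Sigma.
\[
  g_\Sigma(\mathbf{x}) =
  \frac{1}{(2\pi)^{d/2}\,\lvert\Sigma\rvert^{1/2}}
  \exp\!\Bigl(-\tfrac{1}{2}\,\mathbf{x}^{\mathsf T}\Sigma^{-1}\mathbf{x}\Bigr).
\]
```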
W-Lisp Language Description
(1993)
W-Lisp [Wippermann 91] is a language used in the area of implementing higher programming languages. Its application is not limited to this area. Good readability of the W-Lisp notation is achieved through numerous borrowings from the realm of well-known imperative languages. W-Lisp programs can be executed within a Common Lisp system. All Lisp functions (incl. MCS) can be used in the W-Lisp notation, so that the power of Common Lisp [Steele 90] is, in this respect, also available in W-Lisp.
In this article, we discuss requirements from credit assessment and how they can be met with case-based reasoning techniques. Within a general approach to case-based system development, a learning method for optimizing decision costs is described in detail. This method is evaluated empirically, on the basis of real customer data, with the case-based development tool INRECA. Finally, the prerequisites for using case-based systems for credit assessment are presented and their usefulness is discussed.
INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis, and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for its intended kinds of tasks. When problems are solved via case-based reasoning, the primary kind of knowledge used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.
Case-based reasoning has gained increasing importance in recent years for practical use in real application domains. In this paper, we first present the general procedure and the various subtasks of case-based reasoning. We then address the characteristic properties of an application domain and describe, using the concrete task of credit assessment, the realization of a case-based approach in the financial world.
Plan abstraction is a way to reduce the effort of searching for a plan that solves a concrete problem. A concrete world with a problem statement is mapped onto an abstract world. The abstract problem is then solved in the abstract world. By mapping the abstract solution back onto a concrete solution, one obtains a solution for the concrete problem. Since the number of operations needed to solve the abstract problem is smaller and the abstract states and operators have a less complex description, the effort of searching for a concrete solution is reduced.
One way to realize planning in planning systems is case-based planning. Put simply, this means solving new planning problems on the basis of plans already known from the planning domain. To this end, plans that have solved a planning problem in the past are collected and, when a new planning problem arises, modified so that they solve the current problem. To achieve greater reusability of the known plans, a concrete problem statement together with its solution can be transformed from the concrete planning world into a more abstract planning world by means of an abstraction.
Most of today's wireless communication devices operate on unlicensed bands with uncoordinated spectrum access, with the consequence that RF interference and collisions impair the overall performance of wireless networks. In the classical design of network protocols, both packets in a collision are considered lost, so channel access mechanisms attempt to avoid collisions proactively. However, with the current proliferation of wireless applications, e.g., WLANs, car-to-car networks, or the Internet of Things, this conservative approach increasingly limits the achievable network performance in practice. Instead of shunning interference, this thesis questions the notion of "harmful" interference and argues that interference can, when generated in a controlled manner, be used to increase the performance and security of wireless systems. Using results from information theory and communications engineering, we identify the causes for reception or loss of packets and apply these insights to design system architectures that benefit from interference. Because the effects of signal propagation and channel fading, receiver design and implementation, and higher-layer interactions on reception performance are complex and hard to reproduce in simulations, we design and implement an experimental platform for controlled interference generation to strengthen our theoretical findings with experimental results. Following this philosophy, we introduce and evaluate a system architecture that leverages interference.
First, we identify the conditions for successful reception of concurrent transmissions in wireless networks. We focus on the inherent ability of angular modulation receivers to reject interference when the power difference of the colliding signals is sufficiently large, the so-called capture effect. Because signal power fades over distance, the capture effect enables two or more sender–receiver pairs to transmit concurrently if they are positioned appropriately, in turn boosting network performance. Second, we show how to increase the security of wireless networks with a centralized network access control system (called WiFire) that selectively interferes with packets that violate a local security policy, thus effectively protecting legitimate devices from receiving such packets. WiFire’s working principle is as follows: a small number of specialized infrastructure devices, the guardians, are distributed alongside a network and continuously monitor all packet transmissions in the proximity, demodulating them iteratively. This enables the guardians to access the packet’s content before the packet fully arrives at the receiver. Using this knowledge the guardians classify the packet according to a programmable security policy. If a packet is deemed malicious, e.g., because its header fields indicate an unknown client, one or more guardians emit a limited burst of interference targeting the end of the packet, with the objective to introduce bit errors into it. Established communication standards use frame check sequences to ensure that packets are received correctly; WiFire leverages this built-in behavior to prevent a receiver from processing a harmful packet at all. This paradigm of „over-the-air“ protection without requiring any prior modification of client devices enables novel security services such as the protection of devices that cannot defend themselves because their performance limitations prohibit the use of complex cryptographic protocols, or of devices that cannot be altered after deployment.
This thesis makes several contributions. We introduce the first software-defined radio based experimental platform that is able to generate selective interference with the timing precision needed to evaluate the novel architectures developed in this thesis. It implements a real-time receiver for IEEE 802.15.4, giving it the ability to react to packets in a channel-aware way. Extending this system design and implementation, we introduce a security architecture that enables a remote protection of wireless clients, the wireless firewall. We augment our system with a rule checker (similar in design to Netfilter) to enable rule-based selective interference. We analyze the security properties of this architecture using physical layer modeling and validate our analysis with experiments in diverse environmental settings. Finally, we perform an analysis of concurrent transmissions. We introduce a new model that captures the physical properties correctly and show its validity with experiments, improving the state of the art in the design and analysis of cross-layer protocols for wireless networks.
Recently, a new Quicksort variant due to Yaroslavskiy was chosen as standard sorting
method for Oracle's Java 7 runtime library. The decision for the change was based on
empirical studies showing that on average, the new algorithm is faster than the formerly
used classic Quicksort. Surprisingly, the improvement was achieved by using a dual pivot
approach — an idea that was considered not promising by several theoretical studies in the
past. In this thesis, I try to find the reason for this unexpected success.
My focus is on the precise and detailed average case analysis, aiming at the flavor of
Knuth's series “The Art of Computer Programming”. In particular, I go beyond abstract
measures like counting key comparisons, and try to understand the efficiency of the
algorithms at different levels of abstraction. Whenever possible, precise expected values are
preferred to asymptotic approximations. This rigor ensures that (a) the sorting methods
discussed here are actually usable in practice and (b) that the analysis results contribute to
a sound comparison of the Quicksort variants.
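For readers who want to see the idea in code, the following is a plain sketch of Yaroslavskiy-style dual-pivot partitioning; it is illustrative only, omitting pivot sampling and the low-level engineering of the Java 7 library version.

```python
def dual_pivot_quicksort(a, lo=0, hi=None):
    """Sort list a in place: two pivots p < q split the range into the parts
    < p, between p and q, and > q, which are then sorted recursively."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i <= gt:
        if a[i] < p:                       # element belongs to the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
        elif a[i] > q:                     # element belongs to the right part
            while a[gt] > q and i < gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
            if a[i] < p:
                a[i], a[lt] = a[lt], a[i]
                lt += 1
        i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]            # move pivots to their final positions
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)
```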
It has been observed that for understanding the biological function of certain RNA molecules, one has to study joint secondary structures of interacting pairs of RNA. In this thesis, a new approach for predicting the joint structure is proposed and implemented. For this, we introduce the class of m-dimensional context-free grammars --- an extension of stochastic context-free grammars to multiple dimensions --- and present an Earley-style semiring parser for this class. Additionally, we develop and thoroughly discuss an implementation variant of Earley parsers tailored to efficiently handle dense grammars, which embraces the grammars used for structure prediction. A currently proposed partitioning scheme for joint secondary structures is transferred into a two-dimensional context-free grammar, which in turn is used as a stochastic model for RNA-RNA interaction. This model is trained on actual data and then used for predicting most likely joint structures for given RNA molecules. While this technique has been widely used for secondary structure prediction of single molecules, RNA-RNA interaction was hardly approached this way in the past. Although our parser has O(n^3 m^3) time complexity and O(n^2 m^2) space complexity for two RNA molecules of sizes n and m, it remains practically applicable for typical sizes if enough memory is available. Experiments show that our parser is much more efficient for this application than classical Earley parsers. Moreover the predictions of joint structures are comparable in quality to current energy minimization approaches.
Dual-Pivot Quicksort and Beyond: Analysis of Multiway Partitioning and Its Practical Potential
(2016)
Multiway Quicksort, i.e., partitioning the input in one step around several pivots, has received much attention since Java 7’s runtime library uses a new dual-pivot method that outperforms by far the old Quicksort implementation. The success of dual-pivot Quicksort is most likely due to more efficient usage of the memory hierarchy, which gives reason to believe that further improvements are possible with multiway Quicksort.
In this dissertation, I conduct a mathematical average-case analysis of multiway Quicksort including the important optimization to choose pivots from a sample of the input. I propose a parametric template algorithm that covers all practically relevant partitioning methods as special cases, and analyze this method in full generality. This allows me to analytically investigate in depth what effect the parameters of the generic Quicksort have on its performance. To model the memory-hierarchy costs, I also analyze the expected number of scanned elements, a measure for the amount of data transferred from memory that is known to also approximate the number of cache misses very well. The analysis unifies previous analyses of particular Quicksort variants under particular cost measures in one generic framework.
A main result is that multiway partitioning can reduce the number of scanned elements significantly, while it does not save many key comparisons; this explains why the earlier studies of multiway Quicksort did not find it promising. A highlight of this dissertation is the extension of the analysis to inputs with equal keys. I give the first analysis of Quicksort with pivot sampling and multiway partitioning on an input model with equal keys.
In the presented work, I evaluate if and how Virtual Reality (VR) technologies can be used to support researchers working in the geosciences by providing immersive, collaborative visualization systems as well as virtual tools for data analysis. Technical challenges encountered in the development of these systems are identified and solutions for them are provided.
To enable geologists to explore large digital terrain models (DTMs) in an immersive, explorative fashion within a VR environment, a suitable terrain rendering algorithm is required. For realistic perception of planetary curvature at large viewer altitudes, spherical rendering of the surface is necessary. Furthermore, rendering must sustain interactive frame rates of about 30 frames per second to avoid sensory confusion of the user. At the same time, the data structures used for visualization should also be suitable for efficiently computing spatial properties such as height profiles or volumes in order to implement virtual analysis tools. To address these requirements, I have developed a novel terrain rendering algorithm based on tiled quadtree hierarchies using the HEALPix parametrization of a sphere. For evaluation purposes, the system is applied to a 500 GiB dataset representing the surface of Mars.
Considering the current development of inexpensive remote surveillance equipment such as quadcopters, it seems inevitable that these devices will play a major role in future disaster management applications. Virtual reality installations in disaster management headquarters which provide an immersive visualization of near-live, three-dimensional situational data could then be a valuable asset for rapid, collaborative decision making. Most terrain visualization algorithms, however, require a computationally expensive pre-processing step to construct a terrain database.
To address this problem, I present an on-the-fly pre-processing system for cartographic data. The system consists of a frontend for rendering and interaction as well as a distributed processing backend executing on a small cluster which produces tiled data in the format required by the frontend on demand. The backend employs a CUDA based algorithm on graphics cards to perform efficient conversion from cartographic standard projections to the HEALPix-based grid used by the frontend.
Measurement of spatial properties is an important step in quantifying geological phenomena. When performing these tasks in a VR environment, a suitable input device and abstraction for the interaction (a “virtual tool”) must be provided. This tool should enable the user to precisely select the location of the measurement even under a perspective projection. Furthermore, the measurement process should be accurate to the resolution of the data available and should not have a large impact on the frame rate in order to not violate interactivity requirements.
I have implemented virtual tools based on the HEALPix data structure for measurement of height profiles as well as volumes. For interaction, a ray-based picking metaphor was employed, using a virtual selection ray extending from the user’s hand holding a VR interaction device. To provide maximum accuracy, the algorithms access the quad-tree terrain database at the highest available resolution level while at the same time maintaining interactivity in rendering.
Geological faults are cracks in the earth’s crust along which a differential movement of rock volumes can be observed. Quantifying the direction and magnitude of such translations is an essential requirement in understanding earth’s geological history. For this purpose, geologists traditionally use maps in top-down projection which are cut (e.g. using image editing software) along the suspected fault trace. The two resulting pieces of the map are then translated in parallel against each other until surface features which have been cut by the fault motion come back into alignment. The amount of translation applied is then used as a hypothesis for the magnitude of the fault action. In the scope of this work it is shown, however, that performing this study in a top-down perspective can lead to the acceptance of faulty reconstructions, since the three-dimensional structure of topography is not considered.
To address this problem, I present a novel terrain deformation algorithm which allows the user to trace a fault line directly within a 3D terrain visualization system and interactively deform the terrain model while inspecting the resulting reconstruction from arbitrary perspectives. I demonstrate that the application of 3D visualization allows for a more informed interpretation of fault reconstruction hypotheses. The algorithm is implemented on graphics cards and performs real-time geometric deformation of the terrain model, guaranteeing interactivity with respect to all parameters.
Paleoceanography is the study of the prehistoric evolution of the ocean. One of the key data sources used in this research are coring experiments which provide point samples of layered sediment depositions at the ocean floor. The samples obtained in these experiments document the time-varying sediment concentrations within the ocean water at the point of measurement. The task of recovering the ocean flow patterns based on these deposition records is a challenging inverse numerical problem, however.
To support domain scientists working on this problem, I have developed a VR visualization tool to aid in the verification of model parameters by providing simultaneous visualization of experimental data from coring as well as the resulting predicted flow field obtained from numerical simulation. Earth is visualized as a globe in the VR environment, with coring data presented using a billboard rendering technique while the time-variant flow field is indicated using Line-Integral-Convolution (LIC). To study individual sediment transport pathways and their correlation with the depositional record, interactive particle injection and real-time advection are supported.
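A minimal numpy sketch of the particle-advection idea follows; the velocity field is a made-up stand-in and simple explicit Euler stepping is used rather than the tool's actual integrator:

    import numpy as np

    def velocity(points, t):
        """Hypothetical stand-in for the simulated, time-variant ocean flow field."""
        x, y = points[:, 0], points[:, 1]
        return np.stack([-y * np.cos(t), x * np.cos(t)], axis=1)

    def advect(points, t0, t1, dt=0.01):
        """Advect injected particles through the flow field with explicit Euler steps."""
        t = t0
        while t < t1:
            points = points + dt * velocity(points, t)
            t += dt
        return points

    seeds = np.random.rand(1000, 2)          # interactively injected particles
    positions = advect(seeds, t0=0.0, t1=1.0)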
The use of existing planning approaches to solve real-world application problems usually leads quickly to the realization that, although a given problem is solvable in principle, the exponentially growing search space only permits the treatment of relatively small tasks. Human planning experts, however, are able to reduce the search space for complex problems decisively by means of abstraction and by using known precedents as heuristics, and thus arrive at acceptable solutions even for difficult tasks. In this work, using process planning as an example, we present a system that employs abstraction and case-based techniques to control the inference process of a nonlinear, hierarchical planning system and thereby reduces the complexity of the overall task to be solved.
We describe a hybrid architecture supporting planning for machining workpieces. The architecture is built around CAPlan, a partial-order nonlinear planner that represents the plan already generated and allows external control decisions to be made by special-purpose programs or by the user. To make planning more efficient, the domain is hierarchically modelled. Based on this hierarchical representation, a case-based control component has been realized that allows incremental acquisition of control knowledge by storing solved problems and reusing them in similar situations.
We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain-dependent reasoners, a feature-based CAD system, and domain-independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem-solving knowledge both for optimizing plan execution costs and for minimizing search, the case-based control component CAPlan/CbC has been realized, which allows incremental acquisition and reuse of strategic problem-solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.
While most approaches to similarity assessment are oblivious of knowledge and goals, there is ample evidence that these elements of problem solving play an important role in similarity judgements. This paper is concerned with an approach for integrating assessment of similarity into a framework of problem solving that embodies central notions of problem solving like goals, knowledge and learning.
Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB, sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question of whether there are any differences concerning the learning power of the two approaches. In this article we study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept.
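The following toy Python sketch (my own illustration, not the paper's construction) shows what it means for a concept to be represented implicitly by a case base CB together with a similarity measure sim:

    def sim(x, y):
        """Hypothetical similarity measure on fixed-length feature vectors."""
        return -sum((a - b) ** 2 for a, b in zip(x, y))

    def classify(case_base, query):
        """Implicit concept (CB, sim): the query inherits the class of its most similar case."""
        best_case, best_label = max(case_base, key=lambda c: sim(c[0], query))
        return best_label

    CB = [((0.0, 0.0), False), ((0.1, 0.2), False), ((1.0, 1.0), True), ((0.9, 1.1), True)]
    print(classify(CB, (0.8, 0.9)))   # -> True; the concept itself is never written down explicitly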
In the field of expert systems, problem solving on the basis of known precedent cases is currently a very active topic. The case-based approach is also gaining importance for diagnostic tasks. This paper presents the case-based problem solver Patdex/2, developed within the Moltke project at the University of Kaiserslautern. A first prototype, Patdex/1, was already developed in 1988.
Research projects on case-based reasoning in the USA, the availability of commercial case-based shells, and first research results of initial German projects have triggered increased activity in the field of case-based reasoning in Germany as well. This article therefore briefly introduces, to a broader audience, projects that deal with case-based aspects either as their main focus or as one aspect of their work.
Patdex is an expert system which carries out case-based reasoning for the fault diagnosis of complex machines. It is integrated in the Moltke workbench for technical diagnosis, which was developed at the University of Kaiserslautern over the past years. Moltke contains other parts as well, in particular a model-based approach; the heuristic features are essentially located in Patdex. The use of cases also plays an important role for knowledge acquisition. In this paper we describe Patdex from a principal point of view and embed its main concepts into a theoretical framework.
Crowd condition monitoring concerns crowd safety as well as business performance metrics. The research problem addressed here is a crowd condition estimation approach that enables and supports the supervision of mass events by first responders and marketing experts, but it is also targeted towards supporting social scientists, journalists, historians, public relations experts, community leaders, and political researchers. Real-time insights into the crowd condition are desired for quick reactions, and historic crowd condition measurements are desired for profound post-event crowd condition analysis.
This thesis aims to provide a systematic understanding of different approaches for crowd condition estimation that rely on 2.4 GHz signals and their variation in crowds of people; it proposes and categorizes possible sensing approaches, applies supervised machine learning algorithms, and presents experimental evaluation results. I categorize four sensing approaches. Firstly, stationary sensors sensing crowd-centric signal sources. Secondly, stationary sensors sensing other stationary signal sources (either opportunistic or special-purpose signal sources). Thirdly, a few volunteers within the crowd equipped with sensors sensing surrounding crowd-centric device signals (either individually, in a single group, or collaboratively) within a small region. Fourthly, a small subset of participants within the crowd equipped with sensors and roaming throughout a whole city to sense wireless crowd-centric signals.
I present and evaluate an approach with meshed stationary sensors which sense crowd-centric devices. This was demonstrated and empirically evaluated within an industrial project during three of the world's largest automotive exhibitions. With over 30 meshed stationary sensors in an optimized setup across 6400 m², I achieved a mean absolute error of the crowd density of just 0.0115 people per square meter, which equals an average of below 6% mean relative error from the ground truth. I validate the contextual crowd condition anomaly detection method during the visit of chancellor Mrs. Merkel and during a large press conference at the exhibition. I present the approach of opportunistically sensing stationary wireless signal variations and validate it during the Hannover CeBIT exhibition with 80 opportunistic sources, achieving a crowd condition estimation relative error of below 12% while relying only on surrounding signals influenced by humans. Pursuing this further, I present an approach with dedicated signal sources and sensors to estimate the condition of shared office environments. I demonstrate methods which are viable to detect even low-density static crowds, such as people sitting at their desks, and evaluate this in an eight-person office scenario. I present the approach of mobile crowd density estimation by a group of sensors detecting other crowd-centric devices in their proximity, with a classification accuracy of the crowd density of 66% (an improvement of over 22% over an individual sensor) during the crowded Oktoberfest event. I propose a collaborative mobile sensing approach which makes the system more robust against variations that may result from the background of the people rather than the crowd condition, using differential features that take into account the link structure between actively scanning devices, the ratio between values observed by different devices, the ratio of discovered crowd devices over time, the team-wise diversity of discovered devices, the number of semi-continuous device visibility periods, and the device visibility durations. I validate the approach in multiple experiments, including the Kaiserslautern European soccer championship public viewing event, and evaluate the collaborative mobile sensing approach with a crowd condition estimation accuracy of 77%, outperforming previous methods by 21%. I demonstrate the feasibility of deploying the wireless crowd condition sensing approach at a citywide scale during an event in Zurich with 971 actively sensing participants, outperforming the reference method by 24% on average.
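A minimal numpy sketch of the general idea of mapping per-sensor device counts to a density estimate follows; the data are invented and a plain least-squares fit stands in for the supervised learning methods actually used:

    import numpy as np

    # Hypothetical training data: rows = time windows, columns = devices
    # discovered per stationary sensor; y = ground-truth density (people / m^2).
    X = np.array([[12, 7, 30], [40, 22, 85], [25, 15, 60], [5, 2, 11]], dtype=float)
    y = np.array([0.05, 0.22, 0.14, 0.02])

    # Fit a linear model density ~ w . counts + b via least squares.
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def estimate_density(counts):
        """Estimate crowd density for a new vector of per-sensor device counts."""
        return float(np.append(counts, 1.0) @ coef)

    print(estimate_density(np.array([30.0, 18.0, 70.0])))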
Neural networks are currently (again) a popular topic. Despite the often rather buzzword-like use of this term, it covers a variety of ideas, very different methodological approaches, and concrete application possibilities. The underlying concepts are not new; they have a sometimes rather long tradition in neighboring disciplines such as biology, cybernetics, mathematics, and physics. Promising recent research results have moved this topic back into the focus of interest and revealed a multitude of new connections to computer science and neurobiology as well as to other fields that at first glance seem far removed. The subject of the research field of neural networks is the study and construction of information-processing systems that are composed of many, sometimes only very primitive, uniform units and whose essential processing principle is the communication between these units, i.e. the transmission of messages or signals. A further characteristic of these systems is the highly parallel processing of information within the system. Beyond the modeling of cognitive processes, the interest in how the human brain accomplishes complex cognitive tasks, and purely scientific interest, the concrete use of neural networks in various technical application areas is increasingly evident. This report contains the written contributions of the participants of the seminar "Theorie und Praxis neuronaler Netze", held by the Richter group at the University of Kaiserslautern in the summer semester of 1993. Particular emphasis was placed not only on covering the theoretical foundations of neural networks but also on discussing their use in practice. The choice of topics reflects part of the broad spectrum of work in this field; no claim to completeness can therefore be made. In particular, readers are referred to the respective original publications for an intensive, in-depth study of a topic. This report would not have been possible without the participants of the seminar. We therefore thank Frank Hauptmann, Peter Conrad, Christoph Keller, Martin Buch, Philip Ziegler, Frank Leidermann, Martin Kronenburg, Michael Dieterich, Ulrike Becker, Christoph Krome, Susanne Meyfarth, Markus Schmitz, Kenan Çarki, Oliver Schweikart, Michael Schick, and Ralf Comes.
Backward compatibility of class libraries ensures that an old implementation of a library can safely be replaced by a new implementation without breaking existing clients.
Formal reasoning about backward compatibility requires an adequate semantic model to compare the behavior of two library implementations.
In the object-oriented setting with inheritance and callbacks, finding such models is difficult, as the interface between library implementations and clients is complex.
Furthermore, handling these models in a way to support practical reasoning requires appropriate verification tools.
This thesis proposes a formal model for library implementations and a reasoning approach for backward compatibility that is implemented using an automatic verifier. The first part of the thesis develops a fully abstract trace-based semantics for class libraries of a core sequential object-oriented language. Traces abstract from the control flow (stack) and data representation (heap) of the library implementations. The construction of a most general context is given that abstracts exactly from all possible clients of the library implementation.
Soundness and completeness of the trace semantics as well as the most general context are proven using specialized simulation relations on the operational semantics. The simulation relations also provide a proof method for reasoning about backward compatibility.
The second part of the thesis presents the implementation of the simulation-based proof method for an automatic verifier to check backward compatibility of class libraries written in Java. The approach works for complex library implementations, with recursion and loops, in the setting of unknown program contexts. The verification process relies on a coupling invariant that describes a relation between programs that use the old library implementation and programs that use the new library implementation. The thesis presents a specification language to formulate such coupling invariants. Finally, an application of the developed theory and tool to typical examples from the literature validates the reasoning and verification approach.
This report presents a generalization of tensor-product B-spline surfaces. The new scheme permits knots whose endpoints lie in the interior of the domain rectangle of a surface. This allows local refinement of the knot structure for approximation purposes as well as modeling surfaces with local tangent or curvature discontinuities. The surfaces are represented in terms of B-spline basis functions, ensuring affine invariance, local control, the convex hull property, and evaluation by de Boor's algorithm. A dimension formula for a class of generalized tensor-product spline spaces is developed.
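Since the surfaces are evaluated with de Boor's algorithm, a compact Python sketch of the standard curve-level de Boor recursion may help; the surface case applies it in both parameter directions. The function names and the example knot vector are my own:

    def de_boor(k, x, knots, ctrl, degree):
        """Evaluate a B-spline curve at parameter x, where knots[k] <= x < knots[k+1].
        ctrl are the control points (scalars here for simplicity)."""
        d = [ctrl[j + k - degree] for j in range(degree + 1)]
        for r in range(1, degree + 1):
            for j in range(degree, r - 1, -1):
                i = j + k - degree
                alpha = (x - knots[i]) / (knots[i + degree + 1 - r] - knots[i])
                d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
        return d[degree]

    # Cubic example on a clamped knot vector.
    knots = [0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4]
    ctrl  = [0.0, 1.0, 3.0, 2.0, 1.0, 0.5, 0.0]
    print(de_boor(k=4, x=1.5, knots=knots, ctrl=ctrl, degree=3))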
One of the problems of autonomous mobile systems is the continuous tracking of position and orientation. In most cases, this problem is solved by dead reckoning, based on measurements of wheel rotations or step counts and step width. Unfortunately, dead reckoning leads to an accumulation of drift errors and is very sensitive to slippage. In this paper an algorithm for tracking position and orientation is presented that is nearly independent of odometry and its problems with slippage. To achieve these results, a rotating range-finder is used, delivering scans of the environmental structure. The properties of this structure are used to match the scans from different locations in order to find their translational and rotational displacement. For this purpose, derivatives of the range-finder scans are calculated, which can be used to find position and orientation by cross-correlation.
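A small numpy sketch of the underlying idea follows, simplified to the rotational part only and using synthetic data: the angular derivatives of two range scans are circularly cross-correlated to find their rotational displacement:

    import numpy as np

    def rotation_offset(scan_a, scan_b):
        """Estimate the rotation (in scan bins) between two 360-degree range scans
        by circular cross-correlation of their angular derivatives."""
        da = np.gradient(scan_a)
        db = np.gradient(scan_b)
        da -= da.mean(); db -= db.mean()
        # Circular cross-correlation via FFT.
        corr = np.fft.irfft(np.fft.rfft(da) * np.conj(np.fft.rfft(db)))
        return int(np.argmax(corr))

    angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    scan = 5.0 + np.cos(3.0 * angles) + 0.5 * np.sin(7.0 * angles)
    rotated = np.roll(scan, 42)                 # the same environment seen after a rotation
    print(rotation_offset(rotated, scan))       # -> 42 (the injected shift)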
A map for an autonomous mobile robot (AMR) in an indoor environment, intended for continuous position and orientation estimation, is discussed. Unlike many other approaches, this map is not based on geometrical primitives like lines and polygons. An algorithm is shown by which the sensor data of a laser range finder can be used to establish this map without a geometrical interpretation of the data. This is done by converting single laser radar scans into statistical representations of the environment, so that a cross-correlation of an actual converted scan with such a representative yields the actual position and orientation in a global coordinate system. The map itself is built of representative scans for the positions where the AMR has been, so that it is able to find its position and orientation by comparing the actual scan with a scan stored in the map.
We tested the GYROSTAR ENV-05S. This device is a sensor for angular velocity; therefore, the orientation must be calculated by integrating the angular velocity over time. The device's output is a voltage proportional to the angular velocity and relative to a reference. The tests were done to find out under which conditions it is possible to use this device for the estimation of orientation.
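A tiny Python sketch of the integration step follows; the reference voltage and scale factor are assumed values for illustration, not figures from the ENV-05S datasheet:

    def integrate_orientation(samples, dt, v_ref=2.5, scale_deg_per_s_per_volt=90.0):
        """Integrate gyro output voltages into a heading estimate (degrees).
        v_ref and the scale factor are assumptions, not datasheet values."""
        heading = 0.0
        for v in samples:
            angular_velocity = (v - v_ref) * scale_deg_per_s_per_volt
            heading += angular_velocity * dt        # simple rectangular integration
        return heading % 360.0

    # 2 s of samples at 100 Hz while turning at a constant rate.
    voltages = [2.5 + 0.5] * 200                    # 0.5 V above reference -> 45 deg/s
    print(integrate_orientation(voltages, dt=0.01)) # -> 90.0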
In this article we present a method to generate random objects from a large variety of combinatorial classes according to a given distribution. Given a description of the combinatorial class and a set of sample data our method will provide an algorithm that generates objects of size n in worst-case runtime O(n^2) (O(n log(n)) can be achieved at the cost of a higher average-case runtime), with the generated objects following a distribution that closely matches the distribution of the sample data.
For many decades, the search for language classes that extend the context-free languages enough to include various languages that arise in practice, while still keeping as many of the useful properties that context-free grammars have - most notably cubic parsing time - has been one of the major areas of research in formal language theory. In this thesis we add a new family of classes to this field, namely position-and-length-dependent context-free grammars. Our classes use the approach of regulated rewriting, where derivations in a context-free base grammar are allowed or forbidden based on, e.g., the sequence of rules used in a derivation or the sentential forms each rule is applied to. For our new classes we look at the yield of each rule application, i.e. the subword of the final word that is eventually derived from the symbols introduced by the rule application. The position and length of the yield in the final word define the position and length of the rule application, and each rule is associated with a set of positions and lengths where it is allowed to be applied.
We show that - unless the sets of allowed positions and lengths are really complex - the languages in our classes can be parsed in the same time as context-free grammars, using slight adaptations of well-known parsing algorithms. We also show that they form a proper hierarchy above the context-free languages and examine their relation to language classes defined by other types of regulated rewriting.
We complete the treatment of the language classes by introducing pushdown automata with position counter, an extension of traditional pushdown automata that recognizes the languages generated by position-and-length-dependent context-free grammars, and we examine various closure and decidability properties of our classes. Additionally, we gather the corresponding results for the subclasses that use right-linear resp. left-linear base grammars and the corresponding class of automata, finite automata with position counter.
Finally, as an application of our idea, we introduce length-dependent stochastic context-free grammars and show how they can be employed to improve the quality of predictions for RNA secondary structures.
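A compact Python sketch of the parsing adaptation may help; the toy grammar and constraint sets are my own and not the thesis's formalism. It is a standard CYK parser over a grammar in Chomsky normal form, where a rule may only be applied if the position and length of its yield lie in the rule's allowed set:

    # CNF grammar: binary rules A -> B C and terminal rules A -> a,
    # each paired with a predicate allowed(position, length) on the yield.
    binary = [
        ("S", "A", "B", lambda pos, length: True),
        ("A", "A", "A", lambda pos, length: length <= 2),   # A may only cover short yields
    ]
    terminal = [
        ("A", "a", lambda pos, length: True),
        ("B", "b", lambda pos, length: pos >= 1),           # B must not start the word
    ]

    def parse(word, start="S"):
        n = len(word)
        # table[i][l] = set of nonterminals deriving word[i:i+l]
        table = [[set() for _ in range(n + 1)] for _ in range(n)]
        for i, ch in enumerate(word):
            for lhs, t, allowed in terminal:
                if t == ch and allowed(i, 1):
                    table[i][1].add(lhs)
        for length in range(2, n + 1):
            for i in range(n - length + 1):
                for split in range(1, length):
                    for lhs, b, c, allowed in binary:
                        if (b in table[i][split] and
                                c in table[i + split][length - split] and
                                allowed(i, length)):
                            table[i][length].add(lhs)
        return start in table[0][n] if n else False

    print(parse("aab"))   # True
    print(parse("aaab"))  # False: A would have to cover a yield of length 3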
In this work we describe the essential features of the CAPlan architecture that enable the interactive handling of planning problems. Using the SNLP algorithm on which the architecture is based, we characterize the decision points that arise during a planning process. Freely definable control components can be used to determine the behavior at these decision points, which allows flexible control of the planning process. Planning goals and decisions are managed in a directed acyclic graph that reflects their causal dependencies. In contrast to a stack, which is typically used to manage decisions, the graph-based representation allows a decision to be retracted flexibly without having to retract all decisions made after it as well.
Problem specifications for classical planners based on a STRIPS-like representation typically consist of an initial situation and a partially defined goal state. Hierarchical planning approaches, e.g., Hierarchical Task Network (HTN) planning, have richer representations not only for actions but also for planning problems. The latter are defined by giving an initial state and an initial task network in which the goals can be ordered with respect to each other. However, studies with a specification of the process planning domain for the plan-space planner CAPlan (an extension of SNLP) have shown that even without a hierarchical domain representation, typical properties called goal orderings can be identified in this domain that allow more efficient and correct case retrieval strategies for the case-based planner CAPlan/CbC. Motivated by that, this report describes an extension of the classical problem specifications for plan-space planners like SNLP and its descendants. These extended problem specifications allow the definition of a partial order on the planning goals, which can be interpreted as an order in which the solution plan should achieve the goals. These goal orderings can be shown, both theoretically and empirically, to improve planning performance not only for case-based but also for generative planning. As a second, different use, we show how goal orderings can be employed to address the control problem of partial-order planners. These improvements can best be understood with a refinement of Barrett's and Weld's extended taxonomy of subgoal collections.
Real-world planning tasks like manufacturing process planning often do not allow all of the relevant knowledge to be formalized. In particular, preferences between alternatives are hard to acquire but have a high influence on the efficiency of the planning process and the quality of the solution. We describe the essential features of the CAPlan planning architecture that support cooperative problem solving to narrow the gap caused by absent preference and control knowledge. The architecture combines an SNLP-like base planner with mechanisms for explicit representation and maintenance of dependencies between planning decisions. The flexible control interface of CAPlan allows a combination of autonomous and interactive planning in which a user can participate in the problem solving process. In particular, the rejection of arbitrary decisions by a user and dependency-directed backtracking mechanisms are supported by CAPlan.
Planning is a widely studied area within artificial intelligence. The work presented here is situated in this area: it concerns a planning system aimed at supporting process planning in computer-integrated manufacturing. Computer-integrated manufacturing, however, is to be seen only as one particular application domain for the system.
Today, polygonal models occur everywhere in graphical applications, since they are easy to render and to compute, and a huge set of tools exists for the generation and manipulation of polygonal data. But modern scanning devices that allow a high-quality and large-scale acquisition of complex real-world models often deliver a large set of points as the resulting data structure of the scanned surface. A direct triangulation of those point clouds does not always result in good models: they often contain problems like holes, self-intersections and non-manifold structures. Also, one often loses important surface structures like sharp corners and edges during a usual surface reconstruction. So it is suitable to stay a little longer in the point-based world, to analyze the point cloud data with respect to such features, and to afterwards apply a surface reconstruction method that is known to construct continuous and smooth surfaces, extending it to reconstruct sharp features.
This work is part of the META-AKAD project. Its goal is the development of a software component that can automatically assign classifications according to the Regensburger Verbundklassifikation (RVK) and subject headings from the German Schlagwortnormdatei (SWD) to documents that have been classified as teaching or learning material. The automatic indexing is performed on the basis of a support vector machine. The implementation was done in the Java programming language.
In this diploma thesis, concepts for supporting database-oriented software product lines by means of domain-specific languages are examined, using versioning systems as an example. The goal of this work is to determine the time costs incurred by using a domain-specific language. Different database schemas are used in order to investigate the relationship between the complexity of the database schema and the translation of a domain-specific statement into a series of conventional SQL statements. To determine the time costs of this reduction, performance measurements are carried out. These measurements are based on domain-specific statements produced by a generator developed specifically for this purpose. The generated domain-specific statements are executed with the different database drivers on the matching database schema.
Ein maßgeschneidertes Kommunikationssystem für eine mobile Applikation mit Dienstgüteanforderungen
(2004)
This paper presents the tailoring of an ad-hoc communication system for the remote control of an airship over WLAN. The focus is on quality-of-service support for the transmission of several data streams. Different quality-of-service mechanisms are explained, and their development and integration into a communication protocol by means of a component-based approach are described in more detail.
We present a methodology to augment system safety step-by-step and illustrate the approach by the definition of reusable solutions for the detection of fail-silent nodes - a watchdog and a heartbeat. These solutions can be added to real-time system designs, to protect against certain types of system failures. We use SDL as a system design language for the development of distributed systems, including real-time systems.
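A minimal Python sketch of the heartbeat/watchdog pattern for detecting fail-silent nodes follows; the threads and timeout values are chosen arbitrarily, and the paper itself works with SDL designs rather than Python:

    import threading, time

    class Watchdog:
        """Declares a node fail-silent if no heartbeat arrives within the timeout."""
        def __init__(self, timeout):
            self.timeout = timeout
            self.last_beat = time.monotonic()

        def heartbeat(self):
            self.last_beat = time.monotonic()

        def monitor(self, on_failure):
            while True:
                time.sleep(self.timeout / 2)
                if time.monotonic() - self.last_beat > self.timeout:
                    on_failure()
                    return

    watchdog = Watchdog(timeout=1.0)

    def node():
        """Monitored node: sends heartbeats, then falls silent."""
        for _ in range(3):
            watchdog.heartbeat()
            time.sleep(0.4)
        # the node stops sending heartbeats here (simulated fail-silent behavior)

    threading.Thread(target=node, daemon=True).start()
    watchdog.monitor(on_failure=lambda: print("node declared fail-silent"))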
Software is becoming increasingly concurrent: parallelization, decentralization, and reactivity necessitate asynchronous programming in which processes communicate by posting messages/tasks to others’ message/task buffers. Asynchronous programming has been widely used to build fast servers and routers, embedded systems and sensor networks, and is the basis of Web programming using Javascript. Languages such as Erlang and Scala have adopted asynchronous programming as a fundamental concept with which highly scalable and highly reliable distributed systems are built.
Asynchronous programs are challenging to implement correctly: the loose coupling between asynchronously executed tasks makes the control and data dependencies difficult to follow. Even subtle design and programming mistakes can introduce erroneous or divergent behaviors. As asynchronous programs are typically written to provide a reliable, high-performance infrastructure, there is a critical need for analysis techniques that guarantee their correctness.
In this dissertation, I provide scalable verification and testing tools to make asynchronous programs more reliable. I show that the combination of counter abstraction and partial order reduction is an effective approach for the verification of asynchronous systems by presenting PROVKEEPER and KUAI, two scalable verifiers for two types of asynchronous systems. I also provide a theoretical result proving that a counter-abstraction-based algorithm called expand-enlarge-check is asymptotically optimal for the coverability problem of branching vector addition systems, as which many asynchronous programs can be modeled. In addition, I present BBS and LLSPLAT, two testing tools for asynchronous programs that efficiently uncover many subtle memory violation bugs.
Interactive graphics has been limited to simple direct illumination that commonly results in an artificial appearance. A more realistic appearance by simulating global illumination effects has been too costly to compute at interactive rates. In this paper we describe a new Monte Carlo-based global illumination algorithm. It achieves performance of up to 10 frames per second while arbitrary changes to the scene may be applied interactively. The performance is obtained through the effective use of a fast, distributed ray-tracing engine as well as a new interleaved sampling technique for parallel Monte Carlo simulation. A new filtering step in combination with correlated sampling avoids the disturbing noise artifacts common to Monte Carlo methods.
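The following Python sketch is my own illustration of the interleaved-sampling idea mentioned above (not the paper's actual renderer): pixels are grouped into small tiles, and each position within the tile is assigned its own sample set, so that neighbouring pixels use uncorrelated samples:

    import random

    TILE_W, TILE_H, SAMPLES = 3, 3, 8

    # One independent random sample set per position in a 3x3 tile; neighbouring
    # pixels therefore use uncorrelated sample sets (the interleaving idea).
    sample_sets = [[(random.random(), random.random()) for _ in range(SAMPLES)]
                   for _ in range(TILE_W * TILE_H)]

    def samples_for_pixel(x, y):
        """Pick the sample set assigned to this pixel by its position in the tile."""
        return sample_sets[(y % TILE_H) * TILE_W + (x % TILE_W)]

    def render_pixel(x, y, shade):
        """Average the shading function over the pixel's interleaved sample set."""
        pts = samples_for_pixel(x, y)
        return sum(shade(x + u, y + v) for u, v in pts) / len(pts)

    image = [[render_pixel(x, y, shade=lambda px, py: (px + py) % 1.0)
              for x in range(16)] for y in range(16)]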
Due to tremendous improvements of high-performance computing resources as well as numerical advances, computational simulations have become a common tool for modern engineers. Nowadays, the simulation of complex physics is more and more substituting a large amount of physical experiments. While the vast compute power of large-scale high-performance systems has enabled the simulation of more complex numerical equations, handling the ever-increasing amount of data with high spatial and temporal resolution poses new challenges to scientists. Huge hardware and energy costs call for efficient utilization of high-performance systems. However, the increasing complexity of simulations raises the risk of failing simulations, resulting in a single simulation having to be restarted multiple times. Computational steering is a promising approach to interact with running simulations, which could prevent simulation crashes. The large amount of data widens the gap between the amount of data that can be calculated and the amount of data that can be processed; extreme-scale simulations produce more data than can even be stored. In this thesis, I propose several methods that enhance the process of steering, exploring, visualizing, and analyzing ongoing numerical simulations.
This project report describes the requirements, the structure and the implementation of the query processing (query engine). The first chapter discusses the objectives of the Meta-Akad project and the options for realizing them with the Java 2 Enterprise Edition framework, and shows how the query processing fits into the overall system. The second chapter roughly outlines the requirements and the flow of the query processing and presents the implementation concept. The following chapters then discuss the individual processing phases and the problems that arise in more detail. Finally, the outlook chapter presents possibilities for extending and improving the query processing of the Meta-Akad search service.
About the approach: The approach of TOPO was originally developed in the FABEL project [1] to support architects in designing buildings with complex installations. Supplementing knowledge-based design tools, which are available only for selected subtasks, TOPO aims to cover the whole design process. To that aim, it relies almost exclusively on archived plans. Input to TOPO is a partial plan, and output is an elaborated plan. The input plan constitutes the query case and the archived plans form the case base with the source cases. A plan is a set of design objects. Each design object is defined by some semantic attributes and by its bounding box in a 3-dimensional coordinate system. TOPO supports the elaboration of plans by adding design objects.
Struktur und Werkzeuge des experiment-spezifischen Datenbereichs der SFB501 Erfahrungsdatenbank
(1999)
Software development artifacts must be captured in a targeted way during the execution of a software project so that they can be prepared for reuse. Within Sonderforschungsbereich 501, the methodological basis for this is the concept of the experience database. In its experiment-specific data area, all software development artifacts produced during the life cycle of a development project are stored for each project. In its overarching data area, all those artifacts from the experiment-specific data area that are candidates for reuse in subsequent projects are brought together. It has become apparent that systematic access is already necessary to use the data volumes in the experiment-specific data area of the experience database. Systematic access, however, presupposes a standardized structure. In the experiment-specific area, two types of experiments are distinguished: "controlled experiments" and "case studies". This report describes the storage and access structure for the experiment type "case studies". The structure was developed and evaluated based on the experience gained in first case studies.
Version and configuration management are central instruments for intellectually mastering complex software developments. In strongly reuse-oriented software development approaches - such as the one provided by the SFB - the notion of a configuration must be extended from traditionally product-oriented artifacts to processes and other development experiences. This publication presents such an extended configuration model. In addition, an extension of traditional project planning information is discussed that enables tailored version and configuration management mechanisms to be derived before a project starts.
Software development is becoming a more and more distributed process, which urgently needs supporting tools in the field of configuration management, software process/workflow management, communication and problem tracking. In this paper we present a new distributed software configuration management framework COMAND. It offers high availability through replication and a mechanism to easily change and adapt the project structure to new business needs. To better understand and formally prove some properties of COMAND, we have modeled it in a formal technique based on distributed graph transformations. This formalism provides an intuitive rule-based description technique mainly for the dynamic behavior of the system on an abstract level. We use it here to model the replication subsystem.
The views of project members on the processes of software development are to be formulated in the process modeling language MVP-L and then integrated into a comprehensive process model. The identification of similar information in different views is of particular importance here. In this work we report on the adaptation and synthesis of different approaches to similarity from various domains (schema integration in database design, analogical and case-based reasoning, reuse, and system specification). The result, the similarity function vsim, is illustrated using a reference example. In particular, we discuss the properties of the function vsim and report on experiences in using this function to compute the similarity between process models.
The task of printed Optical Character Recognition (OCR), though considered ``solved'' by many, still poses several challenges. The complex grapheme structure of many scripts, such as Devanagari and Urdu Nastaleeq, greatly lowers the performance of state-of-the-art OCR systems.
Moreover, the digitization of historical and multilingual documents still requires much research. The lack of benchmark datasets further complicates the development of reliable OCR systems. This thesis aims to find answers to some of these challenges using contemporary machine learning technologies. Specifically, Long Short-Term Memory (LSTM) networks have been employed to OCR modern as well as historical monolingual documents. The excellent OCR results obtained on these have led us to extend their application to multilingual documents.
The first major contribution of this thesis is to demonstrate the usability of LSTM networks for monolingual documents. The LSTM networks yield very good OCR results on various modern and historical scripts, without using sophisticated features and post-processing techniques. The set of modern scripts include modern English, Urdu Nastaleeq and Devanagari. To address the challenge of OCR of historical documents, this thesis focuses on Old German Fraktur script, medieval Latin script of the 15th century, and Polytonic Greek script. LSTM-based systems outperform the contemporary OCR systems on all of these scripts. To cater for the lack of ground-truth data, this thesis proposes a new methodology, combining segmentation-based and segmentation-free OCR approaches, to OCR scripts for which no transcribed training data is available.
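As a rough sketch of the basic ingredient, a bidirectional LSTM over a text-line feature sequence trained with the CTC loss (so that no pre-segmented characters are needed), here is a minimal PyTorch example; the shapes and sizes are invented and this is a generic stand-in, not the exact system used in the thesis:

    import torch
    import torch.nn as nn

    class LineRecognizer(nn.Module):
        """Bidirectional LSTM mapping a sequence of column features to per-frame
        character probabilities; CTC aligns frames to the transcription."""
        def __init__(self, n_features=48, n_hidden=128, n_classes=80):
            super().__init__()
            self.lstm = nn.LSTM(n_features, n_hidden, bidirectional=True, batch_first=True)
            self.proj = nn.Linear(2 * n_hidden, n_classes + 1)   # +1 for the CTC blank

        def forward(self, x):                  # x: (batch, time, features)
            out, _ = self.lstm(x)
            return self.proj(out).log_softmax(dim=-1)

    model = LineRecognizer()
    ctc = nn.CTCLoss(blank=0)

    x = torch.randn(4, 120, 48)                # 4 text lines, 120 frames each
    logp = model(x).permute(1, 0, 2)           # CTC expects (time, batch, classes)
    targets = torch.randint(1, 81, (4, 20))    # dummy transcriptions
    loss = ctc(logp, targets,
               input_lengths=torch.full((4,), 120),
               target_lengths=torch.full((4,), 20))
    loss.backward()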
Another major contribution of this thesis is the development of a novel multilingual OCR system. A unified framework for dealing with different types of multilingual documents has been proposed. The core motivation behind this generalized framework is the human reading ability to process multilingual documents, where no script identification takes place.
In this design, the LSTM networks recognize multiple scripts simultaneously without the need to identify different scripts. The first step in building this framework is the realization of a language-independent OCR system which recognizes multilingual text in a single step. This language-independent approach is then extended to script-independent OCR that can recognize multiscript documents using a single OCR model. The proposed generalized approach yields a low error rate (1.2%) on a test corpus of English-Greek bilingual documents.
In summary, this thesis aims to extend the research in document recognition from modern Latin scripts to Old Latin, to Greek, and to other ``under-privileged'' scripts such as Devanagari and Urdu Nastaleeq.
It also attempts to add a different perspective in dealing with multilingual documents.
There is a well-known relationship between alternating automata on finite words and symbolically represented nondeterministic automata on finite words. This relationship is of practical relevance because it makes it possible to combine the advantages of alternating and of symbolically represented nondeterministic automata on finite words. For infinite words, however, the situation is unclear. Therefore, this work investigates the relationship between alternating omega-automata and symbolically represented nondeterministic omega-automata. Thereby, we identify classes of alternating omega-automata that are as expressive as safety, liveness and deterministic prefix automata, respectively. Moreover, some very simple symbolic nondeterminisation procedures are developed for the classes corresponding to safety and liveness properties.
The goal of the Meta-Akad project is to give learners and teachers access to teaching material that is as simple, comprehensive and fast as possible. Several aspects going beyond those of a pure Internet search engine are taken into account: in addition to building a large and representative collection of learning documents, these documents are to be indexed using library methods and enriched with extensive metadata, such as a subject classification. To address the problem of the questionable quality of documents from the Internet, Meta-Akad offers the possibility of ensuring quality through review procedures. Because of this added value, the project sees itself as a virtual library accessible via the Internet. Access to the collected documents is realized through a web-based interface, which is to allow both searching by keywords and browsing the document collection. A keyword search is to cover not only the metadata but also the entire textual content of the documents. The integration of full-text search into the existing metadata search process is the core topic of this project report.
Interconnected, autonomously driving cars shall realize the vision of a zero-accident, low energy mobility in spite of a fast increasing traffic volume. Tightly interconnected medical devices and health care systems shall ensure the health of an aging society. And interconnected virtual power plants based on renewable energy sources shall ensure a clean energy supply in a society that consumes more energy than ever before. Such open systems of systems will play an essential role for economy and society.
Open systems of systems dynamically connect to each other in order to collectively provide a superordinate functionality which could not be provided by a single system alone. The structure as well as the behavior of an open system of systems emerge dynamically at runtime, leading to very flexible solutions that work under various environmental conditions. This flexibility and adaptivity of systems of systems are a key to realizing the above-mentioned scenarios.
On the other hand, however, this leads to uncertainties, since the emerging structure and behavior of a system of systems can hardly be anticipated at design time. This impedes the indispensable safety assessment of such systems in safety-critical application domains. Existing safety assurance approaches presume that a system is completely specified and configured prior to a safety assessment. Therefore, they cannot be applied to open systems of systems. In consequence, safety assurance of open systems of systems could easily become a bottleneck impeding or even preventing the success of this promising new generation of embedded systems.
For this reason, this thesis introduces an approach for the safety assurance of open systems of systems. To this end, we shift parts of the safety assurance lifecycle to runtime in order to dynamically assess the safety of the emerging system of systems. We use so-called safety models at runtime to enable systems to assess the safety of an emerging system of systems themselves. This leads to a very flexible runtime safety assurance framework.
To this end, this thesis describes the fundamental knowledge on safety assurance and model-driven development, which are the indispensable prerequisites for defining safety models at runtime. Based on these fundamentals, we illustrate how we modularized and formalized conventional safety assurance techniques using model-based representations and analyses. Finally, we explain how we advanced these design time safety models to safety models that can be used by the systems themselves at runtime and how we use these safety models at runtime to create an efficient and flexible runtime safety assurance framework for open systems of systems.
Attention-awareness is a key topic for the upcoming generation of computer-human interaction. A human moves his or her eyes to visually attend to a particular region in a scene. Consequently, he or she can process visual information rapidly and efficiently without being overwhelmed by the vast amount of information from the environment. This physiological function, called visual attention, provides a computer system with valuable information about the user from which to infer his or her activity and the surrounding environment. For example, a computer can infer whether the user is reading text or not by analyzing his or her eye movements. Furthermore, it can infer with which object he or she is interacting by recognizing the object the user is looking at. Recent developments in mobile eye tracking technologies enable us to capture human visual attention in ubiquitous everyday environments. There are various types of applications where attention-aware systems may be effectively incorporated. Typical examples are augmented reality (AR) applications such as Wikitude which overlay virtual information onto physical objects. This type of AR application presents augmentative information about recognized objects to the user. However, if it presents information about all recognized objects at once, the overflow of information could be obtrusive to the user. As a solution to this problem, attention-awareness can be integrated into the system: if a system knows to which object the user is attending, it can present only the information of relevant objects to the user.
Towards attention-aware systems in everyday environments, this thesis presents approaches for the analysis of user attention to visual content. Using a state-of-the-art wearable eye tracking device, one can measure the user's eye movements in a mobile scenario. By capturing the user's eye gaze position in a scene and analyzing the image region where the eyes focus, a computer can recognize the visual content the user is currently attending to. I propose several image analysis methods to recognize the user-attended visual content in a scene image. For example, I present an application called Museum Guide 2.0, in which image-based object recognition and eye gaze analysis are combined to recognize user-attended objects in a museum scenario. Similarly, optical character recognition (OCR), face recognition, and document image retrieval are also combined with eye gaze analysis to identify the user-attended visual content in the respective scenarios. In addition to Museum Guide 2.0, I present other applications in which these combined frameworks are effectively used. The proposed applications show that the user can benefit from active information presentation which augments the attended content in a virtual environment with a see-through head-mounted display (HMD).
In addition to the individual attention-aware applications mentioned above, this thesis presents a comprehensive framework that combines all recognition modules to recognize the user-attended visual content when various types of visual information resources such as text, objects, and human faces are present in one scene. In particular, two processing strategies are proposed. The first one selects an appropriate image analysis module according to the user's current cognitive state. The second one runs all image analysis modules simultaneously and merges the analytic results later. I compare these two processing strategies in terms of user-attended visual content recognition when multiple visual information resources are present in the same scene.
Furthermore, I present novel interaction methodologies for a see-through HMD using eye gaze input. A see-through HMD is a suitable device for a wearable attention-aware system for everyday environments because the user can also view his or her physical environment through the display. I propose methods for estimating the user's attention engagement with the display, eye gaze-driven proactive user assistance functions, and a method for interacting with a multi-focal see-through display.
Contributions of this thesis include:
• An overview of the state-of-the-art in attention-aware computer-human interaction and attention-integrated image analysis.
• Methods for the analysis of user-attended visual content in various scenarios.
• Demonstration of the feasibilities and the benefits of the proposed user-attended visual content analysis methods with practical user-supportive applications.
• Methods for interaction with a see-through HMD using eye gaze.
• A comprehensive framework for recognition of user-attended visual content in a complex scene where multiple visual information resources are present.
This thesis opens a novel field of wearable computer systems in which computers can understand the user's attention in everyday environments and provide what the user wants. I will show the potential of such wearable attention-aware systems for everyday environments for the next generation of pervasive computer-human interaction.
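As a toy illustration of the basic gaze-to-object mapping used throughout (with hypothetical object-detector output, not the thesis's recognition pipeline), the attended object can be taken as the recognized object whose bounding box contains the gaze point, accumulated over a short dwell time:

    from collections import Counter

    def attended_object(gaze_points, detections, min_dwell=5):
        """Return the label of the object the user is attending to, i.e. the detection
        whose bounding box contains the gaze point for at least min_dwell samples."""
        hits = Counter()
        for gx, gy in gaze_points:
            for label, (x, y, w, h) in detections:
                if x <= gx <= x + w and y <= gy <= y + h:
                    hits[label] += 1
        label, dwell = hits.most_common(1)[0] if hits else (None, 0)
        return label if dwell >= min_dwell else None

    detections = [("exhibit_A", (100, 80, 120, 90)), ("exhibit_B", (300, 60, 100, 140))]
    gaze = [(150, 120)] * 6 + [(320, 100)] * 2      # fixations from the eye tracker
    print(attended_object(gaze, detections))        # -> exhibit_A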
Due to remarkable technological advances in the last three decades, the capacity of computer systems has improved tremendously. Considering Moore's law, the number of transistors on integrated circuits has doubled approximately every two years, and the trend is continuing. Likewise, developments in storage density, network bandwidth, and compute capacity show similar patterns. As a consequence, the amount of data that can be processed by today's systems has increased by orders of magnitude. At the same time, however, the resolution of screens has increased by hardly a factor of ten. Thus, there is a gap between the amount of data that can be processed and the amount of data that can be visualized. Large high-resolution displays offer a way to deal with this gap and provide a significantly increased screen area by combining the images of multiple smaller display devices. The main objective of this dissertation is the development of new visualization and interaction techniques for large high-resolution displays.
This paper describes the operation of a tool that supports the analysis of feature interactions in intelligent (telephone) networks. The tool is based on a formal approach we have developed, consisting of a suitable specification style, a formal criterion for detecting feature interactions, and a method for resolving the detected feature interactions. The tool performs a static analysis of Estelle specifications and thereby detects potential feature interactions as well as non-executable transitions. In addition, it can remove the detected non-executable transitions from the specification for optimization purposes. We first briefly explain the underlying approach and then describe its application to Estelle on the basis of how the tool works.
This work describes the development of a tool that supports the analysis of feature interactions in intelligent networks. It is based on the formal description technique Estelle; by means of a special specification style, feature interactions can be detected from certain interactions between transitions of different features (e.g. nondeterminism). The goal is the static detection and logging of these interactions as well as the removal of non-executable transitions for runtime optimization. To this end, we first examine the theoretical possibilities for detecting these interactions. We then present the methods and algorithms employed, based on the implementation of the analysis tool, and finally explain the use of the tool, which builds on the Estelle compiler PET.
The formal specification of communication systems, through the abstraction and precision associated with it, is an important basis for the formal verification of system properties. However, this abstraction also limits the expressiveness of the formal description technique and can thus lead to specifications that are inappropriate for the problem at hand. Using the formal description technique Estelle, we first examine two such aspects. Both lead, particularly with respect to Estelle's domain, the specification of communication protocols, to serious impairments of expressiveness. One of these deficits becomes apparent when attempting to specify an open system in Estelle, such as a protocol machine or a communication service. Since Estelle specifications can only describe closed systems, such components can only be specified as part of a fixed, predefined environment and possess a formal syntax and semantics only within it. As a solution to this problem, we introduce Open-Estelle, a compatible syntactic and semantic extension of Estelle that enables the formal specification of such open systems and their import into different environments. Another deficit in the expressiveness of Estelle results from its strict type checking. We show that in heterogeneous, hierarchically structured communication systems, the horizontal and vertical type compositions occurring there lead to an inappropriate modeling of user data types at the service interfaces. This problem even turns out to be fatal when attempting a generic, user-data-type-independent specification of an open system (e.g. with Open-Estelle). We therefore introduce the compatible container type extension, which enables the formal specification of user-data-type-independent and thus generic interfaces of services and protocol machines. As the basis for our implementation and optimization experiments, we introduce the "eXperimental Estelle Compiler" (XEC). Owing to its implementation concept, it allows a very flexible modeling of the system management and is particularly suitable for realizing various selection optimizations. XEC is also equipped with various statistics and monitoring facilities that enable an efficient quantitative analysis of the implementation experiments carried out. In addition to the complete Estelle language, XEC also supports most of the Estelle extensions introduced here. Besides correctness, the efficiency of automatically generated implementations is an important requirement in practical use. It turns out, however, that many of the constructs used in formal protocol specifications are difficult to implement both efficiently and in conformance with the semantics. Accordingly, we examine, with respect to control flow and the handling of user data, how the specified operations can be implemented efficiently without having to lower the level of abstraction. The optimization of the control flow starts from the efficient realization of the basic operations of the implementations generated by XEC and focuses primarily on transition selection, since this accounts for a considerable part of the execution time, especially for complex specifications.
To this end, we develop various heuristic optimizations of the global selection and of the module-local selection and evaluate them both analytically and experimentally. Essential starting points are various event-driven selection procedures at the global level and the reduction of the transitions to be examined at the local level. The verification of the results by means of execution-time-based performance evaluation confirms these findings. With regard to the efficient handling of data, we examine different approaches at various levels, most of which, however, require an inappropriate orientation of the specification towards efficient data transfer. A surprisingly elegant, problem-oriented and efficient solution, however, emerges on the basis of the container type extension, which was originally introduced to raise the level of abstraction. This result refutes the notion that measures to improve efficient implementability must always be bought with a lowering of the level of abstraction.
Estelle is an internationally standardized formal description technique (FDT) designed for the specification of distributed systems, in particular communication protocols. An Estelle specification describes a system of communicating components (module instances). The specified system is closed in a topological sense, i.e. it has no ability to interact with some environment. Because of this restriction, open systems can only be specified together with and incorporated with an environment. To overcome this restriction, we introduce a compatible extension of Estelle, called "Open Estelle". It allows the specification of (topologically) open systems, i.e. systems that have the ability to communicate with any environment through a well-defined external interface. We define a formal syntax and a formal semantics for Open Estelle, both based on and extending the syntax and semantics of Estelle. The extension is compatible syntactically and semantically, i.e. Estelle is a subset of Open Estelle. In particular, the formal semantics of Open Estelle reduces to the Estelle semantics in the special case of a closed system. Furthermore, we present a tool for the textual integration of open systems into environments specified in Open Estelle, and a compiler for the automatic generation of implementations directly from Open Estelle specifications.