Kaiserslautern - Fachbereich Informatik
This paper presents a new approach to parallel path planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on a best-first search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on polyhedral models of the robot and the obstacles. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel path planner on a workstation cluster with 9 PCs and tested the planner for several benchmark environments. With optimal discretisation, the new approach usually shows very good speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
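The cyclic mapping of configuration-space hypercubes onto processing units can be illustrated with a short sketch. The following Python fragment is only an assumption-laden illustration of round-robin load distribution (function and parameter names such as cells_per_axis and num_workers are invented here), not the authors' implementation:

```python
# Hedged sketch: cyclic ("round-robin") assignment of configuration-space
# hypercubes to processing units, as a load-distribution illustration.
from itertools import product

def cyclic_hypercube_mapping(cells_per_axis, num_workers, dof=6):
    """Enumerate the hypercubes of a discretised dof-dimensional
    configuration space and map them cyclically onto workers."""
    assignment = {}
    for index, cube in enumerate(product(range(cells_per_axis), repeat=dof)):
        assignment[cube] = index % num_workers   # cyclic mapping
    return assignment

# Example: a coarse 4-cells-per-axis discretisation distributed over 9 PCs.
mapping = cyclic_hypercube_mapping(cells_per_axis=4, num_workers=9)
print(len(mapping), "hypercubes;", "cube (0,...,0) ->", mapping[(0,) * 6])
```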
In this paper we present an interpreter which supports the validation of conceptual models in early stages of development. We compare hypermedia and expert system approaches to knowledge processing and show how an integrated approach eases the creation of expert systems. Our knowledge engineering tool CoMo-Kit allows a "smooth" transition from initial protocols via a semi-formal specification based on a typed hypertext up to a running expert system. The interpreter uses the intermediate hypertext representation for the interactive solution of problems. Thereby, tasks are distributed to agents via a local area network. This means that the specification of an expert system can be used directly to solve real-world problems. If formal (operational) specifications exist for subtasks, these are delegated to computers. Therefore, our approach makes it possible to specify and validate distributed, cooperative systems in which some subtasks are solved by humans and others automatically by computers.
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture, which can easily be adapted to any industrial kinematics and application. Based on a client/server principle, the control unit establishes an open user interface for including application specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This implies high accuracy for the path execution and improves the overall system performance.
In many robotic applications, the teaching of points in space is necessary to register the robot coordinate system with that of the application. Robot-human interaction is awkward and dangerous for the human because of the possibly large size and power of the robot, so robot movements must be predictable and natural. We present a novel hybrid control algorithm which provides the needed precision in small-scale movements while allowing for fast and intuitive large-scale translations.
We present a parallel control architecture for industrial robot cells. It is based on closed functional components arranged in a flat communication hierarchy. The components may be executed by different processing elements, and each component itself may run on multiple processing elements. The system is driven by the instructions of a central cell control component. We set up necessary requirements for industrial robot cells and possible parallelization levels. These are met by the suggested robot control architecture. As an example we present a robot work cell and a component for motion planning, which fits well in this concept.
The semantics of everyday language and the semantics of its naive translation into classical first-order language considerably differ. An important discrepancy that is addressed in this paper concerns the implicit assumption of what exists. For instance, in the case of universal quantification, natural language uses restrictions and presupposes that these restrictions are non-empty, while in classical logic it is only assumed that the whole universe is non-empty. On the other hand, all constants mentioned in classical logic are presupposed to exist, while it poses no problems to speak about hypothetical objects in everyday language. These problems have been discussed in philosophical logic, and some adequate many-valued logics were developed to model these phenomena much better than classical first-order logic can do. An adequate calculus, however, has not yet been given. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. Unfortunately, restricted quantifications are not truth-functional, hence they do not fit the framework directly. We solve this problem by applying recent methods from sorted logics.
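To make the discrepancy concrete, here is a small textbook-style illustration (my own example, not taken from the paper): the naive classical translation of a restricted universal statement is vacuously true when the restriction is empty, whereas everyday language presupposes a non-empty restriction.

```latex
% "All students smoke", naive classical translation:
\forall x.\,(\mathit{Student}(x) \rightarrow \mathit{Smokes}(x))
% This is vacuously true in a model without students, while the
% natural-language sentence presupposes that students exist, i.e.
\exists x.\,\mathit{Student}(x)
% A restricted quantifier makes the presupposed restriction explicit:
\forall x{:}\mathit{Student}.\ \mathit{Smokes}(x)
```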
One of the many features needed to support the activities of autonomous systems is the ability to plan motions. It enables robots to move safely in their environment and to accomplish given tasks. Unfortunately, the control loop comprising sensing, planning, and acting has not yet been closed for robots in dynamic environments. One reason is the long execution time of the motion planning component. A solution to this problem is offered by highly parallel computation. Thus, an important task is the parallelization of existing motion planning algorithms for robots so that they become suitable for highly parallel execution. In several cases, completely new algorithms have to be designed so that a parallelization is feasible. In this survey, we review recent approaches to motion planning using parallel computation.
In recent years, more and more publications and material for studying and teaching, e.g., for Web-based teaching (WBT), appear online, and digital libraries are built to manage such publications and online materials. Therefore, the most important concerns are related to the problem of durable, sustained storage and the management of content together with its metadata, which exist in heterogeneous styles and formats. In this paper, we present specific techniques and their use to support metadata-based catalog services. Such semistructured metadata (represented as XML fragments), which belong to online learning resources, need efficient XML-based query support, scalable result set processing, and comprehensive facilities for personalization purposes. We discuss the associated problems, subsequently derive the concepts of a suitable architecture, and finally outline the realization by means of our prototype system that is based on the J2EE component model.
When considering complex systems, identifying the most important actors is often of relevance. When the system is modeled as a network, centrality measures are used which assign each node a value due to its position in the network. It is often disregarded that they implicitly assume a network process flowing through the network, and also make assumptions about how the network process flows through the network. A node is then central with respect to this network process (Borgatti in Soc Netw 27(1):55–71, 2005, https://doi.org/10.1016/j.socnet.2004.11.008). It has been shown that real-world processes often do not fulfill these assumptions (Bockholt and Zweig, in Complex networks and their applications VIII, Springer, Cham, 2019, https://doi.org/10.1007/978-3-030-36683-4_7). In this work, we systematically investigate the impact of the measures' assumptions by using four datasets of real-world processes. In order to do so, we introduce several variants of the betweenness and closeness centrality which, for each assumption, use either the assumed process model or the behavior of the real-world process. The results are twofold: on the one hand, for all measure variants and almost all datasets, we find that, in general, the standard centrality measures are quite robust against deviations in their process model. On the other hand, we observe a large variation of ranking positions of single nodes, even among the nodes ranked high by the standard measures. This has implications for the interpretability of results of those centrality measures. Since a mismatch between the behavior of the real network process and the assumed process model affects even the highly ranked nodes, resulting rankings need to be interpreted with care.
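The general contrast can be sketched in a few lines of Python. The snippet below compares a standard shortest-path betweenness ranking with a crude "process-based" count over observed trajectories; the toy graph, trajectories and counting rule are invented for illustration and do not reproduce the measure variants or datasets of the paper.

```python
# Hedged sketch: contrasting a standard (shortest-path) betweenness ranking
# with a "process-based" variant computed from observed trajectories.
import networkx as nx
from collections import Counter

def trajectory_betweenness(trajectories):
    """Count how often each node appears as an intermediate node of an
    observed trajectory (endpoints excluded)."""
    counts = Counter()
    for walk in trajectories:
        for node in walk[1:-1]:
            counts[node] += 1
    return counts

G = nx.karate_club_graph()
standard = nx.betweenness_centrality(G)                     # assumes shortest paths
observed = [[0, 2, 32, 33], [5, 0, 8, 33], [1, 2, 8, 30]]   # toy "real" process
process_based = trajectory_betweenness(observed)

top = lambda ranking, k=5: [n for n, _ in sorted(ranking.items(), key=lambda x: -x[1])[:k]]
print("standard top-5:     ", top(standard))
print("process-based top-5:", top(process_based))
```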
Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene has already given a semantic account of partial functions using a three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a tableau calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for the kinds of tasks it is intended for. When problems are solved via case-based reasoning, the primary kind of knowledge that is used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.
In this paper we generalize the notion of method for proof planning. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This change of paradigm not only leads to representations easier to understand, it also enables modeling the important activity of formulating meta-methods, that is, operators that adapt the declarative part of existing methods to suit novel situations. Thus this change of representation leads to a considerably strengthened planning mechanism. After presenting our declarative approach towards methods we describe the basic proof planning process with these. Then we define the notion of meta-method, provide an overview of practical examples and illustrate how meta-methods can be integrated into the planning process.
Extending the plan-based paradigm for automated theorem proving, we developed in previous work a declarative approach towards representing methods in a proof planning framework to support their mechanical modification. This paper presents a detailed study of a class of particular methods, embodying variations of a mathematical technique called diagonalization. The purpose of this paper is mainly twofold. First we demonstrate that typical mathematical methods can be represented in our framework in a natural way. Second we illustrate our philosophy of proof planning: besides planning with a fixed repertoire of methods, meta-methods create new methods by modifying existing ones. With the help of three different diagonalization problems we present an example trace protocol of the evolution of methods: an initial method is extracted from a particular successful proof. This initial method is then reformulated for the subsequent problems, and more general methods can be obtained by abstracting existing methods. Finally we come up with a fairly abstract method capable of dealing with all three problems, since it captures the very key idea of diagonalization.
The development of algorithmic differentiation (AD) tools focuses mostly on handling floating point types in the target language. Taping optimizations in these tools mostly focus on specific operations like matrix-vector products. Aggregated types like std::complex are usually handled by specifying the AD type as a template argument. This approach provides exact results, but prevents the use of expression templates. If AD tools are extended and specialized such that aggregated types can be added to the expression framework, then this results in reduced memory utilization and improved run times for applications where aggregated types such as complex numbers or matrix-vector operations are used. Such an integration requires a reformulation of the stored data per expression and a rework of the tape evaluation process. We demonstrate the overheads on a synthetic benchmark and show the improvement when aggregated types are handled properly by the expression framework of the AD tool.
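To illustrate why aggregated types matter for taping, the following toy Python sketch (not the AD tool discussed in the paper; all names are invented) counts tape entries when a complex multiplication is recorded element-wise versus as a single aggregated expression node:

```python
# Toy illustration: compare the number of tape entries when a complex product
# z = a * b is recorded as individual real-valued operations versus as one
# aggregated operation.

def record_scalar_ops(tape, a, b):
    """Element-wise recording: (ar*br - ai*bi, ar*bi + ai*br).
    Each primitive multiplication/addition becomes its own tape entry."""
    ar, ai = a
    br, bi = b
    t1 = ar * br; tape.append(("mul", ar, br))
    t2 = ai * bi; tape.append(("mul", ai, bi))
    t3 = ar * bi; tape.append(("mul", ar, bi))
    t4 = ai * br; tape.append(("mul", ai, br))
    return (t1 - t2, t3 + t4), tape.append(("sub", t1, t2)) or tape.append(("add", t3, t4))

def record_aggregated_op(tape, a, b):
    """Aggregated recording: one tape entry stores both operands, and the
    reverse sweep would apply the complex product rule in one go."""
    ar, ai = a
    br, bi = b
    tape.append(("cmul", a, b))
    return (ar * br - ai * bi, ar * bi + ai * br)

scalar_tape, aggregated_tape = [], []
record_scalar_ops(scalar_tape, (1.0, 2.0), (3.0, 4.0))
record_aggregated_op(aggregated_tape, (1.0, 2.0), (3.0, 4.0))
print(len(scalar_tape), "entries vs.", len(aggregated_tape), "entry")
```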
A method for efficiently handling associativity and commutativity (AC) in implementations of (equational) theorem provers without incorporating AC as an underlying theory will be presented. The key to substantial efficiency gains lies in a more suitable representation of permutation-equations (such as f(x,f(y,z)) = f(y,f(z,x)), for instance). By representing these permutation-equations through permutations in the mathematical sense (i.e. bijective functions σ: {1,...,n} → {1,...,n}), and by applying adapted and specialized inference rules, we can cope more appropriately with the fact that permutation-equations play a particular role. Moreover, a number of restrictions concerning the application and generation of permutation-equations can be found that would not be possible to this extent when treating permutation-equations just like any other equation. Thus, further improvements in efficiency can be achieved.
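The core representational idea can be sketched briefly: flatten both sides of a permutation-equation with respect to the AC symbol and read off a permutation of argument positions, which can then be composed instead of rewriting with the equation. The Python fragment below is only an illustrative sketch with invented helper names, not the prover's data structures.

```python
# Hedged sketch: turn a permutation-equation over an AC symbol f into a
# permutation of argument positions and compose such permutations.

def flatten(term, op="f"):
    """Flatten nested applications of the AC operator op into a variable list.
    Terms are modelled as strings (variables) or (op, left, right) tuples."""
    if isinstance(term, str):
        return [term]
    assert term[0] == op
    return flatten(term[1], op) + flatten(term[2], op)

def as_permutation(lhs, rhs):
    """Return the permutation p with rhs_vars[i] == lhs_vars[p[i]]."""
    left, right = flatten(lhs), flatten(rhs)
    assert sorted(left) == sorted(right)
    return tuple(left.index(v) for v in right)

def compose(p, q):
    """Apply q after p, i.e. (q ∘ p)."""
    return tuple(q[p[i]] for i in range(len(p)))

# f(x, f(y, z)) = f(y, f(z, x)) corresponds to the cyclic shift (0 1 2) -> (1 2 0).
lhs = ("f", "x", ("f", "y", "z"))
rhs = ("f", "y", ("f", "z", "x"))
p = as_permutation(lhs, rhs)
print(p, compose(p, p), compose(compose(p, p), p))  # third power is the identity
```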
We present an approach to systematically describing case-based reasoning systems by different kinds of criteria. One main requirement was the practical relevance of these criteria and their usability for real-life applications. We report on the results we achieved from a case study carried out in the INRECA Esprit project.
The development of complex software systems is driven by many diverse and sometimes contradictory requirements such as correctness and maintainability of resulting products, development costs, and time-to-market. To alleviate these difficulties, we propose a development method for distributed systems that integrates different basic approaches. First, it combines the use of the formal description technique SDL with software reuse concepts. This results in the definition of a use-case driven, incremental development method with SDL-patterns as the main reusable artifacts. Experience with this approach has shown that there are several other factors of influence, such as the quality of reuse artifacts or the experience of the development team. Therefore, we further combined our SDL-pattern approach with an improvement methodology known from the area of experimental software engineering. In order to demonstrate the validity of this integrating approach, we sketch some representative outcomes of a case study.
Several activities around the world aim at integrating object-oriented data models with relational ones in order to improve database management systems. As a first result of these activities, object-relational database management systems (ORDBMS) are already commercially available and, simultaneously, are subject to several research projects. This (position) paper reports on our activities in exploiting object-relational database technology for establishing repository manager functionality supporting software engineering (SE) processes. We argue that some of the key features of ORDBMS can directly be exploited to fulfill many of the needs of SE processes. Thus, we think, ORDBMS are much better suited to support SE applications than any others. Nevertheless, additional functionality, e.g., providing adequate version management, is required in order to gain a completely satisfying SE repository. In order to remain flexible, we have developed a generative approach for providing this additional functionality. It remains to be seen whether this approach, in turn, can effectively exploit ORDBMS features. This paper, therefore, wants to show that ORDBMS can substantially contribute to both establishing and running SE repositories.
Comprehensive reuse and systematic evolution of reuse artifacts as proposed by the Quality Improvement Paradigm (QIP) do not only require tool support for mere storage and retrieval. Rather, an integrated management of (potentially reusable) experience data as well as project-related data is needed. This paper presents an approach exploiting object-relational database technology to implement QIP-driven reuse repositories. Requirements, concepts, and implementational aspects are discussed and illustrated through a running example, namely the reuse and continuous improvement of SDL patterns for developing distributed systems. Our system is designed to support all phases of a reuse process and the accompanying improvement cycle by providing adequate functionality. Its implementation is based on object-relational database technology along with an infrastructure well suited for these purposes.
Many mathematical proofs are hard to generate for humans and even harder for automated theorem provers. Classical techniques of automated theorem proving involve the application of basic rules, of built-in special procedures, or of tactics. Melis (Melis 1993) introduced a new method for analogical reasoning in automated theorem proving. In this paper we show how the derivational analogy replay method is related and extended to encompass analogy-driven proof plan construction. The method is evaluated by showing the proof plan generation of the Pumping Lemma for context-free languages derived by analogy with the proof plan of the Pumping Lemma for regular languages. This is an impressive evaluation test for the analogical reasoning method applied to automated theorem proving, as the automated proof of this Pumping Lemma is beyond the capabilities of any of the current automated theorem provers.
For transferring existing knowledge into new projects, reuse has become an important factor in today's software industry. However, to set reuse into practice, reusable artifacts have to be stored somewhere and must be offered to (re-)users on demand. For this purpose, advanced reuse repository systems, for instance instantiations of the Experience Base concept, are quite frequently used. Many people, from different projects, have to access such a repository at various phases of software development processes to retrieve or store reusable data. In order to fulfill the given tasks, each of these users has specific needs. Taking this into account, a reuse repository has to offer tailored user interfaces and functions for different user groups. Furthermore, since the contents of such a repository usually represent the state of the art of an organization's (core) competencies, not everyone should be allowed to freely access each and every repository entry. This is especially true for persons who are not part of the organization. This report discusses role concepts that can be applied to reuse repository systems to overcome some of the stated access problems. Commonly used roles for software development and reuse repository management are listed. Based on these roles, a basic set of roles, as implemented in the SFB 501 Experience Base, is introduced.
We propose a universal method for the evaluation of generalized standard materials that greatly simplifies the material law implementation process. By means of automatic differentiation and a numerical integration scheme, AutoMat reduces the implementation effort to two potential functions. By moving AutoMat to the GPU, we close the performance gap to conventional evaluation routines and demonstrate in detail that the expression level reverse mode of automatic differentiation as well as its extension to second order derivatives can be applied inside CUDA kernels. We underline the effectiveness and the applicability of AutoMat by integrating it into the FFT-based homogenization scheme of Moulinec and Suquet and discuss the benefits of using AutoMat with respect to runtime and solution accuracy for an elasto-viscoplastic example.
This paper discusses the problem of automatic off-line programming and motion planning for industrial robots. First, a new concept consisting of three steps is proposed. In the first step, a new method for on-line motion planning is introduced. The motion planning method is based on the A*-search algorithm and works in the implicit configuration space. During the search, collisions are detected in the explicitly represented Cartesian workspace by hierarchical distance computation. In the second step, the trajectory planner has to transform the path into a time- and energy-optimal robot program. The practical application of these two steps strongly depends on a method for robot calibration with high accuracy, thus mapping the virtual world onto the real world, which is discussed in the third step.
In this paper, a framework for globally distributed software development and management environments, which we call Booster, is presented. Additionally, the first experiences with WebMake, an application developed to serve as an experimental platform for a software development environment based on the World Wide Web and the Booster framework, are introduced. Booster encompasses the basic building blocks and mechanisms necessary to support a truly cooperative distributed software development from the very beginning to the last steps in a software life cycle. It is thus a precursor of the Global Software Highway, in which providers and users can meet for the development, management, exchange and usage of all kinds of software.
Edit distances between merge trees of scalar fields have many applications in scientific visualization, such as ensemble analysis, feature tracking or symmetry detection. In this paper, we propose branch mappings, a novel approach to the construction of edit mappings for merge trees. Classic edit mappings match nodes or edges of two trees onto each other, and therefore have to either rely on branch decompositions of both trees or have to use auxiliary node properties to determine a matching. In contrast, branch mappings employ branch properties instead of node similarity information, and are independent of predetermined branch decompositions. Especially for topological features, which are typically based on branch properties, this allows a more intuitive distance measure which is also less susceptible to instabilities from small-scale perturbations. For trees with 𝒪(n) nodes, we describe an 𝒪(n⁴) algorithm for computing optimal branch mappings, which is faster than the only other branch decomposition-independent method in the literature by more than a linear factor. Furthermore, we compare the results of our method on synthetic and real-world examples to demonstrate its practicality and utility.
As global networks are being used by more and more people, they are becoming increasingly interesting for commercial applications. The recent success and change in direction of the World-Wide Web is a clear indication of this. However, this success met a largely unprepared communications infrastructure. The Internet, as an originally non-profit network, offered neither the security nor the globally available accounting infrastructure by itself. These problems were addressed in the recent past, but in a seemingly ad-hoc manner. Several different accounting schemes sensible for only certain types of commercial transactions have been developed, which either seem to neglect the problems of scalability, or trade security for efficiency. Finally, some proposals aim at achieving near perfect security at the expense of efficiency, thus rendering those systems of no practical use. In contrast, this paper presents a suitably configurable scheme for accounting in a general, widely distributed client/server environment. When developing the protocol presented in this paper, special attention has been paid to make this approach work well in the future setting of high-bandwidth, high-latency internets. The developed protocol has been applied to a large-scale distributed application, a WWW-based software development environment.
This paper presents an iterative finite element (FE)-based method to calculate the gravity-free shape of nonrigid parts from an optical measurement performed on a non-over-constrained fixture. Measuring these kinds of parts in a stress-free state is almost impossible because deflections caused by their weight occur. To solve this problem, a simulation model of the measurement is created using available methods of reverse engineering. Then, an iterative algorithm calculates the gravity-free shape. The approach does not require a CAD model of the measured part, implying the whole part can be fully scanned. The application of this method mainly addresses thin, unstable sheet metal parts, like those commonly used in the automotive or aerospace industry. To show the performance of the proposed method, validations with simulation and experimental data are presented. The shown results meet the predefined quality goal of predicting shapes within a tolerance of ±0.05 mm measured in the surface normal direction.
We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain-dependent reasoners, a feature-based CAD system, and domain-independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem solving knowledge for both optimizing plan execution costs and minimizing search, the case-based control component CAPlan/CbC has been implemented, which allows incremental acquisition and reuse of strategic problem solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.
Self-adaptation allows software systems to autonomously adjust their behavior during run-time by handling all possible operating states that violate the requirements of the managed system. This requires an adaptation engine that receives adaptation requests during the monitoring process of the managed system and responds with an automated and appropriate adaptation response. During the last decade, several engineering methods have been introduced to enable self-adaptation in software systems. However, these methods fail to address (1) run-time uncertainty that hinders the adaptation process and (2) the performance impacts resulting from the complexity and the large size of the adaptation space. This paper presents CRATER, a framework that builds an external adaptation engine for self-adaptive software systems. The adaptation engine, which is built on Case-based Reasoning, handles the aforementioned challenges together. The paper is supported by an experiment illustrating the benefits of this framework. The experimental results show the potential of CRATER in terms of handling run-time uncertainty and adaptation remembrance, which enhances the performance for large adaptation spaces.
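As a rough illustration of the case-based reasoning core of such an adaptation engine, the following sketch shows a generic nearest-neighbour retrieve-and-reuse step; the case attributes and similarity measure are invented here and do not reflect CRATER's actual design.

```python
# Hedged sketch: a generic case-based retrieve-and-reuse step for a
# self-adaptation engine. Case attributes and the similarity measure are
# invented for illustration.
import math

# Each case: (context describing the monitored state, adaptation that worked)
case_base = [
    ({"cpu_load": 0.9, "latency_ms": 800}, "spawn_replica"),
    ({"cpu_load": 0.3, "latency_ms": 1200}, "switch_to_cached_mode"),
    ({"cpu_load": 0.5, "latency_ms": 150}, "no_adaptation"),
]

def similarity(a, b):
    """Simple inverse-distance similarity over shared numeric attributes."""
    dist = math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    return 1.0 / (1.0 + dist)

def retrieve(query):
    """Return the adaptation of the most similar stored case."""
    return max(case_base, key=lambda case: similarity(query, case[0]))[1]

print(retrieve({"cpu_load": 0.85, "latency_ms": 700}))  # -> "spawn_replica"
```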
Machine learning algorithms are widely applied to create powerful prediction models. With increasingly complex models, humans' ability to understand the decision function (that maps from a high-dimensional input space) is quickly exceeded. To explain a model's decisions, black-box methods have been proposed that provide either non-linear maps of the global topology of the decision boundary, or samples that allow approximating it locally. The former loses information about distances in input space, while the latter only provides statements about given samples, but lacks a focus on the underlying model for precise 'What-If' reasoning. In this paper, we integrate both approaches and propose an interactive exploration method using local linear maps of the decision space. We create the maps on high-dimensional hyperplanes (2D slices of the high-dimensional parameter space) based on statistical and personal feature mutability and guided by feature importance. We complement the proposed workflow with established model inspection techniques to provide orientation and guidance. We demonstrate our approach on real-world datasets and illustrate that it allows identification of instance-based decision boundary structures and can answer multi-dimensional 'What-If' questions, thereby identifying counterfactual scenarios visually.
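A minimal sketch of the slicing idea is given below, using a generic scikit-learn classifier and two hand-picked features; the actual workflow, mutability weighting and guidance described in the paper are considerably richer.

```python
# Hedged sketch: evaluate a trained model on a 2D slice of the input space
# around a reference instance, keeping all other features fixed.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]     # reference instance ("What if we changed ...?")
f1, f2 = 0, 1       # two features spanning the slice (hand-picked here)

# Build a grid over the two chosen features, hold the remaining ones fixed.
g1 = np.linspace(X[:, f1].min(), X[:, f1].max(), 50)
g2 = np.linspace(X[:, f2].min(), X[:, f2].max(), 50)
grid = np.tile(instance, (len(g1) * len(g2), 1))
mesh1, mesh2 = np.meshgrid(g1, g2)
grid[:, f1] = mesh1.ravel()
grid[:, f2] = mesh2.ravel()

# Class probabilities on the slice reveal the local decision boundary.
slice_probs = model.predict_proba(grid)[:, 1].reshape(mesh1.shape)
print(slice_probs.shape, slice_probs.min(), slice_probs.max())
```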
This paper addresses the decomposition of proofs as a means of constructing methods in plan-based automated theorem proving. It also shows how decomposition can beneficially be applied in theorem proving by analogy. Decomposition is also useful for human-style proof presentation. We propose several decomposition techniques that were found to be useful in automated theorem proving and give examples of their application.
Editorial
(2020)
For the online collision detection with a multi-arm robot, a fast method for computing the so-called collision vector is presented. Manipulators and obstacles are modelled by sets of convex polytopes. Known distance algorithms serve as a foundation. To speed up the collision detection, dynamic obstacles are approximated by geometric primitives and organized in hierarchies. On-line, the Dynamic Hierarchies introduced here are adjusted to the current arm configuration. A comparison with previous methods shows a further acceleration of the computations.
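The hierarchy idea can be sketched roughly as follows; bounding spheres and the recursive pruning rule are used here purely for illustration, and the paper's primitives, hierarchy construction and distance routines differ.

```python
# Hedged sketch: pruning exact distance computations with a bounding-sphere
# hierarchy. Only the pruning principle is illustrated.
import math

class SphereNode:
    def __init__(self, center, radius, children=None, polytope=None):
        self.center, self.radius = center, radius
        self.children = children or []
        self.polytope = polytope        # exact geometry stored at the leaves

def sphere_lower_bound(a, b):
    """Distance between bounding spheres is a lower bound on the exact distance."""
    d = math.dist(a.center, b.center) - a.radius - b.radius
    return max(d, 0.0)

def min_distance(a, b, exact_distance, best=math.inf):
    """Recursively descend both hierarchies, skipping pairs whose sphere
    lower bound already exceeds the best exact distance found so far."""
    if sphere_lower_bound(a, b) >= best:
        return best                      # pruned: cannot improve the bound
    if not a.children and not b.children:
        return min(best, exact_distance(a.polytope, b.polytope))
    # expand the node with the larger sphere first
    if a.children and (not b.children or a.radius >= b.radius):
        for child in a.children:
            best = min_distance(child, b, exact_distance, best)
    else:
        for child in b.children:
            best = min_distance(a, child, exact_distance, best)
    return best
```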
One of the many features needed to support the activities of autonomous systems is the ability to plan motions. It enables robots to move safely in their environment and to accomplish given tasks. Unfortunately, the control loop comprising sensing, planning, and acting has not yet been closed for robots in dynamic environments. One reason is the long execution time of the motion planning component. A solution to this problem is offered by highly parallel computation. Thus, an important task is the parallelization of existing motion planning algorithms for robots so that they become suitable for highly parallel execution. In several cases, completely new algorithms have to be designed so that a parallelization is feasible. In this survey, we review recent approaches to motion planning using parallel computation. As a classification scheme, we use the structure given by the different approaches to the robot's motion planning problem. For each approach, the available parallel processing methods are discussed. Each approach is uniquely assigned a class. Finally, for each referenced research work, a list of keywords is given.
Kinetic models of human motion rely on boundary conditions which are defined by the interaction of the body with its environment. In the simplest case, this interaction is limited to the foot contact with the ground and is given by the so-called ground reaction force (GRF). A major challenge in the reconstruction of GRF from kinematic data is the double support phase, referring to the state with multiple ground contacts. In this case, the GRF prediction is not well defined. In this work we present an approach to reconstruct and distribute vertical GRF (vGRF) to each foot separately, using only kinematic data. We propose the biomechanically inspired force shadow method (FSM) to obtain a unique solution for any contact phase, including double support, of an arbitrary motion. We create a kinematics-based function, model an anatomical foot shape and mimic the effect of hip muscle activations. We compare our estimations with the measurements of a Zebris pressure plate and obtain correlations of 0.39≤r≤0.94 for double support motions and 0.83≤r≤0.87 for a walking motion. The presented data is based on inertial human motion capture, showing the applicability for scenarios outside the laboratory. The proposed approach has low computational complexity and allows for online vGRF estimation.
We describe a novel technique for the simultaneous visualization of multiple scalar fields, e.g. representing the members of an ensemble, based on their contour trees. Using tree alignments, a graph-theoretic concept similar to edit distance mappings, we identify commonalities across multiple contour trees and leverage these to obtain a layout that can represent all trees simultaneously in an easy-to-interpret, minimally-cluttered manner. We describe a heuristic algorithm to compute tree alignments for a given similarity metric, and give an algorithm to compute a joint layout of the resulting aligned contour trees. We apply our approach to the visualization of scalar field ensembles, discuss basic visualization and interaction possibilities, and demonstrate results on several analytic and real-world examples.
Software development organizations are recognizing the increasing importance of investing in the build-up of core competencies for their competitiveness in software system development. This is supported by reuse and experience repository systems that assist in capturing and reusing all kinds of software artifacts (e.g., code, patterns, frameworks) and processes as well as experiences related to these artifacts and processes. To justify such an investment and guide its improvement, it must be evaluated according to the business case, that is, a measurement program has to be developed that is oriented towards the business goals of such a reuse and experience repository system. In this paper, we suggest an approach to iteratively build up measurement programs for gaining feedback and, thereby, controlling and improving such a reuse and experience repository system. The focus is placed on guidelines for the evolution of such measurement programs over time, rather than providing directly applicable metrics or questionnaires. In order to illustrate the feasibility of the approach, examples of running measurement programs at different stages of evolution are given.
Higher-Order Tableaux
(1999)
Even though higher-order calculi for automated theorem proving are rather old, tableau calculi have not been investigated yet. This paper presents two free variable tableau calculi for higher-order logic that use higher-order unification as the key inference procedure. These calculi differ in the treatment of the substitutional properties of equivalences. The first calculus is equivalent in deductive power to the machine-oriented higher-order refutation calculi known from the literature, whereas the second is complete with respect to Henkin's general models.
This paper analyzes how mathematicians prove theorems. The analysis is based upon several empirical sources such as reports of mathematicians and mathematical proofs by analogy. In order to combine the strength of traditional automated theorem provers with human-like capabilities, the questions arise: Which problem solving strategies are appropriate? Which representations have to be employed? As a result of our analysis, the following reasoning strategies are recognized: proof planning with partially instantiated methods, structuring of proofs, the transfer of subproofs and of reformulated subproofs. We discuss the representation of a component of these reasoning strategies, as well as its properties. We find some mechanisms needed for theorem proving by analogy that are not provided by previous approaches to analogy. This leads us to a computational representation of new components and procedures for automated theorem proving systems.
In this paper we are interested in using a first-order theorem prover to prove theorems that are formulated in some higher-order logic. To this end we present translations of higher-order logics into first-order logic with flat sorts and equality and give a sufficient criterion for the soundness of these translations. In addition, translations are introduced that are sound and complete with respect to L. Henkin's general model semantics. Our higher-order logics are based on a restricted type structure in the sense of A. Church; they have typed function symbols and predicate symbols, but no sorts.
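One standard flavour of such a translation, given here only as a generic illustration and not necessarily the translation defined in the paper, is the applicative encoding: higher-order application is replaced by an explicit first-order application symbol, so that function variables become ordinary first-order variables.

```latex
% A higher-order statement about a function variable F:
\forall F.\; P\big(F(a)\big) \rightarrow P\big(F(b)\big)
% Applicative first-order rendering: application becomes an explicit
% binary function symbol "@", and F becomes a first-order variable f
\forall f.\; P\big(@(f,a)\big) \rightarrow P\big(@(f,b)\big)
```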
Algorithmic decision-making (ADM) systems have come to support, pre-empt or substitute for human decisions in manifold areas, with potentially significant impacts on individuals' lives. Achieving transparency and accountability has been formulated as a general goal regarding the use of these systems. However, concrete applications differ widely in the degree of risk and the accountability problems they entail for data subjects. The present paper addresses this variation and presents a framework that differentiates regulatory requirements for a range of ADM system uses. It draws on agency theory to conceptualize accountability challenges from the point of view of data subjects with the purpose to systematize instruments for safeguarding algorithmic accountability. The paper furthermore shows how such instruments can be matched to applications of ADM based on a risk matrix. The resulting comprehensive framework can guide the evaluation of ADM systems and the choice of suitable regulatory provisions.
We present two techniques for reasoning from cases to solve classification tasks: Induction and case-based reasoning. We contrast the two technologies (that are often confused) and show how they complement each other. Based on this, we describe how they are integrated in one single platform for reasoning from cases: The Inreca system.
Four different initialization methods for parallel branch-and-bound algorithms are described and compared with reference to several criteria. A formal analysis of their idle times and efficiency follows. It indicates that the efficiency of three methods depends on the branching factor of the search tree. Furthermore, the fourth method offers the best efficiency of the overall algorithm when a centralized OPEN set is used. Experimental results from a PRAM simulation support these statements.
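For orientation, the sketch below shows a sequential best-first branch-and-bound built around a single, centralized OPEN set (a priority queue) for a toy 0/1 knapsack; in a parallel variant, several workers would pop nodes from this shared OPEN set, and the initialization methods compared in the paper decide how it is seeded. The problem instance and bound are invented for illustration.

```python
# Hedged sketch: best-first branch-and-bound with a centralized OPEN set.
import heapq

items = [(60, 10), (100, 20), (120, 30)]      # (value, weight), ratio-sorted
capacity = 50

def upper_bound(level, value, weight):
    """Greedy fractional bound on the best value reachable from this node."""
    for v, w in items[level:]:
        if weight + w <= capacity:
            value, weight = value + v, weight + w
        else:
            return value + v * (capacity - weight) / w
    return value

best = 0
open_set = [(-upper_bound(0, 0, 0), 0, 0, 0)]   # (-bound, level, value, weight)
while open_set:
    neg_bound, level, value, weight = heapq.heappop(open_set)
    if -neg_bound <= best or level == len(items):
        continue                                 # pruned or fully expanded
    v, w = items[level]
    if weight + w <= capacity:                   # branch: take the item
        best = max(best, value + v)
        heapq.heappush(open_set, (-upper_bound(level + 1, value + v, weight + w),
                                  level + 1, value + v, weight + w))
    heapq.heappush(open_set, (-upper_bound(level + 1, value, weight),  # skip it
                              level + 1, value, weight))
print(best)   # 220 for this toy instance
```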
This paper describes how knowledge-based techniques can be used to overcome problems of workflow management in engineering applications. Using explicit process and product models as a basis for a workflow interpreter makes it possible to alternate planning and execution steps, resulting in an increased flexibility of project coordination and enactment. To gain the full advantages of this flexibility, change processes have to be supported by the system. These require an improved traceability of decisions and have to be based on dependency management and change notification mechanisms. Our methods and techniques are illustrated by two applications: urban land-use planning and software process modeling.
The task of handling non-rigid one-dimensional objects by a robot manipulation system is investigated. To distinguish between different non-rigid object behaviors, five classes of deformable objects from a robotic point of view are proposed. Additionally, an enumeration of all possible contact states of one-dimensional objects with polyhedral obstacles is provided. Finally, the qualitative motion behavior of linear objects is analyzed for stable point contacts. Experiments with different materials validate the analytical results.
Manipulating deformable linear objects - Vision-based recognition of contact state transitions -
(1999)
A new and systematic approach to machine vision-based robot manipulation of deformable (non-rigid) linear objects is introduced. This approach reduces the computational needs by using a simple state-oriented model of the objects. These states describe the relation of the object with respect to an obstacle and are derived from the object image and its features. Therefore, the object is segmented from a standard video frame using a fast segmentation algorithm. Several object features are presented which allow the state recognition of the object while being manipulated by the robot.