Kaiserslautern - Fachbereich Informatik
It is difficult for robots to handle a vibrating deformable object. Even for human beings, it is a high-risk operation to, for example, insert a vibrating linear object into a small hole. However, fast manipulation using a robot arm is not just a dream; it may be achieved if some important features of the vibration are detected online. In this paper, we present an approach for fast manipulation using a force/torque sensor mounted on the robot's wrist. A template matching method is employed to recognize the vibrational phase of the deformable object. Thus, fast manipulation can be performed with a high success rate, even under acute vibration. Experiments inserting a deformable object into a hole are conducted to test the presented method. The results demonstrate that the presented sensor-based online fast manipulation is feasible.
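The abstract leaves the matcher unspecified; as a minimal sketch of the idea (all names and parameters invented, single force component only), the vibrational phase can be estimated by sliding a normalized template over the sensor trace:

```python
import numpy as np

def match_phase(signal, template):
    """Return the offset of the best normalized cross-correlation match,
    i.e. an estimate of the current vibrational phase of the workpiece."""
    m = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    best_offset, best_score = 0, -np.inf
    for k in range(len(signal) - m + 1):
        w = signal[k:k + m]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(w @ t) / m
        if score > best_score:
            best_offset, best_score = k, score
    return best_offset, best_score

# Toy data: a noisy force trace and a half-period template.
rng = np.random.default_rng(0)
trace = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)
template = np.sin(np.linspace(0, np.pi, 50))
offset, score = match_phase(trace, template)
```

In practice the matching would run over a sliding window of all six force/torque channels; the single-channel loop above is only meant to make the phase-recognition step concrete.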
This paper discusses the problem of automatic off-line programming and motion planning for industrial robots. First, a new concept consisting of three steps is proposed. In the first step, a new method for on-line motion planning is introduced. The motion planning method is based on the A*-search algorithm and works in the implicit configuration space. During the search, collisions are detected in the explicitly represented Cartesian workspace by hierarchical distance computation. In the second step, the trajectory planner transforms the path into a time- and energy-optimal robot program. The practical application of these two steps strongly depends on high-accuracy robot calibration, which maps the virtual world onto the real world and is discussed in the third step.
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner for several benchmark environments. With optimal discretisation, the new approach usually shows linear speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
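The cyclic mapping of hypercubes onto processing units is essentially a round-robin assignment; a minimal sketch (names hypothetical) of how it spreads the load across, e.g., the 9 PCs mentioned above:

```python
def cyclic_assignment(num_hypercubes, num_units):
    """Assign hypercube i to unit i mod num_units, so that neighbouring
    regions of the configuration space end up on different units."""
    return {i: i % num_units for i in range(num_hypercubes)}

assignment = cyclic_assignment(num_hypercubes=64, num_units=9)
# Unit 0 searches hypercubes 0, 9, 18, ...; unit 1 searches 1, 10, 19, ...
```

Because the search typically expands spatially coherent regions, a cyclic (rather than block-wise) mapping keeps many units busy on the active frontier at once.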
A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application interface (API) to the next higher level. At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture, which can easily be adapted to any industrial kinematics and application. Based on a client/server-principle, the control unit establishes an open user interface for including application specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This implies high accuracy for the path execution and improves the overall system performance.
This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner for several benchmark environments. With optimal discretisation, the new approach usually shows linear, and sometimes even superlinear speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds.
A new problem for the automated off-line programming of industrial robot applications is investigated. The Multi-Goal Path Planning problem is to find a collision-free path connecting a set of goal poses while minimizing, e.g., the total path length. Our solution is based on an earlier reported path planner for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. To control the path planner, four different goal selection methods are introduced and compared. While the Random and the Nearest Pair Selection methods can be used with any path planner, the Nearest Goal and the Adaptive Pair Selection methods are favorable for our planner. With the latter two goal selection methods, the Multi-Goal Path Planning task can be significantly accelerated, because they are able to automatically solve the simplest path planning problems first. Summarizing, compared to Random or Nearest Pair Selection, this new Multi-Goal Path Planning approach results in a further cost reduction of the programming phase.
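The Nearest Goal idea can be pictured as a greedy ordering; a toy sketch (the real planner would use its own cost estimates rather than Euclidean distance):

```python
import math

def nearest_goal_order(start, goals):
    """Always plan to the goal pose closest to the current pose, so the
    simplest path planning subproblems tend to be solved first."""
    order, current, remaining = [], start, list(goals)
    while remaining:
        nxt = min(remaining, key=lambda g: math.dist(current, g))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order

order = nearest_goal_order((0.0, 0.0), [(4.0, 1.0), (0.5, 0.2), (2.0, 2.0)])
# -> [(0.5, 0.2), (2.0, 2.0), (4.0, 1.0)]
```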
INRECA offers tools and methods for developing, validating, and maintaining classification, diagnosis and decision support systems. INRECA's basic technologies are inductive and case-based reasoning [9]. INRECA fully integrates [2] both techniques within one environment and uses the respective advantages of both technologies. Its object-oriented representation language CASUEL [10, 3] allows the definition of complex case structures, relations, similarity measures, as well as background knowledge to be used for adaptation. The object-oriented representation language makes INRECA a domain-independent tool for this kind of task. When problems are solved via case-based reasoning, the primary kind of knowledge that is used during problem solving is the very specific knowledge contained in the cases. However, in many situations this specific knowledge by itself is not sufficient or appropriate to cope with all requirements of an application. Very often, background knowledge is available and/or necessary to better explore and interpret the available cases [1]. Such general knowledge may state dependencies between certain case features and can be used to infer additional, previously unknown features from the known ones.
Edit distances between merge trees of scalar fields have many applications in scientific visualization, such as ensemble analysis, feature tracking or symmetry detection. In this paper, we propose branch mappings, a novel approach to the construction of edit mappings for merge trees. Classic edit mappings match nodes or edges of two trees onto each other, and therefore have to either rely on branch decompositions of both trees or have to use auxiliary node properties to determine a matching. In contrast, branch mappings employ branch properties instead of node similarity information, and are independent of predetermined branch decompositions. Especially for topological features, which are typically based on branch properties, this allows a more intuitive distance measure which is also less susceptible to instabilities from small-scale perturbations. For trees with 𝒪(n) nodes, we describe an 𝒪(n⁴) algorithm for computing optimal branch mappings, which is faster than the only other branch decomposition-independent method in the literature by more than a linear factor. Furthermore, we compare the results of our method on synthetic and real-world examples to demonstrate its practicality and utility.
Kinetic models of human motion rely on boundary conditions which are defined by the interaction of the body with its environment. In the simplest case, this interaction is limited to the foot contact with the ground and is given by the so-called ground reaction force (GRF). A major challenge in the reconstruction of GRF from kinematic data is the double support phase, referring to the state with multiple ground contacts. In this case, the GRF prediction is not well defined. In this work, we present an approach to reconstruct and distribute vertical GRF (vGRF) to each foot separately, using only kinematic data. We propose the biomechanically inspired force shadow method (FSM) to obtain a unique solution for any contact phase, including double support, of an arbitrary motion. We create a kinematics-based function, model an anatomical foot shape and mimic the effect of hip muscle activations. We compare our estimations with the measurements of a Zebris pressure plate and obtain correlations of 0.39≤r≤0.94 for double support motions and 0.83≤r≤0.87 for a walking motion. The presented data is based on inertial human motion capture, showing the applicability for scenarios outside the laboratory. The proposed approach has low computational complexity and allows for online vGRF estimation.
The approach of TOPO was originally developed in the FABEL project [1] to support architects in designing buildings with complex installations. Supplementing knowledge-based design tools, which are available only for selected subtasks, TOPO aims to cover the whole design process. To that end, it relies almost exclusively on archived plans. Input to TOPO is a partial plan, and output is an elaborated plan. The input plan constitutes the query case, and the archived plans form the case base with the source cases. A plan is a set of design objects. Each design object is defined by some semantic attributes and by its bounding box in a 3-dimensional coordinate system. TOPO supports the elaboration of plans by adding design objects.
Patients after total hip arthroplasty (THA) suffer from lingering musculoskeletal restrictions. Three-dimensional (3D) gait analysis in combination with machine-learning approaches is used to detect these impairments. In this work, features from the 3D gait kinematics of an inertial measurement unit (IMU) system, spatio-temporal parameters (Set 1) and joint angles (Set 2), are proposed as input for a support vector machine (SVM) model to differentiate impaired and non-impaired gait. The IMU-based features were validated against an optical motion capture (OMC) system by means of 20 patients after THA and a healthy control group of 24 subjects. Then the SVM model was trained on both feature sets. The validation of the IMU-based kinematic features revealed root mean squared errors in the joint kinematics from 0.24° to 1.25°. The validity of the spatio-temporal gait parameters (STP) revealed a similarly high accuracy. The SVM models based on IMU data showed an accuracy of 87.2% (Set 1) and 97.0% (Set 2). The current work presents valid IMU-based features, employed in an SVM model for the classification of the gait of patients after THA and a healthy control group. The study reveals that the features of Set 2 are more significant for the classification problem. The present IMU system proves its potential to provide accurate features for incorporation in a mobile gait-feedback system for patients after THA.
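As a toy illustration of the classification setup (random stand-in data, not the study's features), an SVM with feature scaling in scikit-learn:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One feature row per subject: 20 patients after THA (label 1) and
# 24 healthy controls (label 0). The 12 columns stand in for gait features.
rng = np.random.default_rng(0)
X = rng.standard_normal((44, 12))
y = np.array([1] * 20 + [0] * 24)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
accuracy = cross_val_score(model, X, y, cv=5).mean()
```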
3D joint kinematics can provide important information about the quality of movements. Optical motion capture systems (OMC) are considered the gold standard in motion analysis. However, in recent years, inertial measurement units (IMU) have become a promising alternative. The aim of this study was to validate IMU-based 3D joint kinematics of the lower extremities during different movements. Twenty-eight healthy subjects participated in this study. They performed bilateral squats (SQ), single-leg squats (SLS) and countermovement jumps (CMJ). The IMU kinematics was calculated using a recently described sensor-fusion algorithm. A marker-based OMC system served as a reference. Only the technical error based on algorithm performance was considered, incorporating OMC data for the calibration, initialization, and a biomechanical model. To evaluate the validity of IMU-based 3D joint kinematics, root mean squared error (RMSE), range of motion error (ROME), Bland-Altman (BA) analysis as well as the coefficient of multiple correlation (CMC) were calculated. The evaluation was twofold: first, the IMU data was compared to OMC data based on marker clusters; second, based on skin markers attached to anatomical landmarks. The first evaluation revealed means for RMSE and ROME for all joints and tasks below 3°. The more dynamic task, the CMJ, revealed error measures approximately 1° higher than the remaining tasks. Mean CMC values ranged from 0.77 to 1 over all joint angles and all tasks. The second evaluation showed an increase in the RMSE of 2.28°–2.58° on average for all joints and tasks. Hip flexion revealed the highest average RMSE in all tasks (4.87°–8.27°). The present study revealed a valid IMU-based approach for the measurement of 3D joint kinematics in functional movements of varying demands. The high validity of the results encourages further development and the extension of the present approach into clinical settings.
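The two central error measures are straightforward to state; a short sketch of RMSE and range-of-motion error (ROME) between an IMU and an OMC joint-angle series:

```python
import numpy as np

def rmse(imu_angle, omc_angle):
    """Root mean squared error between two joint-angle time series (deg)."""
    diff = np.asarray(imu_angle) - np.asarray(omc_angle)
    return float(np.sqrt(np.mean(diff ** 2)))

def rome(imu_angle, omc_angle):
    """Range-of-motion error: difference of the peak-to-peak amplitudes (deg)."""
    return float(np.ptp(imu_angle) - np.ptp(omc_angle))
```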
In many robotic applications, the teaching of points in space is necessary to register the robot coordinate system with the one of the application. Robot-human interaction is awkward and dangerous for the human because of the possibly large size and power of the robot, so robot movements must be predictable and natural. We present a novel hybrid control algorithm which provides the needed precision in small scale movements while allowing for fast and intuitive large scale translations.
Machine learning algorithms are widely applied to create powerful prediction models. With increasingly complex models, humans' ability to understand the decision function (that maps from a high-dimensional input space) is quickly exceeded. To explain a model's decisions, black-box methods have been proposed that provide either non-linear maps of the global topology of the decision boundary, or samples that allow approximating it locally. The former loses information about distances in input space, while the latter only provides statements about given samples, but lacks a focus on the underlying model for precise 'What-If'-reasoning. In this paper, we integrate both approaches and propose an interactive exploration method using local linear maps of the decision space. We create the maps on high-dimensional hyperplanes—2D-slices of the high-dimensional parameter space—based on statistical and personal feature mutability and guided by feature importance. We complement the proposed workflow with established model inspection techniques to provide orientation and guidance. We demonstrate our approach on real-world datasets and illustrate that it allows identification of instance-based decision boundary structures and can answer multi-dimensional 'What-If'-questions, thereby identifying counterfactual scenarios visually.
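A minimal sketch of probing such a 2D slice of the input space (hypothetical model and feature choice; any scikit-learn classifier with predict_proba would do):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a black-box model, then vary two features around a reference
# instance while keeping all other features fixed: a 2D hyperplane slice.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0].copy()
i, j = 0, 7                                   # the two features to vary
xs = np.linspace(X[:, i].min(), X[:, i].max(), 50)
ys = np.linspace(X[:, j].min(), X[:, j].max(), 50)
probs = np.empty((len(xs), len(ys)))
for a, xv in enumerate(xs):
    for b, yv in enumerate(ys):
        probe = instance.copy()
        probe[i], probe[j] = xv, yv
        probs[a, b] = model.predict_proba(probe.reshape(1, -1))[0, 1]
# probs now samples the decision surface on the slice and can answer
# 'What-If' questions about changing exactly these two features.
```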
In order to reduce the elapsed time of a computation, a popular approach is to decompose the program into a collection of largely independent subtasks which are executed in parallel. Unfortunately, it is often observed that tightly-coupled parallel programs run considerably slower than initially expected. In this paper, a framework for the analysis of parallel programs and their potential speedup is presented. Two parameters which strongly affect the scalability of parallelism are identified, namely the grain of synchronization, and the degree to which the target hardware is available. It is shown that for certain classes of applications speedup is inherently poor, even if the program runs under the idealized conditions of perfect load balance, unbounded communication bandwidth and negligible communication and parallelization overhead. Upper bounds are derived for the speedup that can be obtained in three different types of computations. An example illustrates the main findings.
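The bounds themselves are not reproduced in the abstract; for orientation only, the classical Amdahl-style bound has the same flavor: with serial fraction $f$ and $p$ processors,

```latex
S(p) \;\le\; \frac{1}{\,f + \frac{1-f}{p}\,},
\qquad\text{so}\qquad
\lim_{p \to \infty} S(p) \;\le\; \frac{1}{f},
```

i.e. even with perfect load balance and free communication, the serial fraction alone caps the achievable speedup.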
This paper deals with the handling of deformable linear objects (DLOs), such as hoses, wires or leaf springs. It investigates the a priori knowledge about the 6-dimensional force/torque signal for a changing contact situation between a DLO and a rigid polyhedral obstacle. The result is a complete list, containing for each contact change the most significant combination of force/torque signal components together with a description of the expected signal curve. This knowledge enables the reliable detection of changes in the DLO contact situation and with it the implementation of sensor-based manipulation skills for all possible contact changes.
The development of algorithmic differentiation (AD) tools focuses mostly on handling floating point types in the target language. Taping optimizations in these tools mostly focus on specific operations like matrix vector products. Aggregated types like std::complex are usually handled by specifying the AD type as a template argument. This approach provides exact results, but prevents the use of expression templates. If AD tools are extended and specialized such that aggregated types can be added to the expression framework, then this will result in reduced memory utilization and improved timing for applications where aggregated types such as complex numbers or matrix-vector operations are used. Such an integration requires a reformulation of the stored data per expression and a rework of the tape evaluation process. We will demonstrate the overheads on a synthetic benchmark and show the improvement when aggregated types are handled properly by the expression framework of the AD tool.
Several activities around the world aim at integrating object-oriented data models with relational ones in order to improve database management systems. As a first result of these activities, object-relational database management systems (ORDBMS) are already commercially available and, simultaneously, are subject to several research projects. This (position) paper reports on our activities in exploiting object-relational database technology for establishing repository manager functionality supporting software engineering (SE) processes. We argue that some of the key features of ORDBMS can directly be exploited to fulfill many of the needs of SE processes. Thus, ORDBMS, as we think, are much better suited to support SE applications than any others. Nevertheless, additional functionality, e.g., providing adequate version management, is required in order to gain a completely satisfying SE repository. In order to remain flexible, we have developed a generative approach for providing this additional functionality. It remains to be seen whether this approach, in turn, can effectively exploit ORDBMS features. This paper, therefore, wants to show that ORDBMS can substantially contribute to both establishing and running SE repositories.
This paper deals with the robust manipulation of deformable linear objects such as hoses or wires. We propose manipulation based on the qualitative contact state between the deformable workpiece and a rigid environment. First, we give an enumeration of possible contact states and discuss the main characteristics of each state. Second, we investigate the transitions which are possible between the contact states and derive criteria and conditions for each of them. Finally, we apply the concept of contact states and state transitions to the description of a typical assembly task.
This paper deals with the problem of picking-up deformable linear workpieces such as cables or ropes with an industrial robot. First, we give a motivation and problem definition. Based on a brief conceptual discussion of possible approaches we derive an algorithm for picking-up hanging deformable linear objects using two light barriers as sensor system. For this hardware, a skill-based approach is described and the parameters and major influence factors are discussed. In an experimental study, the feasibility and reliability under diverse conditions are investigated. The algorithm is found to be very reliable, if certain boundary conditions are met.
In this paper, we investigate the efficient simulation of deformable linear objects. Based on the state of the art, we extend the principle of minimizing the potential energy by considering plastic deformation and describe a novel approach for treating workpiece dynamics. The major influence factors on precision and computation time are identified and investigated experimentally. Finally, we discuss the usage of parallel processing in order to reduce the computation time.
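As a minimal sketch of the energy-minimization principle (a planar, purely elastic toy model; the paper's approach additionally covers plasticity and dynamics), the equilibrium shape of a discretized linear object can be found by minimizing bending plus gravitational energy:

```python
import numpy as np
from scipy.optimize import minimize

def potential_energy(q, stiffness=1.0, gravity=9.81, mass=0.01, seg_len=0.05):
    """Energy of a planar chain of relative joint angles q: bending energy
    of the joints plus gravitational energy of the segment endpoints."""
    angles = np.cumsum(q)                     # absolute segment orientations
    heights = np.cumsum(seg_len * np.sin(angles))
    bending = 0.5 * stiffness * np.sum(q[1:] ** 2)
    return bending + mass * gravity * np.sum(heights)

q0 = np.zeros(20)                             # 20-segment discretization
result = minimize(potential_energy, q0, method="BFGS")
equilibrium_angles = result.x
```

The number of segments is the main precision/computation-time trade-off alluded to above: a finer discretization raises both the accuracy and the dimension of the optimization problem.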
Dataflow process networks (DPNs) are intrinsically data-driven, i.e., node actions are not synchronized among each other and may fire whenever sufficient input operands have arrived at a node. While the general model of computation (MoC) of DPNs does not impose further restrictions, many different subclasses of DPNs representing different dataflow MoCs have been considered over time. These classes mainly differ in the kinds of behaviors of the processes. A DPN may be heterogeneous in that different processes in the network belong to different classes of DPNs. A heterogeneous DPN can therefore be effectively used to model and to implement different components of a system with different kinds of processes and, therefore, different dataflow MoCs. This paper presents a model-based design based on different dataflow MoCs including their heterogeneous combinations. In particular, it covers the automatic software synthesis of systems from DPN models. The main objective is to validate, evaluate and compare the artifacts exhibited by different dataflow MoCs at the implementation level of systems under the supervision of a common design tool. Moreover, this work also offers an efficient synthesis method that targets and exploits heterogeneity in DPNs by generating implementations based on the kinds of behaviors of the processes. The proposed synthesis method provides a tool chain including different specialized code generators for specific dataflow MoCs, and a runtime system that finally maps models using a combination of different dataflow MoCs on cross-vendor target hardware.
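The data-driven firing rule common to all these MoCs is easy to state; a toy DPN node (all names invented) fires as soon as every input channel holds a token:

```python
from collections import deque

class Node:
    """A dataflow node: fires whenever each input channel has an operand."""
    def __init__(self, fn, inputs, output):
        self.fn, self.inputs, self.output = fn, inputs, output

    def try_fire(self):
        if all(self.inputs):                       # sufficient operands?
            args = [ch.popleft() for ch in self.inputs]
            self.output.append(self.fn(*args))
            return True
        return False

a, b, out = deque([1, 2, 3]), deque([10, 20, 30]), deque()
adder = Node(lambda x, y: x + y, [a, b], out)
while adder.try_fire():
    pass
# out == deque([11, 22, 33])
```

Restricting what try_fire may consume and produce per firing (e.g. fixed token rates) is what distinguishes the more specialized dataflow MoCs from the general DPN model.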
We have presented a novel approach to parallel motion planning for robot manipulators in 3D workspaces. The approach is based on a randomized parallel search algorithm and focuses on solving the path planning problem for industrial robot arms working in a reasonably cluttered workspace. The path planning system works in the discretized configuration space, which need not be represented explicitly. The parallel search is conducted by a number of rule-based sequential search processes, which work to find a path connecting the initial configuration to the goal via a number of randomly generated subgoal configurations. Since the planning performs only on-line collision tests with proper proximity information, without using pre-computed information, the approach is suitable for planning problems with multi-robot or dynamic environments. The implementation has been carried out on the parallel virtual machine (PVM) of a cluster of SUN4 workstations and SGI machines. The experimental results have shown that the approach works well for a 6-dof robot arm in a reasonably cluttered environment, and that parallel computation increases the efficiency of motion planning significantly.
Turbulence models, which are a means to fix the closure problem arising from Reynolds averaging of the Navier-Stokes equations, are economical stop-gaps but suffer from accuracy issues. Modifying turbulence models by incorporating corrections in their functional form is one approach to improve their accuracy. We estimate correction functionals for the Spalart-Allmaras turbulence model, based on an inverse problem with PDE constraints, emphasizing the issue of regularization.
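The abstract does not state the inverse problem explicitly; a common field-inversion formulation of this kind (an illustration of the setup, not necessarily the exact one used here) estimates a multiplicative correction field β with Tikhonov regularization:

```latex
\min_{\beta}\;\tfrac{1}{2}\,\bigl\|u(\beta)-u^{\mathrm{obs}}\bigr\|^{2}
\;+\;\lambda\,\bigl\|\beta-1\bigr\|^{2}
\quad\text{s.t.}\quad
\mathcal{R}\bigl(u,\tilde{\nu};\beta\bigr)=0,
```

where $\mathcal{R}$ collects the RANS and Spalart-Allmaras residuals and the $\lambda$-term penalizes departures from the baseline model, keeping the ill-posed estimation well behaved.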
We describe a platform for the portable and secure execution of mobile agents written in various interpreted languages on top of a common run-time core. Agents may migrate at any point in their execution, fully preserving their state, and may exchange messages with other agents. One system may contain many virtual places, each establishing a domain of logically related services under a common security policy governing all agents at this place. Agents are equipped with allowances limiting their resource accesses, both globally per agent lifetime and locally per place. We discuss aspects of this architecture and report about ongoing work.
We describe a hybrid case-based reasoning system supporting process planning for machining workpieces. It integrates specialized domain-dependent reasoners, a feature-based CAD system and domain-independent planning. The overall architecture is built on top of CAPlan, a partial-order nonlinear planner. To use episodic problem solving knowledge for both optimizing plan execution costs and minimizing search, the case-based control component CAPlan/CbC has been implemented, which allows incremental acquisition and reuse of strategic problem solving experience by storing solved problems as cases and reusing them in similar situations. For effective retrieval of cases, CAPlan/CbC combines domain-independent and domain-specific retrieval mechanisms that are based on the hierarchical domain model and problem representation.
Structured domains are characterized by the fact that there is an intrinsic dependency between certain key elements in the domain. Considering these dependencies leads to better performance of the planning systems, and it is an important factor for determining the relevance of the cases stored in a case base. However, testing for cases that meet these dependencies decreases the performance of case-based planning, as other criteria also need to be considered for determining this relevance. We present a domain-independent architecture that explicitly represents these dependencies so that retrieving relevant cases is ensured without negatively affecting the performance of the case-based planning process.
Many mathematical proofs are hard to generate for humans and even harder for automated theorem provers. Classical techniques of automated theorem proving involve the application of basic rules, of built-in special procedures, or of tactics. Melis (Melis 1993) introduced a new method for analogical reasoning in automated theorem proving. In this paper we show how the derivational analogy replay method is related and extended to encompass analogy-driven proof plan construction. The method is evaluated by showing the proof plan generation of the Pumping Lemma for context-free languages derived by analogy with the proof plan of the Pumping Lemma for regular languages. This is an impressive evaluation test for the analogical reasoning method applied to automated theorem proving, as the automated proof of this Pumping Lemma is beyond the capabilities of any of the current automated theorem provers.
This paper addresses the decomposition of proofs as a means of constructing methods in plan-based automated theorem proving. It also shows how decomposition can beneficially be applied in theorem proving by analogy. Decomposition is also useful for human-style proof presentation. We propose several decomposition techniques that were found to be useful in automated theorem proving and give examples of their application.
This paper analyzes how mathematicians prove theorems. The analysis is based upon several empirical sources such as reports of mathematicians and mathematical proofs by analogy. In order to combine the strength of traditional automated theorem provers with human-like capabilities, the questions arise: Which problem solving strategies are appropriate? Which representations have to be employed? As a result of our analysis, the following reasoning strategies are recognized: proof planning with partially instantiated methods, structuring of proofs, the transfer of subproofs and of reformulated subproofs. We discuss the representation of a component of these reasoning strategies, as well as its properties. We find some mechanisms needed for theorem proving by analogy that are not provided by previous approaches to analogy. This leads us to a computational representation of new components and procedures for automated theorem proving systems.
In this paper we present an interpreter which supports the validation of conceptual models in early stages of development. We compare hypermedia and expert system approaches to knowledge processing and show how an integrated approach eases the creation of expert systems. Our knowledge engineering tool CoMo-Kit allows a "smooth" transition from initial protocols via a semi-formal specification based on a typed hypertext up to a running expert system. The interpreter uses the intermediate hypertext representation for the interactive solution of problems. Thereby, tasks are distributed to agents via a local area network. This means that the specification of an expert system can directly be used to solve real-world problems. If formal (operational) specifications exist for subtasks, then these are delegated to computers. Therefore, our approach makes it possible to specify and validate distributed, cooperative systems where some subtasks are solved by humans and others are solved automatically by computers.
In this paper we describe a framework for defining and operationalizing conceptual models of distributed knowledge-based systems which extends published approaches by the notion of "agents" and multiple task decompositions. The main part deals with the techniques underlying our distributed interpreter. We show how a client-server architecture can be implemented which allows prototyping distributed knowledge-based systems. Further, we describe our mechanism which manages task interactions and supports dependency-directed backtracking efficiently.
In many applications, visual analytics (VA) has developed into a standard tool to ease data access and knowledge generation. VA describes a holistic cycle transforming data into hypotheses and visualizations to generate insights that enhance the data. Unfortunately, many data sources used in the VA process are affected by uncertainty. In addition, the VA cycle itself can introduce uncertainty to the knowledge generation process but does not provide a mechanism to handle these sources of uncertainty. In this manuscript, we aim to provide an extended VA cycle that is capable of handling uncertainty by quantification, propagation, and visualization, defined as uncertainty-aware visual analytics (UAVA). Here, a recap of uncertainty definition and description is used as a starting point to insert novel components into the visual analytics cycle. These components assist in capturing uncertainty throughout the VA cycle. Further, different data types, hypothesis generation approaches, and uncertainty-aware visualization approaches are discussed that fit in the defined UAVA cycle. In addition, application scenarios that can be handled by such a cycle, examples, and a list of open challenges in the area of UAVA are provided.
We describe a novel technique for the simultaneous visualization of multiple scalar fields, e.g. representing the members of an ensemble, based on their contour trees. Using tree alignments, a graph-theoretic concept similar to edit distance mappings, we identify commonalities across multiple contour trees and leverage these to obtain a layout that can represent all trees simultaneously in an easy-to-interpret, minimally-cluttered manner. We describe a heuristic algorithm to compute tree alignments for a given similarity metric, and give an algorithm to compute a joint layout of the resulting aligned contour trees. We apply our approach to the visualization of scalar field ensembles, discuss basic visualization and interaction possibilities, and demonstrate results on several analytic and real-world examples.
IoT systems consist of hardware/software systems (e.g., sensors) that are embedded in a physical world, networked, and that interact with complex software platforms. The validation of such systems is a challenge and is currently mostly done with prototypes. This paper presents a virtual environment for the simulation, emulation and validation of an IoT platform and its semantic model in real-life scenarios. It is based on a decentralized, bottom-up approach that offers interoperability of IoT devices and the value-added services they want to use across different domains. The framework is demonstrated by a comprehensive case study. The example consists of the complete IoT “Smart Energy” use case with a focus on data privacy by homomorphic encryption. The performance of the network is compared while using partially homomorphic encryption, fully homomorphic encryption and no encryption at all. As a major result, we found that our framework is capable of simulating big IoT networks and that the overhead introduced by homomorphic encryption is feasible for VICINITY.
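As a toy illustration of partially (additively) homomorphic aggregation in such a smart-energy scenario, assuming the python-paillier package (phe); the platform's actual encryption stack is not specified here:

```python
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair()

# Each meter encrypts its reading; the aggregator sums the ciphertexts
# without ever seeing a plaintext value (additive homomorphism).
readings = [3.2, 1.7, 4.4]
encrypted = [public_key.encrypt(r) for r in readings]
encrypted_total = encrypted[0] + encrypted[1] + encrypted[2]
total = private_key.decrypt(encrypted_total)   # 9.3
```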
Algorithmic decision-making (ADM) systems have come to support, pre-empt or substitute for human decisions in manifold areas, with potentially significant impacts on individuals' lives. Achieving transparency and accountability has been formulated as a general goal regarding the use of these systems. However, concrete applications differ widely in the degree of risk and the accountability problems they entail for data subjects. The present paper addresses this variation and presents a framework that differentiates regulatory requirements for a range of ADM system uses. It draws on agency theory to conceptualize accountability challenges from the point of view of data subjects with the purpose to systematize instruments for safeguarding algorithmic accountability. The paper furthermore shows how such instruments can be matched to applications of ADM based on a risk matrix. The resulting comprehensive framework can guide the evaluation of ADM systems and the choice of suitable regulatory provisions.
In this paper we present the results of the project “#Datenspende”, in which, during the German election in 2017, more than 4,000 people contributed their search results for keywords connected to the German election campaign.
Analyzing the donated result lists, we show that the room for personalization of the search results is very small. Thus the opportunity for the effect described in Eli Pariser's filter bubble theory to occur in this data is also very small, to the degree that it is negligible. We achieved these results by applying various similarity measures to the donated result lists. The first approach, using the number of common results as a similarity measure (sketched below), showed that the space for personalization is less than two results out of ten on average when searching for persons, and at most four when searching for parties. The application of other, more specific measures shows that the space is indeed smaller, so that the presence of filter bubbles is not evident.
Moreover, this project is also a proof of concept, as it enables society to permanently monitor a search engine's degree of personalization for any desired search terms. The general design can also be transferred to other intermediaries, provided appropriate APIs grant selective access to the contents relevant to such a study in order to establish a similar degree of trustworthiness.
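The simplest of the similarity measures mentioned above, the number of common results, can be stated in a few lines (a sketch with invented data):

```python
def common_results(list_a, list_b, k=10):
    """Number of URLs two top-k result lists share, regardless of rank
    (0 = fully personalized, k = identical result sets)."""
    return len(set(list_a[:k]) & set(list_b[:k]))

# Two donated lists for the same person query, sharing nine of ten results:
a = [f"url{i}" for i in range(10)]
b = ["other_url"] + [f"url{i}" for i in range(9)]
overlap = common_results(a, b)   # 9 -> room for personalization: 1 result
```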
Even though it is not very often admitted, partial functions do play a significant role in many practical applications of deduction systems. Kleene already gave a semantic account of partial functions using a three-valued logic decades ago, but there has not been a satisfactory mechanization. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. However, strong Kleene logic, where quantification is restricted and therefore not truth-functional, does not fit the framework directly. We solve this problem by applying recent methods from sorted logics. This paper presents a tableau calculus that combines the proper treatment of partial functions with the efficiency of sorted calculi.
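The truth-functional fragment of strong Kleene logic is compact enough to sketch (the paper's actual contribution, the treatment of restricted quantification via sorts, is not captured by this toy encoding with None as the third value):

```python
def k3_not(a):
    return None if a is None else (not a)

def k3_and(a, b):
    if a is False or b is False:
        return False            # one definite False decides the conjunction
    if a is None or b is None:
        return None             # otherwise undefinedness propagates
    return True

def k3_or(a, b):
    return k3_not(k3_and(k3_not(a), k3_not(b)))   # De Morgan dual

assert k3_and(False, None) is False   # defined despite a partial operand
assert k3_or(True, None) is True
```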
The introduction of sorts to first-order automated deduction has brought greater conciseness of representation and a considerable gain in efficiency by reducing search spaces. This suggests that sort information can be employed in higher-order theorem proving with similar results. This paper develops a sorted λ-calculus suitable for automatic theorem proving applications. It extends the simply typed λ-calculus by a higher-order sort concept that includes term declarations and functional base sorts. The term declaration mechanism studied here is powerful enough to subsume subsorting as a derived notion and therefore gives a justification for the special form of subsort inference. We present a set of transformations for sorted (pre-)unification and prove the nondeterministic completeness of the algorithm induced by these transformations.
Higher-Order Tableaux (1999)
Even though higher-order calculi for automated theorem proving are rather old, tableau calculi have not been investigated yet. This paper presents two free variable tableau calculi for higher-order logic that use higher-order unification as the key inference procedure. These calculi differ in the treatment of the substitutional properties of equivalences. The first calculus is equivalent in deductive power to the machine-oriented higher-order refutation calculi known from the literature, whereas the second is complete with respect to Henkin's general models.
A straightforward formulation of a mathematical problem is mostly not adequate for resolution theorem proving. We present a method to optimize such formulations by exploiting the variability of first-order logic. The optimizing transformation is described as logic morphisms, whose operationalizations are tactics. The different behaviour of a resolution theorem prover for the source and target formulations is demonstrated by several examples. It is shown how tactical and resolution-style theorem proving can be combined.
The hallmark of traditional Artificial Intelligence (AI) research is the symbolic representation and processing of knowledge. This is in sharp contrast to many forms of human reasoning, which, to an extraordinary extent, rely on cases and (typical) examples. Although these examples could themselves be encoded into logic, this raises the problem of restricting the corresponding model classes to include only the intended models. There are, however, more compelling reasons to argue for a hybrid representation based on assertions as well as examples. The problems of adequacy, availability of information, compactness of representation, processing complexity, and, last but not least, results from the psychology of human reasoning all point to the same conclusion: common sense reasoning requires different knowledge sources and hybrid reasoning principles that combine symbolic as well as semantic-based inference. In this paper we address the problem of integrating semantic representations of examples into automated deduction systems. The main contribution is a formal framework for combining sentential with direct representations. The framework consists of a hybrid knowledge base, made up of logical formulae on the one hand and direct representations of examples on the other, and of a hybrid reasoning method based on the resolution calculus. The resulting hybrid resolution calculus is shown to be sound and complete.
Typical instances, that is, instances that are representative for a particular situation or concept, play an important role in human knowledge representation and reasoning, in particular in analogical reasoning. This well-known observation has been a motivation for investigations in cognitive psychology which provide a basis for our characterization of typical instances within concept structures and for a new inference rule for justified analogical reasoning with typical instances. In a nutshell, this paper suggests augmenting the propositional knowledge representation system by a non-propositional part consisting of concept structures which may have directly represented instances as elements. The traditional reasoning system is extended by a rule for justified analogical inference with typical instances using information extracted from both knowledge representation subsystems.
The semantics of everyday language and the semantics of its naive translation into classical first-order language considerably differ. An important discrepancy that is addressed in this paper concerns the implicit assumption of what exists. For instance, in the case of universal quantification, natural language uses restrictions and presupposes that these restrictions are non-empty, while in classical logic it is only assumed that the whole universe is non-empty. On the other hand, all constants mentioned in classical logic are presupposed to exist, while it poses no problems to speak about hypothetical objects in everyday language. These problems have been discussed in philosophical logic, and some adequate many-valued logics were developed to model these phenomena much better than classical first-order logic can do. An adequate calculus, however, has not yet been given. Recent years have seen a thorough investigation of the framework of many-valued truth-functional logics. Unfortunately, restricted quantifications are not truth-functional, hence they do not fit the framework directly. We solve this problem by applying recent methods from sorted logics.
In this paper we generalize the notion of method for proof planning. While we adopt the general structure of methods introduced by Alan Bundy, we make an essential advancement in that we strictly separate the declarative knowledge from the procedural knowledge. This change of paradigm not only leads to representations that are easier to understand, it also enables modeling the important activity of formulating meta-methods, that is, operators that adapt the declarative part of existing methods to suit novel situations. Thus this change of representation leads to a considerably strengthened planning mechanism. After presenting our declarative approach towards methods, we describe the basic proof planning process with these. Then we define the notion of meta-method, provide an overview of practical examples and illustrate how meta-methods can be integrated into the planning process.
Extending the plan-based paradigm for automated theorem proving, we developed in previous work a declarative approach towards representing methods in a proof planning framework to support their mechanical modification. This paper presents a detailed study of a class of particular methods, embodying variations of a mathematical technique called diagonalization. The purpose of this paper is mainly twofold. First, we demonstrate that typical mathematical methods can be represented in our framework in a natural way. Second, we illustrate our philosophy of proof planning: besides planning with a fixed repertoire of methods, meta-methods create new methods by modifying existing ones. With the help of three different diagonalization problems we present an example trace protocol of the evolution of methods: an initial method is extracted from a particular successful proof. This initial method is then reformulated for the subsequent problems, and more general methods can be obtained by abstracting existing methods. Finally we come up with a fairly abstract method capable of dealing with all three problems, since it captures the very key idea of diagonalization.
In this paper we are interested in using a first-order theorem prover to prove theorems that are formulated in some higher-order logic. To this end we present translations of higher-order logics into first-order logic with flat sorts and equality, and give a sufficient criterion for the soundness of these translations. In addition, translations are introduced that are sound and complete with respect to L. Henkin's general model semantics. Our higher-order logics are based on a restricted type structure in the sense of A. Church; they have typed function symbols and predicate symbols, but no sorts.