Overview of transparency and inspectability mechanisms to achieve accountability of artificial intelligence systems

Several governmental organizations around the world aim for algorithmic accountability of artificial intelligence systems, yet there are few specific proposals on how exactly to achieve it. This article provides an extensive overview of transparency and inspectability mechanisms that contribute to accountability for the technical components of an algorithmic decision-making system. Following the phases of a generic software development process, we identify and discuss several such mechanisms and, for each of them, estimate the time and monetary costs that might be associated with it.

Metadata

Author: Marc Hauer
URN: urn:nbn:de:hbz:386-kluedo-79238
DOI: https://doi.org/10.1017/dap.2023.30
ISSN: 2632-3249
Parent Title (English): Data and Policy
Publisher: Cambridge University Press
Place of publication: University Printing House, Shaftesbury Road, Cambridge, United Kingdom
Contributor(s): Marc Hauer, Tobias Krafft, Katharina Zweig
Document Type: Article
Language of publication: English
Date of Publication (online): 2024/03/29
Year of first Publication: 2023
Publishing Institution: Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau
Date of the Publication (Server): 2024/04/03
Issue: 5
Page Number: 23
Source: 10.1017/dap.2023.30
Faculties / Organisational entities: Kaiserslautern - Department of Computer Science (Fachbereich Informatik)
DDC Classification: 0 General works, computer science, information science / 004 Computer science
Collections: Open Access Publication Fund (Open-Access-Publikationsfonds)
Licence (German): Zweitveröffentlichung (secondary publication)