Trustworthy artificial intelligence

Simion, M. and Kelp, C. (2023) Trustworthy artificial intelligence. Asian Journal of Philosophy, 2(1), 8. (doi: 10.1007/s44204-023-00063-5)

294081.pdf - Published Version
Available under License Creative Commons Attribution.



This paper develops an account of trustworthy AI. Its central idea is that whether AIs are trustworthy is a matter of whether they live up to their function-based obligations. We argue that this account advances the literature in two important ways. First, it provides a rationale for why a range of properties widely assumed in the scientific literature, as well as in policy, to be required of trustworthy AI, such as safety, justice, and explainability, are properties (often) instantiated by trustworthy AI. Second, we connect the discussion of trustworthy AI in policy, industry, and the sciences with the philosophical discussion of trustworthiness. We argue that extant accounts of trustworthiness in the philosophy literature cannot make proper sense of trustworthy AI and that our account compares favourably with its competitors on this front.

Item Type: Articles
Glasgow Author(s) Enlighten ID: Kelp, Professor Christoph and Simion, Professor Mona
Authors: Simion, M., and Kelp, C.
College/School: College of Arts & Humanities > School of Humanities > Philosophy
Journal Name: Asian Journal of Philosophy
ISSN (Online): 2731-4642
Published Online: 13 March 2023
Copyright Holders: Copyright © The Author(s) 2023
First Published: First published in Asian Journal of Philosophy 2(1):8
Publisher Policy: Reproduced under a Creative Commons license


Project Code: 309239
Project Name: Knowledge-First Social Epistemology
Principal Investigator: Mona Simion
Funder's Name: European Research Council (ERC)
Funder Ref: 948356
Lead Dept: Arts - Philosophy