‘In view of the dramatic advancements in artificial intelligence technology in recent years, it has become commonplace to demand that AI systems be trustworthy. This view presupposes that it is possible to trust AI technology in the first place. The aim of this paper is to challenge that view. To do so, it is argued that the philosophy of trust really revolves around the problem of how to square the epistemic and the normative dimensions of trust. Given this double nature of trust, it is possible to extract a threefold challenge to the defenders of the possibility of AI trust without presupposing any particular trust theory. They have to show (1) how trust in AI systems is more than mere reliance; (2) how AI systems can become objects of normative expectations; and (3) how the resulting attitude gives human agents reassurance in their interactions with AI systems. To demonstrate how difficult this task is, the threefold challenge is then applied to two recent accounts that defend the possibility of trust in AI systems. By way of conclusion, it is suggested that instead of trusting AI systems, we should strive to make them reliable.’
Link: https://link.springer.com/article/10.1007/s13347-024-00820-1