‘There has been growing attention to Large Language Models and conversational agents, and to their capabilities and benefits. There is also a need to examine the various costs, harms, and risks involved in their development and deployment. To contribute to the development and deployment of ‘trustworthy AI’, we propose organizing ethical reflection and deliberation around the seven key requirements of the European Commission’s High-Level Expert Group on AI (2019). We propose examining these requirements through four ethical perspectives (consequentialism, duty ethics, relational ethics, and virtue ethics) and at three levels of the sociotechnical system (individual, organization, and society). We present a case study of ChatGPT to illustrate how this approach works in practice, and close with a discussion of the approach.’
Link: https://link.springer.com/article/10.1007/s43681-024-00571-x