This paper critically examines Article 50(1) of the EU Artificial Intelligence Act, which establishes an obligation for providers of AI systems intended for direct interaction with natural persons to develop and design their AI systems in such a way that the natural persons concerned are informed that they are interacting with an AI system. Through a doctrinal and functional analysis, the study explores the scope, legal structure, and limitations of this provision. It highlights significant uncertainties surrounding the definitions of “direct interaction” and “obvious” use cases and critiques the asymmetric allocation of responsibility to providers while excluding deployers. The analysis reveals that the current framework insufficiently safeguards the rights of individuals, particularly vulnerable groups, due to narrow content requirements, a lack of effective remedies, and exemptions for law enforcement and ostensibly “obvious” AI use. The paper argues that transparency, while normatively essential, risks becoming a symbolic rather than a functional safeguard under the current regulatory design. It advocates for the extension of obligations to deployers, greater standardisation of disclosure mechanisms, and a restrictive interpretation of exemptions, particularly in contexts involving children, persons with disabilities, or covert AI deployment. The findings suggest that, without further legislative or interpretative measures, Article 50(1) of the European AI Act will remain a formalistic gesture rather than a substantive guarantee of trustworthy human-AI interaction. As such, the paper calls on the European Commission to issue comprehensive implementation guidelines and to address the regulatory asymmetries that hinder the provision’s effectiveness in real-world contexts.
Link: https://www.sciencedirect.com/science/article/pii/S2212473X26000684