Beyond echo chambers and rabbit holes: algorithmic drifts and the limits of the Online Safety Act, Digital Services Act, and AI Act – Griffith Law Review

‘This paper uses Karen Barad’s concepts of intra-action and diffraction to argue that the dominant models for understanding algorithmic radicalisation (echo chambers, filter bubbles, rabbit holes), and the ways it is addressed in the UK and EU, are inadequate for deep neural network systems like YouTube’s. These models assume users seek out bad content and get stuck in reinforcing loops. Such systems, however, are far more dynamic and relational, constantly probing, drifting, experimenting, and seeking rewards to keep users engaged. Consequently, current laws (e.g. the Online Safety Act, Digital Services Act, and Artificial Intelligence Act) are ill-equipped to deal with this: they focus on removing ‘bad’ content and punishing bad actors, missing the deeper, systemic processes driving radicalisation even where no ‘bad’ content or actors are necessarily involved. While not claiming to have the answers to this complex problem, the paper argues for better questions that account for the complexity of these systems and their relational, intra-active, and diffractive nature.’

Link: https://www.tandfonline.com/doi/full/10.1080/10383441.2025.2481544