The European Union (EU) Artificial Intelligence Act (AI Act) requires institutions that deploy high-risk AI systems to ensure that they are overseen by individuals with the necessary competence, training, authority, and support. Judicial institutions may look to the judges who use the high-risk decision support systems they deploy to perform this oversight role. These judges are ‘in-the-loop’ in the sense that they review each output the system generates and decide whether to override, disregard, or defer to it. This article explores the implications of making judges-in-the-loop responsible for human oversight under the AI Act by assessing the distinctive professional responsibilities, skills, motivations, and biases they bring to the AI-supported decision-making process. It finds that the task of overseeing high-risk decision support systems is too large for judges-in-the-loop alone and proposes an alternative way of involving judges in human oversight that not only meets the AI Act’s requirements, but more reliably safeguards judicial values and fundamental rights.