The effect of prompt framing on AI-generated sentencing recommendations: a research note – Criminal Justice Studies

The rapid integration of artificial intelligence (AI) systems into societal domains, particularly legal and criminal justice decision-making, demands scrutiny of potential biases in their outputs. AI tools now assist in predictive policing, risk assessment, sentencing recommendations, and legal research, which requires an examination of potential sources of bias in AI systems’ responses and recommendations. This study investigates the impact of prompt framing on AI sentencing recommendations and on perceptions of the threat offenders pose to the community. We systematically tested six leading AI models – Copilot, Gemini, GPT, Grok, Mistral, and Perplexity – using identical case scenarios of second-degree aggravated assault in a domestic violence context, one featuring a male offender and one a female offender. The findings reveal that prompt framing shapes AI outputs. Notably, we observed differential treatment based on offender gender: female offenders consistently received lower sentencing recommendations and threat ratings despite the scenarios being factually identical. We discuss these findings in terms of their implications for the relevance of framing and the potential perpetuation of gender bias within AI systems.

Link: https://doi.org/10.1080/1478601X.2026.2624489