Hijacking algorithmic bias: analyzing the political discourse around ChatGPT on social media
Information, Communication & Society

Following the introduction of ChatGPT 3.5, the first widely available general-purpose large language model, users were able to experiment with and interact with this new AI technology at scale for the first time. This study investigates how conservative users appropriated the discourse on algorithmic bias around ChatGPT, particularly on the social media platform X (formerly Twitter), during the initial months following ChatGPT’s public release (February–June 2023). Through a mixed-method analysis of user-generated ‘experiments’ with ChatGPT and a digital ethnography of discourse on X, we explore how conservative users repurposed the concept of ‘algorithmic bias’, originally grounded in liberal values, to advance their ideological agendas. This phenomenon, which we analyze as a form of ‘discourse hijacking’, reveals a critical divergence in how liberal and conservative critiques engage with the notion of power. While liberal critiques are embedded in a critical-theory framework that emphasizes structural inequalities and systemic power imbalances, conservative critiques often disregard these dimensions and instead focus on perceived biases against hegemonic groups. Our findings reveal distinct differences between liberal and conservative critiques of ChatGPT, not only in content but also in the strategies employed, which range from thematic strategies (such as adopting a mocking or playful tone) to coordinated social media actions. These findings underscore the complex relationship between political orientation and public discourse on emerging technologies.

Link: https://www.tandfonline.com/doi/full/10.1080/1369118X.2025.2561046