Many in the information seeking community are excited about the promise of large language models and Generative AI to improve scholarly information access. These models can quickly transform the content of scholarly works in ways that can make them more approachable, digestible, and suitably written for audiences for whom the works may not have been originally intended. However, current technical implementations of Generative AI can limit their utility in these settings. Issues of hallucination (models generating false or misleading information) and bias propagation are still common, making it difficult to recommend these technologies for critical tasks. Dominant paradigms for addressing these issues and achieving alignment between AI and human values can also reduce the diversity of model output, which can lead to information censorship for stigmatized topics, going against the goal of broad access to high-quality information. In this essay, I discuss the promises of AI for improving access to scholarly content, how current practices in Generative AI training may lead to undesirable and possibly unintended consequences, and how libraries and other community organizations could place themselves at the forefront of solutions for improving the individual and community relevance of these technologies.
Link: https://issuu.com/against-the-grain/docs/june_2024_v36-3/19