‘Most, if not all, journals require the use of Large Language Models (LLMs), such as ChatGPT, to be acknowledged. This article argues that current guidelines do not go far enough: the use of an LLM may be acknowledged, but the reviewers, and future readers, do not know which parts of the article were generated with AI (Artificial Intelligence) assistance and how that text was subsequently edited. It’s possible that an entire article could be generated with AI and, as long as the authors acknowledge that an LLM was used, they are meeting the journal’s guidelines. In this opinion article, current publisher guidelines are examined, followed by a brief case study which highlights some of the issues that the scholarly community faces. Proposed changes to the guidelines are presented which say that LLM prompts, and the generated text, should be provided to the reviewers, and to future readers, so that they can see which parts of the article were generated and what edits were made to that text.’
Link: https://link.springer.com/article/10.1007/s10805-024-09581-0