
ESR Journals editors’ joint statement on Guidelines for the Use of Large Language Models by Authors, Reviewers, and Editors


The impact of artificial intelligence (AI)–assisted technologies, such as large language models (LLMs), chatbots, and image generators, on biomedical publishing was discussed by the editors of radiology journals at the annual Radiology Editors’ Forum, held on August 11–12, 2023, in Chicago, IL. The forum was attended by over 40 individuals representing 30 biomedical imaging journals from 9 countries. In addition to the May 2023 ICMJE update [1], the editors considered relevant statements on contributions by AI-assisted technologies from other publication committees, associations, and societies, including the policies of the World Association of Medical Editors (WAME), the Committee on Publication Ethics (COPE), and the Council of Science Editors (CSE) [2, 3, 4]. New NIH guidelines addressing the role of generative AI in the review of submitted applications were also reviewed [5], as were policies developed by various medical journals and publishers [6, 7, 8, 9, 10, 11, 12]. At the conclusion of the forum, the following policies were endorsed in principle.

With this article, the Editors-in-Chief of the ESR Journals adapt these policies to their journals. Generative AI tools will clearly continue to evolve rapidly and open new possibilities in our daily lives, so these statements and policies will need to be re-evaluated and updated regularly.

AI or AI-assisted technologies do not qualify as authors and must not be listed as authors or co-authors [1, 2, 3, 6, 7, 8, 9, 10, 11]

Nonhuman AI, LLMs, chatbots, machine learning, and similar generative AI technologies do not meet the four ICMJE criteria for authorship. These criteria were developed to guarantee that all authors accept full responsibility for, and stand behind, the integrity of the entire work. Accordingly, only humans can be authors [2]. AI-assisted technologies used to generate results should be reported in the article as methodological tools employed in the completion of the work, not included as authors.

Authors must disclose at submission whether they used AI or AI-assisted technologies in their work

Authors who use such technology must clearly describe how AI or AI-assisted technologies were used in the study and/or in manuscript preparation. Authors should be transparent whenever AI-assisted technologies are used and provide information about their use [2, 3]. If the tools were used to carry out the research or to generate results, authors must provide this information in the Materials and Methods section or in the relevant section of the manuscript (e.g., figure legends for AI-generated figures) [9]. In all cases, authors should include specific details, such as the name and version of the AI tool, the date of access, and the name of the manufacturer/creator [6, 7, 9, 10].
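
As an illustration only (the wording and bracketed details are hypothetical placeholders, not a prescribed form), such a disclosure might read: “The segmentation results in Figure [N] were generated with [tool name], version [X.Y] ([manufacturer]), accessed on [date].” Whatever phrasing is chosen, the tool, version, manufacturer, and date of use should be identifiable.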

Authors must disclose at submission whether they used AI or AI-assisted technologies for writing or editing the manuscript

Authors may use LLMs to assist with medical writing and content editing in order to communicate their work effectively. Such tasks include assistance with grammar, language, and adherence to reporting standards. Authors must transparently report, in the Acknowledgment section, how they used these tools in the writing or editing of their submitted work. Authors are encouraged to include specific details, such as the name of the language model or tool, its version number, and the manufacturer [9, 10].
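
A hypothetical Acknowledgment entry, offered here purely as a template, might state: “During the preparation of this work, the authors used [language model], version [X.Y] ([manufacturer]), to improve the grammar and readability of the manuscript; the authors reviewed and edited the resulting text and take full responsibility for the content of the publication.”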

All authors are fully responsible for any submitted material that includes AI-assisted technologies

AI-assisted technologies cannot distinguish between true and false information. Humans, i.e., the authors, are and remain fully responsible for the submitted manuscript. Authors should carefully review and edit any AI-generated content, because AI can produce authoritative-sounding output that is biased, incomplete, or partially or completely incorrect [1].

Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by AI

Humans must ensure appropriate attribution of all quoted material, including full citations [1]. Authors should acknowledge all sources, including material produced by AI-assisted tools [2, 6, 7, 8, 9, 10, 11]. Authorship attribution requires accountability for the submitted work: authors are responsible for any text generated by an AI-assisted tool in their manuscript (including the accuracy of what is presented and the absence of plagiarism) and for ensuring the accuracy and completeness of citations [2]. AI-generated material cannot be cited as a primary source [1].

Any content created by AI or AI-assisted tools must be labelled

The submission and publication of content or images created by AI, language models, machine learning, or similar technologies is discouraged unless it is part of the formal research design or methods. It is not permitted without clear labelling: a description of the content that was created, the name of the model or tool, version and extension numbers, and the manufacturer [6]. Authors are fully responsible for the integrity of the content generated by these models and tools [6]. When generative AI itself is the focus of a study, its use must be explicitly detailed in the Materials and Methods section [9].

Reviewers and editors are bound by confidentiality and should not upload manuscripts to software or other AI-assisted tools where confidentiality cannot be assured [1, 2]

Reviewers and editors are trusted and required to maintain confidentiality throughout the manuscript review process. Authors trust reviewers and editors to protect their proprietary, sensitive, and confidential ideas. The use of AI-assisted tools may violate peer-review confidentiality expectations; several journals have followed the ICMJE and WAME guidelines and state that entering any part of a manuscript, its abstract, or the text of a review into a chatbot, language model, or similar tool violates their confidentiality agreement [7, 9, 12]. The review process is valued for its human expert perspective and for human oversight of decision-making in scholarly publication, including the accountability that such oversight provides [9, 13]. If a reviewer or editor uses an AI tool as a resource for their review in a way that does not violate the journal’s confidentiality policy, they must disclose the name of the tool and how it was used.


