Mission Statement

Published: March 25, 2024

Modified: September 22, 2024

Researchers in various disciplines were quick to experiment with generative AI systems when such systems came to prominence in late 2022 and early 2023. Chatbots served as objects of research, as tools for conducting research, and even as authors of published research literature. The last of these three applications was immediately controversial, and a consensus rapidly emerged that AI systems should not be credited as authors.

AI tools cannot be used in the work, nor can figures, images, or graphics be the products of such tools. And an AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works.

—H. Holden Thorp, Editor-in-Chief, Science Journals1

The justification for this stance is almost invariably that AI systems cannot be held accountable for their outputs. However, authors who solicit those outputs and use them to contribute to research manuscripts can and should be held accountable for doing so.

We believe that authors are ultimately responsible for the text generated by NLP systems and must be held accountable for inaccuracies, fallacies, or any other problems in manuscripts. We take this position because 1) NLP systems respond to prompts provided by researchers and do not proactively generate text; 2) authors can juxtapose text generated by an NLP system with other text (e.g., their own writing) or simply revise or paraphrase the generated text; and 3) authors will take credit for the text in any case.

—Hosseini, Rasmussen, and Resnik; Accountability in Research2

This is the general view of numerous academic publishing organizations, including but not limited to the Committee on Publication Ethics (COPE)3, the Council of Science Editors4, the International Committee of Medical Journal Editors (ICMJE)5, and the World Association of Medical Editors (WAME)6. The consensus extends to scientific associations, such as the Institute of Electrical and Electronics Engineers (IEEE)7, the Institute of Physics (IOP)8, and the Society of Photo-Optical Instrumentation Engineers (SPIE)9, and to publishers themselves, including Elsevier10, Frontiers11, the Multidisciplinary Digital Publishing Institute (MDPI)12, the Public Library of Science (PLoS)13, Sage14, Science15, Springer16, Taylor and Francis17, and Wiley18. These organizations also agree that if authors use AI systems to help write their manuscripts, they must declare that usage in the manuscript itself.

Because NLP systems may be used in ways that may not be obvious to the reader, researchers should disclose their use of such systems and indicate which parts of the text were written or co-written by an NLP system.

—Hosseini, Rasmussen, and Resnik; Accountability in Research2

Just as plagiarism can involve the misappropriation or theft of words or ideas, NLP-generated ideas may also affect the integrity of publications. When NLP assistance has impacted the content of a publication (even in the absence of direct use of NLP-generated text), this should be disclosed.

—Hosseini, Rasmussen, and Resnik; Accountability in Research2

Generative AI has manifold implications for research, for better and for worse; the ethical and legal ramifications are too numerous and complex to list here. That researchers should at the very least declare their usage of AI, however, is a simple imperative that imposes no undue strain on the research community. Responsible researchers already declare their usage of various tools, such as scientific instruments, data capture and management software, and programming languages. A declaration of the use of AI, such as that recommended by the cited organizations, is no more taxing.

Yet, widely adopted as these transparency policies are, they are not enforced uniformly. It is my contention that authors, reviewers, and editors alike should be as scrupulous in ensuring the transparent declaration of AI usage as they are with funding sources, conflicts of interest, and data availability.

On Academ-AI, I am documenting the examples I can find that leave little doubt as to their AI origin, but it is impossible to say just how extensive the failure to report AI usage is, given that much AI-generated text may be undetectable, particularly once edited by human authors.

It is possible that habitually holding authors accountable in cases of obvious AI usage will encourage them to declare AI use even when they could get away with not doing so. Just as conflicts of interest do not necessarily invalidate a researcher’s findings but failure to declare them should raise suspicion, so the use of generative AI should not be disqualifying in and of itself, but failure to declare it should be considered a serious problem. I have further argued that requiring the declaration of AI usage, or the lack thereof, in all cases would incentivize transparency on the part of authors.19

References

1Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313. doi:10.1126/science.adg7879
2Hosseini M, Rasmussen LM, Resnik DB. Using AI to write scholarly publications. Accountability in Research. Published online 2023:1-9. doi:10.1080/08989621.2023.2168535
3Authorship and AI tools. Committee on Publication Ethics. Published February 13, 2023. Accessed March 21, 2024. https://publicationethics.org/cope-position-statements/ai-author
4Jackson J, Landis G, Baskin PK, Hadsell KA, English M, CSE Editorial Policy Committee. CSE Guidance on Machine Learning and Artificial Intelligence Tools. Sci Editor. 2023;46(2):72. doi:10.36591/SE-D-4602-07
5International Committee of Medical Journal Editors. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Published online January 2024. Accessed March 19, 2024. https://www.icmje.org/icmje-recommendations.pdf
6Zielinski C, Winker MA, Aggarwal R, et al. Chatbots, Generative AI, and Scholarly Manuscripts: WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications. World Association of Medical Editors. Published May 31, 2023. https://wame.org/page3.php?id=106
7Submission and Peer Review Policies. IEEE Author Center. Published 2024. Accessed March 21, 2024. https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/submission-and-peer-review-policies/
8Ethical policy for journals. IOPscience. Accessed July 2, 2024. https://publishingsupport.iopscience.iop.org/ethical-policy-journals/
9Manuscript guidelines and policies. SPIE. Published 2024. Accessed July 2, 2024. https://spie.org/conferences-and-exhibitions/event-resources/manuscript-guidelines-and-policies
10The use of AI and AI-assisted technologies in writing for Elsevier. Elsevier. Published 2024. Accessed March 25, 2024. https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier
11Author Guidelines. Frontiers. Published 2024. Accessed July 2, 2024. https://www.frontiersin.org/guidelines/author-guidelines
12Research and Publication Ethics. MDPI. Published 2024. Accessed June 10, 2024. https://www.mdpi.com/ethics
13Ethical Publishing Practice. PLOS. Accessed July 2, 2024. https://journals.plos.org/plosone/s/ethical-publishing-practice#loc-artificial-intelligence-tools-and-technologies
14ChatGPT and Generative AI. SAGE Publications Inc. Published January 27, 2023. Accessed June 10, 2024. https://us.sagepub.com/en-us/nam/chatgpt-and-generative-ai
15Science Journals: Editorial Policies. Published 2024. Accessed March 21, 2024. https://www.science.org/content/page/science-journals-editorial-policies
16Artificial Intelligence (AI). Springer. Published 2023. Accessed March 21, 2024. https://www.springer.com/us/editorial-policies/artificial-intelligence--ai-/25428500
17Defining authorship in your research paper. Taylor and Francis. Accessed March 21, 2024. https://authorservices.taylorandfrancis.com/editorial-policies/defining-authorship-research-paper/
18Best Practice Guidelines on Research Integrity and Publishing Ethics. Wiley Author Services. Published February 28, 2023. https://authorservices.wiley.com/ethics-guidelines/index.html
19Glynn A. The case for universal artificial intelligence declaration on the precedent of conflict of interest. Accountability in Research. Published online 2024. doi:10.1080/08989621.2024.2345719