About

Academ-AI documents the adverse effects of artificial intelligence (AI) in academia, particularly suspected instances of AI being used to author research without appropriate declaration.

The articles listed on this site have been identified based on phrases that strongly suggest AI use (highlighted in each quoted passage). If you believe that an article has been wrongly included, please let me know at .

If you suspect the use of AI in a published research article, please reach out with:

  • The citation (in any style); please include a URL or DOI if possible
  • The passage(s) that appear to be AI-generated
  • Your name if you wish to be credited for your contribution

At present, I am documenting journal articles, conference papers/proceedings, and books/chapters only. Theses, preprints, blog posts, and other media are out of scope, at least for the time being.

Purpose

Researchers in various disciplines were quick to experiment with generative AI systems when the technology came to prominence in late 2022 and early 2023. Chatbots were used as objects of research, as tools for conducting research, and even as authors of published research literature. The last of these three applications was immediately controversial, and a consensus rapidly grew that AI systems should not be credited as authors.

The justification for this stance is almost invariably that AI systems cannot be held accountable for their outputs. However, authors who solicit those outputs and use them to contribute to research manuscripts can and should be held accountable for doing so.

This is the general view of numerous academic publishing organizations, including but not limited to the Committee on Publication Ethics (COPE), the Council of Science Editors, the International Committee of Medical Journal Editors (ICMJE), and the World Association of Medical Editors (WAME). The consensus extends to scientific associations, such as the Institute of Electrical and Electronics Engineers (IEEE), the Institute of Physics (IOP), and the Society of Photo-Optical Instrumentation Engineers (SPIE), and to publishers themselves, including Elsevier, Frontiers, the Multidisciplinary Digital Publishing Institute (MDPI), the Public Library of Science (PLoS), Sage, Science, Springer, Taylor and Francis, and Wiley. These organizations also agree that if authors use AI systems to help write their manuscripts, that usage of AI must be declared in the manuscript itself.

Generative AI has manifold implications for research, for better and for worse; the ethical and legal ramifications are too numerous and complex to list here. That researchers should at the very least declare their usage of AI, however, is a simple imperative that imposes no undue strain on the research community. Responsible researchers already declare their usage of various tools, such as scientific instruments, data capture and management software, and programming languages. A declaration of the use of AI, such as that recommended by the cited organizations, is no more taxing.

Yet, as widely adopted as policies of transparency are, they are not enforced uniformly. It is my contention that authors, reviewers, and editors alike should be as scrupulous in ensuring the transparent declaration of AI usage as they are with funding sources, conflicts of interest, and data availability.

On Academ-AI, I am documenting the examples I can find that leave me with little doubt as to their AI origin, but it is impossible to say just how extensive the failure to report AI usage is, considering that much AI-generated text may be undetectable, particularly once it has been edited by human authors.

Habitually holding authors accountable in instances of obvious AI usage may encourage them to declare AI use even when they could get away with not doing so. Just as conflicts of interest do not necessarily invalidate a researcher’s findings, but failure to declare them should raise suspicion, so the use of generative AI should not be disqualifying in and of itself, but failure to declare it should be considered a serious problem. I have further argued that requiring the declaration of AI usage or lack thereof in all cases would incentivize transparency on the part of authors (Glynn, 2024).

What to look for

The contributions of a chatbot can often be identified by the use of key phrases, such as “as of my last knowledge update” or “as an AI language model.” Chatbot responses can also be identified by their conversational style, which is out of keeping with academic prose. Elements of chatbot style include:

  • Liberal use of first-person singular pronouns
  • Use of discourse markers, such as “certainly”

As a consequence of the developers’ attempts to prevent harmful output, chatbots are often verbosely conservative, generating multiple sentences explaining why they cannot do as asked. Examples include:

  • Explaining that they have no access to certain data
  • Referring the user to experts in the relevant field
  • Offering alternative subject matter that the bot could discuss
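
Phrases like these lend themselves to simple automated screening. The following is a minimal sketch in Python of that approach; the phrase list and the flag_passages function are illustrative inventions of mine, not the search strategy actually used to compile this site.

    import re

    # Hypothetical telltale phrases; an illustrative sample only, not
    # the search strategy actually used to compile Academ-AI.
    TELLTALE_PHRASES = [
        "as of my last knowledge update",
        "as an ai language model",
        "i do not have access to",
        "consult a qualified professional",
    ]

    def flag_passages(text: str) -> list[str]:
        """Return the sentences of `text` that contain a telltale phrase."""
        sentences = re.split(r"(?<=[.!?])\s+", text)
        return [s for s in sentences
                if any(p in s.lower() for p in TELLTALE_PHRASES)]

    sample = ("Municipal reform accelerated after 2010. As of my last "
              "knowledge update in January 2022, several states had "
              "adopted e-governance portals.")
    print(flag_passages(sample))  # flags only the second sentence

Any such screen only surfaces candidates; each passage listed on this site is quoted and judged in context.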

Metadata

Publications are cited using the metadata provided by the publisher, even where that metadata is implausible. For example, Narayanan (2014) refers to a “knowledge update in January 2022” despite an alleged publication date eight years earlier. Since the purpose of Academ-AI is to highlight publishing oversights, no attempt is made to resolve these errors.

Categories

I assign one or more subject categories to each listed article according to my own best judgement, based on the top level of the Library of Congress Classification (LCC) system:

  • general works
  • philosophy, psychology, and religion
  • auxiliary sciences of history
  • world history
  • history of the Americas
  • local history of the Americas
  • geography, anthropology, and recreation
  • social sciences
  • political science
  • law
  • education
  • music
  • fine arts
  • language and literature
  • science
  • medicine
  • agriculture
  • technology
  • military science
  • naval science
  • library and information science

Publication types

Each article is classified as:

  • Journal—journal articles
  • Conference—conference papers and proceedings
  • Book—books or book chapters

Post-publication changes

Where an erratum (retraction, corrigendum, or other editorial correction) related to the undeclared use of AI has been published, the article is labeled with the type of erratum and the date of its publication. Occasionally, publishers appear to have retroactively corrected LLM-induced errors without informing readers. These instances are labeled “stealth revision.” Since determining exactly when a stealth revision occurred is often impossible, no date is provided for stealth revisions. Errata that do not concern the undeclared use of AI are not listed.

Indexing

Articles published in journals known to be included in widely used indices, such as Web of Science, are labeled as such. These labels are a work in progress. However, to my knowledge, every journal indexed in the following databases has been so labeled:

ISSNs

The ISSNs of all journals that have them are included in each article’s metadata. All ISSNs have been validated using the ISSN portal; where ISSNs were found to be invalid, unreported, provisional, or registered to a different journal, this finding is also noted in the article metadata.
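
For reference, an ISSN’s final character is a mod-11 check digit, so basic structural validity can be verified locally; confirming that an ISSN is actually registered to a particular journal still requires the ISSN portal. Below is a minimal sketch of the checksum in Python; the function name and example values are my own illustration, not the site’s validation pipeline.

    import re

    def issn_checksum_valid(issn: str) -> bool:
        """Verify the ISSN check digit (format NNNN-NNNC; C may be X).

        Checks structural validity only; registration status must
        still be confirmed against the ISSN portal.
        """
        m = re.fullmatch(r"(\d{4})-?(\d{3})([\dXx])", issn.strip())
        if not m:
            return False
        digits = m.group(1) + m.group(2)
        # Weighted sum: the first seven digits carry weights 8 down to 2.
        total = sum(int(d) * w for d, w in zip(digits, range(8, 1, -1)))
        check = (11 - total % 11) % 11
        expected = "X" if check == 10 else str(check)
        return m.group(3).upper() == expected

    print(issn_checksum_valid("0378-5955"))  # True: standard example ISSN
    print(issn_checksum_valid("0378-5954"))  # False: wrong check digit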

ISBNs

The ISBNs of all conference papers that have them are included in each paper’s metadata.

Similar projects

Other researchers, notably Dr. Guillaume Cabanac (Institut de Recherche en Informatique de Toulouse), have compiled examples of undeclared AI use. The list of such papers on Retraction Watch, based on Dr. Cabanac’s search strategy, was used to identify many of the items listed on this website.

Dr. Damien Charlotin (HEC Paris) tracks legal decisions in which generative AI hallucinated content and legal cases in which generative AI was used to make arguments or prove points.

About me

I am Alex Glynn, MA, Research Literacy and Communications Instructor at the Kornhauser Health Sciences Library, University of Louisville, Louisville, KY, United States of America. I previously worked for three years at the University’s School of Medicine in the Division of Infectious Diseases, where I served as Managing Editor of the Division’s Journal of Respiratory Infections until my departure in August 2023.

My work has been published in Learned Publishing, Accountability in Research, Intelligent Pharmacy, the Journal of Cardiothoracic and Vascular Anesthesia, and others. I have lectured on AI, as well as other topics, at the University of Louisville and served on multiple AI steering committees.

AI usage in this project

I conduct all literature searches and screening of the results manually. All visualization and analysis are done using traditional statistical methods and tools, principally R and Plotly.js. The Academ-AI website is built using the Nuxt web framework, and data are stored in a PostgreSQL database hosted on Supabase.

Generative AI was used only to produce the logo and color palette for the site and to debug and correct Nuxt code. None of the website copy is AI-generated.

Logo

The Ac-AI berry is based on a vector icon generated by Google Gemini.

Google Gemini output from "draw a vector icon of an acai berry," March 23, 2024

Academ-AI logo, made with TikZ version 3.1.10 (TeX Live 2024)

Palette

The site's color palette was generated by ChatGPT, based on the purple of the Ac-AI logo. Provided only with the hex code, ChatGPT named this color “Royal Plum,” which could not be allowed to stand, but the other colors and their names are unchanged.

  • Slate Blue: #303e4e
  • Apolitical Açaí: #61476a
  • Olive Forest: #4e6147
  • Soft Lavender: #b6a0c1
  • Warm Sand: #d8b48c
  • Pale Gray: #e6e8ea
  • Charcoal Navy: #1f2a35
  • Misty Lilac: #f2e9f5

References

Glynn A. The case for universal artificial intelligence declaration on the precedent of conflict of interest. Accountability in Research. 2024. p. 1-2. doi: 10.1080/08989621.2024.2345719.

Hosseini M, Rasmussen L, Resnik D. Using AI to write scholarly publications. Accountability in Research. 2023. p. 1-9. doi: 10.1080/08989621.2023.2168535.

Narayanan K. Good Governance Practices in Indian Municipalities Leveraging Technology for Efficient Service Delivery. International Journal of Transcontinental Discoveries. 2014. p. 27-33.

Thorp H. ChatGPT is fun, but not an author. Science. 2023. p. 313. doi: 10.1126/science.adg7879.