(0100) Artificial Intelligence (AI) Generated This: The Dangers of AI in Peer-Reviewed Publications
Monday, September 30, 2024
12:00 PM – 1:00 PM EDT
Has Audio
Disclosure(s):
Ravi Rajendra Shah, MD: No relevant relationships to disclose.
Introduction: Generative artificial intelligence (AI) software has immeasurable potential in medicine, from scribing in an electronic medical record, to interpreting radiology and histopathology studies, to supporting clinical decisions about treatment plans. Unfortunately, it also poses myriad dangers. Here, the author focuses on one concern in particular: fabricated research publications.
Methods: In this observational study, the author utilized a widely accessible internet-based AI program (Chat Generative Pre-trained Transformer, or ChatGPT; OpenAI, San Francisco, CA) to create a variety of fictitious research abstracts, manuscripts, and reference citations.
Results: Although the AI program denied some of the author's requests on the grounds of "ethical standards and academic integrity," a variety of carefully worded requests yielded complete publications or reference lists that could be passed off to an unsuspecting journal or audience as true, original research.
Conclusions: Generative AI can be used to fabricate elements of research articles, or even entire manuscripts. Despite programmed ethical safeguards, revised user prompts can easily bypass these "hard stops." Savvy journal editors may use various methods, such as signed ethical statements from authors, discerning peer reviewers, confirmatory literature searches, plagiarism-detection software, and pattern-recognition software, to identify AI-written manuscripts, but continued vigilance is essential.