Exploring the Limits of Artificial Intelligence for Referencing Scientific Articles.
Graf, Emily M; McKinney, Jordan A; Dye, Alexander B; Lin, Lifeng; Sanchez-Ramos, Luis.
Affiliation
  • Graf EM; Department of Obstetrics and Gynecology, University of Florida College of Medicine, Jacksonville, Florida.
  • McKinney JA; Department of Obstetrics and Gynecology, University of Florida College of Medicine, Jacksonville, Florida.
  • Dye AB; Department of Obstetrics and Gynecology, University of Florida College of Medicine, Jacksonville, Florida.
  • Lin L; Department of Epidemiology and Biostatistics, University of Arizona, Tucson, Arizona.
  • Sanchez-Ramos L; Department of Obstetrics and Gynecology, University of Florida College of Medicine, Jacksonville, Florida.
Am J Perinatol; 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38653452
ABSTRACT

OBJECTIVE:

To evaluate the reliability of three artificial intelligence (AI) chatbots (ChatGPT, Google Bard, and Chatsonic) in generating accurate references from existing obstetric literature.

STUDY DESIGN:

Between mid-March and late April 2023, ChatGPT, Google Bard, and Chatsonic were prompted to provide references for specific obstetrical randomized controlled trials (RCTs) published in 2020. RCTs were eligible for inclusion if they had been cited in a previous article that primarily evaluated RCTs published in 2020 by the medical and obstetrics and gynecology journals with the highest impact factors, as well as RCTs published in a new journal dedicated to obstetric RCTs. The three AI models were selected on the basis of their popularity, performance in natural language processing, and public availability. Data collection involved prompting each chatbot to provide references according to a standardized protocol. The primary evaluation metric was the accuracy of each AI model in correctly citing references, including authors, publication title, journal name, and digital object identifier (DOI). Statistical analysis was performed using a permutation test to compare the performance of the AI models.
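
The abstract does not include the authors' analysis code; the following is a minimal Python sketch of how a permutation test could compare the citation accuracy of two chatbots evaluated on the same set of requested RCTs. The example inputs (6/44 vs. 0/44 correct citations) are illustrative values chosen to resemble the reported rates, not the study's actual data.

    import numpy as np

    rng = np.random.default_rng(0)

    def permutation_test(correct_a, correct_b, n_perm=10_000):
        """Two-sided permutation test for a difference in citation accuracy.

        correct_a, correct_b: binary arrays (1 = reference cited correctly)
        for two chatbots prompted with the same set of requested RCTs.
        Returns an approximate p-value.
        """
        correct_a = np.asarray(correct_a)
        correct_b = np.asarray(correct_b)
        observed = abs(correct_a.mean() - correct_b.mean())
        pooled = np.concatenate([correct_a, correct_b])
        n_a = len(correct_a)
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)  # reassign outcomes to the two models at random
            diff = abs(pooled[:n_a].mean() - pooled[n_a:].mean())
            if diff >= observed:
                count += 1
        return count / n_perm

    # Illustrative inputs only (not the study's data): roughly 13.6% vs. 0%
    # accuracy across 44 requested RCTs.
    bard = np.array([1] * 6 + [0] * 38)
    chatsonic = np.array([0] * 44)
    print(permutation_test(bard, chatsonic))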

RESULTS:

Among the 44 RCTs analyzed, Google Bard demonstrated the highest accuracy, correctly citing 13.6% of the requested RCTs, whereas ChatGPT and Chatsonic exhibited lower accuracy rates of 2.4% and 0%, respectively. Google Bard often substantially outperformed Chatsonic and ChatGPT in correctly citing the studied reference components. The majority of references generated by all three AI models provided DOIs that belonged to unrelated studies or did not exist.

CONCLUSION:

To ensure the reliability of scientific information being disseminated, authors must exercise caution when using AI for scientific writing and literature searches. However, despite their limitations, collaborative partnerships between AI systems and researchers have the potential to drive synergistic advancements, leading to improved patient care and outcomes.

KEY POINTS

· AI chatbots often cite scientific articles incorrectly.
· AI chatbots can create false references.
· Responsible AI use in research is vital.

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Am J Perinatol Year: 2024 Document type: Article
