Assessing the accuracy of ChatGPT references in head and neck and ENT disciplines.
Frosolini, Andrea; Franz, Leonardo; Benedetti, Simone; Vaira, Luigi Angelo; de Filippis, Cosimo; Gennaro, Paolo; Marioni, Gino; Gabriele, Guido.
Affiliations
  • Frosolini A; Department of Maxillo-Facial Surgery, Policlinico Le Scotte, University of Siena, Siena, Italy. andreafrosolini@gmail.com.
  • Franz L; Phoniatrics and Audiology Unit, Department of Neuroscience DNS, University of Padova, Treviso, Italy.
  • Benedetti S; Artificial Intelligence in Medicine and Innovation in Clinical Research and Methodology (PhD Program), Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy.
  • Vaira LA; Department of Maxillo-Facial Surgery, Policlinico Le Scotte, University of Siena, Siena, Italy.
  • de Filippis C; Maxillofacial Surgery Operative Unit, Department of Medicine, Surgery and Pharmacy, University of Sassari, Sassari, Italy.
  • Gennaro P; PhD School of Biomedical Sciences, Department of Biomedical Sciences, University of Sassari, Sassari, Italy.
  • Marioni G; Phoniatrics and Audiology Unit, Department of Neuroscience DNS, University of Padova, Treviso, Italy.
  • Gabriele G; Department of Maxillo-Facial Surgery, Policlinico Le Scotte, University of Siena, Siena, Italy.
Eur Arch Otorhinolaryngol ; 280(11): 5129-5133, 2023 Nov.
Article in En | MEDLINE | ID: mdl-37679532
ABSTRACT

PURPOSE:

ChatGPT has gained popularity as a web application since its release in 2022. While the potential of artificial intelligence (AI) systems in scientific writing is widely discussed, their reliability in reviewing the literature and providing accurate references remains unexplored. This study examines the reliability of references generated by ChatGPT language models in the Head and Neck field.

METHODS:

Twenty clinical questions were generated across different Head and Neck disciplines to prompt ChatGPT versions 3.5 and 4.0 to produce texts on the assigned topics. The generated references were categorized as "true," "erroneous," or "non-existent" based on congruence with existing records in scientific databases.

RESULTS:

ChatGPT 4.0 outperformed version 3.5 in terms of reference reliability. However, both versions displayed a tendency to provide erroneous/non-existent references.

CONCLUSIONS:

It is crucial to address this challenge to maintain the reliability of scientific literature. Journals and institutions should establish strategies and good-practice principles in the evolving landscape of AI-assisted scientific writing.

Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Head Language: En Year of publication: 2023 Document type: Article