Assessing the accuracy of ChatGPT references in head and neck and ENT disciplines.
Frosolini, Andrea; Franz, Leonardo; Benedetti, Simone; Vaira, Luigi Angelo; de Filippis, Cosimo; Gennaro, Paolo; Marioni, Gino; Gabriele, Guido.
Affiliation
  • Frosolini A; Department of Maxillo-Facial Surgery, Policlinico Le Scotte, University of Siena, Siena, Italy. andreafrosolini@gmail.com.
  • Franz L; Phoniatrics and Audiology Unit, Department of Neuroscience DNS, University of Padova, Treviso, Italy.
  • Benedetti S; Artificial Intelligence in Medicine and Innovation in Clinical Research and Methodology (PhD Program), Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy.
  • Vaira LA; Department of Maxillo-Facial Surgery, Policlinico Le Scotte, University of Siena, Siena, Italy.
  • de Filippis C; Maxillofacial Surgery Operative Unit, Department of Medicine, Surgery and Pharmacy, University of Sassari, Sassari, Italy.
  • Gennaro P; PhD School of Biomedical Sciences, Department of Biomedical Sciences, University of Sassari, Sassari, Italy.
  • Marioni G; Phoniatrics and Audiology Unit, Department of Neuroscience DNS, University of Padova, Treviso, Italy.
  • Gabriele G; Department of Maxillo-Facial Surgery, Policlinico Le Scotte, University of Siena, Siena, Italy.
Eur Arch Otorhinolaryngol ; 280(11): 5129-5133, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37679532
PURPOSE: ChatGPT has gained popularity as a web application since its release in 2022. While the potential of artificial intelligence (AI) systems in scientific writing is widely discussed, their reliability in reviewing the literature and providing accurate references remains unexplored. This study examines the reliability of references generated by ChatGPT language models in the Head and Neck field.

METHODS: Twenty clinical questions were generated across different Head and Neck disciplines and used to prompt ChatGPT versions 3.5 and 4.0 to produce texts on the assigned topics. The generated references were categorized as "true," "erroneous," or "inexistent" based on their congruence with existing records in scientific databases.

RESULTS: ChatGPT 4.0 outperformed version 3.5 in terms of reference reliability. However, both versions displayed a tendency to provide erroneous or non-existent references.

CONCLUSIONS: It is crucial to address this challenge to maintain the reliability of the scientific literature. Journals and institutions should establish strategies and good-practice principles for the evolving landscape of AI-assisted scientific writing.
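
The verification step summarized in METHODS (checking each generated citation for a matching record in scientific databases) can be sketched in code. The snippet below is a minimal illustration rather than the authors' actual protocol: it assumes citations are checked against PubMed through the NCBI E-utilities esearch endpoint, and the title and PMID used in the example are hypothetical placeholders.

from typing import Optional

import requests

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def classify_reference(title: str, claimed_pmid: Optional[str] = None) -> str:
    """Classify a ChatGPT-generated reference against PubMed records.

    Returns "true" if the cited title matches an indexed article,
    "erroneous" if a record exists but a key detail (here, the PMID) differs,
    and "inexistent" if no matching record is found.
    """
    # Search PubMed for the cited title (exact-phrase search in the Title field).
    resp = requests.get(
        EUTILS_ESEARCH,
        params={"db": "pubmed", "term": f'"{title}"[Title]', "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    id_list = resp.json()["esearchresult"].get("idlist", [])

    if not id_list:
        return "inexistent"   # no record with this title exists
    if claimed_pmid and claimed_pmid not in id_list:
        return "erroneous"    # the article exists, but the cited PMID is wrong
    return "true"


if __name__ == "__main__":
    # Hypothetical citation produced by the model; title and PMID are placeholders.
    verdict = classify_reference(
        title="Transoral robotic surgery for oropharyngeal carcinoma",
        claimed_pmid="12345678",
    )
    print(verdict)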

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Artificial Intelligence / Head Limit: Humans Language: English Journal: Eur Arch Otorhinolaryngol Journal subject: OTORHINOLARYNGOLOGY Year: 2023 Document type: Article Country of affiliation: Italy Country of publication: Germany