On the limitations of large language models in clinical diagnosis.
Reese, Justin T; Danis, Daniel; Caufield, J Harry; Groza, Tudor; Casiraghi, Elena; Valentini, Giorgio; Mungall, Christopher J; Robinson, Peter N.
Affiliation
  • Reese JT; Division of Environmental Genomics and Systems Biology, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA.
  • Danis D; The Jackson Laboratory for Genomic Medicine, Farmington CT, 06032, USA.
  • Caufield JH; Division of Environmental Genomics and Systems Biology, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA.
  • Groza T; Rare Care Centre, Perth Children's Hospital, Perth, WA 6009, Australia.
  • Casiraghi E; Telethon Kids Institute, Perth, WA 6009, Australia.
  • Valentini G; AnacletoLab, Dipartimento di Informatica, Università degli Studi di Milano, Milano, Italy.
  • Mungall CJ; AnacletoLab, Dipartimento di Informatica, Università degli Studi di Milano, Milano, Italy.
  • Robinson PN; ELLIS-European Laboratory for Learning and Intelligent Systems.
medRxiv; 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-37503093
Objective: Large language models (LLMs) such as GPT-4 have previously been applied to differential diagnostic challenges based on published case reports. Published case reports have a sophisticated narrative style that is not readily available in typical electronic health records (EHRs). Furthermore, even if such a narrative were available in EHRs, privacy requirements would preclude sending it outside the hospital firewall. We therefore tested a method for parsing clinical texts to extract ontology terms and programmatically generating prompts that by design are free of protected health information.

Materials and Methods: We investigated different methods of preparing prompts from 75 recently published case reports. We transformed the original narratives by extracting structured terms representing phenotypic abnormalities, comorbidities, treatments, and laboratory tests, and creating prompts programmatically.

Results: Performance of all of these approaches was modest, with the correct diagnosis ranked first in only 5.3-17.6% of cases. The prompts created from structured data performed substantially worse than the original narrative texts, even when additional information was added after manual review of term extraction. Moreover, different versions of GPT-4 showed substantially different performance on this task.

Discussion: The sensitivity of performance to the form of the prompt, and the instability of results across two GPT-4 versions, are important current limitations on the use of GPT-4 to support diagnosis in real-life clinical settings.

Conclusion: Research is needed to identify the best methods for creating prompts from typically available clinical data to support differential diagnostics.
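The abstract describes transforming narratives into structured terms and then assembling prompts programmatically. A minimal sketch of what such a prompt-construction step might look like is shown below; the function name, section wording, and example terms are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: build a PHI-free diagnostic prompt from structured
# terms (e.g., phenotype labels drawn from an ontology such as the HPO).
# All names and example terms are illustrative, not the paper's code.

def build_prompt(phenotypes, treatments=None, lab_tests=None):
    """Assemble a differential-diagnosis prompt from structured terms only."""
    lines = ["The patient has the following phenotypic abnormalities:"]
    lines += [f"- {p}" for p in phenotypes]
    if treatments:
        lines.append("Treatments administered:")
        lines += [f"- {t}" for t in treatments]
    if lab_tests:
        lines.append("Abnormal laboratory findings:")
        lines += [f"- {t}" for t in lab_tests]
    lines.append("List the most likely differential diagnoses, ranked.")
    return "\n".join(lines)

prompt = build_prompt(
    phenotypes=["Hepatosplenomegaly", "Thrombocytopenia"],
    lab_tests=["Elevated serum ferritin"],
)
print(prompt)
```

Because only ontology term labels (and no free-text narrative) enter the prompt, this style of construction keeps protected health information out by design, which is the motivation stated in the Objective.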
Full text: 1 Database: MEDLINE Study type: Diagnostic_studies Language: En Journal: MedRxiv Year of publication: 2024 Document type: Article Country of affiliation: United States