Evaluating Large Language Models for Drafting Emergency Department Discharge Summaries.
Williams, Christopher Y K; Bains, Jaskaran; Tang, Tianyu; Patel, Kishan; Lucas, Alexa N; Chen, Fiona; Miao, Brenda Y; Butte, Atul J; Kornblith, Aaron E.
Affiliation
  • Williams CYK; Bakar Computational Health Sciences Institute; University of California, San Francisco.
  • Bains J; Department of Emergency Medicine; University of California, San Francisco.
  • Tang T; Department of Emergency Medicine; University of California, San Francisco.
  • Patel K; Department of Emergency Medicine; University of California, San Francisco.
  • Lucas AN; Department of Emergency Medicine; University of California, San Francisco.
  • Chen F; Department of Emergency Medicine; University of California, San Francisco.
  • Miao BY; Bakar Computational Health Sciences Institute; University of California, San Francisco.
  • Butte AJ; Bakar Computational Health Sciences Institute; University of California, San Francisco.
  • Kornblith AE; Bakar Computational Health Sciences Institute; University of California, San Francisco.
medRxiv ; 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38633805
ABSTRACT
Importance:

Large language models (LLMs) possess a range of capabilities which may be applied to the clinical domain, including text summarization. As ambient artificial intelligence scribes and other LLM-based tools begin to be deployed within healthcare settings, rigorous evaluations of the accuracy of these technologies are urgently needed.

Objective:

To investigate the performance of GPT-4 and GPT-3.5-turbo in generating Emergency Department (ED) discharge summaries and evaluate the prevalence and type of errors across each section of the discharge summary.

Design:

Cross-sectional study.

Setting:

University of California, San Francisco ED.

Participants:

We identified all adult ED visits from 2012 to 2023 with an ED clinician note and randomly selected a sample of 100 ED visits for GPT-summarization.

Exposure:

We investigated the potential of two state-of-the-art LLMs, GPT-4 and GPT-3.5-turbo, to summarize the full ED clinician note into a discharge summary.

Main Outcomes and Measures:

GPT-3.5-turbo- and GPT-4-generated discharge summaries were evaluated by two independent Emergency Medicine physician reviewers across three evaluation criteria: 1) inaccuracy of GPT-summarized information; 2) hallucination of information; and 3) omission of relevant clinical information. On identifying each error, reviewers were additionally asked to provide a brief explanation of their reasoning, which was manually classified into error subgroups.

Results:

From 202,059 eligible ED visits, we randomly sampled 100 for GPT-generated summarization and subsequent expert evaluation. In total, 33% of summaries generated by GPT-4 and 10% of those generated by GPT-3.5-turbo were entirely error-free across all evaluated domains. Summaries generated by GPT-4 were mostly accurate, with inaccuracies found in only 10% of cases; however, 42% of the summaries exhibited hallucinations and 47% omitted clinically relevant information. Inaccuracies and hallucinations were most commonly found in the Plan sections of GPT-generated summaries, while clinical omissions were concentrated in text describing patients' Physical Examination findings or History of Presenting Complaint.

Conclusions and Relevance:

In this cross-sectional study of 100 ED encounters, we found that LLMs could generate accurate discharge summaries but were liable to hallucination and omission of clinically relevant information. A comprehensive understanding of the location and type of errors found in GPT-generated clinical text is important to facilitate clinician review of such content and prevent patient harm.

Full text: 1 Database: MEDLINE Language: English Journal: medRxiv Year: 2024 Document type: Article