Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts.
Van Veen, Dave; Van Uden, Cara; Blankemeier, Louis; Delbrouck, Jean-Benoit; Aali, Asad; Bluethgen, Christian; Pareek, Anuj; Polacin, Malgorzata; Reis, Eduardo Pontes; Seehofnerová, Anna; Rohatgi, Nidhi; Hosamani, Poonam; Collins, William; Ahuja, Neera; Langlotz, Curtis P; Hom, Jason; Gatidis, Sergios; Pauly, John; Chaudhari, Akshay S.
Affiliation
  • Van Veen D; Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
  • Van Uden C; Stanford Center for Artificial Intelligence in Medicine and Imaging, Palo Alto, CA, USA.
  • Blankemeier L; Stanford Center for Artificial Intelligence in Medicine and Imaging, Palo Alto, CA, USA.
  • Delbrouck JB; Department of Computer Science, Stanford University, Stanford, CA, USA.
  • Aali A; Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
  • Bluethgen C; Stanford Center for Artificial Intelligence in Medicine and Imaging, Palo Alto, CA, USA.
  • Pareek A; Stanford Center for Artificial Intelligence in Medicine and Imaging, Palo Alto, CA, USA.
  • Polacin M; Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA.
  • Reis EP; Department of Medicine, Stanford, CA, USA.
  • Seehofnerová A; University Hospital Zurich, Zurich, Switzerland.
  • Rohatgi N; Stanford Center for Artificial Intelligence in Medicine and Imaging, Palo Alto, CA, USA.
  • Hosamani P; Copenhagen University Hospital, Copenhagen, Denmark.
  • Collins W; Department of Medicine, Stanford, CA, USA.
  • Ahuja N; University Hospital Zurich, Zurich, Switzerland.
  • Langlotz CP; Stanford Center for Artificial Intelligence in Medicine and Imaging, Palo Alto, CA, USA.
  • Hom J; Albert Einstein Israelite Hospital, São Paulo, Brazil.
  • Gatidis S; Department of Medicine, Stanford, CA, USA.
  • Pauly J; Department of Radiology, Stanford University, Stanford, CA, USA.
  • Chaudhari AS; Department of Medicine, Stanford, CA, USA.
Res Sq ; 2023 Oct 30.
Article in En | MEDLINE | ID: mdl-37961377
ABSTRACT
Sifting through vast textual data and summarizing key information from electronic health records (EHR) imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy on a diverse range of clinical summarization tasks has not yet been rigorously demonstrated. In this work, we apply domain adaptation methods to eight LLMs, spanning six datasets and four distinct clinical summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. Our thorough quantitative assessment reveals trade-offs between models and adaptation methods, in addition to instances where recent advances in LLMs may not improve results. Further, in a clinical reader study with ten physicians, we show that summaries from our best-adapted LLMs are preferable to human summaries in terms of completeness and correctness. Our ensuing qualitative analysis highlights challenges faced by both LLMs and human experts. Lastly, we correlate traditional quantitative NLP metrics with reader study scores to enhance our understanding of how these metrics align with physician preferences. Our research marks the first evidence of LLMs outperforming human experts in clinical text summarization across multiple tasks. This implies that integrating LLMs into clinical workflows could alleviate documentation burden, empowering clinicians to focus more on personalized patient care and the inherently human aspects of medicine.

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Res Sq Year: 2023 Document type: Article Affiliation country: United States