Performance of Large Language Models on Medical Oncology Examination Questions.
Longwell, Jack B; Hirsch, Ian; Binder, Fernando; Gonzalez Conchas, Galileo Arturo; Mau, Daniel; Jang, Raymond; Krishnan, Rahul G; Grant, Robert C.
Affiliations
  • Longwell JB; Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada.
  • Hirsch I; Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada.
  • Binder F; Department of Medicine, University of Toronto, Toronto, Ontario, Canada.
  • Gonzalez Conchas GA; Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada.
  • Mau D; Department of Medicine, University of Toronto, Toronto, Ontario, Canada.
  • Jang R; Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada.
  • Krishnan RG; Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada.
  • Grant RC; Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada.
JAMA Netw Open; 7(6): e2417641, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38888919
ABSTRACT
Importance:

Large language models (LLMs) have recently demonstrated an unprecedented ability to answer questions. Studies of LLMs from other fields may not generalize to medical oncology, a high-stakes clinical setting requiring rapid integration of new information.

Objective:

To evaluate the accuracy and safety of LLM answers on medical oncology examination questions.

Design, Setting, and Participants:

This cross-sectional study was conducted between May 28 and October 11, 2023. The American Society of Clinical Oncology (ASCO) Oncology Self-Assessment Series on ASCO Connection, the European Society of Medical Oncology (ESMO) Examination Trial questions, and an original set of board-style medical oncology multiple-choice questions were presented to 8 LLMs.

Main Outcomes and Measures:

The primary outcome was the percentage of correct answers. Medical oncologists evaluated the explanations provided by the best LLM for accuracy, classified the types of errors, and estimated the likelihood and extent of potential clinical harm.

Results:

Proprietary LLM 2 correctly answered 125 of 147 questions (85.0%; 95% CI, 78.2%-90.4%; P < .001 vs random answering). It outperformed an earlier version, proprietary LLM 1, which correctly answered 89 of 147 questions (60.5%; 95% CI, 52.2%-68.5%; P < .001), and the best open-source LLM, Mixtral-8x7B-v0.1, which correctly answered 87 of 147 questions (59.2%; 95% CI, 50.0%-66.4%; P < .001). The explanations provided by proprietary LLM 2 contained no or minor errors for 138 of 147 questions (93.9%; 95% CI, 88.7%-97.2%). Incorrect responses were most commonly associated with errors in information retrieval, particularly regarding recent publications, followed by errors in reasoning and reading comprehension. If acted upon in clinical practice, 18 of 22 incorrect answers (81.8%; 95% CI, 59.7%-94.8%) would have carried a medium or high likelihood of moderate to severe harm.

Conclusions and Relevance:

In this cross-sectional study of the performance of LLMs on medical oncology examination questions, the best LLM answered questions with remarkable accuracy, although its errors raised safety concerns. These results demonstrate an opportunity to develop and evaluate LLMs to improve health care clinician experiences and patient care, weighing potential gains in capability against safety.
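The statistics reported above (a binomial proportion with a 95% CI, and a test against random answering) can be reproduced approximately from the raw counts alone. The sketch below is illustrative, not the authors' analysis: the abstract does not state which interval method was used (the Wilson score interval here is a common choice and gives values close to the reported 78.2%-90.4%), nor the chance rate for "random answering" (0.25, i.e., 4 options per question, is an assumption).

```python
from math import comb, sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n.
    An approximation; the paper's exact method is not stated."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

def binom_sf(k, n, p):
    """Exact one-sided P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Proprietary LLM 2: 125 of 147 questions correct
lo, hi = wilson_ci(125, 147)
# Test vs random answering, ASSUMING chance = 0.25 (4 options per question)
pval = binom_sf(125, 147, 0.25)
print(f"85.0% correct; approx 95% CI {lo:.1%}-{hi:.1%}; P = {pval:.2g}")
```

The interval lands within about a point of the published bounds; the P value is far below .001 under any plausible chance rate, so the "P < .001 vs random answering" claim is insensitive to the assumed number of answer options.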
Subjects

Full text: 1 Database: MEDLINE Main subject: Oncology Limits: Humans Language: English Publication year: 2024 Document type: Article
