Performance of trauma-trained large language models on surgical assessment questions: A new approach in resource identification.
Mahajan, Arnav; Tran, Andrew; Tseng, Esther S; Como, John J; El-Hayek, Kevin M; Ladha, Prerna; Ho, Vanessa P.
Affiliation
  • Mahajan A; Department of Surgery, The MetroHealth System, Case Western Reserve University School of Medicine, Cleveland, OH. Electronic address: https://twitter.com/arnavmahajan_.
  • Tran A; Department of Surgery, The MetroHealth System, Case Western Reserve University School of Medicine, Cleveland, OH.
  • Tseng ES; Department of Surgery, The MetroHealth System, Case Western Reserve University School of Medicine, Cleveland, OH.
  • Como JJ; Department of Surgery, The MetroHealth System, Case Western Reserve University School of Medicine, Cleveland, OH.
  • El-Hayek KM; Department of Surgery, The MetroHealth System, Case Western Reserve University School of Medicine, Cleveland, OH.
  • Ladha P; Department of Surgery, The MetroHealth System, Case Western Reserve University School of Medicine, Cleveland, OH.
  • Ho VP; Department of Surgery, The MetroHealth System, Case Western Reserve University School of Medicine, Cleveland, OH; Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, OH. Electronic address: vho@metrohealth.org.
Surgery ; 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39317517
ABSTRACT

BACKGROUND:

Large language models have successfully navigated simulated medical board examination questions. However, whether and how language models can be used in surgical education is less understood. Our study evaluates the efficacy of domain-specific large language models in curating study materials for surgical board-style questions.

METHODS:

We developed EAST-GPT and ACS-GPT, custom large language models with domain-specific knowledge drawn from published guidelines of the Eastern Association for the Surgery of Trauma and the American College of Surgeons Trauma Quality Programs. The performance of EAST-GPT, ACS-GPT, and an untrained GPT-4 was assessed on trauma-related questions from the Surgical Education and Self-Assessment Program (18th edition). The large language models were asked to choose answers and provide answer rationales. Rationales were assessed against an educational framework with 5 domains: accuracy, relevance, comprehensiveness, evidence base, and clarity.

RESULTS:

EAST-GPT was trained on 90 guidelines and ACS-GPT on 10. All large language models were tested on 62 trauma questions. EAST-GPT answered 76% correctly, whereas ACS-GPT answered 68% correctly. Both models outperformed ChatGPT-4 (P < .05), which answered 45% correctly. For reasoning, EAST-GPT achieved the greatest mean scores across all 5 educational framework metrics. ACS-GPT scored lower than ChatGPT-4 in comprehensiveness and evidence base; however, these differences were not statistically significant.

CONCLUSION:

Our study presents a novel methodology for identifying test-preparation resources by training a large language model to answer board-style multiple-choice questions. Both trained models outperformed ChatGPT-4, demonstrating that their answers were accurate, relevant, and evidence-based. The potential implications of such AI integration into surgical education must be explored.

Full text: 1 Databases: MEDLINE Language: English Journal: Surgery Year: 2024 Document type: Article