Evaluating prompt engineering on GPT-3.5's performance in USMLE-style medical calculations and clinical scenarios generated by GPT-4.
Patel, Dhavalkumar; Raut, Ganesh; Zimlichman, Eyal; Cheetirala, Satya Narayan; Nadkarni, Girish N; Glicksberg, Benjamin S; Apakama, Donald U; Bell, Elijah J; Freeman, Robert; Timsina, Prem; Klang, Eyal.
Affiliation
  • Patel D; Mount Sinai Health System, New York, USA. pateldhaval021@hotmail.com.
  • Raut G; Mount Sinai Health System, New York, USA.
  • Zimlichman E; Hospital Management, Sheba Medical Center, Affiliated to Tel-Aviv University, Tel Aviv, Israel.
  • Cheetirala SN; ARC Innovation Center, Sheba Medical Center, Affiliated to Tel-Aviv University, Tel Aviv, Israel.
  • Nadkarni GN; Mount Sinai Health System, New York, USA.
  • Glicksberg BS; The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
  • Apakama DU; The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
  • Bell EJ; The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
  • Freeman R; University of California, Los Angeles, USA.
  • Timsina P; Mount Sinai Health System, New York, USA.
  • Klang E; Mount Sinai Health System, New York, USA.
Sci Rep; 14(1): 17341, 2024 Jul 28.
Article in En | MEDLINE | ID: mdl-39069520
ABSTRACT
This study was designed to assess how different prompt engineering techniques, specifically direct prompts, Chain of Thought (CoT), and a modified CoT approach, influence the ability of GPT-3.5 to answer clinical and calculation-based medical questions, particularly those styled like the USMLE Step 1 exams. To achieve this, we analyzed the responses of GPT-3.5 to two distinct sets of questions: a batch of 1,000 questions generated by GPT-4 and another set comprising 95 real USMLE Step 1 questions. These questions spanned a range of medical calculations and clinical scenarios across various fields and difficulty levels. Our analysis revealed no significant differences in the accuracy of GPT-3.5's responses when using direct prompts, CoT, or modified CoT methods. For instance, in the USMLE sample, the success rates were 61.7% for direct prompts, 62.8% for CoT, and 57.4% for modified CoT, with a p-value of 0.734. Similar trends were observed in the responses to GPT-4-generated questions, both clinical and calculation-based, with p-values above 0.05 indicating no significant difference between the prompt types. The conclusion drawn from this study is that CoT prompt engineering does not significantly alter GPT-3.5's effectiveness in handling medical calculations or clinical scenario questions styled like those in USMLE exams. This finding is important because it suggests that the performance of ChatGPT remains consistent regardless of whether a CoT technique is used instead of direct prompts. This consistency could simplify the integration of AI tools like ChatGPT into medical education, enabling healthcare professionals to use these tools easily, without the need for complex prompt engineering.
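To make the comparison concrete, the sketch below shows how one might pose the same question under the three prompt styles and test whether accuracy differs across them. This is a minimal illustration, not the authors' code: it assumes the OpenAI Python client (>=1.0) and SciPy, the prompt templates and example tallies are placeholders, and a chi-square test of independence stands in for whatever statistical test the paper actually used.

```python
# Illustrative sketch only: prompt wording, model name, and counts are
# placeholders, not the study's exact materials or results.
from openai import OpenAI
from scipy.stats import chi2_contingency

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = {
    "direct": "Answer the following USMLE-style question with the single best option letter.\n\n{q}",
    "cot": ("Answer the following USMLE-style question. "
            "Think step by step, then give the single best option letter.\n\n{q}"),
    "modified_cot": ("Answer the following USMLE-style question. "
                     "First list the relevant clinical facts and any calculations, "
                     "then reason step by step, then give the single best option letter.\n\n{q}"),
}

def ask(question: str, style: str) -> str:
    """Query GPT-3.5 with one of the three prompt styles and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPTS[style].format(q=question)}],
        temperature=0,
    )
    return response.choices[0].message.content

# Hypothetical correct/incorrect tallies per prompt style, e.g. obtained by
# grading ask() outputs against an answer key (replace with real counts).
counts = {
    "direct":       {"correct": 58, "incorrect": 37},
    "cot":          {"correct": 59, "incorrect": 36},
    "modified_cot": {"correct": 54, "incorrect": 41},
}

# Chi-square test of independence: does accuracy depend on prompt style?
table = [[c["correct"], c["incorrect"]] for c in counts.values()]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.3f}")  # p > 0.05 -> no significant difference
```

In this setup each question is graded once per prompt style, and the contingency table of correct versus incorrect answers is what the significance test compares, mirroring the study's accuracy-by-prompt-type comparison.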
Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Educational Measurement Limits: Humans Country/Region as subject: North America Language: En Journal: Sci Rep Year: 2024 Document type: Article Affiliation country: United States Country of publication: United Kingdom