The Performance of ChatGPT on the American Society for Surgery of the Hand Self-Assessment Examination.
Arango, Sebastian D; Flynn, Jason C; Zeitlin, Jacob; Lorenzana, Daniel J; Miller, Andrew J; Wilson, Matthew S; Strohl, Adam B; Weiss, Lawrence E; Weir, Tristan B.
Affiliation
  • Arango SD; Department of Orthopaedic Surgery, Philadelphia Hand to Shoulder Center, Philadelphia, USA.
  • Flynn JC; Department of Orthopaedic Surgery, Sidney Kimmel Medical College, Philadelphia, USA.
  • Zeitlin J; Department of Orthopaedic Surgery, Philadelphia Hand to Shoulder Center, Philadelphia, USA.
  • Lorenzana DJ; Department of Orthopaedic Surgery, Philadelphia Hand to Shoulder Center, Philadelphia, USA.
  • Miller AJ; Department of Orthopaedic Surgery, Philadelphia Hand to Shoulder Center, Philadelphia, USA.
  • Wilson MS; Department of Orthopaedic Surgery, Philadelphia Hand to Shoulder Center, Philadelphia, USA.
  • Strohl AB; Department of Orthopaedic Surgery, Philadelphia Hand to Shoulder Center, Philadelphia, USA.
  • Weiss LE; Division of Orthopaedic Hand Surgery, OAA Orthopaedic Specialists, Allentown, USA.
  • Weir TB; Department of Orthopaedic Surgery, Philadelphia Hand to Shoulder Center, Philadelphia, USA.
Cureus; 16(4): e58950, 2024 Apr.
Article in En | MEDLINE | ID: mdl-38800302
ABSTRACT

BACKGROUND:

This study aims to compare the performance of ChatGPT-3.5 (GPT-3.5) and ChatGPT-4 (GPT-4) on the American Society for Surgery of the Hand (ASSH) Self-Assessment Examination (SAE) to determine their potential as educational tools.

METHODS:

This study compared the proportion of correct answers to text-based questions from the 2021 and 2022 ASSH SAE between untrained versions of ChatGPT (GPT-3.5 and GPT-4). Secondary analyses assessed ChatGPT's performance by question difficulty and question category. ChatGPT's results were also compared with the performance of actual examinees on the ASSH SAE.
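
The abstract does not state which statistical test was used for these head-to-head comparisons; a two-proportion chi-square test is one standard choice. The Python sketch below is a minimal, hypothetical reconstruction, with correct-answer counts back-calculated from the reported percentages (58.0% and 68.9% of 238 questions); it is not the authors' analysis code.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Correct-answer counts reconstructed from the reported percentages:
    # 58.0% of 238 ~ 138 (GPT-3.5) and 68.9% of 238 ~ 164 (GPT-4).
    correct = np.array([138, 164])
    incorrect = 238 - correct
    table = np.vstack([correct, incorrect])  # 2x2 contingency table

    # correction=False applies no continuity correction; on these counts
    # the result lands near the reported P = 0.013.
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")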

RESULTS:

A total of 238 questions were included in the analysis. Compared with GPT-3.5, GPT-4 provided significantly more correct answers overall (58.0% versus 68.9%; P = 0.013), on the 2022 SAE (55.9% versus 72.9%; P = 0.007), and on more difficult questions (48.8% versus 63.6%; P = 0.02). In a multivariable logistic regression analysis, the odds of a correct answer were higher with GPT-4 (odds ratio [OR], 1.66; P = 0.011) and lower with increased question difficulty (OR, 0.59; P = 0.009), Bone and Joint questions (OR, 0.18; P < 0.001), and Soft Tissue questions (OR, 0.30; P = 0.013). Actual examinees scored a mean of 21.6% above GPT-3.5 and 10.7% above GPT-4. The mean percentage of correct answers by actual examinees was significantly higher for questions that ChatGPT answered correctly (versus incorrectly).
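
The per-question data behind the regression are not public, so the following sketch only illustrates the form of such a model, fit with statsmodels on synthetic data whose effect directions mimic the reported ones; every variable name, scale, and coefficient here is a stand-in, not the authors' data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data: one row per question-model attempt.
    rng = np.random.default_rng(0)
    n = 476  # 238 questions x 2 models
    gpt4 = rng.integers(0, 2, n)        # 1 = GPT-4, 0 = GPT-3.5
    difficulty = rng.integers(1, 6, n)  # hypothetical 1-5 difficulty scale
    category = rng.choice(["Bone and Joint", "Soft Tissue", "Other"], n)

    # Generate outcomes so effect directions mirror the abstract:
    # GPT-4 raises the odds of a correct answer; higher difficulty and
    # the Bone and Joint / Soft Tissue categories lower them.
    lp = (0.5 * gpt4
          - 0.5 * (difficulty - 3)
          - 1.7 * (category == "Bone and Joint")
          - 1.2 * (category == "Soft Tissue"))
    correct = rng.binomial(1, 1 / (1 + np.exp(-lp)))

    df = pd.DataFrame({"correct": correct, "gpt4": gpt4,
                       "difficulty": difficulty, "category": category})
    fit = smf.logit("correct ~ gpt4 + difficulty"
                    " + C(category, Treatment('Other'))", data=df).fit(disp=False)
    print(np.exp(fit.params))  # exponentiated coefficients are odds ratios

Read as in the abstract: an odds ratio above 1 (gpt4) indicates higher odds of a correct answer, while odds ratios below 1 (difficulty and the two category terms) indicate lower odds.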

CONCLUSIONS:

GPT-4 demonstrated improved performance over GPT-3.5 on the ASSH SAE, especially on more difficult questions. Actual examinees scored higher than both versions of ChatGPT, but GPT-4 narrowed that margin by roughly half (from 21.6% to 10.7%).

Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Cureus Year: 2024 Document type: Article Affiliation country: United States Country of publication: United States
