Applying GPT-4 to the Plastic Surgery Inservice Training Examination.
Gupta, Rohun; Park, John B; Herzog, Isabel; Yosufi, Nahid; Mangan, Amelia; Firouzbakht, Peter K; Mailey, Brian A.
Affiliation
  • Gupta R; Division of Plastic Surgery, Department of Surgery, St. Louis University School of Medicine, St. Louis, MO, USA. Electronic address: rohunguptamd@gmail.com.
  • Park JB; Department of Plastic Surgery, Rutgers New Jersey School of Medicine, Newark, NJ, USA.
  • Herzog I; Department of Plastic Surgery, Rutgers New Jersey School of Medicine, Newark, NJ, USA.
  • Yosufi N; Oakland University William Beaumont School of Medicine, Rochester, MI, USA.
  • Mangan A; Division of Plastic Surgery, Department of Surgery, St. Louis University School of Medicine, St. Louis, MO, USA.
  • Firouzbakht PK; Division of Plastic Surgery, Department of Surgery, St. Louis University School of Medicine, St. Louis, MO, USA.
  • Mailey BA; Division of Plastic Surgery, Department of Surgery, St. Louis University School of Medicine, St. Louis, MO, USA.
J Plast Reconstr Aesthet Surg; 87: 78-82, 2023 Dec.
Article in En | MEDLINE | ID: mdl-37812847
BACKGROUND: The recent introduction of Generative Pre-trained Transformer (GPT)-4 has demonstrated the potential to be a superior version of GPT-3.5. According to many, GPT-4 is seen as a more reliable and creative successor to GPT-3.5.

OBJECTIVE: In conjunction with our prior manuscript, we aimed to determine whether GPT-4 could serve as an instrument for plastic surgery graduate medical education by evaluating its performance on the Plastic Surgery Inservice Training Examination (PSITE).

METHODS: Sample assessment questions from the 2022 PSITE were obtained from the American Council of Academic Plastic Surgeons website and manually entered into GPT-4. GPT-4's responses were assessed using the properties of natural coherence. Incorrect answers were stratified into the following categories: informational, logical, or explicit fallacy.

RESULTS: Of 242 total questions, GPT-4 answered 187 correctly, a 77.3% accuracy rate. Logical reasoning was used in 95.0% of questions, internal information in 98.3%, and external information in 97.5%. When questions were separated by correct and incorrect responses, a statistically significant difference was identified in GPT-4's application of logical reasoning.

CONCLUSION: GPT-4 has been shown to be more accurate and reliable than GPT-3.5 for plastic surgery resident education. Users should look to utilize the tool to enhance their educational curriculum. Those who adopt such models may be better equipped to deliver high-quality care to their patients.

Full text: 1 Database: MEDLINE Main subject: Surgery, Plastic / Plastic Surgery Procedures Language: En Publication year: 2023 Document type: Article