ChatGPT can pass the AHA exams: Open-ended questions outperform multiple-choice format.
Zhu, Lingxuan; Mou, Weiming; Yang, Tao; Chen, Rui.
Affiliation
  • Zhu L; Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, 160 Pujian Road, Shanghai 200127, China; The First Clinical Medical School, Southern Medical University, 1023 Shatai South Road, Guangzhou, 510515 Guangdong, China.
  • Mou W; The First Clinical Medical School, Southern Medical University, 1023 Shatai South Road, Guangzhou, 510515 Guangdong, China.
  • Yang T; The First Clinical Medical School, Southern Medical University, 1023 Shatai South Road, Guangzhou, 510515 Guangdong, China.
  • Chen R; Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, 160 Pujian Road, Shanghai 200127, China. Electronic address: drchenrui@foxmail.com.
Resuscitation; 188: 109783, 2023 Jul.
Article in En | MEDLINE | ID: mdl-37349064
ABSTRACT
The study by Fijacko et al. tested ChatGPT's ability to pass the American Heart Association (AHA) Basic Life Support (BLS) and Advanced Cardiovascular Life Support (ACLS) exams and found that ChatGPT failed both. A limitation of that study was that only a single response was generated per question, which may have introduced bias. When three responses were generated per question, ChatGPT passed the BLS exam with an overall accuracy of 84%. When incorrectly answered questions were rewritten in an open-ended format, ChatGPT's accuracy increased to 96% and 92.1% on the BLS and ACLS exams, respectively, allowing it to pass both exams with outstanding results.
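The three-responses-per-question step amounts to majority voting over independently sampled answers. As a minimal sketch, not the authors' code: the snippet below samples three completions per multiple-choice question and keeps the most frequent answer letter. It assumes the openai Python package (v1.x) with an OPENAI_API_KEY set in the environment; the model name, prompt wording, and the majority_answer helper are illustrative assumptions, not details from the study.

from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def majority_answer(question: str, n: int = 3, model: str = "gpt-3.5-turbo") -> str:
    """Sample n independent answers to one exam question and return the modal one."""
    resp = client.chat.completions.create(
        model=model,
        n=n,  # three completions per question, mirroring the protocol described above
        messages=[
            {"role": "system",
             "content": "Answer the multiple-choice question with the single letter of the correct option."},
            {"role": "user", "content": question},
        ],
    )
    # Reduce each completion to its leading option letter, then majority-vote.
    answers = [c.message.content.strip().upper()[:1] for c in resp.choices]
    return Counter(answers).most_common(1)[0][0]

Overall accuracy over a graded exam would then be the fraction of questions where majority_answer matches the answer key, e.g. sum(majority_answer(q) == key for q, key in exam) / len(exam).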

Full text: 1 Database: MEDLINE Language: En Journal: Resuscitation Publication year: 2023 Document type: Article Country of affiliation: China
