Comparing the Performance of ChatGPT-4 and Medical Students on MCQs at Varied Levels of Bloom's Taxonomy.
Bharatha, Ambadasu; Ojeh, Nkemcho; Fazle Rabbi, Ahbab Mohammad; Campbell, Michael H; Krishnamurthy, Kandamaran; Layne-Yarde, Rhaheem N A; Kumar, Alok; Springer, Dale C R; Connell, Kenneth L; Majumder, Md Anwarul Azim.
Affiliation
  • Bharatha A; Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados.
  • Ojeh N; Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados.
  • Fazle Rabbi AM; Department of Population Sciences, University of Dhaka, Dhaka, Bangladesh.
  • Campbell MH; Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados.
  • Krishnamurthy K; Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados.
  • Layne-Yarde RNA; Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados.
  • Kumar A; Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados.
  • Springer DCR; Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados.
  • Connell KL; Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados.
  • Majumder MAA; Faculty of Medical Sciences, The University of the West Indies, Bridgetown, Barbados.
Adv Med Educ Pract; 15: 393-400, 2024.
Article in En | MEDLINE | ID: mdl-38751805
ABSTRACT

Introduction:

This research compared the capabilities of ChatGPT-4 with those of medical students in answering multiple-choice questions (MCQs), using the revised Bloom's Taxonomy as a benchmark.

Methods:

A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing.

Results:

The study included 304 MCQs. Students demonstrated good knowledge, with 78% of them correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) than students (66.7%). Course type significantly affected ChatGPT-4's performance, whereas revised Bloom's Taxonomy levels did not. An association analysis between program levels and Bloom's Taxonomy levels for ChatGPT-4's correct answers showed a highly significant relationship (p < 0.001), reflecting a concentration of "remember"-level questions in preclinical courses and "evaluate"-level questions in clinical courses.
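For readers who want to run this kind of analysis themselves, the sketch below tests an association between two categorical variables (program level and Bloom's Taxonomy level). It is a minimal illustration assuming a chi-square test of independence, which the abstract does not specify, and the contingency counts are placeholders rather than the study's data.

    # Assumed approach: chi-square test of independence between program level
    # (preclinical vs clinical) and Bloom's Taxonomy level of correctly
    # answered MCQs. All counts below are illustrative placeholders,
    # NOT data from the study.
    from scipy.stats import chi2_contingency

    # Rows: program level; columns: Bloom's levels
    # (remember, understand, apply, analyze, evaluate).
    contingency = [
        [40, 25, 15, 8, 2],   # preclinical: skewed toward "remember"-level items
        [5, 12, 18, 20, 30],  # clinical: skewed toward "evaluate"-level items
    ]

    chi2, p_value, dof, expected = chi2_contingency(contingency)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
    # A p-value below 0.001 would mirror the highly significant association
    # reported in the abstract (p < 0.001).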

Discussion:

The study highlights ChatGPT-4's proficiency in standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies based on course content.

Conclusion:

While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address limitations. Further research is needed to explore AI's impact on medical education and student performance across educational levels and courses.

Full text: 1 Database: MEDLINE Language: En Year of publication: 2024 Document type: Article