Performance of ChatGPT on Chinese Master's Degree Entrance Examination in Clinical Medicine.
Li, Ke-Cheng; Bu, Zhi-Jun; Shahjalal, Md; He, Bai-Xiang; Zhuang, Zi-Fan; Li, Chen; Liu, Jian-Ping; Wang, Bin; Liu, Zhao-Lan.
Affiliation
  • Li KC; Department of Andrology, Dongzhimen Hospital, Beijing University of Chinese Medicine, Beijing, China.
  • Bu ZJ; Centre for Evidence-Based Chinese Medicine, Beijing University of Chinese Medicine, Beijing, China.
  • Shahjalal M; Department of Public Health, North South University, Dhaka, Bangladesh.
  • He BX; Department of Gastroenterology, Dongzhimen Hospital, Beijing University of Chinese Medicine, Beijing, China.
  • Zhuang ZF; Department of Endocrinology, Guang'anmen Hospital, China Academy of Chinese Medical Sciences, Beijing, China.
  • Li C; Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi, China.
  • Liu JP; Centre for Evidence-Based Chinese Medicine, Beijing University of Chinese Medicine, Beijing, China.
  • Wang B; Department of Andrology, Dongzhimen Hospital, Beijing University of Chinese Medicine, Beijing, China.
  • Liu ZL; Centre for Evidence-Based Chinese Medicine, Beijing University of Chinese Medicine, Beijing, China.
PLoS One ; 19(4): e0301702, 2024.
Article in En | MEDLINE | ID: mdl-38573944
ABSTRACT

BACKGROUND:

ChatGPT is a large language model designed to generate responses based on a contextual understanding of user queries and requests. This study used the entrance examination for the Master of Clinical Medicine in Traditional Chinese Medicine to assess the reliability and practicality of ChatGPT in the domain of medical education.

METHODS:

We selected 330 single-choice and multiple-choice questions, none containing images or tables, from the 2021 and 2022 Chinese Master of Clinical Medicine comprehensive examinations. To ensure the test's accuracy and authenticity, we preserved the original wording of each question and its answer options, submitting them without modification or added explanation.
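
For readers wishing to replicate this kind of evaluation, the protocol amounts to sending each question verbatim to the model and grading the reply against an answer key. Below is a minimal sketch assuming the OpenAI Python SDK; the model name, prompt handling, and grading helper are illustrative assumptions, not the authors' actual pipeline (the study may equally have used the ChatGPT web interface).

# Hypothetical sketch of the evaluation protocol described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question_text: str, model: str = "gpt-4") -> str:
    """Send one exam question verbatim, with no reformatting or added hints."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question_text}],
        temperature=0,  # keep grading runs reproducible
    )
    return response.choices[0].message.content

def is_correct(reply: str, answer_key: set[str]) -> bool:
    """Count a question as correct only when the chosen option letters
    match the key exactly (all-or-nothing grading)."""
    # Naive parsing for illustration; real replies with explanatory text
    # would need stricter extraction of the final answer.
    chosen = {c for c in reply.upper() if c in "ABCDE"}
    return chosen == answer_key

Under exact-match grading of this kind, a multiple-choice item fails if even one option is missed or added, which is one plausible reason such items are harder for the model than single-choice ones.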

RESULTS:

Both ChatGPT3.5 and GPT-4 attained average scores above the admission threshold. ChatGPT achieved its highest score in the Medical Humanities section, with a correct rate of 93.75%. However, ChatGPT3.5 recorded its lowest accuracy, 37.5%, in the Pathology section, and GPT-4 likewise showed a relatively low correct rate of 60.23% in the Biochemistry section. An analysis by question type revealed that ChatGPT performs well on single-choice questions but poorly on multiple-choice questions.

CONCLUSION:

ChatGPT exhibits a degree of medical knowledge and some capacity to aid in diagnosing and treating diseases. Nevertheless, improvements are needed to address its limitations in accuracy and reliability. Its use must be accompanied by rigorous evaluation and oversight, together with proactive measures to overcome its current constraints.
Subject(s)

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Artificial Intelligence / Clinical Medicine / Educational Measurement Language: En Journal: PLoS One Journal subject: SCIENCE / MEDICINE Year: 2024 Document type: Article Country of affiliation: China
