Performance of generative pre-trained transformers (GPTs) in Certification Examination of the College of Family Physicians of Canada.
Mousavi, Mehdi; Shafiee, Shabnam; Harley, Jason M; Cheung, Jackie Chi Kit; Abbasgholizadeh Rahimi, Samira.
Affiliation
  • Mousavi M; Department of Family Medicine, Faculty of Medicine, University of Saskatchewan, Nipawin, Saskatchewan, Canada.
  • Shafiee S; Department of Family Medicine, Saskatchewan Health Authority, Riverside Health Complex, Turtleford, Saskatchewan, Canada.
  • Harley JM; Department of Surgery, Faculty of Medicine and Health Sciences, McGill University, Montreal, Quebec, Canada.
  • Cheung JCK; Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada.
  • Abbasgholizadeh Rahimi S; Institute for Health Sciences Education, Faculty of Medicine and Health Sciences, McGill University, Montreal, Quebec, Canada.
Fam Med Community Health; 12(Suppl 1); 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806403
ABSTRACT

INTRODUCTION:

The application of large language models such as generative pre-trained transformers (GPTs) has been promising in medical education, and their performance has been tested on various medical examinations. This study aims to assess the performance of GPTs in responding to a set of sample questions of short-answer management problems (SAMPs) from the certification exam of the College of Family Physicians of Canada (CFPC).

METHOD:

Between August 8th and 25th, 2023, we used GPT-3.5 and GPT-4 in five rounds to answer a sample of 77 SAMP questions from the CFPC website. Two independent certified family physician reviewers scored the AI-generated responses twice: first, according to the CFPC answer key (ie, CFPC score), and second, based on their knowledge and other references (ie, Reviewers' score). An ordinal logistic generalised estimating equations (GEE) model was applied to analyse repeated measures across the five rounds.

RESULT:

According to the CFPC answer key, 607 (73.6%) lines of answers by GPT-3.5 and 691 (81%) by GPT-4 were deemed accurate. The reviewers' scoring suggested that about 84% of the lines of answers provided by GPT-3.5 and 93% of those by GPT-4 were correct. The GEE analysis confirmed that over the five rounds, the likelihood of achieving a higher CFPC score percentage for GPT-4 was 2.31 times that of GPT-3.5 (OR 2.31; 95% CI 1.53 to 3.47; p<0.001). Similarly, the Reviewers' score percentages for responses provided by GPT-4 over the five rounds were 2.23 times more likely to exceed those of GPT-3.5 (OR 2.23; 95% CI 1.22 to 4.06; p=0.009). Running the GPTs after a one-week interval, regenerating the prompt, or using versus not using the prompt did not significantly change the CFPC score percentage.
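The study's odds ratios come from a repeated-measures GEE model, which is beyond the scope of an abstract, but the general form of an odds ratio with a Wald 95% confidence interval can be illustrated with a crude 2×2 computation. The sketch below uses purely hypothetical counts (not the study's data) and ignores the correlation across rounds that the GEE model accounts for.

```python
import math

# Hypothetical counts for illustration only (not the study's data):
# correct vs incorrect answer lines per model
gpt4_correct, gpt4_incorrect = 80, 20
gpt35_correct, gpt35_incorrect = 70, 30

# Crude odds ratio: odds of a correct line for GPT-4 vs GPT-3.5
odds_ratio = (gpt4_correct * gpt35_incorrect) / (gpt4_incorrect * gpt35_correct)

# Wald 95% CI, computed on the log-odds scale and exponentiated back
se_log_or = math.sqrt(1 / gpt4_correct + 1 / gpt4_incorrect +
                      1 / gpt35_correct + 1 / gpt35_incorrect)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# → OR = 1.71, 95% CI 0.89 to 3.29
```

Because these hypothetical counts are small and uncorrected for repeated measures, the interval spans 1; the study's GEE-based estimates, by contrast, excluded 1 and were statistically significant.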

CONCLUSION:

In our study, we used GPT-3.5 and GPT-4 to answer complex, open-ended sample questions from the CFPC exam and showed that more than 70% of the answers were accurate, with GPT-4 outperforming GPT-3.5. Large language models such as GPTs seem promising for assisting candidates for the CFPC exam by providing potential answers. However, their use for family medicine education and exam preparation requires further study.

Full text: 1 | Database: MEDLINE | Main subject: Certification | Limits: Humans | Country as subject: North America | Language: English | Year of publication: 2024 | Document type: Article