Performance of Generative Artificial Intelligence in Dental Licensing Examinations.
Chau, Reinhard Chun Wang; Thu, Khaing Myat; Yu, Ollie Yiru; Hsung, Richard Tai-Chiu; Lo, Edward Chin Man; Lam, Walter Yu Hang.
Affiliation
  • Chau RCW; Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China.
  • Thu KM; Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China.
  • Yu OY; Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China.
  • Hsung RT; Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China; Department of Computer Science, Hong Kong Chu Hai College, Hong Kong Special Administrative Region, China.
  • Lo ECM; Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China.
  • Lam WYH; Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China; Musketeers Foundation Institute of Data Science, The University of Hong Kong, Hong Kong Special Administrative Region, China. Electronic address: retlaw@hku.hk.
Int Dent J; 74(3): 616-621, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38242810
ABSTRACT

OBJECTIVES:

Generative artificial intelligence (GenAI), including large language models (LLMs), has vast potential applications in health care and education. However, it is unclear how proficient LLMs are in interpreting written input and providing accurate answers in dentistry. This study aims to investigate the accuracy of GenAI in answering questions from dental licensing examinations.

METHODS:

A total of 1461 multiple-choice questions from question books for the US and the UK dental licensing examinations were input into 2 versions of ChatGPT (3.5 and 4.0). The pass marks of the US and UK dental licensing examinations were 75.0% and 50.0%, respectively. The performance of the 2 versions of GenAI on the individual examinations and across dental subjects was analysed and compared.
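The abstract does not state how the questions were presented to the models. As a purely illustrative, hypothetical sketch, the snippet below shows one way such a protocol could be reproduced programmatically, assuming access through the OpenAI Python SDK; the model identifiers, prompt wording, and scoring helper are assumptions, not details taken from the study.

```python
# Hypothetical sketch: feed each multiple-choice question to two model
# versions and score the replies. Programmatic access via the OpenAI API
# is an assumption made purely for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODELS = {"ChatGPT 3.5": "gpt-3.5-turbo", "ChatGPT 4.0": "gpt-4"}  # assumed IDs

def ask(model_id: str, question: str) -> str:
    """Ask one multiple-choice question and return the model's reply."""
    response = client.chat.completions.create(
        model=model_id,
        messages=[
            {"role": "system",
             "content": "Answer the multiple-choice question with the letter "
                        "of the single best option."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

def score(model_id: str, questions) -> float:
    """questions: iterable of (question_text, correct_letter) pairs,
    e.g. loaded from the licensing-examination question books."""
    correct = sum(ask(model_id, q).startswith(a) for q, a in questions)
    return correct / len(questions)
```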

RESULTS:

ChatGPT 3.5 correctly answered 68.3% (n = 509) and 43.3% (n = 296) of the questions from the US and UK dental licensing examinations, respectively. The corresponding scores for ChatGPT 4.0 were 80.7% (n = 601) and 62.7% (n = 429). ChatGPT 4.0 passed both written dental licensing examinations, whilst ChatGPT 3.5 failed both. In a question-by-question comparison of the 2 versions, ChatGPT 4.0 answered 327 questions correctly that ChatGPT 3.5 answered incorrectly, and 102 incorrectly that ChatGPT 3.5 answered correctly.
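As an illustration of the pass/fail arithmetic above, the minimal sketch below recomputes each version's accuracy against the stated pass marks. The per-examination question totals (roughly 745 US and 684 UK questions) are inferred from the reported correct counts and percentages; they are assumptions, not figures stated in the abstract.

```python
# Minimal sketch of the pass/fail arithmetic reported in the abstract.
results = {
    "ChatGPT 3.5": {"US": 509, "UK": 296},
    "ChatGPT 4.0": {"US": 601, "UK": 429},
}
totals = {"US": 745, "UK": 684}          # inferred from counts/percentages
pass_marks = {"US": 0.75, "UK": 0.50}    # pass marks given in METHODS

for model, scores in results.items():
    for exam, correct in scores.items():
        accuracy = correct / totals[exam]
        verdict = "pass" if accuracy >= pass_marks[exam] else "fail"
        print(f"{model} {exam}: {accuracy:.1%} -> {verdict}")
# Reproduces the reported outcome: 4.0 passes both examinations
# (80.7% and 62.7%), 3.5 fails both (68.3% and 43.3%).
```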

CONCLUSIONS:

The newer version of GenAI has shown good proficiency in answering multiple-choice questions from dental licensing examinations. Whilst the more recent version of GenAI generally performed better, this observation may not hold true in all scenarios, and further improvements are necessary. The use of GenAI in dentistry will have significant implications for dentist-patient communication and the training of dental professionals.

Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Educational Measurement / Licensure, Dental Language: En Year of publication: 2024 Document type: Article