Validity evidence supporting clinical skills assessment by artificial intelligence compared with trained clinician raters.
Johnsson, Vilma; Søndergaard, Morten Bo; Kulasegaram, Kulamakan; Sundberg, Karin; Tiblad, Eleonor; Herling, Lotta; Petersen, Olav Bjørn; Tolsgaard, Martin G.
Affiliation
  • Johnsson V; Center for Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Copenhagen, Denmark.
  • Søndergaard MB; Faculty of Health and Medical Science, University of Copenhagen, Copenhagen, Denmark.
  • Kulasegaram K; Copenhagen Academy for Medical Education and Simulation, Copenhagen, Denmark.
  • Sundberg K; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark.
  • Tiblad E; Department of Family and Community Medicine and Scientist, Wilson Centre, Toronto, Ontario, Canada.
  • Herling L; Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.
  • Petersen OB; Center for Fetal Medicine, Department of Obstetrics, Copenhagen University Hospital, Rigshospitalet, Copenhagen, Denmark.
  • Tolsgaard MG; Center for Fetal Medicine, Karolinska University Hospital, Stockholm, Sweden.
Med Educ; 58(1): 105-117, 2024 Jan.
Article in En | MEDLINE | ID: mdl-37615058
ABSTRACT

BACKGROUND:

Artificial intelligence (AI) is increasingly used in medical education, but our understanding of the validity of AI-based assessments (AIBA) compared with traditional clinical expert-based assessments (EBA) is limited. In this study, the authors aimed to compare and contrast the validity evidence for the assessment of a complex clinical skill based on scores generated by an AI and by trained clinical experts, respectively.

METHODS:

The study was conducted between September 2020 and October 2022. The authors used Kane's validity framework to prioritise and organise their evidence according to the four inferences: scoring, generalisation, extrapolation and implications. The context of the study was chorionic villus sampling performed in a simulated setting. AIBA and EBA were used to evaluate the performance of experts, intermediates and novices based on video recordings. The clinical experts used a scoring instrument developed in a previous international consensus study. The AI used convolutional neural networks to capture features from video recordings, combined with motion-tracking and eye-movement data, to arrive at a final composite score.
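The abstract does not specify the model architecture, so the following is only a minimal, hypothetical sketch of the kind of multimodal pipeline described: a small CNN extracts features from video frames, which are fused with motion-tracking and eye-movement features into a single composite score. All layer sizes, feature dimensions and names are invented for illustration.

```python
# Hypothetical multimodal composite scorer (illustrative only; not the
# authors' model). A CNN encodes video frames; the visual features are
# concatenated with motion-tracking and gaze features and mapped to one
# composite performance score.
import torch
import torch.nn as nn

class CompositeSkillScorer(nn.Module):
    def __init__(self, motion_dim: int = 16, gaze_dim: int = 8):
        super().__init__()
        # Small CNN over a single RGB frame per recording (assumed input).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Fusion head: visual + motion + gaze features -> one score.
        self.head = nn.Sequential(
            nn.Linear(32 + motion_dim + gaze_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, frames, motion, gaze):
        visual = self.cnn(frames)
        fused = torch.cat([visual, motion, gaze], dim=1)
        return self.head(fused).squeeze(1)  # one score per performance

# Example: score a batch of 4 recorded performances.
model = CompositeSkillScorer()
scores = model(torch.randn(4, 3, 224, 224),  # video frames
               torch.randn(4, 16),           # motion-tracking features
               torch.randn(4, 8))            # eye-movement features
print(scores.shape)  # torch.Size([4])
```

A real system would also need temporal modelling over frame sequences and calibration of the composite score against expert ratings; the sketch shows only the fusion step.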

RESULTS:

A total of 45 individuals participated in the study (22 novices, 12 intermediates and 11 experts). The authors demonstrated validity evidence for scoring, generalisation, extrapolation and implications for both EBA and AIBA. The plausibility of scoring assumptions, evidence of reproducibility and the relation of scores to different training levels were examined. Issues relating to construct underrepresentation, lack of explainability and threats to robustness were identified as potential weak links in the AIBA validity argument compared with the EBA validity argument.
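To make the reproducibility and training-level inferences concrete, the sketch below runs two checks such a validity argument typically rests on: separation of scores across training levels, and agreement between the two assessment methods. It uses synthetic scores, not the study's data; the group means, spreads and noise level are assumptions.

```python
# Illustrative validity checks on synthetic scores (not the study's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
novice = rng.normal(40, 10, 22)        # 22 novices (group sizes from abstract)
intermediate = rng.normal(55, 10, 12)  # 12 intermediates
expert = rng.normal(70, 10, 11)        # 11 experts

# Relation to training level: do scores separate the three groups?
h, p = stats.kruskal(novice, intermediate, expert)
print(f"Kruskal-Wallis H={h:.1f}, p={p:.4f}")

# Agreement between methods: simulate an expert-based score correlated
# with the AI-based score and report Spearman's rho.
aiba = np.concatenate([novice, intermediate, expert])
eba = aiba + rng.normal(0, 8, aiba.size)
rho, p_rho = stats.spearmanr(aiba, eba)
print(f"Spearman rho={rho:.2f}, p={p_rho:.4f}")
```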

CONCLUSION:

There were weak links in the use of AIBA compared with EBA, mainly in the representation of the underlying construct but also in explainability and the ability to transfer to other datasets. However, combining AI-based and clinical expert-based assessments may offer complementary benefits, which is a promising subject for future research.

Full text: 1 Database: MEDLINE Main subject: Clinical Competence / Medical Education Study type: Prognostic_studies Limits: Humans Language: En Journal: Med Educ Publication year: 2024 Document type: Article Affiliation country: Denmark
