A Comparison of Psychometric Properties of the American Board of Anesthesiology's In-Person and Virtual Standardized Oral Examinations.
Keegan, Mark T; Harman, Ann E; Deiner, Stacie G; Sun, Huaping.
Affiliation
  • Keegan MT; M.T. Keegan is professor, Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, Minnesota; ORCID: 0000-0003-1427-6705.
  • Harman AE; A.E. Harman is chief assessment officer, the American Board of Anesthesiology, Raleigh, North Carolina.
  • Deiner SG; S.G. Deiner is professor, Department of Anesthesiology, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire.
  • Sun H; H. Sun is director of psychometrics and research, the American Board of Anesthesiology, Raleigh, North Carolina; ORCID: 0000-0002-3266-988X.
Acad Med; 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38857338
ABSTRACT

PURPOSE:

The COVID-19 pandemic prompted training institutions and national credentialing organizations to administer examinations virtually. This study compared task difficulty, examiner grading, candidate performance, and other psychometric properties between in-person and virtual standardized oral examinations (SOEs) administered by the American Board of Anesthesiology.

METHOD:

This retrospective study included SOEs administered in person from March 2018 through March 2020 and virtually from December 2020 through November 2021. The in-person and virtual SOEs share the same structure, including 4 tasks: preoperative evaluation, intraoperative management, postoperative care, and additional topics. The Many-Facet Rasch Model was used to estimate candidate performance, examiner grading severity, and task difficulty for the in-person and virtual SOEs separately; the virtual SOE was equated to the in-person SOE using common examiners and all tasks as anchors. The independent-samples t test and the partially overlapping-samples t test were used to compare candidate performance and examiner grading severity, respectively, between the 2 formats.
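For readers unfamiliar with the Many-Facet Rasch Model, a standard formulation (the abstract does not specify the exact parameterization the authors used) models the log-odds of a candidate receiving adjacent rating categories as an additive function of the facets:

    \log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - C_j - D_i - F_k

where P_{nijk} is the probability that candidate n receives category k rather than k-1 from examiner j on task i, B_n is candidate ability, C_j is examiner grading severity, D_i is task difficulty, and F_k is the threshold of rating category k. Equating through common examiners and tasks places both administrations on this shared logit scale, which is what permits the between-format comparisons reported below.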

RESULTS:

In-person (n = 3,462) and virtual (n = 2,959) first-time candidates were comparable in age, sex, race and ethnicity, and whether they were U.S. medical school graduates. The mean (standard deviation [SD]) candidate performance was 2.96 (1.76) logits for the virtual SOE, which was statistically significantly better than that for the in-person SOE (mean [SD], 2.86 [1.75]; Welch independent-samples t test, P = .02); however, the effect size was negligible (Cohen d = 0.06). The difference in the grading severity of examiners who rated the in-person (n = 398; mean [SD], 0.00 [0.73]) vs virtual (n = 341; mean [SD], 0.07 [0.77]) SOE was not statistically significant (Welch partially overlapping-samples t test, P = .07).
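A minimal sketch (assuming SciPy; this is not the authors' analysis code) showing how the reported Welch t test and Cohen's d for candidate performance can be reproduced from the summary statistics above:

    from math import sqrt
    from scipy import stats

    # Candidate performance in logits: (mean, SD, n), as reported in the abstract
    virtual = (2.96, 1.76, 2959)
    in_person = (2.86, 1.75, 3462)
    m1, s1, n1 = virtual
    m2, s2, n2 = in_person

    # Welch independent-samples t test computed from summary statistics
    t, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=False)

    # Cohen's d using a pooled standard deviation
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd

    print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")  # approx. t = 2.28, p = 0.023, d = 0.06

The output is consistent with the reported values (P = .02, Cohen d = 0.06), illustrating why a statistically significant difference can still carry a negligible effect size at these sample sizes.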

CONCLUSIONS:

Candidate performance and examiner grading severity were comparable between the in-person and virtual SOEs, supporting the reliability and validity of the virtual oral exam in this large-volume, high-stakes setting.

Full text: 1 | Collection: 01-internacional | Database: MEDLINE | Language: English | Journal: Acad Med | Year: 2024 | Document type: Article
