Are evaluations in simulated medical encounters reliable among rater types? A comparison between standardized patient and outside observer ratings of OSCEs.
Wollney, Easton N; Vasquez, Taylor S; Stalvey, Carolyn; Close, Julia; Markham, Merry Jennifer; Meyer, Lynne E; Cooper, Lou Ann; Bylund, Carma L.
Affiliations
  • Wollney EN; Dept. of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, USA.
  • Vasquez TS; College of Journalism and Communications, University of Florida, Gainesville, FL, USA.
  • Stalvey C; Dept. of Internal Medicine, College of Medicine, University of Florida, Gainesville, FL, USA.
  • Close J; Dept. of Hematology and Oncology, College of Medicine, University of Florida, Gainesville, FL, USA.
  • Markham MJ; Dept. of Hematology and Oncology, College of Medicine, University of Florida, Gainesville, FL, USA.
  • Meyer LE; Graduate Medical Education, College of Medicine, University of Florida, Gainesville, FL, USA.
  • Cooper LA; Dept. of Medical Education, College of Medicine, University of Florida, Gainesville, FL, USA.
  • Bylund CL; Dept. of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, USA.
PEC Innov ; 2: 100125, 2023 Dec.
Article in En | MEDLINE | ID: mdl-37214504
ABSTRACT

Objective:

By analyzing Objective Structured Clinical Examination (OSCE) evaluations of first-year interns' communication with standardized patients (SPs), our study aimed to examine differences between ratings given by SPs and those given by outside observers trained in healthcare communication.

Methods:

Immediately following completion of the OSCEs, SPs evaluated interns' communication skills using 30 items. Later, two observers independently coded video recordings of the encounters using the same items. We conducted two-tailed t-tests to examine differences between SP and observer ratings on each item, as sketched below.
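The following is a minimal sketch of this type of per-item comparison, not the authors' analysis code. It assumes a long-format ratings table in which SP scores and averaged observer scores can be paired by encounter; the file name, column names, and use of a paired test are illustrative assumptions, since the abstract does not specify these details.

# Minimal sketch (assumptions: hypothetical osce_ratings.csv with columns
# encounter_id, item, rater_type ("SP" or "observer"), score; paired test)
import pandas as pd
from scipy import stats

ratings = pd.read_csv("osce_ratings.csv")

results = []
for item, grp in ratings.groupby("item"):
    # Average the two observers' codes per encounter, then pair with the SP score
    obs = (grp[grp.rater_type == "observer"]
           .groupby("encounter_id")["score"].mean())
    sp = (grp[grp.rater_type == "SP"]
          .groupby("encounter_id")["score"].first())
    common = obs.index.intersection(sp.index)
    t, p = stats.ttest_rel(sp.loc[common], obs.loc[common])  # two-tailed by default
    results.append({"item": item, "t": t, "p": p,
                    "sp_mean": sp.loc[common].mean(),
                    "obs_mean": obs.loc[common].mean()})

print(pd.DataFrame(results).sort_values("p"))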

Results:

Rater scores differed significantly on 21 of the 30 items (p < .05); on 20 of these, the SPs' in-person evaluation scores were higher. The items that diverged most between SPs and observers related to empathic communication and nonverbal communication.

Conclusion:

Differences between SP and observer ratings should be investigated further to determine whether additional rater training or a revised evaluation measure is needed. Educators may benefit from reducing the number of items raters must complete, for example by using more global questions that encompass multiple criteria. Evaluation measures may also be strengthened by reliability and validity testing.

Innovation:

This study highlights the strengths and limitations of rater types (observers or SPs), as well as of evaluation methods (recorded or in-person).

Full text: 1 | Collections: 01-international | Database: MEDLINE | Language: En | Journal: PEC Innov | Year of publication: 2023 | Document type: Article | Country of affiliation: United States