Measuring Assessment Quality With an Assessment Utility Rubric for Medical Education.
Colbert-Getz, Jorie M; Ryan, Michael; Hennessey, Erin; Lindeman, Brenessa; Pitts, Brian; Rutherford, Kim A; Schwengel, Deborah; Sozio, Stephen M; George, Jessica; Jung, Julianna.
Affiliation
  • Colbert-Getz JM; Assistant Dean of Assessment and Evaluation, University of Utah School of Medicine.
  • Ryan M; Assistant Professor, Department of Internal Medicine, University of Utah School of Medicine.
  • Hennessey E; Assistant Dean for Clinical Medical Education, Virginia Commonwealth University School of Medicine.
  • Lindeman B; Associate Professor, Department of Pediatrics, Virginia Commonwealth University School of Medicine.
  • Pitts B; Program Director for the Anesthesia Critical Care Medicine Fellowship, Stanford University School of Medicine.
  • Rutherford KA; Clinical Assistant Professor, Department of Anesthesia and Critical Care Medicine, Stanford University School of Medicine.
  • Schwengel D; Fellow and Associate Surgeon, Department of Surgery, Brigham and Women's Hospital.
  • Sozio SM; Instructor of Surgery, Harvard Medical School.
  • George J; Associate Professor, Department of Anesthesiology, University of California, Davis, School of Medicine.
  • Jung J; Assistant Professor, Departments of Pediatrics and Emergency Medicine, Pennsylvania State University College of Medicine.
MedEdPORTAL ; 13: 10588, 2017 May 24.
Article in En | MEDLINE | ID: mdl-30800790
INTRODUCTION: Prior research has identified seven elements of a good assessment, but these elements have not been operationalized as a rubric for rating assessment utility. A systematic way to evaluate the utility of an assessment would help medical educators determine whether the assessment used is optimal for their setting.

METHODS: We developed and refined an assessment utility rubric using a modified Delphi process. Twenty-nine graduate students pilot-tested the rubric in 2016 with hypothetical data from three examinations, and interrater reliability of rubric scores was measured with intraclass correlation coefficients (ICCs).

RESULTS: Consensus on all rubric items was reached after three rounds. The resulting assessment utility rubric includes four elements (equivalence, educational effect, catalytic effect, acceptability) with three items each, one element (validity evidence) with five items, and space to rate four feasibility items relating to time and cost. Rater scores had ICC values greater than .75.

DISCUSSION: The rubric shows promise in allowing educators to evaluate the utility of an assessment in their specific setting. The medical education field needs to give more consideration to how an assessment drives learning forward, how it motivates trainees, and whether it produces acceptable ranges of scores for all stakeholders.
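Note: the abstract reports interrater reliability as ICC values above .75. As a minimal illustration only, and not the authors' actual analysis, the Python sketch below computes a two-way random-effects, absolute-agreement, single-rater ICC (ICC(2,1) in Shrout and Fleiss's notation) from an invented placeholder matrix of rubric scores; the data and function name are hypothetical.

    import numpy as np

    def icc_2_1(ratings: np.ndarray) -> float:
        """ICC(2,1): two-way random effects, absolute agreement, single rater.

        ratings: (n_targets, k_raters) matrix of rubric scores, no missing values.
        """
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)   # mean score per rated assessment
        col_means = ratings.mean(axis=0)   # mean score per rater

        # Mean squares from the two-way ANOVA decomposition
        ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
        ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
        ss_err = (np.sum((ratings - grand) ** 2)
                  - k * np.sum((row_means - grand) ** 2)
                  - n * np.sum((col_means - grand) ** 2))
        ms_err = ss_err / ((n - 1) * (k - 1))

        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
        )

    # Hypothetical rubric scores: 6 assessments rated by 4 raters
    scores = np.array([
        [4, 4, 5, 4],
        [2, 3, 2, 2],
        [5, 5, 5, 4],
        [3, 3, 4, 3],
        [1, 2, 1, 1],
        [4, 5, 4, 4],
    ])
    # Values above .75 are conventionally read as good reliability
    print(f"ICC(2,1) = {icc_2_1(scores):.3f}")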
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Journal: MedEdPORTAL Publication year: 2017 Document type: Article Country of publication: United States