Assessment of medical student clinical reasoning by "lay" vs physician raters: inter-rater reliability using a scoring guide in a multidisciplinary objective structured clinical examination.
Am J Surg; 203(1): 81-6, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22172486
ABSTRACT
BACKGROUND:
To determine whether a "lay" rater could assess clinical reasoning, interrater reliability was measured between physician and lay raters of patient notes written by medical students as part of an 8-station objective structured clinical examination.
METHODS:
Seventy-five notes were rated independently by physician and lay raters on core elements of clinical reasoning, using a scoring guide developed by physician consensus. Twenty-five notes were rerated by a second physician rater as an expert control. Kappa statistics and simple percentage agreement were calculated in 3 areas: evidence for each diagnosis, evidence against each diagnosis, and diagnostic workup.
RESULTS:
Agreement between physician and lay raters for the top diagnosis was as follows: supporting evidence, 89% (κ = .72); evidence against, 89% (κ = .81); and diagnostic workup, 79% (κ = .58). Physician rater agreement was 83% (κ = .59), 92% (κ = .87), and 96% (κ = .87), respectively.
CONCLUSIONS:
Using a comprehensive scoring guide, interrater reliability for physician and lay raters was comparable with reliability between 2 expert physician raters.
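The two agreement measures reported above can be illustrated with a minimal sketch, assuming the standard definitions of simple percentage agreement and Cohen's kappa for two raters; this is not the authors' code, and the function name and inputs are hypothetical:

```python
def agreement_and_kappa(rater_a, rater_b):
    """Return (percentage agreement, Cohen's kappa) for two raters'
    item-by-item scores. Assumes both lists are the same length and
    that chance agreement is below 1 (kappa is undefined otherwise)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    p_e = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n)
              for lab in labels)
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa
```

Kappa corrects raw agreement for chance, which is why a pair of raters can share high percentage agreement (e.g., 79%) yet show only moderate kappa (e.g., .58) when one response category dominates.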
Full text: 1
Database: MEDLINE
Main subject: Students, Medical / Thinking / Abdominal Pain / Educational Measurement
Language: En
Publication year: 2012
Document type: Article