Results 1 - 4 of 4
3.
Am J Surg. 2014 Feb;207(2):209-12.
Article in English | MEDLINE | ID: mdl-24238603

ABSTRACT

BACKGROUND: The aim of this study was to compare the performance of students completing an 8-week versus a 6-week surgery clerkship on an objective structured clinical examination (OSCE) and the National Board of Medical Examiners (NBME) clinical science surgery examination.

METHODS: One hundred fifteen students from the 8-week clerkship and 99 from the 6-week clerkship were included. Performance on a summative OSCE was assessed using behaviorally anchored checklists. NBME exams were graded using the NBME's standard scaled scores. Results were compared using 2-tailed, independent-samples, unequal-variance t tests.

RESULTS: Mean OSCE scores for the 8-week and 6-week curricula were not statistically different. Mean NBME scores also did not statistically differ. Six-week students performed significantly better in the specific OSCE subdomains of blood pressure, orthostatic blood pressure, rectal exam, and fecal occult blood test.

CONCLUSIONS: Overall OSCE and NBME exam performance did not differ between 8-week and 6-week surgery clerkship students.


Subject(s)
Clinical Clerkship/organization & administration ; Clinical Competence ; Education, Medical, Continuing/organization & administration ; General Surgery/education ; Health Knowledge, Attitudes, Practice ; Students, Medical ; Educational Measurement ; Follow-Up Studies ; Humans ; Retrospective Studies ; Time Factors
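The comparison in the record above relies on 2-tailed, independent-samples, unequal-variance t tests, i.e., Welch's t test. Below is a minimal sketch of such a comparison in Python with SciPy; the score arrays are hypothetical placeholders, not the study's OSCE or NBME data:

```python
# Welch's (unequal-variance) two-tailed t test, as described in the
# abstract above. Score lists are hypothetical stand-ins for the
# study's data.
from scipy import stats

scores_8_week = [74.2, 81.5, 68.9, 77.3, 72.0]  # hypothetical OSCE scores
scores_6_week = [75.1, 79.8, 70.2, 76.5, 73.4]  # hypothetical OSCE scores

# equal_var=False selects Welch's t test, which does not assume a
# pooled variance across the two groups
t_stat, p_value = stats.ttest_ind(scores_8_week, scores_6_week, equal_var=False)
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
```

Welch's variant is a sensible choice here because the two cohorts differ in size (115 vs. 99 students) and need not share a common variance.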
4.
Am J Surg. 2012 Jan;203(1):81-6.
Article in English | MEDLINE | ID: mdl-22172486

ABSTRACT

BACKGROUND: To determine whether a "lay" rater could assess clinical reasoning, interrater reliability was measured between physician and lay raters of patient notes written by medical students as part of an 8-station objective structured clinical examination.

METHODS: Seventy-five notes were rated on core elements of clinical reasoning by physician and lay raters independently, using a scoring guide developed by physician consensus. Twenty-five notes were rerated by a 2nd physician rater as an expert control. Kappa statistics and simple percentage agreement were calculated in 3 areas: evidence for and against each diagnosis and diagnostic workup.

RESULTS: Agreement between physician and lay raters for the top diagnosis was as follows: supporting evidence, 89% (κ = .72); evidence against, 89% (κ = .81); and diagnostic workup, 79% (κ = .58). Physician rater agreement was 83% (κ = .59), 92% (κ = .87), and 96% (κ = .87), respectively.

CONCLUSIONS: Using a comprehensive scoring guide, interrater reliability for physician and lay raters was comparable with reliability between 2 expert physician raters.


Subject(s)
Abdominal Pain/diagnosis ; Educational Measurement/standards ; Students, Medical/psychology ; Thinking ; Clinical Competence ; Curriculum ; Education, Medical ; Female ; Humans ; Male ; Reproducibility of Results ; Task Performance and Analysis
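The reliability figures in the record above pair simple percentage agreement with Cohen's kappa, which corrects raw agreement for chance. Below is a minimal sketch of both statistics for binary per-element ratings in Python; the rating vectors are hypothetical placeholders, not the study's data:

```python
# Cohen's kappa and simple percentage agreement between two raters,
# the two statistics reported in the abstract above. Rating vectors
# are hypothetical stand-ins (1 = reasoning element credited, 0 = not).

physician = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
lay_rater = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]

n = len(physician)
# observed agreement: fraction of notes the two raters scored identically
p_o = sum(a == b for a, b in zip(physician, lay_rater)) / n
# chance agreement expected from each rater's marginal rates
p_phys = sum(physician) / n
p_lay = sum(lay_rater) / n
p_e = p_phys * p_lay + (1 - p_phys) * (1 - p_lay)
# kappa: observed agreement in excess of chance, rescaled to [.., 1]
kappa = (p_o - p_e) / (1 - p_e)
print(f"agreement = {p_o:.0%}, kappa = {kappa:.2f}")
```

Because kappa discounts the agreement two raters would reach by chance from their marginal rates alone, a fairly high raw agreement (e.g., the 79% for diagnostic workup) can coexist with only a moderate κ (.58).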