BMC Med Educ ; 24(1): 817, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075511

ABSTRACT

CONTEXT: Objective Structured Clinical Examinations (OSCEs) are an increasingly popular evaluation modality for medical students. While the face-to-face interaction allows for more in-depth assessment, it may cause standardization problems. Methods to quantify, limit, or adjust for examiner effects are needed.

METHODS: Data originated from three OSCEs taken by 900-student classes of 5th- and 6th-year medical students at Université Paris Cité in the 2022-2023 academic year. Each session had five stations, and one of the three sessions was scored by consensus between two raters rather than by one. We report the OSCEs' longitudinal consistency for one of the classes, and staff-related and student variability by session. We also propose a statistical method to adjust for inter-rater variability by deriving a random student effect that accounts for staff-related and station random effects.

RESULTS: Across the four sessions, a total of 16,910 station scores were collected from 2,615 student sessions, with two of the sessions taken by the same students, and 36, 36, 35, and 20 distinct staff teams in each station for each session. Scores showed staff-related heterogeneity (p < 10⁻¹⁵), with staff-level standard errors approximately doubled compared to chance. In mixed models, staff-related heterogeneity explained 11.4%, 11.6%, and 4.7% of station score variance (95% confidence intervals 9.5-13.8, 9.7-14.1, and 3.9-5.8) with one, one, and two raters, respectively, suggesting a moderating effect of consensus grading. Student random effects explained a small proportion of variance: 8.8%, 11.3%, and 9.6%, respectively (8.0-9.7, 10.3-12.4, and 8.7-10.5). This low amount of signal meant that student rankings were no more consistent over time with this metric than with average scores (p = 0.45).

CONCLUSION: Staff variability affects OSCE scores as much as student variability; the former can be reduced with dual assessment or adjusted for with mixed models. Both are small compared to unmeasured sources of variability, which makes them difficult to capture consistently.
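The adjustment proposed in the Methods corresponds to a crossed random-effects (mixed) model in which student, staff team, and station each contribute a variance component, and the student effect is read off after accounting for the other two. Below is a minimal sketch of such a variance decomposition in Python with statsmodels; the file name osce_scores.csv and the columns score, student_id, staff_id, and station_id are illustrative assumptions, not details from the paper, and the authors' exact model specification may differ.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per (student, station) score.
# Assumed columns: score, student_id, staff_id, station_id.
df = pd.read_csv("osce_scores.csv")

# Crossed random effects are expressed in statsmodels as variance
# components nested inside a single all-encompassing group.
df["all"] = 1
vc = {
    "student": "0 + C(student_id)",
    "staff": "0 + C(staff_id)",
    "station": "0 + C(station_id)",
}
model = smf.mixedlm("score ~ 1", df, groups="all", vc_formula=vc)
result = model.fit(reml=True)

# Share of total score variance attributable to each random effect,
# analogous to the percentages reported in the Results.
named = dict(zip(model.exog_vc.names, result.vcomp))
total = sum(named.values()) + result.scale  # result.scale is the residual variance
for name, var in named.items():
    print(f"{name}: {var / total:.1%} of station-score variance")

With thousands of students and dozens of staff teams dummy-coded, such a fit is computationally heavy; this is a sketch of the idea, not a tuned implementation.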


Subject(s)
Clinical Competence , Educational Measurement , Observer Variation , Students, Medical , Humans , Educational Measurement/methods , Educational Measurement/standards , Clinical Competence/standards , Education, Medical, Undergraduate/standards , Paris , Reproducibility of Results