1.
Med Teach ; : 1-9, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38742827

ABSTRACT

BACKGROUND: Our institution simultaneously transitioned all postgraduate specialty training programs to competency-based medical education (CBME) curricula. We explored the experiences of CBME-trained residents graduating from five-year programs to inform the continued evolution of CBME in Canada.

METHODS: We utilized qualitative description to explore residents' experiences and inform continued CBME improvement. Data were collected from fifteen residents from various specialties through focus groups, interviews, and written responses. The data were analyzed inductively, using conventional content analysis.

RESULTS: We identified five overarching themes. Three themes provided insight into residents' experiences with CBME, describing discrepancies between the intentions of CBME and how it was enacted, challenges with implementation, and variation in residents' experiences. Two themes - adaptations and recommendations - could inform meaningful refinements for CBME going forward.

CONCLUSIONS: Residents graduating from CBME training programs offered a balanced perspective, including criticism and recognition of the potential value of CBME when implemented as intended. Their experiences provide a better understanding of residents' needs within CBME curricula, including greater balance and flexibility within programs of assessment and curricula. Many challenges that residents faced with CBME could be alleviated by greater accountability at program, institutional, and national levels. We conclude with actionable recommendations for addressing residents' needs in CBME.

2.
Teach Learn Med ; : 1-13, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37964518

ABSTRACT

CONSTRUCT: The McMaster Narrative Comment Rating Tool aims to capture critical features reflecting the quality of written narrative comments provided in the medical education context: valence/tone of language, degree of correction versus reinforcement, specificity, actionability, and overall usefulness.

BACKGROUND: Despite their role in competency-based medical education, not all narrative comments contribute meaningfully to the development of learners' competence. To develop solutions to mitigate this problem, robust measures of narrative comment quality are needed. While some tools exist, most were created in specialty-specific contexts, have focused on one or two features of feedback, or have focused on faculty perceptions of feedback, excluding learners from the validation process. In this study, we aimed to develop a detailed, broadly applicable narrative comment quality assessment tool that drew upon features of high-quality assessment and feedback and could be used by a variety of raters to inform future research, including applications related to automated analysis of narrative comment quality.

APPROACH: In Phase 1, we used the literature to identify five critical features of feedback. We then developed rating scales for each of the features, and collected 670 competency-based assessments completed by first-year surgical residents in the first six weeks of training. Residents were from nine different programs at a Canadian institution. In Phase 2, we randomly selected 50 assessments with written feedback from the dataset. Two education researchers used the scale to independently score the written comments and refine the rating tool. In Phase 3, 10 raters, including two medical education researchers, two medical students, two residents, two clinical faculty members, and two laypersons from the community, used the tool to independently and blindly rate written comments from another 50 randomly selected assessments from the dataset. We compared scores between and across rater pairs to assess reliability.

FINDINGS: Single- and average-measures intraclass correlation (ICC) scores ranged from moderate to excellent (ICCs = .51-.83 and .91-.98) across all categories and rater pairs. All tool domains were significantly correlated (all p < .05), apart from valence, which was only significantly correlated with degree of correction versus reinforcement.

CONCLUSION: Our findings suggest that the McMaster Narrative Comment Rating Tool can reliably be used by multiple raters, across a variety of rater types, and in different surgical contexts. As such, it has the potential to support faculty development initiatives on assessment and feedback, and may be used as a tool to conduct research on different assessment strategies, including automated analysis of narrative comments.
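The abstract reports single- and average-measures ICC values but does not specify how they were computed, and the underlying data are not included. As an illustration only, a minimal NumPy sketch of a two-way random-effects ICC for absolute agreement (the single- and average-measures forms commonly used for multi-rater reliability) is shown below; the function name and the example ratings matrix are hypothetical, not taken from the study:

```python
import numpy as np

def icc2(ratings: np.ndarray) -> tuple[float, float]:
    """Two-way random-effects ICC for absolute agreement.

    ratings: (n_subjects, k_raters) matrix of scores.
    Returns (single-measures ICC, average-measures ICC).
    Illustrative sketch only; not the study's actual analysis code.
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    subject_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    # Sums of squares from the two-way ANOVA decomposition
    ss_subjects = k * ((subject_means - grand_mean) ** 2).sum()
    ss_raters = n * ((rater_means - grand_mean) ** 2).sum()
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_raters

    # Mean squares
    ms_subjects = ss_subjects / (n - 1)
    ms_raters = ss_raters / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # Single-measures ICC: reliability of one rater's scores
    icc_single = (ms_subjects - ms_error) / (
        ms_subjects + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
    )
    # Average-measures ICC: reliability of the k raters' mean score
    icc_average = (ms_subjects - ms_error) / (
        ms_subjects + (ms_raters - ms_error) / n
    )
    return icc_single, icc_average

# Hypothetical example: 3 comments rated by 2 raters
single, average = icc2(np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]]))
```

With a constant offset between raters, as in the example, the average-measures ICC exceeds the single-measures ICC, mirroring the pattern in the reported ranges (.51-.83 single vs. .91-.98 average).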
