Results 1 - 20 of 62
1.
BMC Med Educ; 24(1): 487, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698352

ABSTRACT

BACKGROUND: Workplace-based assessment (WBA) used in postgraduate medical education relies on physician supervisors' feedback. However, in a training environment where supervisors are unavailable to assess certain aspects of a resident's performance, nurses are well positioned to do so. The Ottawa Resident Observation Form for Nurses (O-RON) was developed to capture nurses' assessments of trainee performance, and results have demonstrated strong evidence for validity in Orthopedic Surgery. However, different clinical settings may affect a tool's performance. This project studied the use of the O-RON in three different specialties at the University of Ottawa. METHODS: O-RON forms were distributed on Internal Medicine, General Surgery, and Obstetrical wards at the University of Ottawa over nine months. Validity evidence related to the quantitative data was collected. Exit interviews with nurse managers were performed, and their content was thematically analyzed. RESULTS: 179 O-RONs were completed on 30 residents. With four forms per resident, the O-RON's reliability was 0.82. Global judgement responses and the frequency of concerns were correlated (r = 0.627, P < 0.001). CONCLUSIONS: Consistent with the original study, the findings demonstrated strong evidence for validity. However, fewer forms were collected than expected. Exit interviews identified factors affecting form completion, including clinical workloads and interprofessional dynamics.
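As a rough illustration of the quantitative analysis reported above (not the authors' code), a correlation like the reported r = 0.627 between global judgement responses and concern counts could be computed as follows; all data values are hypothetical:

```python
# Hedged sketch: Pearson correlation between two per-resident measures.
from scipy.stats import pearsonr

global_judgement = [5, 4, 4, 3, 5, 2, 4, 3]  # hypothetical O-RON global ratings
concern_counts = [0, 1, 1, 3, 0, 4, 1, 2]    # hypothetical concerns per resident

r, p = pearsonr(global_judgement, concern_counts)
print(f"r = {r:.3f}, p = {p:.4f}")
```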


Subject(s)
Clinical Competence, Internship and Residency, Psychometrics, Humans, Reproducibility of Results, Female, Male, Educational Measurement/methods, Ontario, Internal Medicine/education
2.
Med Teach; 44(1): 79-86, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34579618

ABSTRACT

BACKGROUND: There may be no competency more shrouded in uncertainty than health advocacy (HA), raising questions about the robustness of advocacy training in postgraduate medical education. By understanding how programs currently train HA, we can identify whether trainees' learning needs are being met. METHODS: From 2017 to 2019, we reviewed curricular documents across nine direct-entry specialties at all Ontario medical schools, comparing content for the HA and communicator roles to delineate role-specific challenges. We then conducted semi-structured interviews with trainees (n = 9) and faculty (n = 6) to review findings and discuss their impact. Data were analyzed using thematic content analysis. RESULTS: Curricular documents revealed vague objectives and ill-defined modes of assessment for both intrinsic roles. This uncertainty was perceived as more problematic for HA, in part because HA seemed both undervalued in, and disconnected from, clinical learning. Trainees felt that the onus was on them to figure out how to develop and demonstrate HA competence, causing many to turn their learning attention elsewhere. DISCUSSION: Lack of curricular focus seems to create the perception that advocacy is not valuable, deterring trainees, even those keen to become competent advocates, from developing HA skills. Such ambivalence may have troubling downstream effects for both patient care and trainees' professional development.


Subject(s)
Medical Education, Medicine, Clinical Competence, Graduate Medical Education, Humans, Learning, Ontario, Uncertainty
3.
Radiographics; 41(4): E126-E137, 2021.
Article in English | MEDLINE | ID: mdl-34143712

ABSTRACT

The number of implanted devices such as orthopedic hardware and cardiac implantable devices continues to grow as the patient population ages and the number of indications for specific devices expands. Many patients with these devices have or will develop clinical conditions that are best depicted at MRI. However, implanted devices containing paramagnetic or ferromagnetic substances can cause significant artifact, which can limit the diagnostic capability of this modality. Performing MRI when an implant is present can be challenging, but there are numerous techniques the radiologist and technologist can use to minimize implant-related artifacts. First, knowledge of the presence of an implant before patient arrival is critical to ensure the safety of the patient when the device is subjected to a strong magnetic field. Once safety is ensured, the examination should be performed with the MRI system expected to provide the best image quality. Selecting the MRI system involves multiple considerations, such as the effects of field strength and the availability of specific sequences that can reduce metal artifact. Appropriate patient positioning, attention to MRI parameters (including bandwidth, voxel size, and echo), and appropriate selection of sequences (those with less metal artifact and advanced metal artifact reduction sequences) are critical to improving image quality. With appropriate planning and an understanding of how to minimize artifacts, patients with implants can be successfully imaged with MRI, improving image quality and the diagnostic confidence of the radiologist. ©RSNA, 2021.


Subject(s)
Artifacts, Magnetic Resonance Imaging, Prostheses and Implants, Humans, Metals
4.
Med Educ; 55(9): 1047-1055, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34060651

ABSTRACT

PURPOSE: Competency-based medical education (CBME) has prompted widespread implementation of workplace-based assessment (WBA) tools using entrustment anchors. This study aimed to identify factors that influence faculty's rating choices immediately following assessment and to explore their experiences using WBAs with entrustment anchors, specifically the Ottawa Surgical Competency Operating Room Evaluation scale. METHOD: Fifty semi-structured interviews were conducted with a convenience sample of Emergency Medicine (EM) physicians from a single Canadian hospital between July and August 2019. All interviews occurred within two hours of faculty completing a WBA of a trainee. Faculty were asked what they considered when rating the trainee's performance and whether they had considered an alternate rating. Two team members independently analysed interview transcripts using conventional content analysis with line-by-line coding to identify themes. RESULTS: Interviews captured interactions between 70% (26/37) of full-time EM faculty and 86% (19/22) of EM trainees. Faculty most commonly identified the amount of guidance the trainee required as influencing their rating. Other variables, such as clinical context, trainee experience, past experiences with the trainee, and perceived competence and confidence, were also identified. While most faculty did not struggle to assign ratings, some had difficulty interpreting the language of entrustment anchors, being unsure whether their assessment should be retrospective or prospective in nature, and whether and how the assessment should change depending on whether they were 'in the room'. CONCLUSIONS: By going to the frontline during WBA encounters, this study captured authentic and honest reflections from physicians immediately engaged in assessment using entrustment anchors. While many of the factors identified are consistent with previous retrospective work, we highlight how some faculty consider factors outside the prescribed approach and struggle with the language of entrustment anchors. These results further our understanding of 'in-the-moment' assessments using entrustment anchors and may facilitate effective faculty development regarding WBA in CBME.


Subject(s)
Internship and Residency, Workplace, Canada, Clinical Competence, Medical Faculty, Humans
5.
Med Educ; 55(5): 582-594, 2021 May.
Article in English | MEDLINE | ID: mdl-33034082

ABSTRACT

INTRODUCTION: The underrepresentation of women among senior faculty members in medical education is a longstanding problem. The purpose of this international qualitative investigation was to explore women's and men's experiences of attaining full professorship and to investigate why women remain underrepresented among the senior faculty ranks. METHODS: Conducted within a social constructionist orientation, our qualitative study employed narrative analysis. Two female and two male participants working in medical education were recruited from five nations: Australia, Canada, the Netherlands, the United Kingdom and the United States. All participants held an MD or PhD. During telephone interviews, participants narrated the story of their careers. The five faculty members on the research team were also interviewed; their narratives were included in the analysis, rendering their experiences equal to those of the participants. RESULTS: A total of 24 full professors working in medical education were interviewed (n = 15 females and n = 9 males). While some aspects were present across all narratives (i.e., personal events, career milestones, and facilitating and/or impeding factors), participants' experience of those aspects differed by gender. Men did not narrate fatherhood as a role navigated professionally, but women narrated motherhood as intimately connected to their professional roles. Both men and women narrated career success in terms of hard work and overcoming obstacles; however, male participants described promotion as inevitable, whereas women narrated promotion as a tenuous navigation of social structures towards uncertain outcomes. Female and male participants encountered facilitators and inhibitors throughout their careers but described acting on those experiences differently within the cultural contexts they faced. DISCUSSION: Our data suggest that female and male participants had different experiences of the work involved in achieving full professor status. Understanding these gendered experiences and their impact on career progression is an important step towards better understanding what leads to the underrepresentation of women among senior faculty members in medical education.


Subject(s)
Career Mobility, Medical Education, Australia, Canada, Medical Faculty, Female, Humans, Male, Netherlands, United Kingdom, United States
6.
Teach Learn Med; 31(2): 146-153, 2019.
Article in English | MEDLINE | ID: mdl-30514128

ABSTRACT

Construct: We compared a single-item performance score with the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) in terms of their ability to assess surgical competency. BACKGROUND: Surgical programs are adopting competency-based frameworks. The adoption of these frameworks for assessment requires tools that produce accurate and valid assessments of knowledge and technical performance. An assessment tool that is quick to complete could improve feasibility, reduce delays, and result in a higher volume of assessments of learners. Previous work demonstrated that the 9-item O-SCORE can produce valid results; the goal of this study was to determine whether a single-item performance rating (Is the candidate competent to independently complete the procedure: yes or no) completed at a separate viewing would correlate with the O-SCORE, thus increasing the feasibility of procedural competence assessment. APPROACH: Nineteen residents and two staff orthopedic surgeons from the University of Ottawa volunteered for a two-part OSCE-style station comprising a written questionnaire and a videotaped simulated open reduction and internal fixation of a midshaft radius fracture. Each performance was rated independently by three orthopedic surgeons using the single-item performance score (Time 1). The performances were assessed again 6 weeks later by the same three raters using the O-SCORE (Time 2). The correlation between the single-item performance score and the O-SCORE was evaluated. RESULTS: Three orthopedic surgeons completed 21 ratings each, resulting in 63 ratings. There was a high level of correlation and agreement between the single-item performance score at Time 1 and Time 2 (κ = 0.72-1.00; p < .001; percentage agreement = 90%-100%). The reliability of the O-SCORE at Time 2 with three raters was 0.83, and its internal consistency was 0.89. Each rater tended to assign more yes responses to the more senior trainees. CONCLUSIONS: A single-item performance score correlated highly with the O-SCORE in an orthopedic setting and could be used to supplement a multi-item score with similar results. There is still benefit in completing multi-item scores such as the O-SCORE to guide specific areas of improvement and direct feedback.
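For readers unfamiliar with the agreement statistics quoted above, here is a minimal sketch (hypothetical yes/no calls, not study data) of how kappa and percentage agreement are computed:

```python
# Hedged sketch: agreement between two rating occasions on yes/no competence calls.
from sklearn.metrics import cohen_kappa_score

time1 = ["yes", "no", "yes", "yes", "no", "yes", "no"]   # single-item call, Time 1
time2 = ["yes", "no", "yes", "yes", "no", "yes", "yes"]  # O-SCORE-based call, Time 2

kappa = cohen_kappa_score(time1, time2)
agreement = sum(a == b for a, b in zip(time1, time2)) / len(time1)
print(f"kappa = {kappa:.2f}, percentage agreement = {agreement:.0%}")
```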


Subject(s)
Checklist, Clinical Competence/standards, Educational Measurement/methods, General Surgery/education, Canada, Humans
7.
BMC Med Educ; 18(1): 218, 2018 Sep 20.
Article in English | MEDLINE | ID: mdl-30236097

ABSTRACT

BACKGROUND: Workplace-based assessment (WBA) is crucial to competency-based education. The majority of healthcare is delivered in the ambulatory setting, making the ability to run an entire clinic a crucial core competency for Internal Medicine (IM) trainees. Current WBA tools used in IM do not allow a thorough assessment of this skill. Further, most tools are not aligned with the way clinical assessors conceptualize performances. To address this, many tools aligned with entrustment decisions have recently been published. The Ottawa Clinic Assessment Tool (OCAT) is an entrustment-aligned tool that allows for such an assessment, but it was developed in the surgical setting, and it is not known whether it can perform well in an entirely different context. The aim of this study was to implement the OCAT in an IM program and collect psychometric data in this different setting. Using one tool across multiple contexts may reduce the need for tool development and ensure that the tools used are supported by proper psychometric data. METHODS: Psychometric characteristics were determined. Descriptive statistics and effect sizes were calculated. Scores were compared between levels of training (juniors (PGY1s), seniors (PGY2s and PGY3s), and fellows (PGY4s and PGY5s)) using a one-way ANOVA. Safety for independent practice was analyzed with a dichotomous score. Variance components were generated and used to estimate the reliability of the OCAT. RESULTS: Three hundred ninety OCATs were completed over 52 weeks by 86 physicians assessing 44 residents. Ratings ranged from 2 (I had to talk them through) to 5 (I did not need to be there) for most items. Mean scores differed significantly by training level (p < .001), with juniors receiving lower ratings (M = 3.80 out of 5, SD = 0.49) than seniors (M = 4.22, SD = 0.47), who in turn received lower ratings than fellows (M = 4.70, SD = 0.36). Trainees deemed safe to run the clinic independently had significantly higher mean scores than those deemed not safe (p < .001). The generalizability coefficient, which corresponds to internal consistency, was 0.92. CONCLUSIONS: These psychometric data demonstrate that the OCAT can be used reliably in IM. We support assessing existing tools within different contexts rather than continuously developing discipline-specific instruments.
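The between-level comparison described above can be sketched as a one-way ANOVA; the OCAT means below are hypothetical, not the study's data:

```python
# Hedged sketch: one-way ANOVA across training levels.
from scipy.stats import f_oneway

juniors = [3.4, 3.9, 3.7, 4.1, 3.6]  # hypothetical mean OCAT scores, PGY1s
seniors = [4.0, 4.3, 4.2, 4.5, 4.1]  # hypothetical means, PGY2-3s
fellows = [4.6, 4.8, 4.7, 4.9, 4.5]  # hypothetical means, PGY4-5s

F, p = f_oneway(juniors, seniors, fellows)
print(f"F = {F:.2f}, p = {p:.4f}")
```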


Subject(s)
Clinical Competence, Competency-Based Education, Educational Measurement, Internal Medicine/education, Internship and Residency, Ambulatory Care, Humans, Psychometrics
8.
Med Educ; 51(12): 1260-1268, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28971502

ABSTRACT

CONTEXT: Work-based assessments (WBAs) represent an increasingly important means of reporting expert judgements of trainee competence in clinical practice. However, the quality of WBAs completed by clinical supervisors is of concern. The episodic and fragmented interaction that often occurs between supervisors and trainees has been proposed as a barrier to the completion of high-quality WBAs. OBJECTIVES: The primary purpose of this study was to determine the effect of supervisor-trainee continuity on the quality of assessments documented on daily encounter cards (DECs), a common form of WBA. The relationship between trainee performance and DEC quality was also examined. METHODS: Daily encounter cards representing three degrees of supervisor-trainee continuity (low, intermediate, high) were scored by two raters using the Completed Clinical Evaluation Report Rating (CCERR), a previously published nine-item quantitative measure of DEC quality. An analysis of variance (ANOVA) was performed to compare mean CCERR scores among the three groups. Linear regression analysis was conducted to examine the relationship between resident performance and DEC quality. RESULTS: Differences in mean CCERR scores were observed between the three continuity groups (p = 0.02); however, the magnitude of the absolute differences was small (partial eta-squared = 0.03) and not educationally meaningful. Linear regression analysis demonstrated a significant inverse relationship between resident performance and CCERR score (p < 0.001, r² = 0.18). This inverse relationship was observed in both groups representing on-service residents (p = 0.001, r² = 0.25; p = 0.04, r² = 0.19), but not in the off-service group (p = 0.62, r² = 0.05). CONCLUSIONS: Supervisor-trainee continuity did not have an educationally meaningful influence on the quality of assessments documented on DECs. However, resident performance was found to affect assessor behaviours in the on-service group, whereas DEC quality remained poor regardless of performance in the off-service group. The findings suggest that greater attention should be given to determining ways of improving the quality of assessments reported for off-service residents, as well as for those residents demonstrating appropriate clinical competence progression.
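A minimal sketch of the regression described above, asking whether the trainee's performance rating predicts the quality (CCERR score) of the assessment written about them; the data are invented for illustration:

```python
# Hedged sketch: linear regression of DEC quality on resident performance.
from scipy.stats import linregress

performance = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]        # hypothetical DEC ratings
ccerr_score = [28.0, 26.5, 25.0, 24.0, 22.5, 21.0, 20.5]  # hypothetical CCERR scores

res = linregress(performance, ccerr_score)
print(f"slope = {res.slope:.2f}, r^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.4f}")
```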


Subject(s)
Clinical Competence/standards, Educational Measurement/methods, Medical Faculty, Internship and Residency, Graduate Medical Education/methods, Emergency Medicine/education, Humans, Reproducibility of Results
9.
Teach Learn Med; 28(1): 72-9, 2016.
Article in English | MEDLINE | ID: mdl-26787087

ABSTRACT

CONSTRUCT: The Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) is a 9-item surgical evaluation tool designed to assess technical competence in surgical trainees using behavioral anchors. BACKGROUND: The initial development of the O-SCORE produced evidence for valid results. Further work was required to determine whether the use of a single surgeon or an unblinded rater introduces bias. In addition, the relationship of the O-SCORE to other currently used technical assessment tools should be explored to provide validity evidence related to its relationship with other measures. We designed this project to provide continued validity evidence for the O-SCORE on these two issues. APPROACH: Nineteen residents and two staff orthopedic surgeons from the University of Ottawa volunteered to participate in a two-part OSCE-style station. Participants completed a written questionnaire followed by a videotaped 10-minute simulated open reduction and internal fixation of a midshaft radius fracture. Videos were rated individually by two blinded staff orthopedic surgeons using an Objective Structured Assessment of Technical Skills (OSATS) global rating scale, an OSATS checklist, and the O-SCORE, in random order. RESULTS: O-SCORE results appeared sensitive to surgical training level even when raters were blinded. In addition, strong agreement between two independent observers using the O-SCORE suggests that the measure captures a performance easily recognized by surgical observers. Ratings on the O-SCORE were also strongly associated with global ratings on the currently most validated technical evaluation tool (the OSATS). CONCLUSIONS: The O-SCORE differentiated surgical trainee level with blinded raters, showed strong agreement between independent observers, and produced ratings equivalent to scores on the OSATS. Collectively, these results suggest that the O-SCORE generates accurate, reproducible, and meaningful results when used in a randomized and blinded fashion, providing continued validity evidence for this tool in the evaluation of surgical competence in trainees.


Subject(s)
Checklist/standards, Clinical Competence/standards, Operating Rooms, Simulation Training, Female, Humans, Internship and Residency, Male, Orthopedics, Surgeons, Surveys and Questionnaires
10.
Teach Learn Med; 28(4): 385-394, 2016.
Article in English | MEDLINE | ID: mdl-27285377

ABSTRACT

Construct: This article describes the development of, and validity evidence behind, a new rating scale to assess feedback quality in the clinical workplace. BACKGROUND: Competency-based medical education has mandated a shift to learner-centeredness, authentic observation, and frequent formative assessments with a focus on the delivery of effective feedback. Because feedback has been shown to be of variable quality and effectiveness, assessing feedback quality in the workplace is important to ensure we are providing trainees with optimal learning opportunities. The purposes of this project were to develop a rating scale for the quality of verbal feedback in the workplace (the Direct Observation of Clinical Skills Feedback Scale [DOCS-FBS]) and to gather validity evidence for its use. APPROACH: Two panels of experts (local and national) took part in a nominal group technique to identify features of high-quality feedback. Through multiple iterations and review, nine features were developed into the DOCS-FBS. Four rater types (residents n = 21, medical students n = 8, faculty n = 12, and educators n = 12) used the DOCS-FBS to rate videotaped feedback encounters of variable quality. The psychometric properties of the scale were determined using a generalizability analysis. Participants also completed a survey using a 5-point Likert scale to capture the scale's ease of use, clarity, contribution to knowledge acquisition, and acceptability. RESULTS: Mean video ratings ranged from 1.38 to 2.96 out of 3 and followed the intended pattern, suggesting that the tool allowed raters to distinguish between examples of higher- and lower-quality feedback. There were no significant differences between rater types (range = 2.36-2.49), suggesting that all groups of raters used the tool in the same way. The generalizability coefficients for the scale ranged from 0.97 to 0.99. Item-total correlations were all above 0.80, suggesting some redundancy in items. Participants found the scale easy to use (M = 4.31/5) and clear (M = 4.23/5), and most would recommend its use (M = 4.15/5). Use of the DOCS-FBS was acceptable to both trainees (M = 4.34/5) and supervisors (M = 4.22/5). CONCLUSIONS: The DOCS-FBS can reliably differentiate between feedback encounters of higher and lower quality, and the scale has excellent internal consistency. We foresee the DOCS-FBS being used to provide objective evidence that faculty development efforts aimed at improving feedback skills can yield results through formal assessment of feedback quality.
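The item-total correlations reported above (all above 0.80, suggesting redundancy) can be sketched as corrected item-total correlations; the nine-item scale is reduced to three hypothetical items here:

```python
# Hedged sketch: corrected item-total correlations (item vs. total minus that item).
import pandas as pd

# Rows = rated feedback encounters; columns = hypothetical DOCS-FBS items.
df = pd.DataFrame({
    "item1": [1, 2, 3, 2, 3, 1],
    "item2": [1, 2, 3, 3, 3, 1],
    "item3": [2, 2, 3, 2, 3, 1],
})
total = df.sum(axis=1)
item_total = {c: df[c].corr(total - df[c]) for c in df.columns}
print(item_total)
```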


Subject(s)
Competency-Based Education, Graduate Medical Education, Feedback, Clinical Competence, Humans, Medical Students
11.
Med Teach; 38(11): 1092-1099, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27602533

ABSTRACT

BACKGROUND: Many clinical educators feel unprepared and/or unwilling to report unsatisfactory trainee performance. This systematic review consolidates knowledge from the medical, nursing, and dental literature on the experiences and perceptions of evaluators or assessors with this 'failure to fail' phenomenon. METHODS: We searched the English-language literature in CINAHL, EMBASE, and MEDLINE from January 2005 to January 2015. Qualitative and quantitative studies were included. Following our review protocol, registered with BEME, reviewers worked in pairs to identify relevant articles. The investigators participated in thematic analysis of the qualitative data reported in these studies. Through several cycles of analysis, discussion, and reflection, the team identified the barriers and enablers to failing a trainee. RESULTS: From 5330 articles, we included 28 publications in the review. The barriers identified were (1) the assessor's professional considerations, (2) the assessor's personal considerations, (3) trainee-related considerations, (4) unsatisfactory evaluator development and evaluation tools, (5) institutional culture, and (6) consideration of the remediation available to the trainee. The enablers identified were (1) duty to patients, to society, and to the profession; (2) institutional support, such as backing a failing evaluation, support from colleagues, evaluator development, and strong assessment systems; and (3) opportunities for students after failing. DISCUSSION/CONCLUSIONS: The inhibiting and enabling factors in failing an underperforming trainee were common across the professions included in this study, across the 10 years of data, and across the educational continuum. We suggest that these results can inform efforts aimed at addressing the failure-to-fail problem.


Subject(s)
Clinical Competence, Professional Education/standards, Health Occupations/education, Dental Education/standards, Medical Education/standards, Nursing Education/standards, Educational Measurement/standards, Educational Status, Faculty/organization & administration, Faculty/psychology, Humans, Staff Development/standards
12.
Teach Learn Med; 27(3): 274-9, 2015.
Article in English | MEDLINE | ID: mdl-26158329

ABSTRACT

CONSTRUCT: The competence of a trainee to perform a surgical procedure was assessed using an electronic tool. BACKGROUND: "Going paperless" in healthcare has received significant attention over the past decades, given the numerous potential benefits of converting to electronic health records. Not surprisingly, medical educators have also considered the potential benefits of electronic assessments for their trainees. The literature that exists on the transition from paper-based to electronic assessments suggests a positive outcome. In contrast, work examining the transition to and implementation of electronic health records has noted that hospitals that have implemented these systems have not gone paperless, despite the benefits of doing so. APPROACH: This study sought to transition a paper-based assessment tool, the Ottawa Surgical Competency Operating Room Evaluation (which has strong evidence for validity), to an electronic version in three surgical specialties (Orthopedic Surgery, Urology, General Surgery). However, as the project progressed, an extremely low participation rate made it necessary to change the focus of the study to exploring the issues of transitioning to a paperless assessment tool. RESULTS: Over the first 3 months, 440 assessment cases were logged. However, only a small proportion of these cases were assessed using the electronic tool (Orthopedic Surgery = 16%, Urology = 5%, General Surgery = 0%). Participants identified several barriers to using the electronic assessment tool, such as increased completion time compared to the paper version and technological issues with the log-in procedure. CONCLUSIONS: Essentially, users want the tool to be as convenient as paper. This is consistent with research on electronic health records implementation but differs from previous work in medical education. Thus, we believe our study highlights an important finding: transitioning from a paper-based assessment tool to an electronic one is not necessarily a neutral process. Identifying potential barriers and finding solutions to them will be necessary to realize the many benefits of electronic assessments.


Subject(s)
Automation, Clinical Competence/standards, Educational Measurement/methods, Surgical Specialties/education, Medical Students, Humans
13.
Med Educ; 48(7): 724-32, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24909534

ABSTRACT

OBJECTIVES: In-training evaluation (ITE) is used to assess resident competencies in clinical settings. This assessment is documented on an evaluation report (In-Training Evaluation Report [ITER]). Unfortunately, the quality of these reports can be questionable. Therefore, training programmes to improve report quality are common. The Completed Clinical Evaluation Report Rating (CCERR) was developed to assess completed report quality and has been shown to do so in a reliable manner, thus enabling the evaluation of these programmes. The CCERR is a resource-intensive instrument, which may limit its use. The purpose of this study was to create a screening measure (Proxy-CCERR) that can predict the CCERR outcome in a less resource-intensive manner. METHODS: Using multiple regression, the authors analysed a dataset of 269 ITERs to create a model that can predict the associated CCERR scores. The resulting predictive model was tested on the CCERR scores for an additional sample of 300 ITERs. RESULTS: The quality of an ITER, as measured by the CCERR, can be predicted using a model involving only three variables (R² = 0.61). The predictive variables were the total number of words in the comments, the variability of the ratings and the proportion of comment boxes completed on the form. CONCLUSIONS: It is possible to model CCERR scores in a highly predictive manner. The predictive variables can be easily extracted in an automated process. Because this model is less resource-intensive than the CCERR, it makes it possible to provide feedback from ITER training programmes to large groups of supervisors and institutions, and even to create automated feedback systems using Proxy-CCERR scores.
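A hedged sketch of a three-predictor model like the Proxy-CCERR described above; the variable names and simulated data are assumptions for illustration, not the authors' dataset:

```python
# Hedged sketch: predicting CCERR scores from three easily extracted ITER features.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 269
word_count = rng.integers(5, 300, n)     # total words in the comments
rating_sd = rng.uniform(0.0, 1.5, n)     # variability of the ratings
boxes_filled = rng.uniform(0.0, 1.0, n)  # proportion of comment boxes completed

# Simulated outcome so the example runs end to end.
ccerr = 10 + 0.03 * word_count + 3 * rating_sd + 5 * boxes_filled + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([word_count, rating_sd, boxes_filled]))
model = sm.OLS(ccerr, X).fit()
print(f"R^2 = {model.rsquared:.2f}")  # analogous to the reported R² = 0.61
```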


Subject(s)
Clinical Competence/standards, Medical Education/standards, Educational Measurement/standards, Statistical Models, Program Evaluation/standards, Documentation/standards, Educational Measurement/statistics & numerical data, Medical Faculty, Humans, Program Evaluation/statistics & numerical data, Prospective Studies, Reproducibility of Results, Retrospective Studies
14.
Med Teach; 36(12): 1038-42, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24986650

ABSTRACT

Assessing learners in the clinical setting is vital to determining their level of professional competence. Clinical performance assessments can be documented using in-training evaluation reports (ITERs). Previous research has suggested a need for faculty development to improve the quality of these reports and has identified key features of high-quality completed ITERs, which primarily involve the narrative comments. This aligns well with the recent discourse in the assessment literature focusing on the value of qualitative assessments. Evidence exists to demonstrate that faculty can be trained to complete higher-quality ITERs. We present 12 key strategies to assist clinical supervisors in improving the quality of their completed ITERs. Higher-quality completed ITERs will improve the documentation of a trainee's progress and be more defensible when questioned in an appeal or legal process.


Subject(s)
Clinical Competence, Educational Measurement/methods, Preceptorship, Guidelines as Topic, Humans
15.
Perspect Med Educ; 13(1): 44-55, 2024.
Article in English | MEDLINE | ID: mdl-38343554

ABSTRACT

Traditional approaches to assessment in health professions education systems, which have generally focused on the summative function of assessment through the development and episodic use of individual high-stakes examinations, may no longer be appropriate in an era of competency-based medical education. Contemporary assessment programs should not only ensure the collection of high-quality performance data to support robust decision-making on learners' achievement and competence development but also facilitate the provision of meaningful feedback to learners to support reflective practice and performance improvement. Programmatic assessment is a specific approach to designing assessment systems through the intentional selection and combination of a variety of assessment methods and activities embedded within an educational framework to simultaneously optimize the decision-making and learning functions of assessment. It is a core component of competency-based medical education and is aligned with the goals of promoting assessment for learning and coaching learners to achieve predefined levels of competence. In Canada, postgraduate specialist medical education has undergone a transformative change to a competency-based model centred around entrustable professional activities (EPAs). In this paper, we describe and reflect on the large-scale, national implementation of a program of assessment designed to guide learning and ensure that robust data are collected to support defensible decisions about EPA achievement and progress through training. Reflecting on the design and implications of this assessment system may help others who want to incorporate a competency-based approach in their own country.


Subject(s)
Medical Education, Humans, Canada, Medical Education/methods, Competency-Based Education/methods, Curriculum, Program Evaluation
16.
Acad Radiol; 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38320946

ABSTRACT

RATIONALE AND OBJECTIVES: This study aims to assess the quality of abdominal MR images acquired on a commercial 0.55T scanner and to compare these images with those acquired on conventional 1.5T/3T scanners in both healthy subjects and patients. MATERIALS AND METHODS: Fifteen healthy subjects and 52 patients underwent abdominal MRI at 0.55T. Images were also collected in healthy subjects at 1.5T, and comparison 1.5T/3T images were identified for 28 of the 52 patients. Image quality was rated by two radiologists on a 4-point Likert scale. For patient studies, readers were asked whether they could answer the clinical question. The Wilcoxon signed-rank test was used to test for significant differences in image ratings and acquisition times, and inter-reader reliability was computed. RESULTS: The overall image quality of all sequences at 0.55T was rated as acceptable in healthy subjects. Sequences were modified to improve signal-to-noise ratio and reduce artifacts, then deployed for clinical use; 52 patients were enrolled in this study. Radiologists were able to answer the clinical question in 52 (reader 1) and 46 (reader 2) of the patient cases. Average image quality was considered diagnostic (>3) for all sequences except arterial phase FS 3D T1w gradient echo (GRE) and 3D magnetic resonance cholangiopancreatography for one reader. In comparison to higher-field images, significantly lower scores were given to 0.55T IP 2D GRE and arterial phase FS 3D T1w GRE, and significantly higher scores to diffusion-weighted echo planar imaging at 0.55T; other sequences were equivalent. The average scan time at 0.55T was 54 ± 10 minutes vs 36 ± 11 minutes at higher field strengths (P < .001). CONCLUSION: Diagnostic-quality abdominal MR images can be obtained on a commercial 0.55T scanner, at a longer overall acquisition time than on higher-field systems, although some sequences may benefit from additional optimization.
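The paired comparison named above could be run as a Wilcoxon signed-rank test; the Likert ratings below are hypothetical pairs for the same patients at 0.55T and at higher field:

```python
# Hedged sketch: Wilcoxon signed-rank test on paired image-quality ratings.
from scipy.stats import wilcoxon

ratings_055t = [3, 4, 3, 3, 4, 2, 3, 4, 3, 3]  # hypothetical 0.55T ratings
ratings_hi   = [4, 4, 3, 4, 4, 3, 3, 4, 4, 3]  # hypothetical 1.5T/3T ratings

stat, p = wilcoxon(ratings_055t, ratings_hi)
print(f"W = {stat}, p = {p:.3f}")
```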

17.
Perspect Med Educ; 13(1): 201-223, 2024.
Article in English | MEDLINE | ID: mdl-38525203

ABSTRACT

Postgraduate medical education (PGME) is an essential societal enterprise that prepares highly skilled physicians for the health workforce. In recent years, PGME systems have been criticized worldwide for problems with variable graduate abilities, concerns about patient safety, and issues with teaching and assessment methods. In response, competency-based medical education approaches, with an emphasis on graduate outcomes, have been proposed as the direction for 21st-century health professions education. However, there are few published models of large-scale implementation of these approaches. We describe the rationale and design of a national, time-variable, competency-based, multi-specialty system for postgraduate medical education called Competence by Design. Fourteen innovations were bundled to create this new system, using the Van Melle Core Components of competency-based medical education as the basis for the transformation. The successful execution of this transformational training system shows that competency-based medical education can be implemented at scale. The lessons learned in the early implementation of Competence by Design can inform competency-based medical education innovation efforts across professions worldwide.


Subject(s)
Medical Education, Medicine, Humans, Competency-Based Education/methods, Medical Education/methods, Clinical Competence, Publications
18.
CJEM; 25(6): 475-480, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37166679

ABSTRACT

INTRODUCTION: Workplace-based assessments are an important tool for trainee feedback and a means of reporting expert judgments of trainee competence in the workplace. However, the literature has demonstrated that gender bias can exist within these assessments. We aimed to determine whether gender differences in the quality of workplace-based assessment data exist in our residency training program. METHODS: This study was conducted in the Department of Emergency Medicine at the University of Ottawa. Four end-of-shift workplace-based assessments completed by men faculty and four completed by women faculty were randomly selected for each resident during the 2018-2019 academic year. Two blinded raters scored each workplace-based assessment using the Completed Clinical Evaluation Report Rating (CCERR), a published nine-item quantitative measure of workplace-based assessment quality. A 2 × 2 mixed-measures analysis of variance (ANOVA) of resident gender and faculty gender was conducted, with mean CCERR score as the dependent variable. The ANOVA was repeated with mean workplace-based assessment rating as the dependent variable. RESULTS: A total of 363 workplace-based assessments were analyzed for 46 residents. There were no significant effects of faculty or resident gender on the quality of workplace-based assessments (p = 0.30). There was no difference in mean workplace-based assessment ratings between women and men residents (p = 0.92), and no interaction between resident and faculty gender (p = 0.62). The mean CCERR score was 25.8 (SD = 4.2), indicating average-quality assessments. CONCLUSIONS: We did not find faculty or resident gender differences in the quality of workplace-based assessments completed in our training program. While the literature has previously demonstrated gender bias in trainee assessments, our results are not surprising, as assessment culture varies by institution and program. Our study cautions against generalizing gender bias across contexts and offers an approach that educators can use to evaluate whether gender bias exists in the quality of trainee assessments within their program.
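A sketch of the 2 × 2 mixed-measures ANOVA described above, using pingouin (a tooling assumption; the study does not name its software), with one row per resident-by-faculty-gender cell and hypothetical mean CCERR scores:

```python
# Hedged sketch: mixed ANOVA with resident gender between subjects and
# faculty gender within subjects.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "resident":        [1, 1, 2, 2, 3, 3, 4, 4],
    "resident_gender": ["W", "W", "M", "M", "W", "W", "M", "M"],
    "faculty_gender":  ["W", "M", "W", "M", "W", "M", "W", "M"],
    "ccerr":           [26.1, 25.4, 24.9, 26.0, 25.7, 26.3, 25.2, 24.8],
})

aov = pg.mixed_anova(data=df, dv="ccerr", within="faculty_gender",
                     subject="resident", between="resident_gender")
print(aov)
```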


Subject(s)
Medical Faculty, Internship and Residency, Humans, Male, Female, Clinical Competence, Sexism, Workplace
19.
Med Teach; 34(11): e725-31, 2012.
Article in English | MEDLINE | ID: mdl-23140304

ABSTRACT

BACKGROUND: The quality of medical student and resident clinical evaluation reports submitted by rotation supervisors is a concern. The effectiveness of faculty development (FD) interventions in changing report quality is uncertain. AIMS: This study assessed whether faculty could be trained to complete higher-quality reports. METHOD: A 3-hour interactive program designed to improve evaluation report quality, previously developed and tested locally, was offered at three Canadian medical schools. To assess for a change in report quality, three reports completed by each supervisor before the workshop and all reports completed for 6 months following the workshop were evaluated by three blinded, independent raters using the Completed Clinical Evaluation Report Rating (CCERR), a validated scale that assesses report quality. RESULTS: A total of 22 supervisors from multiple specialties participated. The mean CCERR score for reports completed after the workshop was significantly higher (21.74 ± 4.91 versus 18.90 ± 5.00, p = 0.02). CONCLUSIONS: This study demonstrates that the FD workshop had a positive impact on the quality of participants' evaluation reports, suggesting that faculty can be trained in trainee assessment. This adds to the literature suggesting that FD is an important component of improving assessment quality.
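The before/after comparison reported above could be treated as a paired test on each supervisor's mean CCERR score (an assumption; the abstract does not name the exact test), as in this sketch with invented data:

```python
# Hedged sketch: paired t-test on per-supervisor mean CCERR, pre vs. post workshop.
from scipy.stats import ttest_rel

pre  = [18.2, 17.5, 20.1, 19.0, 18.8, 21.3]  # hypothetical means before workshop
post = [21.0, 20.2, 23.4, 21.1, 21.9, 23.8]  # hypothetical means after workshop

t, p = ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")
```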


Subject(s)
Medical Faculty/organization & administration, Medical Schools/organization & administration, Staff Development/organization & administration, Canada, Humans, Inservice Training
20.
J Telemed Telecare; 28(4): 280-290, 2022 May.
Article in English | MEDLINE | ID: mdl-33657913

ABSTRACT

High-quality correspondence between healthcare providers is critical for effective patient care. We developed an assessment tool to measure the quality of specialist correspondence to primary care providers (PCPs) via electronic consultation (eConsult), in which specialists provide advice without specialist-patient interactions. We incorporated fourteen previously described features of high-quality eConsult correspondence into an assessment tool named the eConsult Specialist Quality of Response (eSQUARE). Six PCPs and two specialists applied the 10-item eSQUARE tool to 30 eConsults of varying quality, as informed by PCP survey data. Content, response process, and internal structure validity evidence was gathered. Psychometric properties were calculated using descriptive statistics and generalizability analyses. The mean total score for low-quality eConsults (M = 24 ± 5.6) was significantly lower than that for moderate-quality eConsults (M = 38 ± 4.7; p < 0.001), which in turn was significantly lower than that for high-quality eConsults (M = 46 ± 3.0; p = 0.002). Reliability measures were high, including the generalizability coefficient (0.96) and inter-item (≥0.55) and item-total correlations (≥0.68). A decision study demonstrated that a single rater was adequate to achieve a reliability of ≥0.70. This study provides initial validity evidence, including multiple reliability measures, for the eSQUARE. A single rater is adequate for formative feedback. Future studies can apply the eSQUARE when planning educational initiatives aimed at improving specialist-to-PCP correspondence via eConsult.
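The decision-study logic mentioned above, projecting reliability for a given number of raters from variance components, can be sketched for a simple person-by-rater design (the components below are hypothetical):

```python
# Hedged sketch: D-study projection of the generalizability coefficient.
def g_coefficient(var_person: float, var_error: float, n_raters: int) -> float:
    """Projected G coefficient when scores are averaged over n_raters."""
    return var_person / (var_person + var_error / n_raters)

# With these assumed components, a single rater already exceeds the 0.70 threshold.
print(round(g_coefficient(var_person=2.4, var_error=0.9, n_raters=1), 2))  # ~0.73
```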


Subject(s)
Remote Consultation, Health Services Accessibility, Humans, Primary Health Care, Referral and Consultation, Reproducibility of Results, Specialization