ABSTRACT
PURPOSE: Contemporary dental literature on evaluating residents in postgraduate education is scarce, with no standardized criteria and little information on technology adoption. This study aims to understand current evaluation methods in dental residency programs and compare them with the existing literature. METHODS: A 22-question survey was sent to program directors of 749 ADA/CODA (American Dental Association/Commission on Dental Accreditation)-accredited postgraduate dental residencies. The questions focused on evaluation frequency, faculty involvement, submission methods, and electronic software usage. RESULTS: The survey response rate was 30.2% (226 responses). Results show that 98% of program directors are involved in evaluations but would like more support from auxiliary faculty. Evaluations are typically submitted two or four times a year, and 12% of programs want increased frequency. Face-to-face delivery of feedback is considered crucial. While desktop computers are widely used for evaluation submission, 55% of programs use mobile devices, which correlates with higher program director satisfaction. The most popular evaluation software includes New Innovations, MedHub, and Qualtrics. Overall, 86.96% of program directors are satisfied with current evaluation practices. Areas needing improvement are non-program-director faculty involvement, resident response/feedback, and software navigation. CONCLUSION: This study found that a majority of program directors in postgraduate dental education use electronic technology for their resident evaluation systems and are satisfied with their current mechanism of evaluation.
Subjects
Internship and Residency, United States, Humans, Program Evaluation, Surveys and Questionnaires, Faculty, Dental Specialties, Graduate Medical Education
ABSTRACT
OBJECTIVE: Intentionally self-driven professional development of surgical resident physicians is a hallmark of surgical training and is expected to gain further traction as Entrustable Professional Activities (EPAs) become the new paradigm for surgical education. We aimed to analyze how surgical residents rate themselves compared with the evaluation of the Clinical Competency Committee using ACGME Milestones Version 1 (M1.0) and Version 2 (M2.0). DESIGN: We asked 22 general surgery trainees to self-evaluate their Milestones (both M1.0 and M2.0) semiannually from 2017 to 2022. ACGME-required Milestone evaluations by the Clinical Competency Committee (CCC) were performed independently after the time window for resident self-evaluation. Neither trainees nor the CCC were aware of the other party's evaluations. There were 1552 paired data points in which individual competencies were evaluated by both trainees and the CCC. Paired Wilcoxon signed-rank tests were then performed on the corresponding pairs. SETTING: MercyOne Des Moines Medical Center, Des Moines, IA; teaching tertiary referral center. PARTICIPANTS: Twenty-two general surgery trainees at this hospital and 28 faculty surgeons participated in this study. RESULTS: The average self-evaluation of surgical residents was lower in the M1.0 cohort than the corresponding CCC evaluation (1.96 ± 0.72 vs. 2.11 ± 0.67; p < 0.001). M1.0 self-assessments and CCC assessments were statistically similar for the ICS (p = 0.548) and PROF (p = 0.554) competencies and differed for MK (p < 0.001), PBLI (p < 0.001), PC (p < 0.001), and SBP (p = 0.008). By contrast, the M2.0 cohort demonstrated higher average self-evaluation of surgical residents compared with the corresponding CCC evaluation (2.75 ± 0.87 vs. 2.12 ± 0.97; p < 0.001). Significant differences were observed for all 6 ACGME competencies using M2.0 self-assessments and CCC assessments (all p < 0.001). Multivariate regression modeling (p < 0.001, R² = 0.255) predicted the degree of discordance between self-assessment and CCC-assessed achievement of competencies, with significant effects of gender (baseline male: coef = -0.232, p < 0.001), PGY level (-0.083 per year, p < 0.001), and Milestone version (0.831, p < 0.001). A significant interaction exists for all gender/Milestone combinations except for female trainees with M1.0. CONCLUSIONS: The difference between self-evaluated Milestone achievement and faculty-driven CCC evaluation of surgical resident physician performance is more evident with Milestones 2.0 than with Milestones 1.0. Residents self-evaluate higher than faculty using Milestones 2.0. This discrepancy is seen among both genders and is more pronounced among male residents, who overestimate core competencies with M2.0 self-evaluation relative to the formal CCC assessment.
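The paired comparison described in this abstract can be illustrated with a short script. This is a minimal sketch, not the authors' analysis: the scores below are hypothetical, and only the paired Wilcoxon signed-rank step is shown.

```python
# Minimal sketch: paired Wilcoxon signed-rank test comparing resident
# self-assessed Milestone scores with CCC scores (hypothetical data;
# the study analyzed 1552 such pairs).
import numpy as np
from scipy.stats import wilcoxon

self_scores = np.array([2.0, 2.5, 3.0, 1.5, 2.0, 3.5, 2.5, 2.0])
ccc_scores = np.array([2.5, 2.5, 2.5, 2.0, 2.5, 3.0, 3.0, 2.5])

stat, p_value = wilcoxon(self_scores, ccc_scores)
print(f"mean self = {self_scores.mean():.2f}, mean CCC = {ccc_scores.mean():.2f}")
print(f"Wilcoxon signed-rank statistic = {stat:.1f}, p = {p_value:.3f}")
```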
Subjects
Clinical Competence, Internship and Residency, Humans, Male, Female, Self-Assessment (Psychology), Diagnostic Self Evaluation, Graduate Medical Education, Educational Measurement, Tertiary Care Centers
ABSTRACT
OBJECTIVE: Given widespread disparities in the surgical workforce and the advent of competency-based training models that rely on objective evaluations of resident performance, this review aims to describe the landscape of bias in the evaluation methods of residents in surgical training programs in the United States. DESIGN: A scoping review was conducted within PubMed, Embase, Web of Science, and ERIC in May 2022, without a date restriction. Studies were screened and reviewed in duplicate by 3 reviewers. Data were described descriptively. SETTING/PARTICIPANTS: English-language studies conducted in the United States that assessed bias in the evaluation of surgical residents were included. RESULTS: The search yielded 1641 studies, of which 53 met inclusion criteria. Of the included studies, 26 (49.1%) were retrospective cohort studies, 25 (47.2%) were cross-sectional studies, and 2 (3.8%) were prospective cohort studies. The majority included general surgery residents (n = 30, 56.6%) and nonstandardized examination modalities (n = 38, 71.7%), such as video-based skills evaluations (n = 5, 13.2%). The most common performance metric evaluated was operative skill (n = 22, 41.5%). Overall, the majority of studies demonstrated bias (n = 38, 73.6%) and most investigated gender bias (n = 46, 86.8%). Most studies reported disadvantages for female trainees regarding standardized examinations (80.0%), self-evaluations (73.7%), and program-level evaluations (71.4%). Four studies (7.6%) assessed racial bias, of which all reported disadvantages for trainees underrepresented in surgery. CONCLUSIONS: Evaluation methods for surgery residents may be prone to bias, particularly with regard to female trainees. Research is warranted regarding other implicit and explicit biases, such as racial bias, as well as for nongeneral surgery subspecialties.
Subjects
General Surgery, Internship and Residency, Humans, Male, Female, United States, Retrospective Studies, Prospective Studies, Clinical Competence, Sexism, General Surgery/education
ABSTRACT
The operating room continues to be the location where surgical residents develop both technical and nontechnical skills, ultimately culminating in their being capable of safe and independent practice. The process of intraoperative instruction is, by necessity, moving away from an apprentice-based model in which skills are acquired somewhat randomly through repeated exposure and evaluation is done in a global, gestalt fashion. Modern surgical education demands that intraoperative instruction be intentional and that evaluation provide formative and summative feedback. This chapter describes some best-practice approaches to intraoperative teaching and evaluation.
Subjects
Clinical Competence, Formative Feedback, General Surgery/education, Internship and Residency/methods, Operative Surgical Procedures/education, Teaching, Humans, United States
ABSTRACT
INTRODUCTION: Decreased rates of general anesthesia (GA) for cesarean section (C-section) create a learning problem for anesthesia trainees. In this context, training in the management of GA for C-section using simulation techniques allows a safe environment for exposure, learning, performance improvement, and capability retention. OBJECTIVE: To analyze anesthesia residents' performance in a simulated clinical case of GA for emergency C-section and identify specific deficits in skill acquisition. METHODS: Between 2015 and 2018, we evaluated the performance of 25 anesthesiology residents challenged by a simulated clinical case of GA for emergency C-section after conclusion of the obstetric anesthesia rotation. Each resident performed the clinical case once, followed by assessment of their performance. Final scores were given according to the completion rate of 14 tasks, ranging from 0% to 100%. Two study groups were defined by residency year for subsequent comparison of results (Group 1: second and third residency years; Group 2: fourth and fifth residency years). RESULTS AND DISCUSSION: The mean score was 64.29% ± 13.62. Group 1 obtained a higher score than Group 2 (70.63% ± 14.02 vs. 60.27% ± 11.94), although the difference was not statistically significant (p = 0.063). The tasks most frequently accomplished were opioid administration (100%), rapid sequence technique (100%), pre-oxygenation (92%), gastric content aspiration prophylaxis (84%), and previous clinical history (84%). Conversely, the tasks least frequently accomplished were confirming the presence of a pediatrician (64%), oxytocin administration (56%), PONV prophylaxis (56%), and preoperative airway assessment (48%). CONCLUSION: The performance of the residents observed in this study was comparable to previously published results. The final score did not depend on residency year.
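The checklist scoring (completion rate over 14 tasks) and the between-group comparison can be sketched as follows. The abstract does not name the statistical test used, so a Mann-Whitney U test is shown as one plausible choice; the helper function and all task counts are hypothetical.

```python
# Minimal sketch: scoring a 14-task simulation checklist as a completion
# percentage and comparing two residency-year groups (hypothetical data;
# the test choice is illustrative, not taken from the study).
import numpy as np
from scipy.stats import mannwhitneyu

N_TASKS = 14

def checklist_score(tasks_completed):
    """Return the completion rate of the 14-task checklist as a percentage."""
    return 100.0 * tasks_completed / N_TASKS

# Hypothetical task counts for Group 1 (PGY2-3) and Group 2 (PGY4-5).
group1 = np.array([checklist_score(t) for t in [11, 10, 9, 12, 10, 8]])
group2 = np.array([checklist_score(t) for t in [9, 8, 10, 7, 9, 8]])

stat, p = mannwhitneyu(group1, group2, alternative="two-sided")
print(f"Group 1 mean = {group1.mean():.2f}%, Group 2 mean = {group2.mean():.2f}%")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```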
Subjects
Anesthesiology, Internship and Residency, General Anesthesia, Anesthesiology/education, Cesarean Section, Clinical Competence, Female, Humans, Pregnancy
ABSTRACT
BACKGROUND: The education of surgical trainees is ideally designed to produce surgeons with both confidence and competence. This involves the development of autonomy in the operating room. Factors associated with autonomy and entrustment have been studied in high-resource settings. In a resource-limited context, where autonomy is solely at the discretion of faculty and there are fewer external constraints to restrict it, we hypothesized that assessment of a trainee's performance would depend on the reported confidence levels of both faculty and trainees in that trainee's abilities. MATERIALS AND METHODS: At a teaching hospital in rural Kenya, operative experience surveys were administered in trainee-faculty dyads to eleven general surgery trainees (PGY1-5) and six faculty immediately following operative procedures in May 2016 to elicit self-reported assessments of confidence, hesitation, and ability as measured by the Zwisch Scale. We examined factors related to learning and used dyadic structural equation models to understand factors related to the assessment of ability. RESULTS: There were 107 paired surveys among 136 trainee evaluations and 130 faculty evaluations. Faculty scrubbed into 76 (72%) cases. Compared with trainees, faculty were more likely to give a higher average score for confidence (4.08 versus 3.90; P value: 0.005), a lower score for hesitation (2.67 versus 2.84; P value: 0.001), and a lower score for the ability to perform the operation independently (2.73 versus 3.02; P value: 0.01). Faculty and trainee perceptions of hesitation influenced their ability scores. Trainee hesitation (OR 12.1; 1.2-127.6, P = 0.04) predicted whether trainees reported experiencing learning. CONCLUSIONS: Between trainees and faculty at a teaching program in rural Kenya, assessment scores of confidence, hesitation, and ability differ in value but remain fairly correlated. Hesitation is predictive of ability assessment, as well as of self-reported learning opportunities. Focusing on identifying when trainees hesitate to proceed with a case may yield important educational opportunities.
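The study's dyadic structural equation models are beyond a short example, but the reported odds ratio for hesitation predicting self-reported learning corresponds to a logistic regression of the kind sketched below. All per-case values are hypothetical.

```python
# Minimal sketch: logistic regression of self-reported learning on trainee
# hesitation, reporting an odds ratio (hypothetical data; the study used
# dyadic structural equation models, which are not reproduced here).
import numpy as np
import statsmodels.api as sm

# Hypothetical per-case data: hesitation score (1-5) and whether the
# trainee reported experiencing learning (1 = yes).
hesitation = np.array([1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 2, 1, 4, 5, 3])
learned = np.array([0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1])

X = sm.add_constant(hesitation.astype(float))
model = sm.Logit(learned, X).fit(disp=False)
odds_ratio = np.exp(model.params[1])
ci_low, ci_high = np.exp(model.conf_int()[1])
print(f"OR per point of hesitation = {odds_ratio:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f}), p = {model.pvalues[1]:.3f}")
```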
Subjects
Developing Countries, Medical Faculty/psychology, General Surgery/education, Self-Assessment (Psychology), Surgeons/psychology, Clinical Competence, Humans, Kenya
ABSTRACT
An important competency for residents developing skills in quality improvement (QI) and patient safety (PS) is to independently carry out an improvement project. The authors describe the development and reliability testing of the Quality Improvement Project Evaluation Rubric (QIPER) for use in rating project presentations in the Department of Veterans Affairs Chief Resident in Quality and Safety Program. QIPER contains 19 items across 6 domains to assess competence in designing, implementing, analyzing results of, and reporting on a QI/PS project. Interrater reliability of the instrument was calculated using the intraclass correlation coefficient (ICC). QIPER scores ranged from 28 to 72 out of a possible 76. QIPER demonstrated good reliability overall (ICC = 0.63). Although further testing is warranted, QIPER shows promise as a tool to assess a comprehensive set of skills involved in conducting QI/PS projects and has the sensitivity to detect varied competence and utility for providing learner feedback.
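Interrater reliability of a rubric such as QIPER is commonly summarized with an intraclass correlation coefficient. The sketch below computes a one-way random-effects ICC(1,1) from ANOVA mean squares on hypothetical ratings; the abstract does not state which ICC form the authors used, so this is illustrative only.

```python
# Minimal sketch: one-way random-effects intraclass correlation, ICC(1,1),
# for interrater reliability of rubric scores (hypothetical ratings).
import numpy as np

# Rows = projects (targets), columns = raters; hypothetical QIPER totals.
scores = np.array([
    [55, 58, 52],
    [40, 44, 41],
    [68, 70, 65],
    [30, 35, 33],
    [60, 57, 62],
])

n, k = scores.shape
grand_mean = scores.mean()
row_means = scores.mean(axis=1)

# One-way ANOVA mean squares.
ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))

icc_1_1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc_1_1:.2f}")
```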
Subjects
Clinical Competence/standards, Quality Improvement/organization & administration, Delivery of Health Care/standards, Graduate Medical Education, Patient Safety/standards, Reproducibility of Results
ABSTRACT
Providing learner-specific didactic instruction to a small group of residents can be a challenge in postdoctoral education programs. To date, no data have been published reporting Team-Based Learning (TBL) outcomes when modified for a small group of resident learners, nor on stratifying the information presented to learners by postgraduate year (PGY). Stratification of the advance assignment appears effective, as each individual resident outperformed their peers on information assigned to their training level. The group performed significantly better on questions pertaining to their assigned reading than on questions from reading assigned to other residents (p = 0.02), owing to the significant difference in PGY1 performance (p = 0.01). Overall performance was similar to traditional TBL, shown by a significantly better group Team Readiness Assurance Test (TRAT) score over Individual Readiness Assurance Test (IRAT) scores (p = 0.01). The Accreditation Council for Graduate Medical Education (ACGME) and Commission on Dental Accreditation (CODA) require evaluation of residents relative to their training level and to their peers. This Small-Group Stratified-Learner TBL (SGSL-TBL) may offer useful resident evaluation tools, providing quantitative data not previously available to small-group resident-training programs through the application of TBL.
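The TRAT-versus-IRAT comparison is a paired design. The abstract does not name the test used, so the sketch below shows a paired t-test on hypothetical percent-correct scores as one plausible approach.

```python
# Minimal sketch: paired comparison of Team vs. Individual Readiness
# Assurance Test scores (hypothetical scores for a small resident group;
# the test choice is illustrative, not taken from the study).
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical percent-correct scores for the same residents.
irat = np.array([60.0, 70.0, 65.0, 55.0, 75.0, 68.0])
trat = np.array([80.0, 85.0, 82.0, 78.0, 90.0, 84.0])

stat, p = ttest_rel(trat, irat)
print(f"TRAT mean = {trat.mean():.1f}, IRAT mean = {irat.mean():.1f}")
print(f"paired t = {stat:.2f}, p = {p:.4f}")
```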
ABSTRACT
Introduction Faculty are required to assess the development of residents using educational milestones. This descriptive study examined the end-of-rotation milestone-based evaluations of anesthesiology residents by rotation faculty directors. The goals were to measure: (1) how many of the 25 Accreditation Council for Graduate Medical Education (ACGME) anesthesiology subcompetency milestones were included in each of the residency's rotation evaluations, (2) the percentage of evaluations sent to the rotation director that were actually completed by the director, (3) the length of time between the end of the residents' rotations and completion of the evaluations, (4) the frequency of straight line scoring, defined as the resident receiving the same milestone level score for all subcompetencies on the evaluation, and (5) how often a resident received a score below a Level 4 in at least one subcompetency in the three months prior to graduating. Methods In 2013, the directors of each of the 24 anesthesia rotations in the Stanford University School of Medicine Anesthesiology Residency Program created new milestone-based evaluations to be used at the end of rotations to evaluate residents. The directors selected the subcompetencies from the list released by the ACGME that were most appropriate for their rotation. End-of-rotation evaluations for post-graduate year (PGY)-2 to PGY-4 residents from July 1, 2014 to June 30, 2017 were retrospectively analyzed for a sample of 10 residents randomly selected from the 22 residents in the graduating class. Results The mean number of subcompetencies evaluated by each of the 24 rotations in the residency was 17.88 (standard deviation (SD): 3.39, range 10-24, median 18.5) of the possible total of 25 subcompetencies. Three subcompetencies (medical knowledge, communication with patients and families, and coordination of patient care within the healthcare system) were included in the evaluation instruments of all 24 rotations. The three least frequently listed subcompetencies were: "acute, chronic, and cancer-related pain consultation/management" (25% of rotations had this on the end-of-rotation evaluation), "triage and management of critically ill patient in non-operative setting" (33%), and "education of patient, families, students, residents, and others" (38%). Overall, 418 end-of-rotation evaluations were issued and 341 (82%) completed, with 63% completed within one month, 22% between months one and two, and 15% after two months. The frequency of straight line scoring varied, from never occurring (0%) in three rotations to always occurring (100%) in two rotations, with an overall average of 51% (SD: 33%). Sixty-one percent of straight line scoring corresponded to the residents' postgraduate year whereby, for example, a post-graduate year two resident received an ACGME Level 2 proficiency for all subcompetencies. Thirty-one percent of the straight line scoring was higher than the resident's year of training (e.g., a PGY-2 received Level 3 or higher for all the subcompetencies). The remaining 7% of straight line scoring was below the expected level for the year of training. Three of seven residents had at least one subcompetency rated as below a Level 4 on one of the evaluations during the three months prior to finishing residency. Conclusion Formal analysis of a residency program's end-of-rotation milestone evaluations may uncover opportunities to improve competency-based evaluations.
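Straight line scoring, as defined in this abstract, is straightforward to detect programmatically. A minimal sketch with hypothetical evaluations and an illustrative helper function:

```python
# Minimal sketch: flagging "straight line scoring" on an end-of-rotation
# evaluation, i.e., the same milestone level for every subcompetency
# (hypothetical evaluations keyed by subcompetency abbreviation).
def is_straight_line(evaluation):
    """Return True if all subcompetency scores on the evaluation are equal."""
    scores = list(evaluation.values())
    return len(set(scores)) == 1

evals = [
    {"MK": 2.0, "PC": 2.0, "ICS": 2.0, "SBP": 2.0},  # straight line scoring
    {"MK": 3.0, "PC": 2.5, "ICS": 3.0, "SBP": 2.5},  # not straight line
]
flagged = sum(is_straight_line(e) for e in evals)
print(f"{flagged} of {len(evals)} evaluations show straight line scoring "
      f"({100.0 * flagged / len(evals):.0f}%)")
```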
ABSTRACT
OBJECTIVE: Identifying gaps in medical knowledge, patient management, and procedural competence is difficult early in surgical residency. We designed and implemented an end-of-year examination for our postgraduate year 1 residents, entitled the Surgical Trainee Assessment of Readiness (STAR). Our objective in this study was to determine whether STAR scores correlated with other available indicators of resident performance, such as the American Board of Surgery In-Training Examination (ABSITE) and Milestone scores, and whether they provided evidence of additional discriminatory value. STUDY DESIGN: Overall and component scores of the STAR exam were compared to the ABSITE and Milestone assessment scores for the 17 categorical residents who took the exam in 2016 and 2017. SETTING: Harbor-UCLA Medical Center, a university-affiliated academic medical center. PARTICIPANTS: Seventeen categorical general surgery residents. RESULTS: The STAR Total Test Score (β = 2.77, p = 0.006) was an independent predictor of the ABSITE taken the same year, and components of the STAR were independent predictors of the ABSITE taken the following year. The STAR Total Test Score was lowest in the 3 residents who had at least 1 low Milestone score assessed in the same year, and 2 of these 3 residents had at least 1 low Milestone score assigned the year after STAR. Lastly, the Patient Care 1 and 2 Milestones assessed in the same year as STAR were uniformly scored as appropriate for level of training, yet the corresponding STAR component for those milestones demonstrated 3 residents as having deficiencies. CONCLUSIONS: We have created a multifaceted, standardized STAR exam that correlates with performance on the ABSITE and early milestone scores. It also appears to discriminate resident performance where milestone assessments do not. Further evaluation of the STAR exam with longer-term follow-up is needed to confirm these initial findings.
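The relationship between STAR Total Test Score and ABSITE performance can be illustrated with a simple linear regression. The study reports the score as an independent predictor from a fuller model; the sketch below shows only a univariable fit on hypothetical values.

```python
# Minimal sketch: regressing ABSITE performance on STAR Total Test Score
# (hypothetical scores; a simple, not multivariable, regression is shown).
import numpy as np
from scipy.stats import linregress

star_total = np.array([40, 55, 62, 48, 70, 58, 65, 52, 45, 60], dtype=float)
absite = np.array([35, 52, 60, 44, 72, 55, 68, 50, 40, 58], dtype=float)

fit = linregress(star_total, absite)
print(f"beta = {fit.slope:.2f}, p = {fit.pvalue:.4f}, r^2 = {fit.rvalue**2:.2f}")
```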
Subjects
Clinical Competence/standards, General Surgery/education, Internship and Residency/standards, Time Factors, Training Support, United States
ABSTRACT
INTRODUCTION: In July 2014, US residency programs fully implemented the Next Accreditation System, including the use of milestone evaluation and reporting. To date, there has been little investigation into the results of implementing this new system. Therefore, this study sought to evaluate perceptions of Obstetrics and Gynecology residents and program directors regarding the use of milestone-based feedback and to identify areas of deficiency. METHODS: A Web-based survey was sent to US-based Obstetrics and Gynecology residents and program directors regarding milestone-based assessment implementation. RESULTS: Out of 245 program directors, 84 responded to our survey (34.3% response rate). Of responding program directors, most reported that milestone-based feedback was useful (74.7%), fair (83.0%), and accurate (76.5%); however, they found it administratively burdensome (78.1%). Residents felt that milestone-based feedback was useful (62.7%) and fair (70.0%). About 64.3% of residents and 74.7% of program directors stated that milestone-based feedback is an effective tool to track resident progression; however, a sizable minority of both groups believe that it does not capture surgical aptitude. Qualitative analysis of free-response comments was largely negative and highlighted the administrative burden and lack of accuracy of milestone-based feedback. CONCLUSION: Overall, both Obstetrics and Gynecology program directors and residents report that milestone-based feedback is useful and fair. Issues of administrative burden, timeliness, evaluation of surgical aptitude, and ability to act on assigned milestone levels were identified. Although this study is limited to one specialty, such issues are likely important to all residents, faculty, and program directors who have implemented the Next Accreditation System requirements.
ABSTRACT
BACKGROUND: Operative rating tools can enhance performance assessment in surgical training. However, assessments completed late may have questionable reliability. We evaluated the reliability of assessments according to evaluation time-to-completion. METHODS: We stratified assessments from MileMarker's™ Operative Entrustability Assessment by evaluation time-to-completion, using the concordance correlation coefficient (CCC) between self-assessment and evaluator scores as a measure of reliability. RESULTS: Overall, self-assessment and evaluator scores were strongly correlated (CCC = 0.72; p < 0.001) though self-assessments were slightly higher (p = 0.048). Reliability remained stable for evaluations completed within 0 days (CCC = 0.77; p < 0.001), 1-3 days (CCC = 0.73; p < 0.001), and 4-13 days after surgery (CCC = 0.69; p < 0.001), but dropped for evaluations completed within 14-38 days (CCC = 0.60; p < 0.001) and over 38 days (CCC = 0.54; p < 0.001) after surgery. There was strong evidence for an interaction between time-to-completion and reliability (p < 0.001). CONCLUSIONS: Our data support the reliability of assessments completed until 2 weeks after surgery. This finding may help refine the interpretation of evaluation scores as surgical specialties move toward competency-based accreditation.
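The concordance correlation coefficient (Lin's CCC) used here as the reliability measure can be computed directly from paired scores. A minimal sketch with hypothetical ratings and an illustrative helper function:

```python
# Minimal sketch: Lin's concordance correlation coefficient (CCC) between
# self-assessment and evaluator scores (hypothetical paired scores).
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient for paired ratings."""
    mean_x, mean_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()  # population variances
    covariance = ((x - mean_x) * (y - mean_y)).mean()
    return 2 * covariance / (var_x + var_y + (mean_x - mean_y) ** 2)

self_scores = np.array([3.0, 4.0, 2.0, 5.0, 3.0, 4.0, 2.0, 4.0])
evaluator = np.array([3.0, 3.0, 2.0, 4.0, 3.0, 4.0, 3.0, 4.0])
print(f"CCC = {concordance_ccc(self_scores, evaluator):.2f}")
```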
Subjects
Clinical Competence, Graduate Medical Education, General Surgery/education, Knowledge of Results (Psychology), Humans, Reproducibility of Results, Retrospective Studies, Self-Assessment (Psychology), Time Factors
ABSTRACT
OBJECTIVE: Operative performance feedback is essential for surgical training. We aimed to understand surgical trainees' views on their operative performance feedback needs and to characterize feedback to elucidate factors affecting its value from the resident perspective. DESIGN: Using a qualitative research approach, 2 research fellows conducted semistructured, one-on-one interviews with surgical trainees. We analyzed recurring themes generated during interviews related to feedback characteristics, as well as the extent to which performance rating tools can help meet trainees' operative feedback needs. SETTING: Departments or divisions of general or plastic surgery at 9 US academic institutions. PARTICIPANTS: Surgical residents and clinical fellows in general or plastic surgery. RESULTS: We conducted 30 interviews with 9 junior residents, 14 senior residents, and 7 clinical fellows. Eighteen (60%) participants were in plastic surgery and 12 (40%) were in general surgery. Twenty-four participants (80%) reported feedback as very or extremely important during surgical training. All trainees stated that verbal, face-to-face feedback is the most valuable, especially if it occurs during (92%) or immediately after (65%) cases. Of the trainees using performance rating tools (74%), most (57%) expressed positive views about them but wanted the tools to complement, not replace, verbal feedback in surgical education. Trainees value feedback more if it is received within 1 week of the case. CONCLUSIONS: Verbal, face-to-face feedback is very or extremely important to surgical trainees. Residents and fellows prefer to receive feedback during or immediately after a case and continue to value feedback received within 1 week of the event. Performance rating tools can be useful for providing formative feedback and documentation but should not replace verbal, face-to-face feedback. Considering trainee views on feedback may help reduce perceived gaps between feedback demand and supply in surgical training, which may be essential to overcoming current challenges in surgical education.
Subjects
Clinical Competence, Fellowships and Scholarships, Formative Feedback, General Surgery/education, Internship and Residency, Plastic Surgery/education, Female, Humans, Male, Needs Assessment, Qualitative Research
ABSTRACT
OBJECTIVES: The Accreditation Council for Graduate Medical Education (ACGME) requires residency programs to assess communication skills and provide feedback to residents. We aimed to develop a feasible data collection process that generates objective clinical performance information to guide training activities, inform ACGME milestone evaluations, and validate assessment instruments. DESIGN: Residents care for patients in the surgical clinic and in the hospital, and participate in a communication curriculum providing practice with standardized patients (SPs). We measured perception of resident communication using the 14-item Communication Assessment Tool (CAT), collecting data from patients at the surgery clinic and surgical wards in the hospital, and from SP encounters during simulated training scenarios. We developed a handout of CAT example behaviors to guide patients completing the communication assessment. SETTING: Independent academic medical center. PARTICIPANTS: General surgery residents. RESULTS: The primary outcome is the percentage of total items patients rated "excellent"; we collected data on 24 of 25 residents. Outpatient evaluations resulted in significantly higher scores (mean 84.5% vs. 68.6%, p < 0.001), and female patients provided higher ratings that approached statistical significance (mean 85.2% vs. 76.7%, p = 0.084). In multivariate analysis, after controlling for patient gender, visit reason, and race, (1) residents' CAT scores from SPs in simulation were independently associated with communication assessments in their concurrent patient population (p = 0.017), and (2) receiving the CAT example-behaviors handout was associated with a 9.3% lower percentage of excellent ratings (p = 0.047). CONCLUSIONS: Our data collection process provides a model for obtaining meaningful information about resident communication proficiency. CAT evaluations of surgical residents by the inpatient population had not previously been described in the literature; our results provide important insight into relationships between the evaluations provided by inpatients, clinic patients, and SPs in simulation. Our example-behaviors guide shows promise for addressing a common concern: minimizing ceiling effects when measuring physician-patient communication.
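The primary outcome, the percentage of CAT items a patient rated "excellent," can be computed as below. The encounter ratings and the helper function are hypothetical; the 14-item count follows the abstract.

```python
# Minimal sketch: scoring a 14-item CAT encounter as the percentage of
# items rated "excellent" (5 on the 1-5 scale); data are hypothetical.
import numpy as np

EXCELLENT = 5

def percent_excellent(item_ratings):
    """Fraction of CAT items rated 'excellent', as a percentage."""
    ratings = np.asarray(item_ratings)
    return 100.0 * (ratings == EXCELLENT).mean()

encounter = [5, 5, 4, 5, 5, 5, 3, 5, 5, 4, 5, 5, 5, 5]  # 14 CAT items
print(f"CAT score = {percent_excellent(encounter):.1f}% of items excellent")
```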
Subjects
General Surgery/education, Interdisciplinary Communication, Patient-Centered Care/methods, Physician-Patient Relations, Academic Medical Centers, Accreditation, Adult, California, Clinical Clerkship, Communication, Curriculum, Graduate Medical Education/methods, Educational Measurement, Female, Humans, Internship and Residency/methods, Linear Models, Male, Patient Simulation, Quality Improvement, Nonparametric Statistics
ABSTRACT
OBJECTIVE: The Accreditation Council for Graduate Medical Education requires accredited residency programs to implement competency-based assessments of medical trainees based upon nationally established Milestones. Clinical competency committees (CCC) are required to prepare biannual reports using the Milestones and ensure reporting to the Accreditation Council for Graduate Medical Education. Previous research demonstrated a strong correlation between CCC and resident scores on the Milestones at 1 institution. We sought to evaluate a national sampling of general surgery residency programs and hypothesized that CCC and resident assessments are similar. DESIGN: Details regarding the makeup and process of each CCC were obtained. Major disparities were defined as an absolute mean difference of ≥0.5 on the 4-point scale. A negative assessment disparity indicated that the residents evaluated themselves at a lower level than did the CCC. Statistical analysis included Wilcoxon rank-sum and sign tests. SETTING: CCCs and categorical general surgery residents from 15 residency programs completed the Milestones document independently during the spring of 2016. RESULTS: Overall, 334 residents were included; 44 (13%) and 43 (13%) residents scored themselves ≥0.5 points higher and lower than the CCC, respectively. Female residents scored themselves a mean of 0.08 points lower, and male residents scored themselves a mean of 0.03 points higher, than the CCC. Median assessment differences for postgraduate year (PGY) 1-5 were 0.03 (range: -0.94 to 1.28), -0.11 (range: -1.22 to 1.22), -0.08 (range: -1.28 to 0.81), 0.02 (range: -0.91 to 1.00), and -0.19 (range: -1.16 to 0.50), respectively. Residents in university vs. independent programs had higher rates of negative assessment differences in the medical knowledge (15% vs. 6%; P = 0.015), patient care (17% vs. 5%; P = 0.002), professionalism (23% vs. 14%; P = 0.013), and system-based practice (18% vs. 9%; P = 0.031) competencies. Major assessment disparities by sex or PGY were similar among individual competencies. CONCLUSIONS: Surgery residents in this national cohort demonstrated self-awareness when compared to assessments by their respective CCCs. This was independent of program type, sex, or level of training. PGY 5 residents, female residents, and those from university programs consistently rated themselves lower than the CCC, but these were not major disparities and the significance of this is unclear.
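The assessment-difference and major-disparity definitions above translate directly into code. A minimal sketch with hypothetical mean milestone scores:

```python
# Minimal sketch: computing resident-minus-CCC assessment differences on
# the 4-point milestone scale and flagging major disparities (|diff| >= 0.5);
# all values are hypothetical.
import numpy as np

resident_mean = np.array([2.6, 3.1, 2.0, 3.4, 2.8])
ccc_mean = np.array([2.9, 3.0, 2.7, 3.2, 2.8])

diff = resident_mean - ccc_mean           # negative = resident rated lower
major_positive = diff >= 0.5              # resident >= 0.5 above CCC
major_negative = diff <= -0.5             # resident >= 0.5 below CCC
print(f"median difference = {np.median(diff):+.2f}")
print(f"major positive disparities: {major_positive.sum()}, "
      f"major negative disparities: {major_negative.sum()}")
```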
Subjects
Accreditation, Clinical Competence, Graduate Medical Education/methods, General Surgery/education, Self-Assessment (Psychology), Advisory Committees, Cohort Studies, Competency-Based Education, Female, Humans, Internship and Residency/methods, Male, Prospective Studies, United States
ABSTRACT
BACKGROUND: In 2013, we developed the Operative Entrustability Assessment (OEA) to facilitate evaluation and documentation of resident operative skills. This web-based tool provides real-time, transparent feedback to residents on operative performance. This study evaluated the construct validity of the OEA by assessing its association with operative time. METHODS: We used simple and multiple linear regression to estimate associations between OEA scores and operative time for selected procedures. RESULTS: OEAs were completed for 93 autologous breast reconstructions and 185 hand procedures. Higher self-assessed OEA scores were associated with shorter operative time in breast (p = 0.008) and hand (p = 0.036) cases. Higher evaluator OEA scores were associated with shorter operative time in breast cases (p = 0.018) but not hand cases (p = 0.377). Post-graduate year was not associated with operative time. CONCLUSIONS: The OEA demonstrates construct validity: increasing scores are associated with shorter operative time and are better predictors of operative time than post-graduate year, making the OEA an option for documenting competence prior to graduation.
Subjects
Clinical Competence, Educational Measurement/methods, Hand/surgery, Internship and Residency, Mammaplasty, Operative Time, Graduate Medical Education, Feedback, Female, Humans, Male, Maryland, Middle Aged, Retrospective Studies, Plastic Surgery/education
ABSTRACT
OBJECTIVE: This study was conducted to assess the effectiveness of a newly implemented electronic web-based review system created at our institution for evaluating resident performance relative to established milestones. DESIGN: Retrospective review of data collected from a survey of general surgery faculty and residents. SETTING: Tertiary care teaching hospital system and independent academic medical center. PARTICIPANTS: A total of 12 general surgery faculty and 17 general surgery residents participated in this study. The survey queried the level of satisfaction before and after the adoption of QuickNotes using several statements scored on a 5-point scale, with 1 being the lowest rating ("not satisfied") and 5 the highest ("completely satisfied"). RESULTS: The weighted average improvements from pre- to post-QuickNotes implementation for the faculty responding to the survey ranged from 10% to 40%; weighted average improvements for the residents responding to the survey ranged from 5% to 73%. Both the pre- and post-implementation weighted averages for the faculty survey tended to be higher than those for the residents' survey responses. The highest rated topic was the faculty's level of satisfaction with the "frequency to provide feedback," with a post-QuickNotes implementation weighted average of 4.25, closely followed by the residents' level of satisfaction with "evaluation includes positive feedback," with a post-QuickNotes implementation weighted average of 4.24. The most notable increases in weighted averages from preimplementation to postimplementation were noted for "overall satisfaction" (20% increase for faculty, 37% for residents), "reflects actual criteria that matter" (36% increase for faculty, 73% for residents), faculty "opportunity for follow-up" (increase of 40%), resident "reflects overall trends" (increase of 37%), and resident "provides new information about my performance" (increase of 37%). CONCLUSIONS: Our institutional adoption of QuickNotes into the resident evaluation process has been associated with an overall increased level of satisfaction in the evaluation process by both faculty and residents. The design of QuickNotes facilitates its integration into the resident training environment, as it is web based, easy to use, and has no additional cost over the standard New Innovations subscription. Although it is designed to capture snapshots of trainee behavior and performance, monthly reports through QuickNotes can be used effectively in conjunction with more traditional end-of-rotation evaluations to show trends, identify areas of strength that should be reinforced, demonstrate areas needing improvement, allow a more tailored individual education plan to be developed, and permit a more accurate determination of milestone progression.
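The abstract does not spell out how the weighted average of the 5-point responses was computed; the sketch below shows one plausible reading (response counts weighted by Likert score) with hypothetical counts and an illustrative helper function.

```python
# Minimal sketch: weighted average of 5-point Likert responses for one
# survey item before and after QuickNotes adoption, plus the percent
# improvement (hypothetical response counts; one plausible definition).
def weighted_average(counts_by_score):
    """counts_by_score maps Likert score (1-5) to number of responses."""
    total = sum(counts_by_score.values())
    return sum(score * n for score, n in counts_by_score.items()) / total

pre = {1: 2, 2: 4, 3: 5, 4: 3, 5: 1}    # hypothetical pre-implementation
post = {1: 0, 2: 1, 3: 4, 4: 6, 5: 4}   # hypothetical post-implementation

wa_pre, wa_post = weighted_average(pre), weighted_average(post)
improvement = 100.0 * (wa_post - wa_pre) / wa_pre
print(f"pre = {wa_pre:.2f}, post = {wa_post:.2f}, improvement = {improvement:.0f}%")
```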
Subjects
Clinical Competence, Formative Feedback, General Surgery/education, Internet, Internship and Residency/organization & administration, Academic Medical Centers, Cross-Sectional Studies, Graduate Medical Education/methods, Female, Humans, Male, Hospital Medical Staff/statistics & numerical data, Retrospective Studies, Tertiary Care Centers, United States
ABSTRACT
OBJECTIVES: Ultrasound (US) is vital to modern emergency medicine (EM). Across residencies, there is marked variability in US training. The "goal-directed focused US" part of the Milestones Project states that trainees must correctly acquire and interpret images to achieve a level 3 milestone. Standardized methods by which programs teach these skills have not been established. Our goal was to determine whether residents could achieve level 3 with or without a dedicated US rotation. METHODS: Thirty-three first- and second-year residents were assigned to control (no rotation) and intervention (US rotation) groups. The intervention group underwent a 2-week curriculum in vascular access, the aorta, echocardiography, focused assessment with sonography for trauma, and pregnancy. To test acquisition, US-trained emergency medicine physicians administered an objective structured clinical examination. To test interpretation, residents had to identify normal versus abnormal findings. Mixed-model logistic regression tested the association of a US rotation while controlling for confounders: weeks in the emergency department (ED) as a resident, medical school US rotation, and postgraduate year. RESULTS: For image acquisition, medical school US rotation and weeks in the ED as a resident were significant (P = .03; P = .04), whereas completion of a US rotation and postgraduate year were not. For image interpretation, weeks in the ED as a resident was the only significant predictor of performance (P = .002), whereas completion of a US rotation and medical school US rotation were not significant. CONCLUSIONS: For achieving a level 3 milestone, weeks in the ED as a resident was a significant predictor of mastering both image acquisition and interpretation. A dedicated US rotation did not have a significant effect. A medical school US rotation had a significant effect on image acquisition but not interpretation. Further studies are needed to best assess methods for meeting US milestones.
Subjects
Clinical Competence/statistics & numerical data, Emergency Medicine/education, Hospital Emergency Service/statistics & numerical data, Internship and Residency/methods, Ultrasound/education, Humans, Single-Blind Method, Time Factors
ABSTRACT
OBJECTIVE: The Accreditation Council for Graduate Medical Education requires accredited general surgery residencies to implement competency-based developmental outcomes in resident evaluations. Overall, 16 milestones are evaluated by a clinical competency committee (CCC). The milestones span 8 domains of surgical practice and 6 Accreditation Council for Graduate Medical Education clinical competencies. The highest level suggests preparedness for independent practice. Our objective was to compare self-assessments and committee evaluations within the milestone framework. STUDY DESIGN: All residents underwent semiannual evaluations from 2013 to 2015. Residents independently completed a self-assessment using the milestones. The CCC completed the milestones document using resident evaluations and consensus opinion of committee members. Assessment differences were calculated for each evaluation. A negative value indicated that the residents evaluated themselves at a lower level than the committee. Major assessment disparities were defined as >0.5 on a 4-point scale. SETTING: An independent academic medical center. PARTICIPANTS: General surgery residents. RESULTS: Overall, 20 residents participated; 7 were female. In total, 5 (7%) evaluations had a mean overall assessment difference >0.5, whereas 6 (8%) had a difference <-0.5. Residents evaluated themselves lower than the committee with a median assessment difference of -0.06 [-0.25 to 0.16] (p = 0.041). Evaluations were similar across surgical domains. Negative self-evaluations were more common for medical knowledge (-0.25 [-0.25 to 0.25], p = 0.025). Female residents had 2% positive and 13% negative major assessment disparity rates versus 10% positive and 9% negative rates among male residents. Postgraduate year III residents had 12% positive and 4% negative major disparity rates; all other years had higher negative than positive rates. CONCLUSIONS: Surgery residents within our program demonstrated adequate self-awareness, with most self-evaluations falling within a half level of the CCC report. This self-awareness was consistent across surgical domains and most clinical competencies. Residents perceived a lower level of medical knowledge than the CCC. Subgroup analysis revealed interesting trends in the effects of sex, postgraduate year level, and academic year timing, which will take additional study to fully delineate.
Subjects
Accreditation, Clinical Competence, General Surgery/education, Internship and Residency/organization & administration, Interprofessional Relations, Hospital Medical Staff/statistics & numerical data, Academic Medical Centers/organization & administration, Adult, Committee Membership, Competency-Based Education, Female, Humans, Male, Self-Assessment (Psychology)
ABSTRACT
OBJECTIVES: To assess resident cataract surgery outcomes at an academic teaching institution using 2 Physician Quality Reporting System (PQRS) cataract measures, which are intended to serve as a proxy for quality of surgical care. DESIGN: A retrospective review comparing cataract surgery outcomes of resident and attending surgeries using 2 PQRS measures: (1) 20/40 or better best-corrected visual acuity following cataract surgery and (2) complications within 30 days following cataract surgery requiring additional surgical procedures. SETTING: An academic ophthalmology center. PARTICIPANTS: A total of 2487 surgeries performed at the Massachusetts Eye and Ear Infirmary from January 1, 2011 to December 31, 2012 were included in this study. RESULTS: Of all 2487 cataract surgeries, 98.95% achieved a vision of at least 20/40 at or before 90 days, and only 0.64% required a return to the operating room for postoperative complications. Of resident surgeries, 98.9% (1370 of 1385) achieved 20/40 vision at or before 90 days of follow-up. Of attending surgeries, 99.0% (1091 of 1102) achieved 20/40 vision at or before 90 days (p = 1.00). There were no statistically significant differences between resident and attending cases regarding postoperative complications needing a return to the operating room (0.65%, or 9 of 1385 resident cases vs. 0.64%, or 7 of 1102 attending cases; p = 1.00). CONCLUSIONS: Using PQRS Medicare cataract surgery criteria, this study establishes new benchmarks for cataract surgery outcomes at a teaching institution and a supplemental measure for assessing resident surgical performance. Excellent cataract outcomes were achieved at an academic teaching institution, with results exceeding Medicare thresholds of 50%. There appears to be no significant difference between supervised trainee and attending cataract surgeon outcomes using 2 PQRS measures currently used by Medicare to determine physician reimbursement and quality of care.
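The abstract does not state which test produced its p-values; applying Fisher's exact test to the reported return-to-OR counts is one way to reproduce the comparison, as sketched below.

```python
# Minimal sketch: comparing resident vs. attending return-to-OR complication
# rates with Fisher's exact test, using the counts reported in the abstract
# (the abstract does not state which test produced its p-value).
from scipy.stats import fisher_exact

table = [
    [9, 1385 - 9],    # resident cases: complications, no complications
    [7, 1102 - 7],    # attending cases: complications, no complications
]
odds_ratio, p = fisher_exact(table)
print(f"resident rate = {100 * 9 / 1385:.2f}%, attending rate = {100 * 7 / 1102:.2f}%")
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.2f}")
```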