1 - 20 of 38
1.
Acad Med ; 99(5): 518-523, 2024 May 01.
Article En | MEDLINE | ID: mdl-38285547

PROBLEM: Competency-based medical education is increasingly regarded as a preferred framework for physician training, but implementation is limited. U.S. residency programs remain largely time based, with variable assessments and limited opportunities for individualization. Gaps in graduates' readiness for unsupervised care have been noted across specialties. Logistical barriers and regulatory requirements constrain movement toward competency-based, time-variable (CBTV) graduate medical education (GME), despite its theoretical benefits. APPROACH: The authors describe a vision for CBTV-GME and an implementation model that can be applied across specialties. Termed "Promotion in Place" (PIP), the model relies on enhanced assessment, clear criteria for advancement, and flexibility to adjust individuals' responsibilities and time in training based on demonstrated competence. PIP allows a resident's graduation to be advanced or delayed accordingly. Residents deemed competent for early graduation can transition to attending physician status within their training institution and benefit from a period of "sheltered independence" until the standard graduation date. Residents who need extended time to achieve competency have graduation delayed to incorporate additional targeted education. OUTCOMES: A proposal to pilot the PIP model of CBTV-GME received funding through the American Medical Association's "Reimagining Residency" initiative in 2019. Ten of 46 residency programs in a multihospital system expressed interest and pursued initial planning. Seven programs withdrew for reasons including program director transitions, uncertainty about resident reactions, and the COVID-19 pandemic. Three programs petitioned their specialty boards for exemptions from time-based training. One program was granted the needed exemption and launched a PIP pilot, now in year 4, demonstrating the feasibility of implementing this model. Implementation tools and templates are described. 
NEXT STEPS: Larger-scale implementation with longer-term assessment is needed to evaluate the impact and generalizability of this CBTV-GME model.


COVID-19 , Clinical Competence , Competency-Based Education , Education, Medical, Graduate , Internship and Residency , Humans , Education, Medical, Graduate/methods , Competency-Based Education/methods , United States , COVID-19/epidemiology , SARS-CoV-2 , Time Factors , Models, Educational
2.
J Pediatr ; 255: 58-64.e6, 2023 04.
Article En | MEDLINE | ID: mdl-37081778

OBJECTIVE: To address gaps in routine recommended care for children with Down syndrome through quality improvement during the coronavirus disease 2019 (COVID-19) pandemic. STUDY DESIGN: A retrospective chart review of patients with Down syndrome was conducted. Records of visits to the Massachusetts General Hospital Down Syndrome Program were assessed for adherence to 5 components of the 2011 American Academy of Pediatrics (AAP) Clinical Report, "Health Supervision for Children with Down Syndrome." The impact of 2 major changes was analyzed using statistical process control charts: a planned intervention of integrations to the electronic health record for routine health maintenance with age-based logic based on a diagnosis of Down syndrome, created and implemented in July 2020; and a natural disruption in care due to the COVID-19 pandemic, starting in March 2020. RESULTS: From December 2018 to March 2022, 433 patients with Down syndrome had 940 visits. During the COVID-19 pandemic, adherence to the audiology component decreased (from 58% to 45%, P < .001); composite adherence decreased but later improved. Ophthalmology evaluation remained stable. Improvement in adherence to 3 components (thyroid-stimulating hormone, hemoglobin, sleep study ever) in July 2020 coincided with electronic health record integrations. Total adherence to the 5 AAP guideline components was greater for follow-up visits compared with new patient visits (69% and 61%, respectively; P < .01). CONCLUSIONS: The COVID-19 pandemic influenced adherence to components of the AAP "Health Supervision for Children with Down Syndrome" clinical report, but improvements in adherence coincided with implementation of our intervention and reopening after the COVID-19 pandemic.
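Statistical process control analyses of adherence proportions, as described above, commonly use a p-chart with 3-sigma control limits. A minimal sketch of that limit calculation follows; the adherence value and subgroup size are hypothetical, since the abstract does not report per-period denominators.

```python
import math

def p_chart_limits(p_bar: float, n: int) -> tuple[float, float]:
    """3-sigma control limits for a proportion (p-chart).

    p_bar: average adherence proportion across subgroups
    n: subgroup size (e.g., visits per time period) -- hypothetical here
    """
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    # Proportions are clamped to the [0, 1] interval.
    lower = max(0.0, p_bar - 3 * sigma)
    upper = min(1.0, p_bar + 3 * sigma)
    return lower, upper

# Hypothetical example: 58% average audiology adherence, 25 visits per period.
low, high = p_chart_limits(0.58, 25)
```

Points falling outside these limits (or sustained runs on one side of the center line) signal special-cause variation, which is how a shift such as the pandemic-era drop in audiology adherence would be detected.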


COVID-19 , Down Syndrome , Child , Humans , COVID-19/epidemiology , Pandemics , Electronic Health Records , Down Syndrome/epidemiology , Down Syndrome/therapy , Down Syndrome/diagnosis , Retrospective Studies , Guideline Adherence
3.
J Gen Intern Med ; 37(9): 2280-2290, 2022 07.
Article En | MEDLINE | ID: mdl-35445932

Assessing residents and clinical fellows is a high-stakes activity. Effective assessment is important throughout training so that identified areas of strength and weakness can guide educational planning to optimize outcomes. Assessment has historically been underemphasized although medical education oversight organizations have strengthened requirements in recent years. Growing acceptance of competency-based medical education and its logical extension to competency-based time-variable (CB-TV) graduate medical education (GME) further highlights the importance of implementing effective evidence-based approaches to assessment. The Clinical Competency Committee (CCC) has emerged as a key programmatic structure in graduate medical education. In the context of launching a multi-specialty pilot of CB-TV GME in our health system, we have examined several programs' CCC processes and reviewed the relevant literature to propose enhancements to CCCs. We recommend that all CCCs fulfill three core goals, regularly applied to every GME trainee: (1) discern and describe the resident's developmental status to individualize education, (2) determine readiness for unsupervised practice, and (3) foster self-assessment ability. We integrate the literature and observations from GME program CCCs in our institutions to evaluate how current CCC processes support or undermine these goals. Obstacles and key enablers are identified. Finally, we recommend ways to achieve the stated goals, including the following: (1) assess and promote the development of competency in all trainees, not just outliers, through a shared model of assessment and competency-based advancement; (2) strengthen CCC assessment processes to determine trainee readiness for independent practice; and (3) promote trainee reflection and informed self-assessment. The importance of coaching for competency, robust workplace-based assessments, feedback, and co-production of individualized learning plans are emphasized.
Individual programs and their CCCs must strengthen assessment tools and frameworks to realize the potential of competency-oriented education.


Clinical Competence , Internship and Residency , Competency-Based Education , Education, Medical, Graduate , Humans , Self-Assessment
4.
Pediatrics ; 146(2)2020 08.
Article En | MEDLINE | ID: mdl-32636237

OBJECTIVE: The Centers for Disease Control and Prevention recommend testing for Chlamydia trachomatis in sexually active female patients <25 years old using nucleic-acid amplification tests (NAAT) from a vaginal swab. Our providers were typically testing using the less sensitive urine NAATs. We aimed to increase the percentage of urogenital C trachomatis NAATs performed by using vaginal swabs in adolescent female patients ages 10 through 20 years from 1.4% to 25%. METHODS: We implemented 3 interventions at 3 pediatric practices over 12 months including education, process standardization, and cross-training. We used statistical process control to analyze the effect of interventions on our primary outcome: the percentage of urogenital C trachomatis tests performed with a vaginal swab. Our balance measure was the total number of urogenital C trachomatis tests. RESULTS: There were 818 urogenital C trachomatis tests performed: 289 before and 529 after the first intervention. Of urogenital C trachomatis tests in the preintervention time period, 1.4% were performed by using vaginal swabs. We surpassed our aim of 25% 6 weeks after the first intervention. We noted sustained improvement after the second intervention, with an average of 68.3% of tests performed by using vaginal swabs for the remaining postintervention period. There was no difference in the overall number of urogenital C trachomatis tests pre- and postintervention. CONCLUSIONS: Using quality improvement methodology and implementing easily replicable interventions, we significantly and sustainably increased use of vaginal swabs. The interventions standardizing processes were associated with a higher impact than the educational intervention.


Chlamydia Infections/diagnosis , Chlamydia trachomatis/isolation & purification , Nucleic Acid Amplification Techniques/methods , Pediatricians/education , Practice Patterns, Physicians'/trends , Vagina/microbiology , Vaginal Smears/trends , Adolescent , Child , Chlamydia Infections/epidemiology , Female , Humans , Massachusetts/epidemiology , Nucleic Acid Amplification Techniques/statistics & numerical data , Pamphlets , Practice Guidelines as Topic , Practice Patterns, Physicians'/statistics & numerical data , Procedures and Techniques Utilization/trends , Program Evaluation , Quality Improvement , Sexual Behavior , Young Adult
6.
Acad Med ; 95(9): 1325-1328, 2020 09.
Article En | MEDLINE | ID: mdl-32433311

The February 2020 announcement that United States Medical Licensing Examination (USMLE) Step 1 results will be reported as pass/fail instead of numerical scores has been controversial. Step 1 scores have played a key role in residency selection, including screening for interviews. Although Step 1 scores are viewed as an objective criterion, they have been shown to disadvantage female and underrepresented minority applicants, cause student anxiety and financial burden, and affect student well-being. Furthermore, Step 1 scores incompletely predict applicants' overall residency performance. With this paradigm shift in Step 1 score reporting, residency programs will have fewer objective, standardized metrics for selection decisions, which may lead to greater emphasis on USMLE Step 2 Clinical Knowledge scores or yield unintended consequences, including shifting weight to metrics such as medical school reputation. Yet, greater breadth in residency selection metrics will better serve both applicants and programs. Some students excel in coursework, others in research or leadership. All factors should be recognized, and broader metrics should be implemented to promote and recognize diversity of excellence. Given the need for metrics for residency selection as well as for a more holistic approach to evaluating residency applicants, assessment during medical school should be revisited and made more meaningful. Another opportunity may involve use of situational judgment tests to predict professionalism and performance on other competencies. It will be important to evaluate the impact of the new Step 1 paradigm and related initiatives going forward. Residency application overload must also be addressed.


Education, Medical, Undergraduate , Educational Measurement/methods , Internship and Residency , Licensure, Medical , Competency-Based Education , Cultural Diversity , Education, Medical, Undergraduate/methods , Female , Humans , Male , Minority Groups , Sex Factors , United States
12.
Med Teach ; 40(1): 40-44, 2018 01.
Article En | MEDLINE | ID: mdl-29043879

INTRODUCTION: There is limited information about whether an objective structured clinical examination (OSCE) administered during GME orientation can identify trainee communication deficits before these become evident via clinical performance evaluations. METHODS: Ninety-seven interns matriculating to eight residency programs in six specialties at four hospitals participated in a nine-station communication skills OSCE. Ratings were based on the "Kalamazoo, adapted" communication skills checklist. Possible association with intern performance evaluations was assessed by repeated-measures logistic regression, and ROC curves were generated. RESULTS: The mean OSCE score was 4.08 ± 0.27 with a range of 3.3-4.6. Baseline OSCE scores were associated with subsequent communication concerns recorded by faculty, based on 1591 evaluations. A 0.1-unit decrease in the OSCE communication score was associated with an 18% higher odds of being identified with a communication concern by faculty evaluation (odds ratio 1.18, 95% CI 1.01-1.36, p = 0.034). ROC curves did not demonstrate a "cut-off" score (AUC = 0.558). Non-faculty evaluators were 3-5 times more likely than faculty evaluators to identify communication deficits, based on 1900 evaluations. CONCLUSIONS: Lower OSCE performance was associated with faculty communication concerns on performance evaluations; however, a "cut-off" score was not demonstrated that could identify trainees for potential early intervention. Multi-source evaluation also identified trainees with communication skills deficits.
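The reported odds ratio of 1.18 per 0.1-unit score decrease maps directly onto the underlying logistic regression coefficient. A small sketch of that arithmetic; the per-unit coefficient is inferred here for illustration, not reported in the abstract.

```python
import math

# Logistic regression estimates a coefficient (beta) per unit of the predictor.
# The abstract reports OR 1.18 per 0.1-unit DECREASE in OSCE score, which
# implies a negative coefficient per 1-unit increase:
or_per_01_decrease = 1.18
beta = -math.log(or_per_01_decrease) / 0.1  # inferred, roughly -1.66

# Sanity check: recover the reported OR from the inferred coefficient.
recovered = math.exp(-0.1 * beta)
```

Because the exponential is multiplicative, a full 1-unit decrease would correspond to an odds ratio of about 1.18**10, which is why even small score differences on a bounded 1-5 scale can carry sizable associations.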


Communication , Educational Measurement/methods , Educational Measurement/standards , Internship and Residency/methods , Internship and Residency/organization & administration , Clinical Competence , Humans , Observer Variation , Patient Education as Topic , Physical Examination , ROC Curve
14.
BMC Med Educ ; 16: 91, 2016 Mar 12.
Article En | MEDLINE | ID: mdl-26968519

BACKGROUND: Adverse events are a significant quality and safety issue in the hospital setting due to their direct impact on patients. Such events are often handled by junior doctors because of their direct involvement with patients, so it is important for health care organizations to prioritize education and training for junior doctors on identifying adverse events and handling them when they occur. The Cancer Cup Challenge is an educational program focusing on quality improvement and adverse event awareness, targeting junior oncology doctors across three international sites. METHODS: A mixed methodology was used to develop and evaluate the program. The Qstream spaced-learning platform was used to disseminate information to participants, as it has been demonstrated to affect both knowledge and behavior. Eight short case-based scenarios with expert feedback were developed by a multidisciplinary advisory committee containing representatives from the international sites. At the conclusion of the course, the impact on participant knowledge was evaluated using analysis of the metrics collected by the Qstream platform. Additionally, an online survey and semi-structured interviews were used to evaluate engagement and perceived value among participants. RESULTS: A total of 35 junior doctors registered to undertake the Qstream program, with 31 (88.57 %) successfully completing it. Analysis of the Qstream metrics revealed that 76.57 % of cases were answered correctly on the first attempt. The post-program survey received 17 responses, with 76.47 % indicating that the cases were interesting and 82.35 % feeling that they were relevant. Finally, 14 participants consented to semi-structured interviews about the program, with feedback on the course generally very positive. CONCLUSIONS: Our study demonstrates that an online game is well accepted by junior doctors as a method to increase their quality improvement awareness.
Developing effective and sustainable training for doctors is important to ensure positive patient outcomes are maintained in the hospital setting. This is particularly important for junior doctors, as they work closely with patients while learning skills and behaviors that will influence their practice throughout their careers.


Medical Errors/prevention & control , Medical Oncology/education , Patient Safety , Quality Improvement , Curriculum , Educational Measurement , Female , Games, Experimental , Humans , Male , Teaching
15.
Postgrad Med J ; 92(1085): 137-44, 2016 Mar.
Article En | MEDLINE | ID: mdl-26739846

PURPOSE: Quality, patient safety and value are important topics for graduate medical education (GME). Spaced education delivers case-based content in a structured longitudinal experience. Use of spaced education to deliver quality and safety education in GME at an institutional level has not been previously evaluated. OBJECTIVES: To implement a spaced education course in quality, safety and value; to assess learner satisfaction; and to describe trainee knowledge in these areas. METHODS: We developed a case-based spaced education course addressing learning objectives related to quality, safety and value. This course was offered to residents and fellows about two-thirds of the way through the academic year (March 2014) and to new trainees during orientation (June 2014). We assessed learner satisfaction by reviewing the course completion rate and a postcourse survey, and trainee knowledge by the per cent of correct responses. RESULTS: The course was offered to 1950 trainees. A total of 305 (15.6%) enrolled in the course; 265/305 (86.9%) answered at least one question, and 106/305 (34.8%) completed the course. Fewer participants completed the March programme compared with the orientation programme (42/177 (23.7%) vs 64/128 (50.0%), p<0.001). Completion rates differed by specialty: 80/199 (40.2%) in non-surgical specialties compared with 16/106 (24.5%) in surgical specialties (p=0.008). The proportion of questions answered correctly on the first attempt was 53.2% (95% CI 49.4% to 56.9%). Satisfaction among those completing the programme was high. CONCLUSIONS: Spaced education can help deliver and assess learners' understanding of quality, safety and value principles. Offering a voluntary course may result in low completion. Learners were satisfied with their experience and were introduced to new concepts.
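The reported interval for the first-attempt correct proportion (53.2%, 95% CI 49.4% to 56.9%) is the shape produced by a standard normal-approximation (Wald) interval. A sketch of that calculation follows; the denominator is an assumption for illustration, as the abstract does not report the total number of question attempts.

```python
import math

def wald_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) 95% CI for a binomial proportion."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# Hypothetical denominator of 680 attempts (not reported in the abstract).
low, high = wald_ci(0.532, 680)
```

With this assumed n, the interval comes out close to the reported one; larger denominators would narrow it proportionally to 1/sqrt(n).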


Clinical Competence/standards , Internship and Residency , Patient Safety/standards , Physicians/standards , Quality of Health Care/standards , Adult , Curriculum , Female , Health Knowledge, Attitudes, Practice , Humans , Learning , Male , Personal Satisfaction , Program Evaluation
18.
Acad Pediatr ; 14(1): 54-61, 2014.
Article En | MEDLINE | ID: mdl-24369869

OBJECTIVE: To assess pediatric residents' perceptions of their quality improvement (QI) education and training, including factors that facilitate learning QI and self-efficacy in QI activities. METHODS: A 22-question survey was developed with expert-identified key topics and iterative pretesting of questions. Third-year pediatric residents from 45 residency programs, recruited from a random sample of 120 programs, were surveyed. Data were analyzed by descriptive statistics, chi-square tests, and qualitative content analysis. RESULTS: Respondents included 331 residents, for a response rate of 47%. Demographic characteristics resembled the national profile of pediatric residents. Over 70% of residents reported that their QI training was well organized and met their needs. Three quarters felt ready to use QI methods in practice. Those with QI training before residency were significantly more confident than those without prior QI training. However, fewer than half of respondents used standard QI methods such as PDSA cycles and run charts in projects. Residents identified faculty support, a structured curriculum, hands-on projects, and dedicated project time as key strengths of their QI educational experiences. A strong QI culture was also considered important, and was reported to be present in most programs sampled. CONCLUSIONS: Overall, third-year pediatric residents reported positive QI educational experiences with strong faculty support and sufficient time for QI projects. However, a third of residents thought that the QI curricula in their programs needed improvement, and a quarter lacked self-efficacy in conducting future QI activities. Continuing curricular improvement, including faculty development, is warranted.


Curriculum , Internship and Residency , Pediatrics/education , Quality Improvement , Adult , Curriculum/standards , Humans , Internship and Residency/organization & administration , Internship and Residency/standards , Organizational Culture , Physicians/psychology , Self Efficacy , Surveys and Questionnaires
20.
BMJ Qual Saf ; 21(10): 819-25, 2012 Oct.
Article En | MEDLINE | ID: mdl-22706930

PURPOSE: To compare the effectiveness of two types of online learning methodologies for improving the patient-safety behaviours mandated in the Joint Commission National Patient Safety Goals (NPSG). METHODS: This randomised controlled trial was conducted in 2010 at Massachusetts General Hospital and Brigham and Women's Hospital (BWH) in Boston, USA. Incoming interns were randomised to either receive an online Spaced Education (SE) programme consisting of cases and questions that reinforce over time, or a programme consisting of an online slide show followed by a quiz (SQ). The outcome measures included NPSG-knowledge improvement, NPSG-compliant behaviours in a simulation scenario, self-reported confidence in safety and quality, programme acceptability and programme relevance. RESULTS: Both online learning programmes improved knowledge retention. On four out of seven survey items measuring satisfaction and self-reported confidence, the proportion of SE interns responding positively was significantly higher (p<0.05) than the proportion of SQ interns. SE interns demonstrated a mean 4.79 (36.6%) NPSG-compliant behaviours (out of 13 total), while SQ interns completed a mean 4.17 (32.0%) (p=0.09). Among those in surgical fields, SE interns demonstrated a mean 5.67 (43.6%) NPSG-compliant behaviours, while SQ interns completed a mean 2.33 (17.9%) (p=0.015). Focus group data indicated that SE was more contextually relevant than SQ, and significantly more engaging. CONCLUSION: While both online methodologies improved knowledge surrounding the NPSG, SE was more contextually relevant and engaging to trainees. SE had a greater impact on both self-reported confidence and the behaviour of surgical residents in a simulated scenario.


Catheterization, Central Venous/methods , Education, Distance/methods , Health Knowledge, Attitudes, Practice , Internship and Residency/standards , Joint Commission on Accreditation of Healthcare Organizations , Patient Safety/standards , Boston , Comparative Effectiveness Research , Education, Medical/methods , Humans , Organizational Case Studies , Patient Simulation , Program Evaluation , Qualitative Research , Students, Medical/psychology , Surveys and Questionnaires , United States
...