Results 1 - 20 of 224
1.
Ann Surg ; 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39051106

ABSTRACT

OBJECTIVE: To establish whether Accreditation Council for Graduate Medical Education Milestones predict future performance of general surgery trainees. SUMMARY BACKGROUND DATA: Milestones provide bi-annual assessments of trainee progress across six competencies. It is unknown whether the Milestones predict surgeon performance after the transition to independent practice. METHODS: We performed a retrospective cohort study of surgeons with complete Milestone assessments in the fourth and fifth clinical years who treated patients in acute care hospitals within Florida, New York, and Pennsylvania, 2015-2018. To account for the multiple ways in which the Milestone assessments might predict post-graduation outcomes, we included 120 Milestones features in our elastic net machine learning models. The primary outcome was risk-adjusted patient death or serious morbidity. RESULTS: 278 general surgeons were included in the study. Milestone assessments 6 months into the fourth clinical year displayed a normal score distribution, while multicollinearity and low score discrimination were detected at the final assessment period. Individual Milestones features from the Patient Care, Professionalism, and Systems-based Practice domains were most predictive of patient-related outcomes. For example, surgeons with worse patient outcomes had significantly lower scores in Patient Care 3 when compared to surgeons with better patient outcomes (High DSM, yes: 2.86 vs. no: 3.04, P=0.011). CONCLUSIONS: The Milestones features that were most predictive of better patient outcomes related to intraoperative skills, ethical principles, and patient navigation and safety, measured 12-18 months prior to graduation. The development of a parsimonious set of evidence-based Milestones that better correlate with surgeon experience could enhance surgical education.
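A minimal sketch of the kind of penalized model named above (elastic net), using purely synthetic stand-ins for the 120 Milestones features and the outcome; this is not the study's actual model or risk adjustment:

```python
# Sketch: elastic net prediction of a binary outcome from Milestone features.
# All data are hypothetical; the study used 120 Milestones features per surgeon.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_surgeons, n_features = 278, 120
X = rng.normal(loc=3.0, scale=0.5, size=(n_surgeons, n_features))  # Milestone scores
y = rng.integers(0, 2, size=n_surgeons)  # 1 = high death/serious morbidity rate (illustrative)

# Elastic net = combined L1/L2 penalty; l1_ratio balances the two terms.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, C=1.0, max_iter=5000),
)
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```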

2.
JAMA Surg ; 159(5): 546-552, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38477914

ABSTRACT

Importance: National data on the development of competence during training have been reported using the Accreditation Council for Graduate Medical Education (ACGME) Milestones system. It is now possible to consider longitudinal analyses that link Milestone ratings during training to patient outcomes data of recent graduates. Objective: To evaluate the association of in-training ACGME Milestone ratings in a surgical specialty with subsequent complication rates following a commonly performed operation, endovascular aortic aneurysm repair (EVAR). Design, Setting, and Participants: This study of patient outcomes following EVAR used the Vascular Quality Initiative (VQI) registry (4213 admissions from 208 hospitals treated by 327 surgeons). All surgeons included in this study graduated from ACGME-accredited training programs from 2015 through 2019 and had Milestone ratings 6 months prior to graduation. Data were analyzed from December 1, 2021, through September 15, 2023. Because Milestone ratings can vary with program, they were corrected for program effect using a deviation score from the program mean. Exposure: Milestone ratings assigned to individual trainees 6 months prior to graduation, based on judgments of surgical competence. Main Outcomes and Measures: Surgical complications following EVAR for patients treated by recent graduates during the index hospitalization, obtained using the nationwide Society for Vascular Surgery Patient Safety Organization's VQI registry, which includes 929 participating centers in 49 US states. Results: The study included outcomes for 4213 patients (mean [SD] age, 73.25 [8.74] years; 3379 male participants [80.2%]). Postoperative complications included 9.5% major (400 of 4213 cases) and 30.2% minor (1274 of 4213 cases) complications. After adjusting for patient risk factors and site of training, a significant association was identified between individual Milestone ratings of surgical trainees and major complications in early surgical practice in programs with lower mean Milestone ratings (odds ratio, 0.50; 95% CI, 0.27-0.95). Conclusions and Relevance: In this study, Milestone assessments of surgical trainees were associated with subsequent clinical outcomes in their early career. Although these findings represent one surgical specialty, they suggest that, applying the same theory and methodology, Milestone ratings can be used in any specialty to identify trainees at risk for future adverse patient outcomes. Milestones data should inform data-driven educational interventions and trainee remediation to optimize future patient outcomes.
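A small illustrative sketch of the program-mean correction mentioned above, followed by a simple logistic model relating the deviation score to a binary complication outcome; variable names and data are hypothetical, and the study's risk adjustment is not reproduced:

```python
# Sketch: correct Milestone ratings for program effect, then relate to complications.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "program": rng.integers(0, 40, 400),               # training program identifier
    "milestone": rng.normal(7.5, 0.8, 400),            # rating 6 months before graduation
    "major_complication": rng.integers(0, 2, 400),     # outcome after EVAR (illustrative)
})
# Deviation score: trainee rating minus the mean rating of their own program.
df["deviation"] = df["milestone"] - df.groupby("program")["milestone"].transform("mean")

fit = smf.logit("major_complication ~ deviation", data=df).fit(disp=0)
print(np.exp(fit.params["deviation"]))  # odds ratio per 1-point deviation from program mean
```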


Subject(s)
Accreditation, Clinical Competence, Graduate Medical Education, Endovascular Procedures, Postoperative Complications, Humans, Male, Female, Postoperative Complications/epidemiology, Endovascular Procedures/education, United States, Registries, Internship and Residency, Surgeons/education, Surgeons/standards, Aged, Middle Aged
3.
Med Teach ; 46(4): 471-485, 2024 04.
Article in English | MEDLINE | ID: mdl-38306211

ABSTRACT

Changes in digital technology, increasing volume of data collection, and advances in methods have the potential to unleash the value of big data generated through the education of health professionals. Coupled with this potential are legitimate concerns about how data can be used or misused in ways that limit autonomy or equity, or harm stakeholders. This consensus statement is intended to address these issues by foregrounding the ethical imperatives for engaging with big data as well as the potential risks and challenges. Recognizing the wide and ever-evolving scope of big data scholarship, we focus on foundational issues for framing and engaging in research. We ground our recommendations in the context of big data created through data sharing across and within the stages of the continuum of the education and training of health professionals. Ultimately, the goal of this statement is to support a culture of trust and quality for big data research to deliver on its promises for health professions education (HPE) and the health of society. Based on expert consensus and review of the literature, we report 19 recommendations on (1) framing scholarship and research, (2) considering unique ethical practices, (3) governance of data sharing collaborations that engage stakeholders, (4) best practices for data sharing processes, (5) the importance of knowledge translation, and (6) advancing the quality of scholarship through multidisciplinary collaboration. The recommendations were modified and refined based on feedback from the 2022 Ottawa Conference attendees and subsequent public engagement. Adoption of these recommendations can help HPE scholars share data ethically and engage in high-impact big data scholarship, which in turn can help the field meet the ultimate goal: high-quality education that leads to high-quality healthcare.


Subject(s)
Big Data, Health Occupations, Information Dissemination, Humans, Health Occupations/education, Consensus
4.
J Grad Med Educ ; 16(1): 30-36, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38304606

ABSTRACT

Background: Although entrustment-supervision ratings are more intuitive than other rating scales, it is not known whether their use accurately assesses the appropriateness of care provided by a resident. Objective: To determine the frequency of incorrect entrustment ratings assigned by faculty and whether accuracy of an entrustment-supervision scale differed by resident performance when the scripted resident performance level is known. Methods: Faculty participants rated standardized residents in 10 videos using a 4-point entrustment-supervision scale. We calculated the frequency of rating a resident incorrectly. We performed generalizability (G) and decision (D) studies for all 10 cases (768 ratings) and repeated the analysis using only cases with an entrustment score of 2. Results: The mean score by 77 raters for all videos was 2.87 (SD=0.86), with means of 2.37 (SD=0.72), 3.11 (SD=0.67), and 3.78 (SD=0.43) for the scripted levels of 2, 3, and 4. Faculty ratings differed from the scripted score for 331 of 768 (43%) ratings. Most errors were ratings higher than the scripted score (223, 67%). G studies estimated the variance proportions of rater and case to be 4.99% and 54.29%. D studies estimated that 3 raters would need to watch 10 cases. The variance proportion of rater was 8.5% when the analysis was restricted to level 2 entrustment, requiring 15 raters to watch 5 cases. Conclusions: Participants underestimated residents' potential need for greater supervision. Overall agreement between raters and scripted scores was low.
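For readers unfamiliar with decision (D) studies, the sketch below shows only the generic one-facet projection of reliability from variance components, with placeholder values; it does not reproduce the study's full rater-by-case design:

```python
# Sketch: decision-study projection of reliability from variance components.
# Illustrative one-facet design: object of measurement crossed with raters.
# The variance values below are placeholders, not the study's decomposition.
def projected_reliability(var_object: float, var_error: float, n_raters: int) -> float:
    """Relative generalizability coefficient when averaging over n_raters ratings."""
    return var_object / (var_object + var_error / n_raters)

var_case, var_residual = 0.54, 0.41   # e.g., case ~54% of variance, remainder error
for n in (1, 3, 5, 10):
    print(n, round(projected_reliability(var_case, var_residual, n), 2))
```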


Subject(s)
Internship and Residency, Humans, Medical Faculty, Clinical Competence, Patients
5.
Acad Med ; 99(4): 351-356, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38266204

ABSTRACT

ABSTRACT: Systems-based practice (SBP) was introduced as 1 of 6 core competencies in 1999 because of its recognized importance in the quality and safety of health care provided to patients. Nearly 25 years later, faculty and learners continue to struggle with understanding and implementing this essential competency, thus hindering the medical education community's ability to most effectively teach and learn it. Milestones were first introduced in 2013 as one effort to support implementation of the general competencies. However, each specialty developed its milestones independently, leading to substantial heterogeneity in the narrative descriptions of competencies, including SBP. The process to create Milestones 2.0, and more specifically the Harmonized Milestones, took this experience into account and endeavored to create a shared language for SBP across all specialties and subspecialties. The 3 subcompetencies in SBP are now patient safety and quality improvement; systems navigation for patient-centered care (coordination of care, transitions of care, local population health); and the physician's role in health care systems (components of the system, costs and resources, transitions to practice). Milestones 2.0 are also now supported by new supplemental guides that provide specific real-world examples to help learners and faculty put SBP into the context of the complex health care environment. While substantially more resources and tools are now available to aid faculty and to serve as a guide for residents and fellows, much work to effectively implement SBP remains. This commentary will explore the evolutionary history of SBP, the challenges facing implementation, and suggestions for how programs can use the new milestone resources for SBP. The academic medicine community must work together to advance this competency as an essential part of professional development.


Subject(s)
Medical Education, Internship and Residency, Medicine, Humans, Quality Improvement, Graduate Medical Education, Clinical Competence, Accreditation
6.
J Gen Intern Med ; 39(10): 1795-1802, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38289461

ABSTRACT

BACKGROUND: While some prior studies of work-based assessment (WBA) numeric ratings have not shown gender differences, they have been unable to account for the true performance of the resident or explore narrative differences by gender. OBJECTIVE: To explore gender differences in WBA ratings as well as narrative comments (when scripted performance was known). DESIGN: Secondary analysis of WBAs obtained from a randomized controlled trial of a longitudinal rater training intervention in 2018-2019. Participating faculty (n = 77) observed standardized resident-patient encounters and subsequently completed rater assessment forms (RAFs). SUBJECTS: Participating faculty in longitudinal rater training. MAIN MEASURES: Gender differences in mean entrustment ratings (4-point scale) were assessed with multivariable regression (adjusted for scripted performance, rater and resident demographics, and the interaction between study arm and time period [pre- versus post-intervention]). Using pre-specified natural language processing categories (masculine, feminine, agentic, and communal words), multivariable linear regression was used to determine associations of word use in the narrative comments with resident gender, race, and skill level, faculty demographics, and the interaction between study arm and time period (pre- versus post-intervention). KEY RESULTS: Across 1527 RAFs, there were significant differences in entrustment ratings between women and men standardized residents (2.29 versus 2.54, respectively, p < 0.001) after correction for resident skill level. As compared to men, feminine terms were more common in comments describing what the resident did poorly for women residents (β = 0.45, CI 0.12-0.78, p = 0.01). This persisted despite adjusting for the faculty's entrustment ratings. There were no other significant linguistic differences by gender. CONCLUSIONS: Contrasting prior studies, we found entrustment rating differences in a simulated WBA that persisted after adjusting for the resident's scripted performance. There were also linguistic differences by gender after adjusting for entrustment ratings, with feminine terms being used more frequently in comments about women in some, but not all, narrative comments.
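A minimal sketch of the word-category counting step that precedes such regressions, using a tiny hypothetical lexicon rather than the study's pre-specified word lists:

```python
# Sketch: count gendered/agentic/communal terms in a narrative comment.
import re

# Hypothetical mini-lexicons; the study used pre-specified category word lists.
LEXICON = {
    "feminine":  {"warm", "compassionate", "helpful"},
    "masculine": {"confident", "assertive", "independent"},
    "agentic":   {"leads", "decisive", "drives"},
    "communal":  {"supports", "collaborates", "listens"},
}

def category_counts(comment: str) -> dict:
    """Return the number of lexicon hits per category for one comment."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    return {cat: sum(t in words for t in tokens) for cat, words in LEXICON.items()}

print(category_counts("She was warm and compassionate, and collaborates well with nurses."))
```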


Subject(s)
Clinical Competence, Internship and Residency, Humans, Female, Male, Clinical Competence/standards, Sex Factors, Narration, Adult, Educational Measurement/methods
7.
Acad Med ; 99(2): 146-152, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37289829

ABSTRACT

ABSTRACT: The complexity of improving health in the United States and the rising call for outcomes-based physician training present unique challenges and opportunities for both graduate medical education (GME) and health systems. GME programs have been particularly challenged to implement systems-based practice (SBP) as a core physician competency and educational outcome. Disparate definitions and educational approaches to SBP, as well as limited understanding of the complex interactions between GME trainees, programs, and their health system settings, contribute to current suboptimal educational outcomes related to SBP. To advance SBP competence at individual, program, and institutional levels, the authors present the rationale for an integrated multilevel systems approach to assess and evaluate SBP, propose a conceptual multilevel data model that integrates health system and educational SBP performance, and explore the opportunities and challenges of using multilevel data to promote an empirically driven approach to residency education. The development, study, and adoption of multilevel analytic approaches to GME are imperative to the successful operationalization of SBP and thereby imperative to GME's social accountability in meeting societal needs for improved health. The authors call for the continued collaboration of national leaders toward producing integrated and multilevel datasets that link health systems and their GME-sponsoring institutions to evolve SBP.


Subject(s)
Internship and Residency, Physicians, Humans, United States, Graduate Medical Education, Curriculum, Government Programs
8.
Acad Med ; 98(10): 1102-1103, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37433194
9.
Acad Med ; 98(8S): S37-S49, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37071705

ABSTRACT

Assessment is essential to professional development. Assessment provides the information needed to give feedback, support coaching and the creation of individualized learning plans, inform progress decisions, determine appropriate supervision levels, and, most importantly, help ensure patients and families receive high-quality, safe care in the training environment. While the introduction of competency-based medical education has catalyzed advances in assessment, much work remains to be done. First, becoming a physician (or other health professional) is primarily a developmental process, and assessment programs must be designed using a developmental and growth mindset. Second, medical education programs must have integrated programs of assessment that address the interconnected domains of implicit, explicit and structural bias. Third, improving programs of assessment will require a systems-thinking approach. In this paper, the authors first address these overarching issues as key principles that must be embraced so that training programs may optimize assessment to ensure all learners achieve desired medical education outcomes. The authors then explore specific needs in assessment and provide suggestions to improve assessment practices. This paper is by no means inclusive of all medical education assessment challenges or possible solutions. However, there is a wealth of current assessment research and practice that medical education programs can use to improve educational outcomes and help reduce the harmful effects of bias. The authors' goal is to help improve and guide innovation in assessment by catalyzing further conversations.


Subject(s)
Medical Education, Physicians, Humans, Clinical Competence, Program Evaluation, Quality of Health Care
10.
Acad Med ; 98(7): 813-820, 2023 07 01.
Article in English | MEDLINE | ID: mdl-36724304

ABSTRACT

PURPOSE: Accurate assessment of clinical performance is essential to ensure graduating residents are competent for unsupervised practice. The Accreditation Council for Graduate Medical Education milestones framework is the most widely used competency-based framework in the United States. However, the relationship between residents' milestones competency ratings and their subsequent early career clinical outcomes has not been established. It is important to examine the association between milestones competency ratings of U.S. general surgical residents and those surgeons' patient outcomes in early career practice. METHOD: A retrospective, cross-sectional study was conducted using a sample of national Medicare claims for 23 common, high-risk inpatient general surgical procedures performed between July 1, 2015, and November 30, 2018 (n = 12,400 cases) by non-fellowship-trained U.S. general surgeons. Milestone ratings collected during those surgeons' last year of residency (n = 701 residents) were compared with their risk-adjusted rates of mortality, any complication, or severe complication within 30 days of index operation during their first 2 years of practice. RESULTS: There were no associations between mean milestone competency ratings of graduating general surgery residents and their subsequent early career patient outcomes, including any complication (23% proficient vs 22% not yet proficient; relative risk [RR], 0.97 [95% CI, 0.88-1.08]); severe complication (9% vs 9%, respectively; RR, 1.01 [95% CI, 0.86-1.19]); and mortality (5% vs 5%; RR, 1.07 [95% CI, 0.88-1.30]). Secondary analyses yielded no associations between patient outcomes and milestone ratings specific to technical performance, or between patient outcomes and composites of operative performance, professionalism, or leadership milestones ratings (P ranged from .32 to .97). CONCLUSIONS: Milestone ratings of graduating general surgery residents were not associated with the patient outcomes of those surgeons when they performed common, higher-risk procedures in a Medicare population. Efforts to improve how milestones ratings are generated might strengthen their association with early career outcomes.
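As a refresher on how relative risks like those quoted above are computed, a brief sketch with invented 2x2 counts (not the study's data):

```python
# Sketch: relative risk and 95% CI from a 2x2 table (illustrative counts only).
import math

def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Risk ratio with a Wald confidence interval on the log scale."""
    rr = (events_exposed / n_exposed) / (events_unexposed / n_unexposed)
    se_log = math.sqrt(1/events_exposed - 1/n_exposed + 1/events_unexposed - 1/n_unexposed)
    lo, hi = (math.exp(math.log(rr) + z * se_log) for z in (-1.96, 1.96))
    return rr, lo, hi

# e.g., complications in cases treated by "proficient" vs "not yet proficient" graduates
print(relative_risk(230, 1000, 220, 1000))
```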


Subject(s)
Internship and Residency, Aged, Humans, United States, Retrospective Studies, Cross-Sectional Studies, Clinical Competence, Medicare, Graduate Medical Education/methods, Accreditation, Educational Measurement/methods
11.
J Grad Med Educ ; 15(1): 81-91, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36817545

ABSTRACT

Background: Workplace-based assessment (WBA) is a key assessment strategy in competency-based medical education. However, its full potential has not been actualized secondary to concerns with reliability, validity, and accuracy. Frame of reference training (FORT), a rater training technique that helps assessors distinguish between learner performance levels, can improve the accuracy and reliability of WBA, but the effect size is variable. Understanding FORT's benefits and challenges helps improve this rater training technique. Objective: To explore faculty's perceptions of the benefits and challenges associated with FORT. Methods: Subjects were internal medicine and family medicine physicians (n=41) who participated in a rater training intervention in 2018 consisting of in-person FORT followed by asynchronous online spaced learning. We assessed participants' perceptions of FORT in post-workshop focus groups and an end-of-study survey. Focus groups and survey free-text responses were coded using thematic analysis. Results: All subjects participated in 1 of 4 focus groups and completed the survey. Four benefits of FORT were identified: (1) opportunity to apply skills frameworks via deliberate practice; (2) demonstration of the importance of certain evidence-based clinical skills; (3) practice that improved the ability to discriminate between resident skill levels; and (4) highlighting the importance of direct observation and the dangers of using proxy information in assessment. Challenges included time constraints and task repetitiveness. Conclusions: Participants believe that FORT training serves multiple purposes, including helping them distinguish between learner skill levels while demonstrating the impact of evidence-based clinical skills and the importance of direct observation.


Subject(s)
Internship and Residency, Humans, Reproducibility of Results, Workplace, Faculty, Focus Groups, Clinical Competence
12.
Acad Med ; 98(2): 237-247, 2023 02 01.
Article in English | MEDLINE | ID: mdl-35857396

ABSTRACT

PURPOSE: Prior research evaluating workplace-based assessment (WBA) rater training effectiveness has not measured improvement in narrative comment quality and accuracy, nor accuracy of prospective entrustment-supervision ratings. The purpose of this study was to determine whether rater training, using performance dimension and frame of reference training, could improve WBA narrative comment quality and accuracy. A secondary aim was to assess impact on entrustment rating accuracy. METHOD: This single-blind, multi-institution, randomized controlled trial of a multifaceted, longitudinal rater training intervention consisted of in-person training followed by asynchronous online spaced learning. In 2018, investigators randomized 94 internal medicine and family medicine physicians involved with resident education. Participants assessed 10 scripted standardized resident-patient videos at baseline and follow-up. Differences in holistic assessment of narrative comment accuracy and specificity, accuracy of individual scenario observations, and entrustment rating accuracy were evaluated with t tests. Linear regression assessed impact of participant demographics and baseline performance. RESULTS: Seventy-seven participants completed the study. At follow-up, the intervention group (n = 41), compared with the control group (n = 36), had higher scores for narrative holistic specificity (2.76 vs 2.31, P < .001, Cohen V = .25), accuracy (2.37 vs 2.06, P < .001, Cohen V = .20) and mean quantity of accurate (6.14 vs 4.33, P < .001), inaccurate (3.53 vs 2.41, P < .001), and overall observations (2.61 vs 1.92, P = .002, Cohen V = .47). In aggregate, the intervention group had more accurate entrustment ratings (58.1% vs 49.7%, P = .006, Phi = .30). Baseline performance was significantly associated with performance on final assessments. CONCLUSIONS: Quality and specificity of narrative comments improved with rater training; the effect was mitigated by inappropriate stringency. Training improved accuracy of prospective entrustment-supervision ratings, but the effect was more limited. Participants with lower baseline rating skill may benefit most from training.


Subject(s)
Clinical Competence, Internship and Residency, Humans, Prospective Studies, Single-Blind Method, Workplace, Educational Status
13.
JAMA Netw Open ; 5(12): e2247649, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36580337

ABSTRACT

Importance: Previous studies have demonstrated racial and ethnic inequities in medical student assessments, awards, and faculty promotions at academic medical centers. Few data exist about similar racial and ethnic disparities at the level of graduate medical education. Objective: To examine the association between race and ethnicity and performance assessments among a national cohort of internal medicine residents. Design, Setting, and Participants: This retrospective cohort study evaluated assessments of performance for 9026 internal medicine residents from the graduating classes of 2016 and 2017 at Accreditation Council for Graduate Medical Education (ACGME)-accredited internal medicine residency programs in the US. Analyses were conducted between July 1, 2020, and June 31, 2022. Main Outcomes and Measures: The primary outcome was midyear and year-end total ACGME Milestone scores for underrepresented in medicine (URiM [Hispanic only; non-Hispanic American Indian, Alaska Native, or Native Hawaiian/Pacific Islander only; or non-Hispanic Black/African American]) and Asian residents compared with White residents as determined by their Clinical Competency Committees and residency program directors. Differences in scores between Asian and URiM residents compared with White residents were also compared for each of the 6 competency domains as supportive outcomes. Results: The study cohort included 9026 residents from 305 internal medicine residency programs. Of these residents, 3994 (44.2%) were female, 3258 (36.1%) were Asian, 1216 (13.5%) were URiM, and 4552 (50.4%) were White. In the fully adjusted model, no difference was found in the initial midyear total Milestone scores between URiM and White residents, but there was a difference between Asian and White residents, which favored White residents (mean [SD] difference in scores for Asian residents: -1.27 [0.38]; P < .001). In the second year of training, White residents received increasingly higher scores relative to URiM and Asian residents. These racial disparities peaked in postgraduate year (PGY) 2 (mean [SD] difference in scores for URiM residents, -2.54 [0.38]; P < .001; mean [SD] difference in scores for Asian residents, -1.9 [0.27]; P < .001). By the final year 3 assessment, the gap between White and Asian and URiM residents' scores narrowed, and no racial or ethnic differences were found. Trends in racial and ethnic differences among the 6 competency domains mirrored total Milestone scores, with differences peaking in PGY2 and then decreasing in PGY3 such that parity in assessment was reached in all competency domains by the end of training. Conclusions and Relevance: In this cohort study, URiM and Asian internal medicine residents received lower ratings on performance assessments than their White peers during the first and second years of training, which may reflect racial bias in assessment. This disparity in assessment may limit opportunities for physicians from minoritized racial and ethnic groups and hinder physician workforce diversity.


Subject(s)
Internship and Residency, Humans, Female, Male, Cohort Studies, Retrospective Studies, Graduate Medical Education, Ethnicity
15.
Acad Med ; 97(11): 1581, 2022 11 01.
Article in English | MEDLINE | ID: mdl-36287721
16.
J Grad Med Educ ; 14(3): 281-288, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35754636

ABSTRACT

Background: Graduate medical education (GME) program leaders struggle to incorporate quality measures in the ambulatory care setting, leading to knowledge gaps on how to provide feedback to residents and programs. While nationally collected quality of care data are available, their reliability for individual resident learning and for GME program improvement is understudied. Objective: To examine the reliability of the Healthcare Effectiveness Data and Information Set (HEDIS) clinical performance measures in family medicine and internal medicine GME programs and to determine whether HEDIS measures can inform residents and their programs about their quality of care. Methods: From 2014 to 2017, we collected HEDIS measures from 566 residents in 8 family medicine and internal medicine programs under one sponsoring institution. Intraclass correlation was performed to establish patient sample sizes required for 0.70 and 0.80 reliability levels at the resident and program levels. Differences between the patient sample sizes required for reliable measurement and the actual patients cared for by residents were calculated. Results: The highest reliability levels for residents (0.88) and programs (0.98) were found for the most frequently available HEDIS measure, colorectal cancer screening. At the GME program level, 87.5% of HEDIS measures had sufficient sample sizes for reliable measurement at a reliability of 0.7 and 75.0% at 0.8. Most resident-level measurements were found to be less reliable. Conclusions: GME programs may reliably evaluate HEDIS performance pooled at the program level, but less so at the resident level due to patient volume.
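The sample-size logic described above follows the Spearman-Brown relationship between single-observation reliability and aggregated reliability; a small sketch with an illustrative single-patient reliability value (not the study's measure-specific estimates):

```python
# Sketch: patients needed per resident to reach a target reliability (Spearman-Brown).
def patients_needed(single_patient_reliability: float, target: float) -> float:
    """Number of patients over which a quality measure must be aggregated."""
    return (target / (1 - target)) * ((1 - single_patient_reliability) / single_patient_reliability)

icc = 0.05  # illustrative reliability of a single patient's measure for one resident
for target in (0.70, 0.80):
    print(target, round(patients_needed(icc, target)))
```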


Subject(s)
Medical Education, Internship and Residency, Graduate Medical Education, Family Practice, Humans, Reproducibility of Results, United States
17.
J Grad Med Educ ; 14(3): 359-364, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35754650

ABSTRACT

Background: The COVID-19 pandemic has affected every facet of American health care, including graduate medical education (GME). Prior studies show that COVID-19 resulted in reduced opportunities for elective surgeries, lower patient volumes, altered clinical rotations, increased reliance on telemedicine, and dependence on virtual didactic conferences. These studies, however, focused on individual specialties. Because the Accreditation Council for Graduate Medical Education (ACGME) routinely collects information from all programs, it has an obligation to use these data to inform the profession about important trends affecting GME. Objective: To describe how the pandemic influenced resident training across all specialty programs in the areas of clinical experiences, telemedicine, and extended trainings. Methods: The ACGME validated a questionnaire to supplement the Annual Update reporting requirements of all accredited programs. The questionnaire was tested to ensure easy interpretation of instructions, question wording, and response options, and to assess respondent burden. The questionnaire was administered through the Accreditation Data System, which is a password-protected online environment for communication between the ACGME and ACGME-accredited programs. Results: We received a response rate of 99.6% (11,290 of 12,420). Emergency medicine, family medicine, internal medicine, and obstetrics and gynecology programs experienced the most significant impact. Most programs reported reduced opportunities for in-person didactics and ambulatory continuity rotations. Hospital-based programs on the "frontline" of COVID-19 care relied least on telemedicine. Family medicine and internal medicine programs accounted for the greatest number of extended trainings. Conclusions: COVID-19 has affected GME training, but its consequences are unevenly distributed across program types and regions of the country.


Subject(s)
COVID-19, Internship and Residency, Accreditation, Graduate Medical Education, Fellowships and Scholarships, Humans, Pandemics, United States
18.
Med Teach ; 44(11): 1228-1236, 2022 11.
Article in English | MEDLINE | ID: mdl-35635737

ABSTRACT

PURPOSE: Clinical competency committees (CCCs) assess residents' performance on their specialty-specific milestones; however, there is no 'one-size-fits-all' blueprint for accomplishing this. Thus, CCCs have had to develop their own procedures. The goal of this study was to examine these efforts to assist new programs embarking on this venture and established programs looking to improve their CCC practices and processes. METHODS: We purposefully sampled CCCs across multiple specialties and institutions. Data from three sources were triangulated: (1) an online demographic survey, (2) ethnographic observations of CCC meetings, and (3) post-observation semi-structured interviews with the program director and/or CCC chairperson. Template analysis was used to build the coding structure. RESULTS: Sixteen observations were completed with 15 different CCCs at 9 institutions. Three main thematic categories that impact the operations of CCCs emerged: (1) membership structure and members' roles, (2) roles of the CCC in residency, and (3) CCC processes, including trainee presentation to the committee and decision-making. While effective practices were observed, substantial variation existed in all three thematic areas. CONCLUSIONS: While CCCs used some known effective practices, substantial variation in structure and processes was notable across CCCs. Future work should explore the impact of this variation on educational outcomes.


Subject(s)
Clinical Competence, Internship and Residency, Humans, Cultural Anthropology, Graduate Medical Education
19.
Acad Med ; 97(8): 1128-1136, 2022 08 01.
Article in English | MEDLINE | ID: mdl-35294414

ABSTRACT

Undergraduate and graduate medical education have long embraced uniqueness and variability in curricular and assessment approaches. Some of this variability is justified (warranted or necessary variation), but a substantial portion represents unwarranted variation. A primary tenet of outcomes-based medical education is ensuring that all learners acquire essential competencies to be publicly accountable to meet societal needs. Unwarranted variation in curricular and assessment practices contributes to suboptimal and variable educational outcomes and, by extension, risks graduates delivering suboptimal health care quality. Medical education can use lessons from the decades of study on unwarranted variation in health care as part of efforts to continuously improve the quality of training programs. To accomplish this, medical educators will first need to recognize the difference between warranted and unwarranted variation in both clinical care and educational practices. Addressing unwarranted variation will require cooperation and collaboration between multiple levels of the health care and educational systems using a quality improvement mindset. These efforts at improvement should acknowledge that some aspects of variability are not scientifically informed and do not support desired outcomes or societal needs. This perspective examines the correlates of unwarranted variation of clinical care in medical education and the need to address the interdependency of unwarranted variation occurring between clinical and educational practices. The authors explore the challenges of variation across multiple levels: community, institution, program, and individual faculty members. The article concludes with recommendations to improve medical education by embracing the principles of continuous quality improvement to reduce the harmful effect of unwarranted variation.


Subject(s)
Medical Education, Curriculum, Delivery of Health Care, Graduate Medical Education, Humans, Quality Improvement, Quality of Health Care
20.
Acad Med ; 97(5): 643-648, 2022 05 01.
Article in English | MEDLINE | ID: mdl-35020616

ABSTRACT

The graduate medical education (GME) system is heavily subsidized by the public in return for producing physicians who meet society's needs. Under the terms of this implicit social contract, decisions about how this funding is allocated are deferred to the individual training sites. Institutions receiving public funding face potential conflicts of interest, which have at times prioritized institutional purposes and needs over societal needs, highlighting that there is little public accountability for how such funding is used. The cost and institutional burden of assessing many fundamental GME outcomes, such as specialty, geographic physician distribution, training-imprinted cost behaviors, and populations served, could be mitigated as data sources and methods for assessing GME outcomes and guiding training improvement already exist. This new capacity to assess system-level outcomes could help institutions and policymakers strategically address the greatest public needs. Measurement of educational outcomes can also be used to guide training improvement at every level of the educational system (i.e., the individual trainee, individual teaching institution, and collective GME system levels). There are good examples of institutions, states, and training consortia that are already assessing and using GME outcomes in these ways. The ultimate outcome could be a GME system that better meets the needs of society and better honors what is now only an implicit social contract.


Subject(s)
Internship and Residency, Physicians, Graduate Medical Education, Humans, United States