Results 1 - 20 of 36
1.
Acad Med ; 99(4S Suppl 1): S64-S70, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38166211

ABSTRACT

ABSTRACT: Precision education (PE) systematically leverages data and advanced analytics to inform educational interventions that, in turn, promote meaningful learner outcomes. PE does this by incorporating analytic results back into the education continuum through continuous feedback cycles. These data-informed sequences of planning, learning, assessing, and adjusting foster competence and adaptive expertise. PE cycles occur at individual (micro), program (meso), or system (macro) levels. This article focuses on program- and system-level PE. Data for PE come from a multitude of sources, including learner assessment and program evaluation. The authors describe the link between these data and the vital role evaluation plays in providing evidence of educational effectiveness. By including prior program evaluation research supporting this claim, the authors illustrate the link between training programs and patient outcomes. They also describe existing national reports providing feedback to programs and institutions, as well as 2 emerging, multiorganization program- and system-level PE efforts. The challenges encountered by those implementing PE and the continuing need to advance this work illuminate the necessity for increased cross-disciplinary collaborations and a national cross-organizational data-sharing effort. Finally, the authors propose practical approaches for funding a national initiative in PE as well as potential models for advancing the field of PE. Lessons learned from successes by others illustrate the promise of these recommendations.


Subject(s)
Competency-Based Education, Curriculum, Humans, Competency-Based Education/methods, Program Evaluation
2.
Acad Med ; 99(4S Suppl 1): S48-S56, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38207084

ABSTRACT

PURPOSE: The era of precision education is increasingly leveraging electronic health record (EHR) data to assess residents' clinical performance, but what EHR-based resident performance metrics truly assess is not fully understood. For instance, there is limited understanding of how EHR-based measures account for the influence of the team on an individual's performance, or, conversely, how an individual contributes to team performance. This study aims to elaborate on how the theoretical understandings of supportive and collaborative interdependence are captured in residents' EHR-based metrics. METHOD: Using a mixed methods study design, the authors conducted a secondary analysis of 5 existing quantitative and qualitative datasets used in previous EHR studies to investigate how aspects of interdependence shape the ways that team-based care is provided to patients. RESULTS: Quantitative analyses of 16 EHR-based metrics found variability in faculty and resident performance (both between and within resident). Qualitative analyses revealed that faculty lack awareness of their own EHR-based performance metrics, which limits their ability to act interdependently with residents in an evidence-informed fashion. The lens of interdependence elucidates how resident practice patterns develop across residency training, shifting from supportive to collaborative interdependence over time. Joint displays merging the quantitative and qualitative analyses showed that residents are aware of variability in faculty's practice patterns and that viewing resident EHR-based measures without accounting for the interdependence of residents with faculty is problematic, particularly within the framework of precision education.
CONCLUSIONS: To prepare for this new paradigm of precision education, educators need to develop and evaluate theoretically robust models that measure interdependence in EHR-based metrics, affording more nuanced interpretation of such metrics when assessing residents throughout training.


Subject(s)
Electronic Health Records, Internship and Residency, Humans, Clinical Competence, Educational Status
3.
Acad Med ; 99(4S Suppl 1): S14-S20, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38277444

ABSTRACT

ABSTRACT: The goal of medical education is to produce a physician workforce capable of delivering high-quality equitable care to diverse patient populations and communities. To achieve this aim amidst explosive growth in medical knowledge and increasingly complex medical care, a system of personalized and continuous learning, assessment, and feedback for trainees and practicing physicians is urgently needed. In this perspective, the authors build on prior work to advance a conceptual framework for such a system: precision education (PE). PE is a system that uses data and technology to transform lifelong learning by improving personalization, efficiency, and agency at the individual, program, and organization levels. PE "cycles" start with data inputs proactively gathered from new and existing sources, including assessments, educational activities, electronic medical records, patient care outcomes, and clinical practice patterns. Through technology-enabled analytics, insights are generated to drive precision interventions. At the individual level, such interventions include personalized just-in-time educational programming. Coaching is essential to provide feedback and increase learner participation and personalization. Outcomes are measured using assessment and evaluation of interventions at the individual, program, and organizational levels, with ongoing adjustment for repeated cycles of improvement. PE is rooted in patient, health system, and population data; promotes value-based care and health equity; and generates an adaptive learning culture. The authors suggest fundamental principles for PE, including promoting equity in structures and processes, learner agency, and integration with workflow (harmonization).
Finally, the authors explore the immediate need to develop consensus-driven standards: rules of engagement between people, products, and entities that interact in these systems to ensure interoperability, data sharing, replicability, and scale of PE innovations.


Subject(s)
Medical Education, Medicine, Humans, Continuing Education, Educational Status, Learning
5.
Acad Med ; 99(4S Suppl 1): S7-S13, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38109659

ABSTRACT

ABSTRACT: Previous eras of assessment in medical education have been defined by how assessment is done, from knowledge exams popularized in the 1960s to the emergence of work-based assessment in the 1990s to current efforts to integrate multiple types and sources of performance data through programmatic assessment. Each of these eras was a response to why assessment was performed (e.g., assessing medical knowledge with exams; assessing communication, professionalism, and systems competencies with work-based assessment). Despite this evolution, current evidence highlights that trainees graduate with foundational gaps in the ability to provide high-quality care to patients presenting with common problems, and training program leaders report that they graduate trainees they would not trust to care for themselves or their loved ones. In this article, the authors argue that the next era of assessment should be defined by why assessment is done: to ensure high-quality, equitable care. Assessment should focus on ensuring that graduates possess the knowledge, skills, attitudes, and adaptive expertise to meet the needs of all patients, and that they can do so in an equitable fashion. The authors explore 2 patient-focused assessment approaches that could help realize the promise of this envisioned era: entrustable professional activities (EPAs) and resident-sensitive quality measures (RSQMs)/TRainee Attributable and Automatable Care Evaluations in Real-time (TRACERs). These examples illustrate how the envisioned next era of assessment can leverage existing and new data to provide precision assessment, delivering formative and summative feedback to trainees so that their learning prepares them to achieve high-quality, equitable patient outcomes.


Subject(s)
Internship and Residency, Quality of Health Care, Humans, Curriculum, Competency-Based Education, Patient Care, Clinical Competence, Graduate Medical Education
6.
Acad Med ; 99(4S Suppl 1): S30-S34, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38113440

ABSTRACT

ABSTRACT: Precision education (PE) uses personalized educational interventions to empower trainees and improve learning outcomes. While PE has the potential to represent a paradigm shift in medical education, a theoretical foundation to guide the effective implementation of PE strategies has not yet been described. Here, the authors introduce a theoretical foundation for the implementation of PE, integrating key learning theories with the digital tools that allow them to be operationalized. Specifically, the authors describe how the master adaptive learner (MAL) model, transformative learning theory, and self-determination theory can be harnessed in conjunction with nudge strategies and audit and feedback dashboards to drive learning and meaningful behavior change. The authors also provide practical examples of these theories and tools in action by describing precision interventions already in use at one academic medical center, concretizing PE's potential in the current clinical environment. These examples illustrate how a firm theoretical grounding allows educators to most effectively tailor PE interventions to fit individual learners' needs and goals, facilitating efficient learning and ultimately improving patient and health system outcomes.


Subject(s)
Medical Education, Learning, Humans, Competency-Based Education, Personal Autonomy, Clinical Competence
7.
Perspect Med Educ ; 12(1): 149-159, 2023.
Article in English | MEDLINE | ID: mdl-37215538

ABSTRACT

Competency-based medical education (CBME) is an outcomes-based approach to education and assessment that focuses on what competencies trainees need to learn in order to provide effective patient care. Despite this goal of providing quality patient care, trainees rarely receive measures of their clinical performance. This is problematic because defining a trainee's learning progression requires measuring their clinical performance. Traditional clinical performance measures (CPMs) are often met with skepticism from trainees given their poor individual-level attribution. Resident-sensitive quality measures (RSQMs) are attributable to individuals, but lack the expeditiousness needed to deliver timely feedback and can be difficult to automate at scale across programs. In this eye opener, the authors present a conceptual framework for a new type of measure - TRainee Attributable & Automatable Care Evaluations in Real-time (TRACERs) - attuned to both automation and trainee attribution as the next evolutionary step in linking education to patient care. TRACERs have five defining characteristics: meaningful (for patient care and trainees), attributable (sufficiently to the trainee of interest), automatable (minimal human input once fully implemented), scalable (across electronic health records [EHRs] and training environments), and real-time (amenable to formative educational feedback loops). Ideally, TRACERs optimize all five characteristics to the greatest degree possible. TRACERs are uniquely focused on measures of clinical performance that are captured in the EHR, whether routinely collected or generated using sophisticated analytics, and are intended to complement (not replace) other sources of assessment data. TRACERs have the potential to contribute to a national system of high-density, trainee-attributable, patient-centered outcome measures.


Subject(s)
Graduate Medical Education, Internship and Residency, Humans, Educational Measurement, Learning, Feedback
8.
Acad Med ; 98(7): 775-781, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37027222

ABSTRACT

Medical schools and residency programs are increasingly incorporating personalization of content, pathways, and assessments to align with a competency-based model. Yet such efforts face challenges involving large amounts of data, sometimes struggling to deliver insights in a timely fashion for trainees, coaches, and programs. In this article, the authors argue that the emerging paradigm of precision medical education (PME) may ameliorate some of these challenges. However, PME lacks a widely accepted definition and a shared model of guiding principles and capacities, limiting widespread adoption. The authors propose defining PME as a systematic approach that integrates longitudinal data and analytics to drive precise educational interventions that address each individual learner's needs and goals in a continuous, timely, and cyclical fashion, ultimately improving meaningful educational, clinical, or system outcomes. Borrowing from precision medicine, they offer an adapted shared framework: the P4 medical education framework, under which PME should (1) take a proactive approach to acquiring and using trainee data; (2) generate timely personalized insights through precision analytics (including artificial intelligence and decision-support tools); (3) design precision educational interventions (learning, assessment, coaching, pathways) in a participatory fashion, with trainees at the center as co-producers; and (4) ensure interventions are predictive of meaningful educational, professional, or clinical outcomes.
Implementing PME will require new foundational capacities: flexible educational pathways and programs responsive to PME-guided dynamic and competency-based progression; comprehensive longitudinal data on trainees linked to educational and clinical outcomes; shared development of requisite technologies and analytics to effect educational decision-making; and a culture that embraces a precision approach, with research to gather validity evidence for this approach and development efforts targeting new skills needed by learners, coaches, and educational leaders. Anticipating pitfalls in the use of this approach will be important, as will ensuring it deepens, rather than replaces, the interaction of trainees and their coaches.


Subject(s)
Medical Education, Internship and Residency, Humans, Artificial Intelligence, Learning, Curriculum, Clinical Competence
9.
Acad Med ; 98(9): 1018-1021, 2023 09 01.
Article in English | MEDLINE | ID: mdl-36940395

ABSTRACT

PROBLEM: Reviewing residency application narrative components is time intensive and has contributed to nearly half of applications not receiving holistic review. The authors developed a natural language processing (NLP)-based tool to automate review of applicants' narrative experience entries and predict interview invitation. APPROACH: Experience entries (n = 188,500) were extracted from 6,403 residency applications across 3 application cycles (2017-2019) at 1 internal medicine program, combined at the applicant level, and paired with the interview invitation decision (n = 1,224 invitations). NLP identified important words (or word pairs) with term frequency-inverse document frequency, which were used to predict interview invitation using logistic regression with L1 regularization. Terms remaining in the model were analyzed thematically. Logistic regression models were also built using structured application data and a combination of NLP and structured data. Model performance was evaluated on never-before-seen data using area under the receiver operating characteristic and precision-recall curves (AUROC, AUPRC). OUTCOMES: The NLP model had an AUROC of 0.80 (vs chance decision of 0.50) and AUPRC of 0.49 (vs chance decision of 0.19), showing moderate predictive strength. Phrases indicating active leadership, research, or work in social justice and health disparities were associated with interview invitation. The model's detection of these key selection factors demonstrated face validity. Adding structured data to the model significantly improved prediction (AUROC 0.92, AUPRC 0.73), as expected given reliance on such metrics for interview invitation. NEXT STEPS: This model represents a first step in using NLP-based artificial intelligence tools to promote holistic residency application review. The authors are assessing the practical utility of using this model to identify applicants screened out using traditional metrics. 
Generalizability must be determined through model retraining and evaluation at other programs. Work is ongoing to thwart model "gaming," improve prediction, and remove unwanted biases introduced during model training.
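The pipeline this abstract describes, TF-IDF terms from free-text experience entries feeding an L1-regularized logistic regression, can be sketched in a few lines with scikit-learn. The entries and labels below are invented toy data, not the study's, and the held-out evaluation and structured-data features the study used are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy experience entries paired with an invitation label (illustrative only).
entries = [
    "led a student-run free clinic serving uninsured patients",
    "research assistant studying health disparities in diabetes care",
    "volunteered at a community health fair",
    "shadowed physicians in the emergency department",
]
invited = [1, 1, 0, 0]

# TF-IDF over words and word pairs -> logistic regression with L1 penalty,
# which zeroes out uninformative terms (the "important words" selection).
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(penalty="l1", solver="liblinear"),
)
model.fit(entries, invited)
probs = model.predict_proba(entries)[:, 1]  # predicted invitation probability
```

In the study, model quality on never-before-seen applicants was summarized with AUROC and AUPRC rather than raw probabilities.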


Subject(s)
Internship and Residency, Humans, Natural Language Processing, Artificial Intelligence, Personnel Selection, Leadership
10.
Acad Med ; 98(2): 180-187, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36538695

ABSTRACT

The transition from undergraduate medical education (UME) to graduate medical education (GME) constitutes a complex system with important implications for learner progression and patient safety. The transition is currently dysfunctional, requiring students and residency programs to spend significant time, money, and energy on the process. Applications and interviews continue to increase despite stable match rates. Although many in the medical community acknowledge the problems with the UME-GME transition and learners have called for prompt action to address these concerns, the underlying causes are complex and have defied easy fixes. This article describes the work of the Coalition for Physician Accountability's Undergraduate Medical Education to Graduate Medical Education Review Committee (UGRC) to apply a quality improvement approach and systems thinking to explore the underlying causes of dysfunction in the UME-GME transition. The UGRC performed a root cause analysis using the 5 whys and an Ishikawa (or fishbone) diagram to deeply explore problems in the UME-GME transition. The root causes of problems identified include culture, costs and limited resources, bias, systems, lack of standards, and lack of alignment. Using the principles of systems thinking (components, connections, and purpose), the UGRC considered interactions among the root causes and developed recommendations to improve the UME-GME transition. Several of the UGRC's recommendations stemming from this work are explained. Sustained monitoring will be necessary to ensure interventions move the process forward to better serve applicants, programs, and the public good.


Subject(s)
Undergraduate Medical Education, Internship and Residency, Humans, Root Cause Analysis, Graduate Medical Education, Students
11.
Acad Med ; 98(3): 337-341, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36484555

ABSTRACT

PROBLEM: Residency program directors use clerkship grades for high-stakes selection decisions despite substantial variability in grading systems and distributions. The authors apply clustering techniques from data science to identify groups of schools whose grading distributions were statistically similar in the internal medicine clerkship. APPROACH: Grading systems (e.g., honors/pass/fail) and distributions (i.e., percent of students in each grade tier) were tabulated for the internal medicine clerkship at U.S. MD-granting medical schools by manually reviewing Medical Student Performance Evaluations (MSPEs) in the 2019 and 2020 residency application cycles. Grading distributions were analyzed using k-means cluster analysis, with the optimal number of clusters selected using model fit indices. OUTCOMES: Among the 145 medical schools with available MSPE data, 64 distinct grading systems were reported. Among the 135 schools reporting a grading distribution, the median percent of students receiving the highest and lowest tier grade was 32% (range: 2%-66%) and 2% (range: 0%-91%), respectively. A 4-cluster solution fit best (η² = 0.8): cluster 1 (45% [highest grade tier]-45% [middle tier]-10% [lowest tier], n = 64 [47%] schools), cluster 2 (25%-30%-45%, n = 40 [30%] schools), cluster 3 (20%-75%-5%, n = 25 [19%] schools), and cluster 4 (15%-25%-25%-25%-10%, n = 6 [4%] schools). The findings suggest internal medicine clerkship grading systems may be more comparable across institutions than previously thought. NEXT STEPS: The authors will prospectively review reported clerkship grading approaches across additional specialties and are conducting a mixed-methods analysis, incorporating a sequential explanatory model, to interview stakeholder groups on the use of the patterns identified.
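The clustering step described above can be sketched with scikit-learn's k-means. The distributions below are toy vectors loosely echoing the reported cluster centers, and k is fixed at 3 for illustration, whereas the study chose the number of clusters from model fit indices.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one school's grading distribution:
# [% highest tier, % middle tier, % lowest tier] (toy values, sum to 100).
distributions = np.array([
    [45, 45, 10], [47, 43, 10], [44, 46, 10],   # resembles reported cluster 1
    [25, 30, 45], [24, 31, 45],                 # resembles reported cluster 2
    [20, 75, 5],  [21, 74, 5],                  # resembles reported cluster 3
], dtype=float)

# Group schools whose distributions are statistically similar.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(distributions)
labels = km.labels_  # cluster assignment per school
```

In practice one would refit for several values of k and compare a fit index (e.g., explained variance, as with the reported η²) before settling on a solution.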


Subject(s)
Clinical Clerkship, Medical Students, Humans, Educational Measurement/methods, Medical Schools, Data Science
12.
Acad Med ; 98(2): 158-161, 2023 02 01.
Article in English | MEDLINE | ID: mdl-35263298

ABSTRACT

The transition from medical student to resident is a pivotal step in the medical education continuum. For applicants, successfully obtaining a residency position is the actualization of a dream after years of training and has life-changing professional and financial implications. These high stakes contribute to a residency application and Match process in the United States that is increasingly complex and dysfunctional, and that does not effectively serve applicants, residency programs, or the public good. In July 2020, the Coalition for Physician Accountability (Coalition) formed the Undergraduate Medical Education-Graduate Medical Education Review Committee (UGRC) to critically assess the overall transition to residency and offer recommendations to solve the growing challenges in the system. In this Invited Commentary, the authors reflect on their experience as the trainee representatives on the UGRC. They emphasize the importance of trainee advocacy in medical education change efforts; reflect on opportunities, concerns, and tensions with the final UGRC recommendations (released in August 2021); discuss factors that may constrain implementation; and call for the medical education community-and the Coalition member organizations in particular-to accelerate fully implementing the UGRC recommendations. By seizing the momentum created by the UGRC, the medical education community can create a reimagined transition to residency that reshapes its approach to training a more diverse, competent, and growth-oriented physician workforce.


Subject(s)
Undergraduate Medical Education, Medical Education, Internship and Residency, Humans, United States, Graduate Medical Education, Educational Measurement
13.
J Gen Intern Med ; 37(9): 2230-2238, 2022 07.
Article in English | MEDLINE | ID: mdl-35710676

ABSTRACT

BACKGROUND: Residents receive infrequent feedback on their clinical reasoning (CR) documentation. While machine learning (ML) and natural language processing (NLP) have been used to assess CR documentation in standardized cases, no studies have described similar use in the clinical environment. OBJECTIVE: Using Kane's validity framework, the authors developed and validated an ML model for automated assessment of CR documentation quality in residents' admission notes. DESIGN, PARTICIPANTS, MAIN MEASURES: Internal medicine residents' and subspecialty fellows' admission notes at one medical center from July 2014 to March 2020 were extracted from the electronic health record. Using a validated CR documentation rubric, the authors rated 414 notes for the ML development dataset. Notes were truncated to isolate the relevant portion; NLP software (cTAKES) extracted disease/disorder named entities, and human review generated CR terms. The final model had three input variables and classified notes as demonstrating low- or high-quality CR documentation. The ML model was applied to a retrospective dataset (9591 notes) for human validation and data analysis. Reliability between human and ML ratings was assessed on 205 of these notes with Cohen's kappa. CR documentation quality by post-graduate year (PGY) was evaluated by the Mantel-Haenszel test of trend. KEY RESULTS: The top-performing logistic regression model had an area under the receiver operating characteristic curve of 0.88, a positive predictive value of 0.68, and an accuracy of 0.79. Cohen's kappa was 0.67. Of the 9591 notes, 31.1% demonstrated high-quality CR documentation; quality increased from 27.0% (PGY1) to 31.0% (PGY2) to 39.0% (PGY3) (p < .001 for trend). Validity evidence was collected in each domain of Kane's framework (scoring, generalization, extrapolation, and implications).
CONCLUSIONS: The authors developed and validated a high-performing ML model that classifies CR documentation quality in resident admission notes in the clinical environment-a novel application of ML and NLP with many potential use cases.
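One evaluation step named above, agreement between human and ML ratings via Cohen's kappa, can be reproduced on toy labels (the Mantel-Haenszel trend test is omitted). The ratings below are invented, not the study's 205-note sample.

```python
from sklearn.metrics import cohen_kappa_score

# Toy ratings: 1 = high-quality CR documentation, 0 = low-quality.
human_ratings = [1, 0, 1, 1, 0, 0, 1, 0]
model_ratings = [1, 0, 1, 0, 0, 0, 1, 1]

# Kappa corrects raw agreement (6/8 here) for agreement expected by chance.
kappa = cohen_kappa_score(human_ratings, model_ratings)
```

With both raters marking half the notes high-quality, chance agreement is 0.5, so the observed 0.75 raw agreement yields a kappa of 0.5, interpreted on the usual slight-to-almost-perfect scale.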


Subject(s)
Clinical Reasoning, Documentation, Electronic Health Records, Humans, Machine Learning, Natural Language Processing, Reproducibility of Results, Retrospective Studies
14.
J Gen Intern Med ; 37(3): 507-512, 2022 02.
Article in English | MEDLINE | ID: mdl-33945113

ABSTRACT

BACKGROUND: Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Of the existing tools, the IDEA assessment tool includes a robust assessment of clinical reasoning documentation focusing on four elements (interpretive summary, differential diagnosis, and explanation of reasoning for lead and alternative diagnoses), but it lacks descriptive anchors, threatening its reliability. OBJECTIVE: Our goal was to develop a valid and reliable assessment tool for clinical reasoning documentation, building on the IDEA assessment tool. DESIGN, PARTICIPANTS, AND MAIN MEASURES: The Revised-IDEA assessment tool was developed by four clinician educators through iterative review of admission notes written by medicine residents and fellows and subsequently piloted with additional faculty to ensure response process validity. A random sample of 252 notes from July 2014 to June 2017 written by 30 trainees across several chief complaints was rated. Three raters rated 20% of the notes to demonstrate internal structure validity. A quality cut-off score was determined using Hofstee standard setting. KEY RESULTS: The Revised-IDEA assessment tool includes the same four domains as the IDEA assessment tool with more detailed descriptive prompts, new Likert scale anchors, and a score range of 0-10. Intraclass correlation was high for the notes rated by three raters, 0.84 (95% CI 0.74-0.90). Scores ≥6 were determined to demonstrate high-quality clinical reasoning documentation. Only 53% of notes (134/252) were high-quality. CONCLUSIONS: The Revised-IDEA assessment tool is reliable and easy to use for feedback on clinical reasoning documentation in resident and fellow admission notes, with descriptive anchors that facilitate a shared mental model for feedback.


Subject(s)
Clinical Competence, Clinical Reasoning, Documentation, Feedback, Humans, Psychological Models, Reproducibility of Results
16.
Acad Med ; 96(11S): S54-S61, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34348383

ABSTRACT

PURPOSE: Residency programs face overwhelming numbers of residency applications, limiting holistic review. Artificial intelligence techniques have been proposed to address this challenge but have not yet been built. Here, a multidisciplinary team sought to develop and validate a machine learning (ML)-based decision support tool (DST) for residency applicant screening and review. METHOD: Categorical applicant data from the 2018, 2019, and 2020 residency application cycles (n = 8,243 applicants) at one large internal medicine residency program were downloaded from the Electronic Residency Application Service and linked to the outcome measure: interview invitation by human reviewers (n = 1,235 invites). An ML model using gradient boosting was designed using training data (80% of applicants) with over 60 applicant features (e.g., demographics, experiences, academic metrics). Model performance was validated on held-out data (20% of applicants). Sensitivity analysis was conducted without United States Medical Licensing Examination (USMLE) scores. An interactive DST incorporating the ML model was designed and deployed, providing applicant- and cohort-level visualizations. RESULTS: The ML model areas under the receiver operating characteristic and precision-recall curves were 0.95 and 0.76, respectively; these changed to 0.94 and 0.72, respectively, with removal of USMLE scores. Applicants' medical school information was an important driver of predictions, which had face validity based on the local selection process, but numerous predictors contributed. Program directors used the DST in the 2021 application cycle to select 20 applicants for interview who had initially been screened out during human review. CONCLUSIONS: The authors developed and validated an ML algorithm for predicting residency interview offers from numerous application elements with high performance, even when USMLE scores were removed.
Model deployment in a DST highlighted its potential for screening candidates and helped quantify and mitigate biases existing in the selection process. Further work will incorporate unstructured textual data through natural language processing methods.
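A minimal sketch of the modeling and evaluation approach described above: gradient boosting over structured applicant features, scored with AUROC and AUPRC on held-out data. The features, labels, and signal structure below are synthetic stand-ins, not ERAS data, and the real model used far more features and tuning.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 500 "applicants" with 8 numeric features; the
# invitation label depends on the first two features plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1).astype(int)

# 80/20 train/held-out split mirroring the study design.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

auroc = roc_auc_score(y_te, scores)                 # area under ROC curve
auprc = average_precision_score(y_te, scores)       # area under PR curve
```

A sensitivity analysis like the study's amounts to dropping a feature column (e.g., the USMLE analogue) and refitting, then comparing AUROC/AUPRC against the full model.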


Subject(s)
Decision Support Techniques, Internship and Residency, Machine Learning, Personnel Selection/methods, School Admission Criteria, Humans, United States
17.
J Grad Med Educ ; 13(3): 355-370, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34178261

ABSTRACT

BACKGROUND: Calls to reform the US resident selection process are growing, given increasing competition and inefficiencies of the current system. Though numerous reforms have been proposed, they have not been comprehensively cataloged. OBJECTIVE: This scoping review was conducted to characterize and categorize literature proposing systems-level reforms to the resident selection process. METHODS: Following Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, searches of Embase, MEDLINE, Scopus, and Web of Science databases were performed for references published from January 2005 to February 2020. Articles were included if they proposed reforms that were applicable or generalizable to all applicants, medical schools, or residency programs. An inductive approach to qualitative content analysis was used to generate codes and higher-order categories. RESULTS: Of 10,407 unique references screened, 116 met our inclusion criteria. Qualitative analysis generated 34 codes that were grouped into 14 categories according to the broad stages of resident selection: application submission, application review, interviews, and the Match. The most commonly proposed reforms were implementation of an application cap (n = 28), creation of a standardized program database (n = 21), utilization of standardized letters of evaluation (n = 20), and pre-interview screening (n = 13). CONCLUSIONS: This scoping review collated and categorized proposed reforms to the resident selection process, developing a common language and framework to facilitate national conversations and change.


Subject(s)
Internship and Residency, Delivery of Health Care, Mass Screening
18.
Obstet Gynecol ; 137(1): 164-169, 2021 01 01.
Article in English | MEDLINE | ID: mdl-33278296

ABSTRACT

Holistic review of residency applications is touted as the gold standard for selection, yet vast application numbers leave programs reliant on screening using filters such as United States Medical Licensing Examination scores that do not reliably predict resident performance and may threaten diversity. Applicants struggle to identify which programs to apply to, and devote attention to these processes throughout most of the fourth year, distracting from their clinical education. In this perspective, educators across the undergraduate and graduate medical education continuum propose new models for student-program compatibility based on design thinking sessions with stakeholders in obstetrics and gynecology education from a broad range of training environments. First, we describe a framework for applicant-program compatibility based on applicant priorities and program offerings, including clinical training, academic training, practice setting, residency culture, personal life, and professional goals. Second, a conceptual model for applicant screening based on metrics, experiences, attributes, and alignment with program priorities is presented that might facilitate holistic review. We call for design and validation of novel metrics, such as situational judgment tests for professionalism. Together, these steps could improve the transparency, efficiency and fidelity of the residency application process. The models presented can be adapted to the priorities and values of other specialties.


Subject(s)
Gynecology/education, Internship and Residency, Obstetrics/education, Personnel Selection/methods, Humans, Job Application, Mobile Applications, Models, Theoretical
19.
Acad Med ; 96(1): 50-55, 2021 01 01.
Article in English | MEDLINE | ID: mdl-32910007

ABSTRACT

The 2019 novel coronavirus (COVID-19) pandemic has led to dramatic changes in the 2020 residency application cycle, including halting away rotations and delaying the application timeline. These stressors are laid on top of a resident selection process already under duress with exploding application and interview numbers-the latter likely to be exacerbated with the widespread shift to virtual interviewing. Leveraging their trainee perspective, the authors propose enforcing a cap on the number of interviews that applicants may attend through a novel interview ticket system (ITS). Specialties electing to participate in the ITS would select an evidence-based, specialty-specific interview cap. Applicants would then receive unique electronic tickets-equal in number to the cap-that would be given to participating programs at the time of an interview, when the tickets would be marked as used. The system would be self-enforcing and would ensure each interview represents genuine interest between applicant and program, while potentially increasing the number of interviews-and thus match rate-for less competitive applicants. Limitations of the ITS and alternative approaches for interview capping, including an honor code system, are also discussed. Finally, in the context of capped interview numbers, the authors emphasize the need for transparent preinterview data from programs to inform applicants and their advisors on which interviews to attend, learning from prior experiences and studies on virtual interviewing, adherence to best practices for interviewing, and careful consideration of how virtual interviews may shift inequities in the resident selection process.


Subject(s)
COVID-19/epidemiology, Education, Medical, Graduate/methods, Internship and Residency/organization & administration, Pandemics, Personnel Selection, Students, Medical/statistics & numerical data, Humans
20.
J Grad Med Educ ; 12(5): 611-614, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33149831

ABSTRACT

BACKGROUND: There is emerging evidence that learners may be suboptimally prepared for the expectations of residency. To address these concerns, many medical schools are implementing residency preparation courses (RPCs). OBJECTIVE: We aimed to determine trainees' perceptions of their transition to residency and whether they felt that they benefited from participation in an RPC. METHODS: All residents and fellows at the University of Michigan (n = 1292) received an electronic survey in July 2018 that queried respondents on demographics, whether medical school had prepared them for intern year, and whether they had participated in an RPC. RESULTS: The response rate was 44% (563 of 1292), with even distribution across gender and postgraduate years (PGYs). Most (78%, 439 of 563) felt that medical school prepared them well for intern year. There were no differences in reported preparedness for intern year across PGY, age, gender, or specialty. Overall, 28% (156 of 563) of respondents had participated in an RPC; RPC participants endorsed feeling prepared for intern year more often than non-participants (85% [133 of 156] vs 70% [306 of 439], P = .029). Participation in longer RPCs was also associated with higher perceived preparedness for residency. CONCLUSIONS: This study found that residents from multiple specialties reported greater preparedness for residency if they participated in a medical school fourth-year RPC, with greater perceptions of preparedness for longer-duration RPCs, which may help to bridge the gap between medical school and residency.


Subject(s)
Curriculum, Education, Medical, Undergraduate/methods, Internship and Residency, Fellowships and Scholarships, Female, Humans, Male, Michigan, Schools, Medical, Students, Medical/psychology, Surveys and Questionnaires