Results 1 - 20 of 73
1.
Acad Med ; 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38579263

ABSTRACT

PURPOSE: Medical education should prepare learners for complex and evolving work, and should ideally include the Master Adaptive Learner (MAL) model, a set of meta-learning skills for continuous self-regulated learning. This study aimed to measure obstetrics and gynecology (OB/GYN) residents' MAL attributes, assess associations with burnout and resilience, and explore associations between learning tasks and MAL. METHOD: OB/GYN residents were surveyed electronically at an in-training examination in January 2022. The survey included demographic information, the 2-item Maslach Burnout Inventory, the 2-item Connor-Davidson Resilience Scale, 4 MAL items (e.g., "I take every opportunity to learn new things"), and questions about training and learning experiences. RESULTS: Of 5,761 residents, 3,741 respondents (65%) were included. A total of 1,478 of 3,386 (39%) demonstrated burnout (responded positively for burnout on the emotional exhaustion or depersonalization items). The mean (SD) Connor-Davidson Resilience Scale score was 6.4 (1.2) of a total possible score of 8. The mean (SD) MAL score was 16.3 (2.8) of a total possible score of 20. The MAL score was inversely associated with burnout, with lower MAL scores for residents with (mean [SD] MAL score, 16.0 [2.3]) vs without (mean [SD], 16.5 [2.4]) burnout (P < .001). Higher MAL scores were associated with higher resilience (R = 0.29, P < .001). Higher MAL scores were also associated with agreement with the statement "I feel that I was well prepared for my first year of residency" (R = 0.19, P < .001) and with a plan to complete subspecialty training after residency (mean [SD] of 16.6 [2.4] for "yes" and 16.2 [2.4] for "no"; P < .001). CONCLUSIONS: Residents who scored higher on MAL showed more resilience and less burnout. Whether less resilient, burned-out residents lacked the agency to achieve MAL status, or whether MAL behaviors filled the resiliency reservoir and protected against burnout, remains unclear.
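
The associations reported above (e.g., R = 0.29 between MAL and resilience scores) are Pearson correlations. As an illustrative sketch only, using invented toy scores rather than the study's data, the coefficient can be computed with the standard library:

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical toy data: MAL scores (max 20) and resilience scores (max 8)
mal = [16, 18, 14, 20, 15, 17, 12, 19]
resilience = [6, 7, 5, 8, 6, 7, 4, 7]
r = pearson_r(mal, resilience)
```

A real analysis would also report a p-value for r, which requires the t distribution and is omitted here.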

2.
Perspect Med Educ ; 13(1): 250-254, 2024.
Article in English | MEDLINE | ID: mdl-38680196

ABSTRACT

The use of the p-value in quantitative research, particularly the threshold of P < 0.05 for declaring "statistical significance," has long been a cornerstone of statistical analysis. However, this standard has been increasingly scrutinized for its potential to mislead the interpretation of findings, especially when practical significance, the number of comparisons, or the suitability of the statistical tests is not properly considered. In response to the controversy around use of p-values, the American Statistical Association published a statement in 2016 that challenged the research community to abandon the term "statistically significant." This stance has been echoed by leading scientific journals, which urge a substantial reduction in, or complete elimination of, reliance on p-values when reporting results. To provide guidance to researchers in health professions education, this paper offers a succinct overview of the definition of the p-value and the ongoing debate regarding its use. It reflects on the controversy by highlighting common pitfalls associated with p-value interpretation and usage, such as misinterpretation, overemphasis, and false dichotomization between "significant" and "non-significant" results. The paper also outlines specific recommendations for the effective use of p-values in statistical reporting, including reporting effect sizes, confidence intervals, and the null hypothesis, and conducting sensitivity analyses for appropriate interpretation. These considerations aim to guide researchers toward a more nuanced and informative use of p-values.


Subjects
Research Design, Humans, Data Interpretation, Statistical, Research Design/standards, Research Design/trends, Research Design/statistics & numerical data
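
The recommendation above, to report effect sizes and confidence intervals alongside p-values, can be sketched in code. This is a hedged illustration, not material from the paper: it computes Cohen's d with a common large-sample normal approximation for its standard error, and the two groups are invented data.

```python
import math
from statistics import mean, stdev

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def effect_size_report(a, b):
    """Cohen's d, an approximate 95% CI, and a two-sided p-value.

    Uses the large-sample normal approximation for the standard error of d;
    small samples call for an exact t-based method instead.
    """
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                          / (na + nb - 2))
    d = (mean(a) - mean(b)) / pooled_sd
    se = math.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
    ci = (d - 1.96 * se, d + 1.96 * se)
    p = 2.0 * (1.0 - normal_cdf(abs(d / se)))
    return d, ci, p

# Invented example groups: the CI conveys magnitude and precision,
# information a bare p-value cannot.
d, ci, p = effect_size_report([5, 6, 7, 8], [1, 2, 3, 4])
```

Reporting all three quantities, rather than only whether p crosses 0.05, is the kind of practice the paper recommends.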
3.
Acad Med ; 99(5): 518-523, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38285547

ABSTRACT

PROBLEM: Competency-based medical education is increasingly regarded as a preferred framework for physician training, but implementation is limited. U.S. residency programs remain largely time based, with variable assessments and limited opportunities for individualization. Gaps in graduates' readiness for unsupervised care have been noted across specialties. Logistical barriers and regulatory requirements constrain movement toward competency-based, time-variable (CBTV) graduate medical education (GME), despite its theoretical benefits. APPROACH: The authors describe a vision for CBTV-GME and an implementation model that can be applied across specialties. Termed "Promotion in Place" (PIP), the model relies on enhanced assessment, clear criteria for advancement, and flexibility to adjust individuals' responsibilities and time in training based on demonstrated competence. PIP allows a resident's graduation to be advanced or delayed accordingly. Residents deemed competent for early graduation can transition to attending physician status within their training institution and benefit from a period of "sheltered independence" until the standard graduation date. Residents who need extended time to achieve competency have graduation delayed to incorporate additional targeted education. OUTCOMES: A proposal to pilot the PIP model of CBTV-GME received funding through the American Medical Association's "Reimagining Residency" initiative in 2019. Ten of 46 residency programs in a multihospital system expressed interest and pursued initial planning. Seven programs withdrew for reasons including program director transitions, uncertainty about resident reactions, and the COVID-19 pandemic. Three programs petitioned their specialty boards for exemptions from time-based training. One program was granted the needed exemption and launched a PIP pilot, now in year 4, demonstrating the feasibility of implementing this model. Implementation tools and templates are described. 
NEXT STEPS: Larger-scale implementation with longer-term assessment is needed to evaluate the impact and generalizability of this CBTV-GME model.


Subjects
COVID-19, Clinical Competence, Competency-Based Education, Education, Medical, Graduate, Internship and Residency, Humans, Education, Medical, Graduate/methods, Competency-Based Education/methods, United States, COVID-19/epidemiology, SARS-CoV-2, Time Factors, Models, Educational
4.
Pediatrics ; 153(1)2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38105696

ABSTRACT

Between 0.25% and 3% of admissions to the neonatal intensive care unit (NICU), pediatric intensive care unit (PICU), and pediatric cardiac intensive care unit (PCICU) receive cardiopulmonary resuscitation (CPR). Most CPR events occur in patients <1 year old. The incidence of CPR is 10 times higher in the NICU than at birth. Therefore, optimizing the approach to CPR in hospitalized neonates and infants is important. At birth, the resuscitation of newborns is performed according to neonatal resuscitation guidelines. In older infants and children, resuscitation is performed according to pediatric resuscitation guidelines. Neonatal and pediatric guidelines differ in several important ways. There are no published recommendations to guide the transition from neonatal to pediatric guidelines. Therefore, hospitalized neonates and infants can be resuscitated using neonatal guidelines, pediatric guidelines, or a hybrid approach. This report summarizes the current neonatal and pediatric resuscitation guidelines, considers how to apply them to hospitalized neonates and infants, and identifies knowledge gaps and future priorities. The lack of strong scientific data makes it impossible to provide definitive recommendations on when to transition from neonatal to pediatric resuscitation guidelines. Therefore, it is up to health care teams and institutions to decide whether neonatal or pediatric guidelines are the best choice in a given location or situation, considering local circumstances, health care team preferences, and resource limitations.


Subjects
Cardiopulmonary Resuscitation, Emergency Medical Services, Infant, Child, Infant, Newborn, Humans, United States, Aged, Resuscitation, American Heart Association, Emergency Treatment, Academies and Institutes
5.
Med Decis Making ; 43(6): 680-691, 2023 08.
Article in English | MEDLINE | ID: mdl-37401184

ABSTRACT

BACKGROUND: For the representative problem of prostate cancer grading, we sought to simultaneously model both the continuous nature of the case spectrum and the decision thresholds of individual pathologists, allowing quantitative comparison of how they handle cases at the borderline between diagnostic categories. METHODS: Experts and pathology residents each rated a standardized set of prostate cancer histopathological images on the International Society of Urological Pathologists (ISUP) scale used in clinical practice. They diagnosed 50 histologic cases with a range of malignancy, including intermediate cases in which clear distinction was difficult. We report a statistical model showing the degree to which each individual participant can separate the cases along the latent decision spectrum. RESULTS: The slides were rated by 36 physicians in total: 23 ISUP pathologists and 13 residents. As anticipated, the cases showed a full continuous range of diagnostic severity. Cases ranged along a logit scale consistent with the consensus rating (consensus ISUP 1: mean -0.93 logits [95% confidence interval {CI}, -1.10 to -0.78]; ISUP 2: -0.19 logits [-0.27 to -0.12]; ISUP 3: 0.56 logits [0.06 to 1.06]; ISUP 4: 1.24 logits [1.10 to 1.38]; ISUP 5: 1.92 logits [1.80 to 2.04]). The best raters were able to meaningfully discriminate between all 5 ISUP categories, showing intercategory thresholds that were quantifiably precise and meaningful. CONCLUSIONS: We present a method that allows simultaneous quantification of both the confusability of a particular case and the skill with which raters can distinguish the cases. IMPLICATIONS: The technique generalizes beyond the current example to other clinical situations in which a diagnostician must impose an ordinal rating on a biological spectrum.
HIGHLIGHTS: Question: How can we quantify skill in visual diagnosis for cases that sit at the border between 2 ordinal categories, cases that are inherently difficult to diagnose? Findings: In this analysis of pathologists and residents rating prostate biopsy specimens, decision-aligned response models are calculated that show how pathologists would be likely to classify any given case on the diagnostic spectrum. Decision thresholds are shown to vary in their location and precision. Significance: Improving on traditional measures such as kappa and receiver-operating characteristic curves, this specialization of item response models allows better individual feedback to both trainees and pathologists, including better quantification of acceptable decision variation.


Subjects
Prostatic Neoplasms, Male, Humans, Neoplasm Grading, Uncertainty, Prostatic Neoplasms/diagnosis, Prostatic Neoplasms/pathology, Models, Statistical, Pathologists
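
The graded response model used here places each case at a location on the latent severity scale and gives each rater a set of ordered thresholds between adjacent ISUP categories. Below is a minimal sketch of the category probabilities; the threshold and discrimination values are invented, only loosely echoing the logit scale in the abstract:

```python
import math

def category_probs(theta, thresholds, discrimination=1.0):
    """Graded-response-model probabilities of each ordinal category.

    theta: case location on the latent severity scale (logits)
    thresholds: increasing rater thresholds between adjacent categories
    Returns P(category k) for k = 1 .. len(thresholds) + 1.
    """
    def p_above(b):
        # Probability the rater responds above threshold b (2PL logistic)
        return 1.0 / (1.0 + math.exp(-discrimination * (theta - b)))

    cum = [1.0] + [p_above(b) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# Hypothetical rater with 4 thresholds separating the 5 ISUP categories,
# scoring a case located at 0.56 logits (near the ISUP 3 consensus mean).
probs = category_probs(0.56, [-1.0, -0.2, 0.6, 1.2], discrimination=2.0)
```

A sharper (higher-discrimination) rater concentrates probability on the category containing the case; a flatter rater spreads it out, which is exactly the per-rater separation the model quantifies.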
6.
Perspect Med Educ ; 12(1): 282-293, 2023.
Article in English | MEDLINE | ID: mdl-37520509

ABSTRACT

Coaching is proposed as a means of improving the learning culture of medicine. By fostering trusting teacher-learner relationships, learners are encouraged to embrace feedback and make the most of failure. This paper posits that a cultural shift is necessary to fully harness the potential of coaching in graduate medical education. We introduce the deliberately developmental organization framework, a conceptual model focusing on three core dimensions: developmental communities, developmental aspirations, and developmental practices. These dimensions broaden the scope of coaching interactions. Implementing this organizational change within graduate medical education might be challenging, yet we argue that embracing deliberately developmental principles can embed coaching into everyday interactions and foster a culture in which discussing failure to maximize learning becomes acceptable. By applying the dimensions of developmental communities, aspirations, and practices, we present a six-principle roadmap towards transforming graduate medical education training programs into deliberately developmental organizations.

7.
Med Teach ; 45(6): 565-573, 2023 06.
Article in English | MEDLINE | ID: mdl-36862064

ABSTRACT

The use of Artificial Intelligence (AI) in medical education has the potential to facilitate complicated tasks and improve efficiency. For example, AI could help automate the assessment of written responses, or provide feedback on medical image interpretations with excellent reliability. While applications of AI in learning, instruction, and assessment are growing, further exploration is still required. Few conceptual or methodological guides exist for medical educators wishing to evaluate or engage in AI research. In this guide, we aim to: 1) describe practical considerations involved in reading and conducting studies in medical education using AI, 2) define basic terminology, and 3) identify which medical education problems and data are ideally suited for using AI.


Subjects
Artificial Intelligence, Education, Medical, Humans, Reproducibility of Results
8.
Acad Med ; 98(11): 1251-1260, 2023 11 01.
Article in English | MEDLINE | ID: mdl-36972129

ABSTRACT

Competency-based medical education (CBME) requires a criterion-referenced approach to assessment. However, despite best efforts to advance CBME, there remains an implicit, and at times, explicit, demand for norm-referencing, particularly at the junction of undergraduate medical education (UME) and graduate medical education (GME). In this manuscript, the authors perform a root cause analysis to determine the underlying reasons for continued norm-referencing in the context of the movement toward CBME. The root cause analysis consisted of 2 processes: (1) identification of potential causes and effects organized into a fishbone diagram and (2) identification of the 5 whys. The fishbone diagram identified 2 primary drivers: the false notion that measures such as grades are truly objective and the importance of different incentives for different key constituents. From these drivers, the importance of norm-referencing for residency selection was identified as a critical component. Exploration of the 5 whys further detailed the reasons for continuation of norm-referenced grading to facilitate selection, including the need for efficient screening in residency selection, dependence upon rank-order lists, perception that there is a best outcome to the match, lack of trust between residency programs and medical schools, and inadequate resources to support progression of trainees. Based on these findings, the authors argue that the implied purpose of assessment in UME is primarily stratification for residency selection. Because stratification requires comparison, a norm-referenced approach is needed. To advance CBME, the authors recommend reconsideration of the approach to assessment in UME to maintain the purpose of selection while also advancing the purpose of rendering a competency decision. Changing the approach will require a collaboration between national organizations, accrediting bodies, GME programs, UME programs, students, and patients/societies. 
Details are provided regarding the specific approaches required of each key constituent group.


Subjects
Education, Medical, Internship and Residency, Humans, Schools, Medical, Root Cause Analysis, Competency-Based Education, Education, Medical, Graduate, Clinical Competence
9.
J Contin Educ Health Prof ; 43(1): 52-59, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36849429

ABSTRACT

ABSTRACT: The information systems designed to support clinical care have evolved separately from those that support health professions education. This has resulted in a considerable digital divide between patient care and education, one that poorly serves practitioners and organizations, even as learning becomes ever more important to both. In this perspective, we advocate for the enhancement of existing health information systems so that they intentionally facilitate learning. We describe three well-regarded frameworks for learning that can point toward how health care information systems can best evolve to support learning. The Master Adaptive Learner model suggests ways that the individual practitioner can best organize their activities to ensure continual self-improvement. The Plan-Do-Study-Act (PDSA) cycle similarly proposes actions for improvement, but at the level of a health care organization's workflow. Senge's Five Disciplines of the Learning Organization, a more general framework from the business literature, serves to further inform how disparate information and knowledge flows can be managed for continual improvement. Our main thesis holds that these types of learning frameworks should inform the design and integration of information systems serving the health professions. An underutilized mediator of educational improvement is the ubiquitous electronic health record. The authors list learning analytic opportunities, including potential modifications of learning management systems and the electronic health record, that would enhance health professions education and support the shared goal of delivering high-quality, evidence-based health care.


Subjects
Electronic Health Records, Learning, Humans, Health Occupations, Knowledge
10.
Med Educ Online ; 28(1): 2178913, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36821373

ABSTRACT

Graduate medical education (GME) programs and Clinical Competency Committees (CCCs) have been evolving to monitor trainee progression using competency-based medical education principles and outcomes, though evidence suggests CCCs fall short of this goal. Challenges include evaluation data that are often incomplete, insufficient, poorly aligned with performance, conflicting, or of unknown quality, and CCCs struggle to organize, analyze, visualize, and integrate data elements across sources, collection methods, contexts, and time periods, which makes advancement decisions difficult. Learning analytics (LA) have significant potential to improve competency committee decision making, yet their use is not yet commonplace. LA is the interpretation of multiple data sources gathered on trainees to assess academic progress, predict future performance, and identify potential issues to be addressed with feedback and individualized learning plans. What distinguishes LA from other educational approaches is systematic data collection and advanced digital interpretation and visualization to inform educational systems. These data are necessary to: 1) fully understand educational contexts and guide improvements; 2) advance proficiency among stakeholders to make ethical and accurate summative decisions; and 3) clearly communicate methods, findings, and actionable recommendations for a range of educational stakeholders. The ACGME released the third edition of its CCC Guidebook for Programs in 2020, and the 2021 Milestones 2.0 supplement of the Journal of Graduate Medical Education (JGME Supplement) presented important papers that describe evaluation and implementation features of effective CCCs. Principles of LA underpin national GME outcomes data and training across specialties; however, little guidance currently exists on how GME programs can use LA to improve the CCC process.
Here we outline recommendations for implementing learning analytics for supporting decision making on trainee progress in two areas: 1) Data Quality and Decision Making, and 2) Educator Development.


Subjects
Internship and Residency, Humans, Clinical Competence, Education, Medical, Graduate, Competency-Based Education, Learning
11.
Acad Med ; 98(1): 88-97, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36576770

ABSTRACT

PURPOSE: Assessing expertise using psychometric models usually yields a measure of ability that is difficult to generalize to the complexity of diagnoses in clinical practice. However, using an item response modeling framework, it is possible to create a decision-aligned response model that captures a clinician's decision-making behavior on a continuous scale that fully represents competing diagnostic possibilities. In this proof-of-concept study, the authors demonstrate the necessary statistical conceptualization of this model using a specific electrocardiogram (ECG) example. METHOD: The authors collected a range of ECGs with elevated ST segments due to either ST-elevation myocardial infarction (STEMI) or pericarditis. Based on pilot data, 20 ECGs were chosen to represent a continuum from "definitely STEMI" to "definitely pericarditis," including intermediate cases in which the diagnosis was intentionally unclear. Emergency medicine and cardiology physicians rated these ECGs on a 5-point scale ("definitely STEMI" to "definitely pericarditis"). The authors analyzed these ratings using a graded response model showing the degree to which each participant could separate the ECGs along the diagnostic continuum. The authors compared these metrics with the discharge diagnoses noted on chart review. RESULTS: Thirty-seven participants rated the ECGs. As desired, the ECGs represented a range of phenotypes, including cases where participants were uncertain in their diagnosis. The response model showed that participants varied both in their propensity to diagnose one condition over another and in where they placed the thresholds between the 5 diagnostic categories. The most capable participants were able to meaningfully use all categories, with precise thresholds between categories. 
CONCLUSIONS: The authors present a decision-aligned response model that demonstrates the confusability of a particular ECG and the skill with which a clinician can distinguish 2 diagnoses along a continuum of confusability. These results have broad implications for testing and for learning to manage uncertainty in diagnosis.


Subjects
Cardiology, ST Elevation Myocardial Infarction, Humans, ST Elevation Myocardial Infarction/diagnosis, Uncertainty, Arrhythmias, Cardiac, Electrocardiography/methods
12.
Adv Health Sci Educ Theory Pract ; 27(5): 1383-1400, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36414880

ABSTRACT

Adaptive expertise represents the combination of both efficient problem-solving for clinical encounters with known solutions, as well as the ability to learn and innovate when faced with a novel challenge. Fostering adaptive expertise requires careful approaches to instructional design to emphasize deeper, more effortful learning. These teaching strategies are time-intensive, effortful, and challenging to implement in health professions education curricula. The authors are educators whose missions encompass the medical education continuum, from undergraduate through to organizational learning. Each has grappled with how to promote adaptive expertise development in their context. They describe themes drawn from educational experiences at these various learner levels to illustrate strategies that may be used to cultivate adaptive expertise. At Vanderbilt University School of Medicine, a restructuring of the medical school curriculum provided multiple opportunities to use specific curricular strategies to foster adaptive expertise development. The advantage for students in terms of future learning had to be rationalized against assessments that are more short-term in nature. In a consortium of emergency medicine residency programs, a diversity of instructional approaches was deployed to foster adaptive expertise within complex clinical learning environments. Here the value of adaptive expertise approaches must be balanced with the efficiency imperative in clinical care. At Mayo Clinic, an existing continuous professional development program was used to orient the entire organization towards an adaptive expertise mindset, with each individual making a contribution to the shift. The different contexts illustrate both the flexibility of the adaptive expertise conceptualization and the need to customize the educational approach to the developmental stage of the learner.
In particular, an important benefit of teaching to adaptive expertise is the opportunity to influence individual professional identity formation to ensure that clinicians of the future value deeper, more effortful learning strategies throughout their careers.


Subjects
Education, Medical, Humans, Curriculum, Learning, Problem Solving, Students
13.
J Gen Intern Med ; 37(9): 2280-2290, 2022 07.
Article in English | MEDLINE | ID: mdl-35445932

ABSTRACT

Assessing residents and clinical fellows is a high-stakes activity. Effective assessment is important throughout training so that identified areas of strength and weakness can guide educational planning to optimize outcomes. Assessment has historically been underemphasized, although medical education oversight organizations have strengthened requirements in recent years. Growing acceptance of competency-based medical education and its logical extension to competency-based time-variable (CB-TV) graduate medical education (GME) further highlights the importance of implementing effective evidence-based approaches to assessment. The Clinical Competency Committee (CCC) has emerged as a key programmatic structure in graduate medical education. In the context of launching a multi-specialty pilot of CB-TV GME in our health system, we have examined several programs' CCC processes and reviewed the relevant literature to propose enhancements to CCCs. We recommend that all CCCs fulfill three core goals, regularly applied to every GME trainee: (1) discern and describe the resident's developmental status to individualize education, (2) determine readiness for unsupervised practice, and (3) foster self-assessment ability. We integrate the literature and observations from GME program CCCs in our institutions to evaluate how current CCC processes support or undermine these goals. Obstacles and key enablers are identified. Finally, we recommend ways to achieve the stated goals, including the following: (1) assess and promote the development of competency in all trainees, not just outliers, through a shared model of assessment and competency-based advancement; (2) strengthen CCC assessment processes to determine trainee readiness for independent practice; and (3) promote trainee reflection and informed self-assessment. The importance of coaching for competency, robust workplace-based assessments, feedback, and co-production of individualized learning plans is emphasized.
Individual programs and their CCCs must strengthen assessment tools and frameworks to realize the potential of competency-oriented education.


Subjects
Clinical Competence, Internship and Residency, Competency-Based Education, Education, Medical, Graduate, Humans, Self-Assessment
15.
Acad Med ; 97(4): 593-602, 2022 04 01.
Article in English | MEDLINE | ID: mdl-35086115

ABSTRACT

PURPOSE: Using electrocardiogram (ECG) interpretation as an example of a widely taught diagnostic skill, the authors conducted a systematic review and meta-analysis to demonstrate how research evidence on instruction in diagnosis can be synthesized to facilitate improvement of educational activities (instructional modalities, instructional methods, and interpretation approaches), guide the content and specificity of such activities, and provide direction for research. METHOD: The authors searched PubMed/MEDLINE, Embase, Cochrane CENTRAL, PsycInfo, CINAHL, ERIC, and Web of Science databases through February 21, 2020, for empirical investigations of ECG interpretation training enrolling medical students, residents, or practicing physicians. They appraised study quality with the Medical Education Research Study Quality Instrument and pooled standardized mean differences (SMDs) using random-effects meta-analysis. RESULTS: Of 1,002 articles identified, 59 were included (enrolling 17,251 participants). Among 10 studies comparing instructional modalities, 8 compared computer-assisted and face-to-face instruction, with pooled SMD 0.23 (95% CI, 0.09, 0.36) indicating a small, statistically significant difference favoring computer-assisted instruction. Among 19 studies comparing instructional methods, 5 evaluated individual versus group training (pooled SMD -0.35 favoring group study [95% CI, -0.63, -0.06]), 4 evaluated peer-led versus faculty-led instruction (pooled SMD 0.38 favoring peer instruction [95% CI, 0.01, 0.74]), and 4 evaluated contrasting ECG features (e.g., QRS width) from 2 or more diagnostic categories versus routine examination of features within a single ECG or diagnosis (pooled SMD 0.23 not significantly favoring contrasting features [95% CI, -0.30, 0.76]). Eight studies compared ECG interpretation approaches, with pooled SMD 0.92 (95% CI, 0.48, 1.37) indicating a large, statistically significant effect favoring more systematic interpretation approaches.
CONCLUSIONS: Some instructional interventions appear to improve learning in ECG interpretation; however, many evidence-based instructional strategies are insufficiently investigated. The findings may have implications for future research and design of training to improve skills in ECG interpretation and other types of visual diagnosis.


Subjects
Computer-Assisted Instruction, Education, Medical, Physicians, Students, Medical, Computer-Assisted Instruction/methods, Electrocardiography, Humans
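
The pooled SMDs above come from a random-effects meta-analysis. Below is a minimal DerSimonian-Laird sketch; the per-study effects and variances are invented, not the review's data:

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI.

    effects: per-study standardized mean differences
    variances: per-study sampling variances of those SMDs
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Invented SMDs and sampling variances for four hypothetical studies
est, ci = pool_random_effects([0.10, 0.20, 0.30, 0.35],
                              [0.02, 0.03, 0.02, 0.04])
```

When the between-study variance tau-squared is zero, the result reduces to the fixed-effect (inverse-variance) estimate; larger heterogeneity widens the confidence interval.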
16.
Acad Med ; 97(4): 603-615, 2022 04 01.
Article in English | MEDLINE | ID: mdl-33913438

ABSTRACT

PURPOSE: To identify features of instruments, test procedures, study design, and validity evidence in published studies of electrocardiogram (ECG) skill assessments. METHOD: The authors conducted a systematic review, searching MEDLINE, Embase, Cochrane CENTRAL, PsycINFO, CINAHL, ERIC, and Web of Science databases in February 2020 for studies that assessed the ECG interpretation skill of physicians or medical students. Two authors independently screened articles for inclusion and extracted information on test features, study design, risk of bias, and validity evidence. RESULTS: The authors found 85 eligible studies. Participants included medical students (42 studies), postgraduate physicians (48 studies), and practicing physicians (13 studies). ECG selection criteria were infrequently reported: 25 studies (29%) selected single-diagnosis or straightforward ECGs; 5 (6%) selected complex cases. ECGs were selected by generalists (15 studies [18%]), cardiologists (10 studies [12%]), or unspecified experts (4 studies [5%]). The median number of ECGs per test was 10. The scoring rubric was defined by 2 or more experts in 32 studies (38%), by 1 expert in 5 (6%), and using clinical data in 5 (6%). Scoring was performed by a human rater in 34 studies (40%) and by computer in 7 (8%). Study methods were appraised as low risk of selection bias in 16 studies (19%), participant flow bias in 59 (69%), instrument conduct and scoring bias in 20 (24%), and applicability problems in 56 (66%). Evidence of test score validity was reported infrequently, namely evidence of content (39 studies [46%]), internal structure (11 [13%]), relations with other variables (10 [12%]), response process (2 [2%]), and consequences (3 [4%]). 
CONCLUSIONS: ECG interpretation skill assessments consist of idiosyncratic instruments that are too short, composed of items of obscure provenance, with incompletely specified answers, graded by individuals with underreported credentials, yielding scores with limited interpretability. The authors suggest several best practices.


Subjects
Physicians, Medical Students, Health Care Delivery, Electrocardiography, Humans, Research Personnel
17.
Teach Learn Med ; 34(2): 167-177, 2022.
Article in English | MEDLINE | ID: mdl-34000944

ABSTRACT

CONSTRUCT: For skills of visual diagnosis such as radiograph interpretation, competency standards are often developed in an ad hoc manner, with a poorly delineated connection to the target clinical population. BACKGROUND: Commonly used methods of assessing competency in radiograph interpretation are subjective and potentially biased because they rely on small case samples, subjective evaluations, or an expert-generated case mix rather than a representative sample from the clinical field. Further, although digital platforms are available to assess radiograph interpretation skill against an objective standard, they have not adopted a data-driven competency standard that would inform educators and the public that a physician has achieved adequate mastery to enter practice, where they will be making high-stakes clinical decisions. APPROACH: Operating on a purposeful sample of radiographs drawn from the clinical domain, we adapted the Ebel method, an established standard-setting method, to derive a defensible, clinically relevant mastery-learning competency standard for the skill of radiograph interpretation, as a model for deriving competency thresholds in visual diagnosis. Using a previously established digital platform, emergency physicians interpreted pediatric musculoskeletal extremity radiographs. Using one-parameter item response theory, these data were used to categorize radiographs into terciles of interpretation difficulty (easy, intermediate, hard). A panel of emergency physicians, orthopedic surgeons, and plastic surgeons rated each radiograph's clinical significance (low, medium, high). These data were then used to create a three-by-three matrix in which radiographic diagnoses were categorized by interpretation difficulty and clinical significance. Subsequently, a multidisciplinary panel that included medical and parent stakeholders determined the acceptable accuracy for each of the nine cells.
An overall competency standard was derived from the weighted sum. Finally, to examine the consequences of implementing this standard, we report the types of diagnostic errors that could occur under the derived competency standard. FINDINGS: To determine interpretation difficulty scores, 244 emergency physicians interpreted 1,835 pediatric musculoskeletal extremity radiographs. The median interpretation difficulty rating of the radiographs was -1.8 logits (IQR -4.1 to 3.2), with a significant difference in difficulty across body regions (p < 0.0001). Physician review classified 1,055 radiographs (57.8%) as low, 424 (23.1%) as medium, and 356 (19.1%) as high clinical significance. The multidisciplinary panel's acceptable accuracies across the cells of the three-by-three table ranged from 76% to 95%, and the sum of equal-weighted scores yielded an overall performance-based competency standard of 85.5% accuracy. Of the 14.5% of diagnostic interpretation errors that could occur at the bedside if this competency standard were implemented, 9.8% would involve radiographs of low clinical significance, while 2.5% and 2.3% would involve radiographs of medium or high clinical significance, respectively. CONCLUSION(S): This study's novel integration of radiograph selection and standard setting can be used to empirically derive an evidence-based competency standard for radiograph interpretation and can serve as a model for deriving competency thresholds for clinical tasks emphasizing visual diagnosis.
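In its equal-weighted form, the adapted Ebel computation described above reduces to a mean of the nine per-cell acceptable accuracies. A minimal sketch in Python, using hypothetical cell values — the abstract reports only the panel's 76%–95% range and the 85.5% overall result, not the individual cells:

```python
# Sketch of an Ebel-style weighted-sum competency standard.
# The nine cell accuracies below are hypothetical placeholders,
# not the study panel's actual values.

# 3x3 matrix: rows = interpretation difficulty (easy/intermediate/hard),
# columns = clinical significance (low/medium/high).
acceptable_accuracy = {
    ("easy", "low"): 0.95, ("easy", "medium"): 0.95, ("easy", "high"): 0.95,
    ("intermediate", "low"): 0.85, ("intermediate", "medium"): 0.88,
    ("intermediate", "high"): 0.90,
    ("hard", "low"): 0.76, ("hard", "medium"): 0.78, ("hard", "high"): 0.80,
}

def overall_standard(cells, weights=None):
    """Weighted mean of per-cell accuracies; equal weights by default."""
    if weights is None:
        weights = {k: 1.0 for k in cells}
    total_weight = sum(weights.values())
    return sum(cells[k] * weights[k] for k in cells) / total_weight

print(round(overall_standard(acceptable_accuracy), 3))
```

Unequal weights could encode, for example, greater emphasis on high-significance cells when deriving the overall threshold.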


Subjects
Hospital Emergency Service, Physicians, Child, Diagnostic Errors, Humans, Radiography
18.
AEM Educ Train ; 5(2): e10592, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33898916

ABSTRACT

OBJECTIVES: Using a sample of pediatric chest radiographs (pCXR) taken to rule out pneumonia, we obtained diagnostic interpretations from physicians and used learning analytics to determine the radiographic variables and participant review processes that predicted an incorrect diagnostic interpretation. METHODS: This was a prospective cross-sectional study. A convenience sample of frontline physicians with a range of experience levels interpreted 200 pCXR presented on a customized online radiograph presentation platform. Participants were asked to determine the absence or presence (with respective location) of pneumonia. The pCXR were categorized by specific image-based variables potentially associated with interpretation difficulty. We also generated heat maps displaying the locations of diagnostic error among normal pCXR. Finally, we compared the image review processes of participants with higher versus lower levels of clinical experience. RESULTS: We enrolled 83 participants (20 medical students, 40 postgraduate trainees, and 23 faculty) and obtained 12,178 case interpretations. Variables that predicted increased pCXR interpretation difficulty were pneumonia versus no pneumonia (β = 8.7, 95% confidence interval [CI] = 7.4 to 10.0), low versus higher visibility of pneumonia (β = -2.2, 95% CI = -2.7 to -1.7), nonspecific lung pathology (β = 0.9, 95% CI = 0.40 to 1.5), localized versus multifocal pneumonia (β = -0.5, 95% CI = -0.8 to -0.1), and one versus two views (β = 0.9, 95% CI = 0.01 to 1.9). A review of diagnostic errors identified that bony structures, vessels in the perihilar region, peribronchial thickening, and the thymus were often mistaken for pneumonia. Participants with less experience were less accurate when they reviewed only one of two available views (p < 0.0001), and the accuracy of more experienced participants increased with greater confidence in their responses (p < 0.0001).
CONCLUSIONS: Using learning analytics, we identified actionable learning opportunities for pCXR interpretation, which can be used to customize the weighting of cases offered for practice. Furthermore, experienced-versus-novice comparisons revealed image review processes associated with greater diagnostic accuracy, providing additional insight into the development of image interpretation skill.
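The error heat maps described above can be pictured as a simple aggregation of false-positive "pneumonia" marks over a grid of image regions. The coordinates, image size, and grid resolution below are hypothetical illustrations, not the study's platform data:

```python
# Sketch of aggregating diagnostic-error locations into a heat map.
# Marks, image dimensions, and grid size are invented for illustration.
from collections import Counter

def heatmap(error_marks, width, height, grid=4):
    """Count false-positive marks per cell of a grid x grid partition."""
    counts = Counter()
    for x, y in error_marks:
        cell = (min(int(x / width * grid), grid - 1),
                min(int(y / height * grid), grid - 1))
        counts[cell] += 1
    return counts

# e.g. marks clustered near the perihilar region of a 512x512 image,
# plus one outlier near the top-left corner
marks = [(250, 260), (260, 250), (255, 255), (40, 40)]
hm = heatmap(marks, 512, 512)
```

High-count cells over normal radiographs would then flag structures (vessels, thymus, bone) that learners systematically mistake for pathology.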

19.
Adv Health Sci Educ Theory Pract ; 26(3): 881-912, 2021 08.
Article in English | MEDLINE | ID: mdl-33646468

ABSTRACT

Visual diagnosis of radiographs, histology, and electrocardiograms lends itself to deliberate practice, facilitated by large online banks of cases. Which cases to supply to which learners, and in which order, remains to be worked out, and there is considerable potential for adapting the learning. Advances in statistical modeling based on an accumulating learning curve offer methods for more effectively pairing learners with cases of known calibration. Using demonstration radiograph and electrocardiogram datasets, the advantages of moving from traditional regression to multilevel methods for modeling growth in ability or performance are demonstrated, with a final step of integrating case-level item-response information based on diagnostic grouping. This produces more precise individual-level estimates that can eventually support learner-adaptive case selection. The progressive increase in model sophistication is not simply statistical; rather, it brings the models into alignment with core learning principles, including the importance of accounting for individual differences in baseline skill and learning rate, as well as differential interaction with cases of varying diagnosis and difficulty. The developed approach can thus give researchers and educators a better basis on which to anticipate learners' pathways and individually adapt their future learning.
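The kind of multilevel learning-curve model sketched in this abstract can be illustrated as a logistic model in which each learner carries a random intercept (baseline skill) and slope (learning rate), and each case carries a one-parameter item-response difficulty. All parameter values below are invented for illustration, not fitted estimates from the paper's datasets:

```python
# Sketch of a multilevel learning-curve model with case-level
# item-response difficulty. Parameter values are hypothetical.
import math

def p_correct(n_cases, baseline, rate, case_difficulty,
              beta0=0.0, beta1=0.5):
    """Probability a learner answers a case correctly after practicing
    n_cases items: fixed effects (beta0, beta1) plus learner-specific
    deviations (baseline, rate), offset by the case's 1-PL difficulty."""
    ability = (beta0 + baseline) + (beta1 + rate) * math.log(n_cases + 1)
    return 1.0 / (1.0 + math.exp(-(ability - case_difficulty)))

# Two learners facing a case of median difficulty (-1.8 logits):
fast = p_correct(20, baseline=0.3, rate=0.2, case_difficulty=-1.8)
slow = p_correct(20, baseline=-0.3, rate=-0.2, case_difficulty=-1.8)
```

In practice such models would be fit with mixed-effects software; the point of the sketch is only the structure: individual intercepts and slopes interacting with calibrated case difficulty.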


Subjects
Benchmarking, Learning Curve, Clinical Competence, Educational Measurement, Humans, Statistical Models
20.
JAMA Intern Med ; 181(5): 722-723, 2021 05 01.
Article in English | MEDLINE | ID: mdl-33523118