1.
J Pediatr ; 274: 114183, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38964439

ABSTRACT

OBJECTIVE: To examine the effectiveness of an education intervention for reducing physician diagnostic error in identifying pediatric burn and bruise injuries suspicious for abuse, and to determine case-specific variables associated with an increased risk of diagnostic error. STUDY DESIGN: This was a multicenter, prospective, cross-sectional study. A convenience sample of pediatricians and other front-line physicians who treat acutely injured children in the United States and Canada was eligible for participation. Using a web-based education and assessment platform, physicians deliberately practiced with a spectrum of 300 pediatric burn and bruise injury image-based cases. Participants were asked whether a suspicion for abuse was present or absent, were given corrective feedback after every case, and received summative diagnostic performance feedback overall (accuracy) and for cases in which suspicion for abuse was present (sensitivity) or absent (specificity). RESULTS: Among the 93 of 137 (67.9%) physicians who completed all 300 cases, there was a significant reduction in diagnostic error (initial 16.7%, final 1.6%; delta -15.1%; 95% CI -13.5, -16.7), sensitivity error (initial 11.9%, final 0.7%; delta -11.2%; 95% CI -9.8, -12.5), and specificity error (initial 23.3%, final 6.6%; delta -16.7%; 95% CI -14.8, -18.6). Based on 35,627 case interpretations, variables associated with diagnostic error included patient age, sex, skin color, mechanism of injury, and size and pattern of injury. CONCLUSIONS: The education intervention substantially reduced diagnostic error in differentiating the presence versus absence of a suspicion for abuse in children with burn and bruise injuries. Several case-based variables were associated with diagnostic error, and these data can be used to close specific skill gaps in this clinical domain.
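For readers who want to see the arithmetic behind reductions like those reported above, the sketch below computes a change in error rate with an approximate 95% confidence interval. It is a minimal illustration using invented counts and a simple unpaired normal approximation; it does not reproduce the study's actual analysis of paired, repeated interpretations.

```python
import math

def error_rate_delta_ci(err1, n1, err2, n2, z=1.96):
    """Approximate 95% CI for the change in error rate between two
    blocks of case interpretations (unpaired normal approximation)."""
    p1, p2 = err1 / n1, err2 / n2
    delta = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return delta, (delta - z * se, delta + z * se)

# Illustrative counts only (the abstract reports rates, not these raw numbers):
# roughly 16.7% error on an initial block vs 1.6% on a final block.
delta, ci = error_rate_delta_ci(err1=100, n1=600, err2=10, n2=625)
print(f"delta = {delta:.3f}, approx 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```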

2.
Med Educ ; 58(1): 164-170, 2024 01.
Article in English | MEDLINE | ID: mdl-37495269

ABSTRACT

BACKGROUND: Despite the constant presence of change and innovation in health professions education (HPE), there has been relatively little theoretical modelling of such change, the experiences of change, the ideology associated with change or the unexpected consequences of change. In this paper, the authors explore theoretical approaches to the adoption of innovations in HPE as a way of mapping a broader theoretical landscape of change. METHOD: The authors, HPE researchers with an interest in technology adoption and systemic change, present a narrative review of the literature based on a series of thought experiments regarding how communities and individuals respond to the introduction of new ideas or methods. This research investigates the stages of innovation adoption, from the emergence and hype around new ideas to the concrete experiences of early adopters. RESULTS: When an innovation first emerges, there is often little concrete information available to inform potential adopters, leaving it susceptible to hype, both positive and negative. This can be described using the Gartner Hype Cycle model, albeit with important caveats. Once the adoption of an innovation gets underway, early adopter user experiences can inform those that follow. This can be described using Rogers' diffusion of innovation model, again with caveats. Notably, neither model goes beyond a single point-in-time, yes/no, individual adoption decision. Other approaches, such as learning curve theory, are needed to track uptake and maintenance by individuals over time. SIGNIFICANCE: This expanded theoretical base, while still somewhat instrumentalist, combined with complementary theoretical perspectives, can afford opportunities to better explore reasons for variance, volunteerism and resistance to change. In summary, change is complicated and nuanced, and better models and theories are needed to understand and work meaningfully with change in HPE. To that end, the authors seek to encourage richer and more thoughtful research and scholarly thinking about change and a more nuanced approach to the pursuit of change in HPE as a whole.


Subject(s)
Diffusion of Innovation , Health Occupations , Humans , Health Occupations/education
3.
Med Teach ; 46(4): 471-485, 2024 04.
Article in English | MEDLINE | ID: mdl-38306211

ABSTRACT

Changes in digital technology, increasing volume of data collection, and advances in methods have the potential to unleash the value of big data generated through the education of health professionals. Coupled with this potential are legitimate concerns about how data can be used or misused in ways that limit autonomy, equity, or harm stakeholders. This consensus statement is intended to address these issues by foregrounding the ethical imperatives for engaging with big data as well as the potential risks and challenges. Recognizing the wide and ever-evolving scope of big data scholarship, we focus on foundational issues for framing and engaging in research. We ground our recommendations in the context of big data created through data sharing across and within the stages of the continuum of the education and training of health professionals. Ultimately, the goal of this statement is to support a culture of trust and quality for big data research to deliver on its promises for health professions education (HPE) and the health of society. Based on expert consensus and review of the literature, we report 19 recommendations on (1) framing scholarship and research, (2) considering unique ethical practices, (3) governance of data sharing collaborations that engage stakeholders, (4) best practices for data sharing processes, (5) the importance of knowledge translation, and (6) advancing the quality of scholarship through multidisciplinary collaboration. The recommendations were modified and refined based on feedback from the 2022 Ottawa Conference attendees and subsequent public engagement. Adoption of these recommendations can help HPE scholars share data ethically and engage in high-impact big data scholarship, which in turn can help the field meet the ultimate goal: high-quality education that leads to high-quality healthcare.


Subject(s)
Big Data , Health Occupations , Information Dissemination , Humans , Health Occupations/education , Consensus
4.
Ann Emerg Med ; 81(4): 413-426, 2023 04.
Article in English | MEDLINE | ID: mdl-36774204

ABSTRACT

STUDY OBJECTIVE: Because number-based standards are increasingly controversial, the objective of this study was to derive a performance-based competency standard for the image interpretation task of point-of-care ultrasound (POCUS). METHODS: This was a prospective study. Operating on a clinically relevant sample of POCUS images, we adapted the Ebel standard-setting method to derive a performance benchmark in 4 diverse pediatric POCUS applications: soft tissue, lung, cardiac, and focused assessment with sonography for trauma (FAST). In Phase I (difficulty calibration), cases were categorized into interpretation difficulty terciles (easy, intermediate, hard) using emergency physician-derived data. In Phase II (significance), a 4-person expert panel categorized cases as low, medium, or high clinical significance. In Phase III (standard setting), a 3 × 3 matrix was created, categorizing cases by difficulty and significance, and a 6-member panel determined acceptable accuracy for each of the 9 cells. An overall competency standard was derived from the weighted sum. RESULTS: We obtained data from 379 emergency physicians resulting in 67,093 interpretations and a median of 184 (interquartile range 154 to 190) interpretations per case. There were 78 (19.5%) easy, 272 (68.0%) medium, and 50 (12.5%) hard-to-interpret cases, and 237 (59.3%) low, 65 (16.3%) medium, and 98 (24.5%) cases of high clinical significance across the 4 POCUS applications. The panel determined an overall performance-based competency score of 85.0% for lung, 89.5% for cardiac, 90.5% for soft tissue, and 92.7% for FAST. CONCLUSION: This research provides a transparent chain of evidence that derived clinically relevant competency standards for POCUS image interpretation.


Subject(s)
Physicians , Point-of-Care Systems , Humans , Child , Prospective Studies , Ultrasonography/methods , Emergency Service, Hospital
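A minimal sketch of the weighted-sum step of the adapted Ebel procedure described in the abstract above: cases are cross-classified by interpretation difficulty and clinical significance, a panel assigns an acceptable accuracy to each cell, and the overall standard is the case-weighted average. All per-cell accuracies and case counts below are hypothetical placeholders, not the panel's figures.

```python
# Panel-judged acceptable accuracy for each (difficulty, clinical significance) cell.
acceptable_accuracy = {
    ("easy", "low"): 0.80, ("easy", "medium"): 0.90, ("easy", "high"): 0.95,
    ("intermediate", "low"): 0.75, ("intermediate", "medium"): 0.85, ("intermediate", "high"): 0.92,
    ("hard", "low"): 0.70, ("hard", "medium"): 0.80, ("hard", "high"): 0.88,
}

# Number of cases in the bank falling into each cell (also illustrative).
case_counts = {
    ("easy", "low"): 50, ("easy", "medium"): 15, ("easy", "high"): 13,
    ("intermediate", "low"): 150, ("intermediate", "medium"): 40, ("intermediate", "high"): 82,
    ("hard", "low"): 37, ("hard", "medium"): 10, ("hard", "high"): 3,
}

# Overall standard = case-weighted average of the per-cell acceptable accuracies.
total_cases = sum(case_counts.values())
overall_standard = sum(acceptable_accuracy[cell] * n for cell, n in case_counts.items()) / total_cases
print(f"Overall performance-based competency standard: {overall_standard:.1%}")
```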
5.
Med Teach ; 45(6): 565-573, 2023 06.
Article in English | MEDLINE | ID: mdl-36862064

ABSTRACT

The use of Artificial Intelligence (AI) in medical education has the potential to facilitate complicated tasks and improve efficiency. For example, AI could help automate the assessment of written responses, or provide feedback on medical image interpretations with excellent reliability. While applications of AI in learning, instruction, and assessment are growing, further exploration is still required. There exist few conceptual or methodological guides for medical educators wishing to evaluate or engage in AI research. In this guide, we aim to: (1) describe practical considerations involved in reading and conducting studies in medical education using AI, (2) define basic terminology, and (3) identify which medical education problems and data are ideally suited to the use of AI.


Subject(s)
Artificial Intelligence , Education, Medical , Humans , Reproducibility of Results
6.
J Gen Intern Med ; 37(9): 2280-2290, 2022 07.
Article in English | MEDLINE | ID: mdl-35445932

ABSTRACT

Assessing residents and clinical fellows is a high-stakes activity. Effective assessment is important throughout training so that identified areas of strength and weakness can guide educational planning to optimize outcomes. Assessment has historically been underemphasized, although medical education oversight organizations have strengthened requirements in recent years. Growing acceptance of competency-based medical education and its logical extension to competency-based time-variable (CB-TV) graduate medical education (GME) further highlights the importance of implementing effective evidence-based approaches to assessment. The Clinical Competency Committee (CCC) has emerged as a key programmatic structure in graduate medical education. In the context of launching a multi-specialty pilot of CB-TV GME in our health system, we have examined several programs' CCC processes and reviewed the relevant literature to propose enhancements to CCCs. We recommend that all CCCs fulfill three core goals, regularly applied to every GME trainee: (1) discern and describe the resident's developmental status to individualize education, (2) determine readiness for unsupervised practice, and (3) foster self-assessment ability. We integrate the literature and observations from GME program CCCs in our institutions to evaluate how current CCC processes support or undermine these goals. Obstacles and key enablers are identified. Finally, we recommend ways to achieve the stated goals, including the following: (1) assess and promote the development of competency in all trainees, not just outliers, through a shared model of assessment and competency-based advancement; (2) strengthen CCC assessment processes to determine trainee readiness for independent practice; and (3) promote trainee reflection and informed self-assessment. The importance of coaching for competency, robust workplace-based assessments, feedback, and co-production of individualized learning plans is emphasized. Individual programs and their CCCs must strengthen assessment tools and frameworks to realize the potential of competency-oriented education.


Subject(s)
Clinical Competence , Internship and Residency , Competency-Based Education , Education, Medical, Graduate , Humans , Self-Assessment
7.
Adv Health Sci Educ Theory Pract ; 27(5): 1383-1400, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36414880

ABSTRACT

Adaptive expertise represents the combination of efficient problem-solving for clinical encounters with known solutions and the ability to learn and innovate when faced with a novel challenge. Fostering adaptive expertise requires careful approaches to instructional design to emphasize deeper, more effortful learning. These teaching strategies are time-intensive, effortful, and challenging to implement in health professions education curricula. The authors are educators whose missions encompass the medical education continuum, from undergraduate through to organizational learning. Each has grappled with how to promote adaptive expertise development in their context. They describe themes drawn from educational experiences at these various learner levels to illustrate strategies that may be used to cultivate adaptive expertise. At Vanderbilt University School of Medicine, a restructuring of the medical school curriculum provided multiple opportunities to use specific curricular strategies to foster adaptive expertise development. The advantage for students in terms of future learning had to be rationalized against assessments that are more short-term in nature. In a consortium of emergency medicine residency programs, a diversity of instructional approaches was deployed to foster adaptive expertise within complex clinical learning environments. Here the value of adaptive expertise approaches must be balanced with the efficiency imperative in clinical care. At Mayo Clinic, an existing continuous professional development program was used to orient the entire organization towards an adaptive expertise mindset, with each individual making a contribution to the shift. The different contexts illustrate both the flexibility of the adaptive expertise conceptualization and the need to customize the educational approach to the developmental stage of the learner. In particular, an important benefit of teaching to adaptive expertise is the opportunity to influence individual professional identity formation to ensure that clinicians of the future value deeper, more effortful learning strategies throughout their careers.


Subject(s)
Education, Medical , Humans , Curriculum , Learning , Problem Solving , Students
8.
Teach Learn Med ; 34(2): 167-177, 2022.
Article in English | MEDLINE | ID: mdl-34000944

ABSTRACT

CONSTRUCT: For assessing the skill of visual diagnosis such as radiograph interpretation, competency standards are often developed in an ad hoc manner, with a poorly delineated connection to the target clinical population. BACKGROUND: Commonly used methods of assessing competency in radiograph interpretation are potentially biased because they rely on a small sample of cases, subjective evaluations, or an expert-generated case mix rather than a representative sample from the clinical field. Further, while digital platforms are available to assess radiograph interpretation skill against an objective standard, they have not adopted a data-driven competency standard that informs educators and the public that a physician has achieved adequate mastery to enter practice where they will be making high-stakes clinical decisions. APPROACH: Operating on a purposeful sample of radiographs drawn from the clinical domain, we adapted the Ebel Method, an established standard setting method, to ascertain a defensible, clinically relevant mastery learning competency standard for the skill of radiograph interpretation as a model for deriving competency thresholds in visual diagnosis. Using a previously established digital platform, emergency physicians interpreted pediatric musculoskeletal extremity radiographs. Using one-parameter item response theory, these data were used to categorize radiographs by interpretation difficulty terciles (i.e. easy, intermediate, hard). A panel of emergency physicians, orthopedic surgeons, and plastic surgeons rated each radiograph with respect to clinical significance (low, medium, high). These data were then used to create a three-by-three matrix where radiographic diagnoses were categorized by interpretation difficulty and significance. Subsequently, a multidisciplinary panel that included medical and parent stakeholders determined acceptable accuracy for each of the nine cells. An overall competency standard was derived from the weighted sum. Finally, to examine the consequences of implementing this standard, we reported on the types of diagnostic errors that may occur by adhering to the derived competency standard. FINDINGS: To determine radiograph interpretation difficulty scores, 244 emergency physicians interpreted 1,835 pediatric musculoskeletal extremity radiographs. Analyses of these data demonstrated that the median interpretation difficulty rating of the radiographs was -1.8 logits (IQR -4.1, 3.2), with a significant difference in difficulty across body regions (p < 0.0001). Physician review classified 1,055 (57.8%) radiographs as low, 424 (23.1%) as medium, and 356 (19.1%) as high clinical significance. The multidisciplinary panel suggested a range of acceptable scores across cells in the three-by-three table of 76% to 95%, and the sum of equal-weighted scores resulted in an overall performance-based competency score of 85.5% accuracy. Of the 14.5% of diagnostic interpretation errors that may occur at the bedside if this competency standard were implemented, 9.8% would be in radiographs of low clinical significance, while 2.5% and 2.3% would be in radiographs of medium or high clinical significance, respectively. CONCLUSION(S): This study's novel integration of radiograph selection and a standard setting method could be used to empirically derive an evidence-based competency standard for radiograph interpretation and can serve as a model for deriving competency thresholds for clinical tasks emphasizing visual diagnosis.


Subject(s)
Emergency Service, Hospital , Physicians , Child , Diagnostic Errors , Humans , Radiography
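The difficulty calibration in the abstract above used one-parameter item response theory. As a rough illustrative stand-in, the sketch below converts each radiograph's proportion of correct interpretations into a logit-scale difficulty; a true Rasch fit would estimate person ability and item difficulty jointly, so treat this only as a way to see what "difficulty in logits" means. Case IDs and responses are invented.

```python
import numpy as np

# responses: case_id -> list of 0/1 correctness across interpreting physicians.
responses = {
    "ELBOW_017": [1, 1, 0, 1, 1, 1, 0, 1],
    "WRIST_102": [1, 1, 1, 1, 1, 1, 1, 0],
    "ANKLE_045": [0, 1, 0, 0, 1, 0, 1, 0],
}

for case_id, r in responses.items():
    # Proportion correct, bounded away from 0 and 1 so the logit is finite.
    p = float(np.clip(np.mean(r), 0.01, 0.99))
    difficulty_logit = np.log((1 - p) / p)   # higher = harder to interpret
    print(f"{case_id}: p(correct) = {p:.2f}, difficulty ~ {difficulty_logit:+.2f} logits")
```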
9.
Med Teach ; 44(3): 276-286, 2022 03.
Article in English | MEDLINE | ID: mdl-34686101

ABSTRACT

INTRODUCTION: The American Medical Association formed the Accelerating Change in Medical Education Consortium through grants to effect change in medical education. The dissemination of educational innovations through scholarship was a priority. The objective of this study was to explore the patterns of collaboration of educational innovation through the consortium's publications. METHOD: Publications were identified from grantee schools' semi-annual reports. Each publication was coded for the number of citations, Altmetric score, domain of scholarship, and collaboration with other institutions. Social network analysis explored relationships at the midpoint and end of the grant. RESULTS: Over five years, the 32 Consortium institutions produced 168 publications, ranging from 38 papers from one institution to no manuscripts from another. The two most common domains focused on health system science (92 papers) and competency-based medical education (30 papers). Articles were published in 54 different journals. Forty percent of publications involved more than one institution. Social network analysis demonstrated rich publishing relationships within the Consortium members as well as beyond the Consortium schools. In addition, there was growth of the network connections and density over time. CONCLUSION: The Consortium fostered a scholarship network disseminating a broad range of educational innovations through publications of individual school projects and collaborations.


Subject(s)
Education, Medical , Social Network Analysis , American Medical Association , Fellowships and Scholarships , Financing, Organized , Humans , United States
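A minimal sketch of the kind of social network analysis described above: institutions co-authoring a publication are linked, and the network's size and density are compared at the grant midpoint and end. The publication lists and school names are placeholders (not the Consortium's data), and the networkx library is assumed to be available.

```python
import itertools
import networkx as nx

# Invented placeholder data: each publication is the set of institutions on it.
pubs_midpoint = [
    {"School A", "School B"},
    {"School C"},
    {"School A", "School D"},
]
pubs_end = pubs_midpoint + [
    {"School B", "School C", "School E"},
    {"School D", "School E"},
]

def collaboration_graph(publications):
    """Build an undirected graph linking institutions that co-appear on a publication."""
    g = nx.Graph()
    for institutions in publications:
        g.add_nodes_from(institutions)
        g.add_edges_from(itertools.combinations(sorted(institutions), 2))
    return g

for label, pubs in [("midpoint", pubs_midpoint), ("end of grant", pubs_end)]:
    g = collaboration_graph(pubs)
    print(f"{label}: {g.number_of_nodes()} institutions, "
          f"{g.number_of_edges()} collaboration ties, density = {nx.density(g):.2f}")
```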
10.
BMC Med Educ ; 22(1): 200, 2022 Mar 23.
Article in English | MEDLINE | ID: mdl-35321706

ABSTRACT

BACKGROUND: The COVID-19 pandemic is unprecedented in terms of the extent and rapidity of the disruption forced upon formal clinical education, most notably the extensive transition of clinical skills learning to interactive video-based clinical education. METHODS: In a phenomenologic study, we used thematic analysis to explore the COVID-19 disruption to clinical training and understand processes relating to adaptation in a large academic medical center. We conducted semi-structured interviews with 14 clinical teachers and 16 trainees representing all levels of clinical learning. Interviews occurred within the initial three months of the crisis, and data were analyzed following a thematic analysis coding process. RESULTS: We constructed eight themes synthesizing our participants' perceptions of the immediate unanticipated disruption, noting in the process their alignment with a change management framework. These included: urgency in adapting, with an obvious imperative for change; overcoming inconsistent involvement and support through the formation of self-organized frontline coalitions; attempts to develop strategy and vision via initially reactive but eventually consistent communication; empowering a volunteer army through co-creation and a flattened hierarchy; and efforts to sustain improvement and positive momentum with celebration of trial, error, and growth. The majority of participants found positive outcomes resulting from the tumultuous change process. Moreover, they were now more readily accepting of change, and tolerant of the ambiguous and iterative nature inherent in the education change process. Many anticipated that some innovation would, or would at least deserve to, continue post-crisis. CONCLUSIONS: The COVID-19 pandemic afforded an opportunity to study the content and process of change during an active crisis. In this case of clinical education, our findings provide insight into the ways an academic medical system adapts to unanticipated circumstances. We found alignment with broader organizational change management models and that, compared with crisis management models (and their shorter-term focus on resolving such crises), stakeholders self-organized in a reliable manner that carries the potential advantage of preserving such beneficial change.


Subject(s)
COVID-19 , COVID-19/epidemiology , Clinical Competence , Educational Status , Humans , Learning , Pandemics
11.
J Emerg Med ; 62(4): 524-533, 2022 04.
Article in English | MEDLINE | ID: mdl-35282940

ABSTRACT

BACKGROUND: Pediatric musculoskeletal (pMSK) radiograph interpretations are common, but the specific radiograph features at risk of incorrect diagnosis are relatively unknown. OBJECTIVE: We determined the radiograph factors that resulted in diagnostic interpretation challenges for emergency physicians (EPs) reviewing pMSK radiographs. METHODS: EPs interpreted 1850 pMSK radiographs via a web-based platform and we derived interpretation difficulty scores for each radiograph in 13 body regions using one-parameter item response theory. We compared the difficulty scores by presence or absence of a fracture and, where applicable, by fracture location and morphology; significance was adjusted for multiple comparisons. An expert panel reviewed the 65 most commonly misdiagnosed fracture-negative radiographs to identify imaging features mistaken for fractures. RESULTS: We included data from 244 EPs, which resulted in 185,653 unique interpretations. For elbow, forearm, wrist, femur, knee, and tibia-fibula radiographs, those without a fracture had higher interpretation difficulty scores relative to those with a fracture; the opposite was true for the hand, pelvis, foot, and ankle radiographs (p < 0.004 for all comparisons). The descriptive review demonstrated that specific normal anatomy, overlapping bones, and external artefact from muscle or skin folds were often mistaken for fractures. There was a significant difference in difficulty score by anatomic locations of the fracture in the elbow, pelvis, and ankle (p < 0.004 for all comparisons). Ankle and elbow growth plate, fibular avulsion, and humerus condylar fractures were more difficult to diagnose than other fracture patterns (p < 0.004 for all comparisons). CONCLUSIONS: We identified actionable learning opportunities in pMSK radiograph interpretation for EPs.


Subject(s)
Elbow Joint , Humeral Fractures , Physicians , Child , Diagnostic Errors , Humans , Radiography
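The threshold of p < 0.004 reported above is consistent with a Bonferroni correction across the 13 body regions (0.05 / 13 ≈ 0.004). The sketch below illustrates one such comparison of interpretation difficulty scores for fracture-absent versus fracture-present radiographs using simulated logit scores and a Mann-Whitney U test; the study's exact testing procedure is not stated here, so the choice of test is an assumption.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Simulated interpretation difficulty scores (logits) for one body region.
rng = np.random.default_rng(0)
difficulty_no_fracture = rng.normal(loc=0.5, scale=1.5, size=60)   # fracture absent
difficulty_fracture = rng.normal(loc=-0.8, scale=1.5, size=60)     # fracture present

alpha_adjusted = 0.05 / 13   # Bonferroni correction across 13 body regions
stat, p = mannwhitneyu(difficulty_no_fracture, difficulty_fracture, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4f}, "
      f"significant at adjusted alpha {alpha_adjusted:.4f}: {p < alpha_adjusted}")
```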
12.
Pediatr Emerg Care ; 38(2): e849-e855, 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-35100784

ABSTRACT

OBJECTIVES: Using an education and assessment tool, we examined the number of cases necessary to achieve a performance benchmark in image interpretation of pediatric soft tissue, cardiac, lung, and focused assessment with sonography for trauma (FAST) point-of-care ultrasound (POCUS) applications. We also determined interpretation difficulty scores to derive which cases provided the greatest diagnostic challenges. METHODS: Pediatric emergency physicians participated in web-based pediatric POCUS courses sponsored by their institution as a credentialing priority. Participants deliberately practiced cases until they achieved diagnostic interpretation scores of a combined 90% accuracy, sensitivity, and specificity. RESULTS: Of the 463 who enrolled, 379 (81.9%) completed the cases. The median (interquartile range) number of cases required to achieve the performance benchmark for soft tissue was 94 (68-128); cardiac, 128 (86-201); lung, 87 (25-118); and FAST, 93 (68-133) (P < 0.0001). Specifically, cases completed to achieve the benchmark were higher for cardiac relative to other applications (P < 0.0001 for all comparisons). In soft tissue cases, a foreign body was more difficult to diagnose than cobblestoning and hypoechoic collections (P = 0.036). Poor cardiac function and abnormal ventricles were more difficult to interpret with accuracy than normal (P < 0.0001) or pericardial effusion cases (P = 0.01). The absence of lung sliding was significantly more difficult to interpret than normal lung cases (P = 0.028). The interpretation difficulty of various FAST imaging findings was not significantly different. CONCLUSIONS: There was significant variation in the number of cases required to reach a performance benchmark. We also identified the specific applications and imaging findings that demonstrated the greatest diagnostic challenges. These data may inform future credentialing guidelines and POCUS learning interventions.


Subject(s)
Focused Assessment with Sonography for Trauma , Point-of-Care Systems , Child , Heart , Humans , Point-of-Care Testing , Ultrasonography
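A minimal sketch of how one might determine the number of cases needed to reach a combined 90% accuracy, sensitivity, and specificity benchmark like the one described above. The abstract does not specify the scoring window, so this sketch assumes a rolling window over the most recent cases; the stream of truths and responses is simulated.

```python
import random
from collections import deque

def cases_to_benchmark(results, window=50, target=0.90):
    """results: iterable of (truth, response) pairs, each 'abnormal' or 'normal'.
    Returns the 1-based case count at which accuracy, sensitivity, and
    specificity in the rolling window all reach the target, or None."""
    recent = deque(maxlen=window)
    for i, pair in enumerate(results, start=1):
        recent.append(pair)
        tp = sum(1 for t, r in recent if t == "abnormal" and r == "abnormal")
        tn = sum(1 for t, r in recent if t == "normal" and r == "normal")
        pos = sum(1 for t, _ in recent if t == "abnormal")
        neg = len(recent) - pos
        if pos and neg:
            acc = (tp + tn) / len(recent)
            sens, spec = tp / pos, tn / neg
            if len(recent) == window and min(acc, sens, spec) >= target:
                return i
    return None

# Toy case stream: ~40% abnormal cases, learner correct on ~95% of cases.
random.seed(1)
stream = []
for _ in range(300):
    truth = "abnormal" if random.random() < 0.4 else "normal"
    correct = random.random() < 0.95
    response = truth if correct else ("normal" if truth == "abnormal" else "abnormal")
    stream.append((truth, response))
print("Benchmark reached at case:", cases_to_benchmark(stream))
```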
13.
Prehosp Emerg Care ; 25(6): 822-831, 2021.
Article in English | MEDLINE | ID: mdl-33054522

ABSTRACT

BACKGROUND: In most states, prehospital professionals (PHPs) are mandated reporters of suspected abuse but cite a lack of training as a challenge to recognizing and reporting physical abuse. We developed a learning platform for the visual diagnosis of pediatric abusive versus non-abusive burn and bruise injuries and examined the amount and rate of skill acquisition. METHODS: This was a prospective cross-sectional study of PHPs participating in an online educational intervention containing 114 case vignettes. PHPs indicated whether they believed a case was concerning for abuse and would report the case to child protection services. Participants received feedback after submitting a response, permitting deliberate practice of the cases. We describe learning curves, overall accuracy, sensitivity (diagnosis of abusive injuries), and specificity (diagnosis of non-abusive injuries) to determine the amount of learning. We performed multivariable regression analysis to identify specific demographic and case variables associated with a correct case interpretation. After completing the educational intervention, PHPs completed a self-efficacy survey on perceived gains in their ability to recognize cutaneous signs of abuse and report to social services. RESULTS: We enrolled 253 PHPs who completed all the cases; 158 (63.6%) were emergency medical technicians (EMTs) and 95 (36.4%) were advanced EMTs or paramedics. Learning curves demonstrated that, with one exception, there was an increase in learning for participants throughout the educational intervention. Mean diagnostic accuracy increased by 4.9% (95% CI 3.2, 6.7), and the mean final diagnostic accuracy, sensitivity, and specificity were 82.1%, 75.4%, and 85.2%, respectively. The odds of a correct case interpretation were higher for bruise versus burn cases (OR = 1.4; 95% CI 1.3, 1.5), if the PHP was an advanced EMT/paramedic (OR = 1.3; 95% CI 1.1, 1.4), and if the learner indicated prior training in child abuse (OR = 1.2; 95% CI 1.0, 1.3). Learners indicated increased comfort in knowing which cases should be reported and in interpreting examinations of children with cutaneous injuries, with a median Likert score of 5 out of 6 (IQR 5, 6). CONCLUSION: An online module utilizing deliberate practice led to measurable skill improvement among PHPs in differentiating abusive from non-abusive burn and bruise injuries.


Subject(s)
Child Abuse , Emergency Medical Services , Emergency Medical Technicians , Child , Child Abuse/diagnosis , Cross-Sectional Studies , Emergency Medical Technicians/education , Humans , Prospective Studies
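A minimal sketch of a multivariable logistic regression of the kind reported above, relating the odds of a correct case interpretation to case type (bruise vs. burn), provider level, and prior child-abuse training. The simulated data and coefficients are placeholders chosen only to yield odds ratios in the same neighborhood as those reported; statsmodels is assumed to be available.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000  # simulated case interpretations
df = pd.DataFrame({
    "bruise_case": rng.integers(0, 2, n),        # 1 = bruise case, 0 = burn case
    "advanced_provider": rng.integers(0, 2, n),  # 1 = advanced EMT / paramedic
    "prior_training": rng.integers(0, 2, n),     # 1 = prior child-abuse training
})
# Simulate correctness with modest positive effects for each predictor.
logit_p = (1.2 + 0.34 * df.bruise_case + 0.26 * df.advanced_provider
           + 0.18 * df.prior_training)
df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("correct ~ bruise_case + advanced_provider + prior_training",
                  data=df).fit(disp=0)
print(np.exp(model.params))      # exponentiated coefficients = adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals on the OR scale
```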
14.
Adv Health Sci Educ Theory Pract ; 26(3): 881-912, 2021 08.
Article in English | MEDLINE | ID: mdl-33646468

ABSTRACT

Visual diagnosis of radiographs, histology, and electrocardiograms lends itself to deliberate practice, facilitated by large online banks of cases. Which cases to supply to which learners, and in what order, remains to be worked out, and there is considerable potential for adapting the learning. Advances in statistical modeling, based on an accumulating learning curve, offer methods for more effectively pairing learners with cases of known calibration. Using demonstration radiograph and electrocardiogram datasets, the advantages of moving from traditional regression to multilevel methods for modeling growth in ability or performance are demonstrated, with a final step of integrating case-level item-response information based on diagnostic grouping. This produces more precise individual-level estimates that can eventually support learner-adaptive case selection. The progressive increase in model sophistication is not simply statistical but rather brings the models into alignment with core learning principles, including the importance of taking into account individual differences in baseline skill and learning rate as well as the differential interaction with cases of varying diagnosis and difficulty. The developed approach can thus give researchers and educators a better basis on which to anticipate learners' pathways and individually adapt their future learning.


Subject(s)
Benchmarking , Learning Curve , Clinical Competence , Educational Measurement , Humans , Models, Statistical
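A minimal sketch of the move from single-level regression to a multilevel learning-curve model described above: performance as a function of practice with a random intercept and slope per learner, capturing individual differences in baseline skill and learning rate. The paper's further step of integrating case-level item-response information by diagnostic grouping is omitted here, and the data are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 40 learners, each completing 100 cases, with learner-specific
# baseline skill and learning rate on a log-practice learning curve.
rng = np.random.default_rng(7)
rows = []
for learner in range(40):
    baseline = rng.normal(0.6, 0.08)   # individual differences in starting skill
    rate = rng.normal(0.05, 0.02)      # individual differences in learning rate
    for case in range(1, 101):
        score = baseline + rate * np.log(case) + rng.normal(0, 0.05)
        rows.append({"learner": learner, "log_cases": np.log(case), "score": score})
df = pd.DataFrame(rows)

# Mixed-effects model: fixed learning curve plus random intercept and slope per learner.
model = smf.mixedlm("score ~ log_cases", df, groups=df["learner"],
                    re_formula="~log_cases").fit()
print(model.summary())
```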
15.
Med Teach ; 43(sup2): S7-S16, 2021 07.
Article in English | MEDLINE | ID: mdl-34291715

ABSTRACT

In 2010, several key works in medical education predicted the changes necessary to train modern physicians to meet current and future challenges in health care, including the standardization of learning outcomes paired with individualized learning processes. The reframing of a medical expert as a flexible, adaptive team member and change agent, effective within a larger system and responsive to the community's needs, requires a new approach to education: competency-based medical education (CBME). CBME is an outcomes-based developmental approach to ensuring each trainee's readiness to advance through stages of training and continue to grow in unsupervised practice. Implementation of CBME with fidelity is a complex and challenging endeavor, demanding a fundamental shift in organizational culture and investment in appropriate infrastructure. This paper outlines how member schools of the American Medical Association Accelerating Change in Medical Education Consortium developed and implemented CBME, including common challenges and successes. Critical supporting factors include adoption of the master adaptive learner construct, longitudinal views of learner development, coaching, and a supportive learning environment.


Subject(s)
Education, Medical, Undergraduate , Education, Medical , Clinical Competence , Competency-Based Education , Organizational Culture
16.
J Emerg Nurs ; 47(2): 313-320, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33546884

ABSTRACT

INTRODUCTION: Electrocardiogram interpretation is an essential skill for emergency and critical care nurses and physicians. There remains a gap in standardized curricula and evaluation strategies used to achieve and assess competence in electrocardiogram interpretation. The purpose of this study was to develop an importance ranking of the 120 American Heart Association electrocardiogram diagnostic labels with interdisciplinary perspectives to inform curriculum development. METHODS: Data for this mixed methods study were collected through focus groups and individual semi-structured interviews. A card sort was used to assign relative importance scores to all 120 American Heart Association electrocardiogram diagnostic labels. Thematic analysis was used for qualitative data on participants' rationale for the rankings. RESULTS: The 18 participants included 6 emergency and critical care registered nurses, 5 cardiologists, and 7 emergency medicine physicians. The 5 diagnoses chosen as the most important by all disciplines were ventricular tachycardia, ventricular fibrillation, atrial fibrillation, complete heart block, and normal electrocardiogram. The "top 20" diagnoses by each discipline were also reported. Qualitative thematic content analysis revealed that participants from all 3 disciplines identified skill in electrocardiogram interpretation as clinically imperative and acknowledged the importance of recognizing normal, life threatening, and time-sensitive electrocardiogram rhythms. Additional qualitative themes, identified by individual disciplines, were reported. DISCUSSION: This mixed-methods approach provided valuable interdisciplinary perspectives concerning electrocardiogram curriculum case selection and prioritization. Study findings can provide a foundation for emergency and critical care educators to create local ECG educational programs. Further work is recommended to validate the list amongst a larger population of emergency and critical care frontline nurses and physicians.


Subject(s)
Cardiology/education , Electrocardiography/classification , Emergency Medicine/education , Emergency Nursing/education , Clinical Competence , Curriculum , Focus Groups , Humans
17.
Adv Health Sci Educ Theory Pract ; 25(4): 877-903, 2020 10.
Article in English | MEDLINE | ID: mdl-32140874

ABSTRACT

Models for diagnostic reasoning in radiology have been based on the observed behaviors of experienced radiologists but have not directly focused on the thought processes of novices as they improve their accuracy of image interpretation. By collecting think-aloud verbal reports, the current study was designed to investigate differences in specific thought processes between medical students (novices) as they learn and radiologists (experts), so that we can better design future instructional environments. Seven medical students and four physicians with radiology training were asked to interpret and diagnose pediatric elbow radiographs where fracture is suspected. After reporting their diagnosis of a case, they were given immediate feedback. Participants were asked to verbalize their thoughts while completing the diagnosis and while they reflected on the provided feedback. The protocol analysis of their verbalizations showed that participants used some combination of four processes to interpret the case: gestalt interpretation, purposeful search, rule application, and reasoning from a prior case. All types of processes except reasoning from a prior case were applied significantly more frequently by experts. Further, gestalt interpretation was used with higher frequency in abnormal cases while purposeful search was used more often for normal cases. Our assessment of processes could help guide the design of instructional environments with well-curated image banks and analytics to facilitate the novice's journey to expertise in image interpretation.


Subject(s)
Clinical Reasoning , Education, Medical/methods , Radiology/education , Clinical Competence , Cognition , Female , Humans , Learning , Male , Young Adult
18.
Teach Learn Med ; 32(4): 410-421, 2020.
Article in English | MEDLINE | ID: mdl-32397923

ABSTRACT

THEORY: Learning in digital environments allows the collection of inexpensive, fine-grained process data across a large population of learners. Intentional design of the data collection can enable iterative testing of an instructional design. In this study, we propose that across a population of learners the information from multiple choice question responses can help to identify which design features are associated with positive learner engagement. HYPOTHESIS: We hypothesized that, within an online module that presents serial knowledge content, measures of click-level behavior would show sufficient, but variable, association with a test measure to potentially guide instructional design. METHOD: The Aquifer online learning platform employs interactive approaches to enable effective learning of health professions content. A multidisciplinary focus group of experts identified potential learning analytic measures within an Aquifer learning module, including: hyperlinks clicked (yes/no), magnify buttons clicked (yes/no), expert advice links clicked (yes/no), and time spent on each page (seconds). Learning analytics approaches revealed which click-level data were correlated with success on the subsequent relevant Case MCQ. We report regression coefficients where the dependent variable is student accuracy on the Case MCQ as a general indicator of successful engagement. RESULTS: Clicking hyperlinks, magnifying images, clicking "expert" links, and spending >100 seconds on a page were positively correlated with Case MCQ success; rushing through pages (<20 seconds) was inversely correlated with success. Conversely, for some measures, we failed to find expected associations. CONCLUSIONS: In online learning environments, the wealth of process data available offers insights for instructional designers to iteratively hone the effectiveness of learning. Learning analytic measures of engagement can provide feedback as to which interaction elements are effective.


Subject(s)
Education, Distance/organization & administration , Education, Medical, Undergraduate/organization & administration , Information Dissemination/methods , Students, Medical/statistics & numerical data , Curriculum/standards , Education, Distance/methods , Education, Medical/organization & administration , Education, Medical, Undergraduate/methods , Humans , Organizational Innovation
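A minimal sketch of relating click-level engagement measures to success on the subsequent Case MCQ, echoing the measures named in the abstract above (hyperlink, magnify, and expert-link clicks, plus time on page). The simulated log data and effect sizes are placeholders, not Aquifer data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000  # simulated page-view / MCQ pairs
df = pd.DataFrame({
    "hyperlink_clicked": rng.integers(0, 2, n),
    "magnify_clicked": rng.integers(0, 2, n),
    "expert_link_clicked": rng.integers(0, 2, n),
    "page_seconds": rng.gamma(shape=2.0, scale=60.0, size=n),
})
df["deep_read"] = (df.page_seconds > 100).astype(int)   # >100 s on the page
df["rushed"] = (df.page_seconds < 20).astype(int)       # <20 s on the page

# Simulate MCQ correctness with engagement measures nudging the odds up or down.
logit_p = (-0.2 + 0.3 * df.hyperlink_clicked + 0.25 * df.magnify_clicked
           + 0.2 * df.expert_link_clicked + 0.3 * df.deep_read - 0.5 * df.rushed)
df["mcq_correct"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("mcq_correct ~ hyperlink_clicked + magnify_clicked + "
                  "expert_link_clicked + deep_read + rushed", data=df).fit(disp=0)
print(model.params)  # positive coefficients: engagement measures associated with MCQ success
```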
19.
Med Teach ; 42(2): 196-203, 2020 02.
Article in English | MEDLINE | ID: mdl-31595825

ABSTRACT

PURPOSE: To compare time (speed) and product quality goals in a surgical procedural task. METHODS: Secondary school students taking part in a medical simulation-based training activity participated in a randomized experiment. Each participant completed eight repetitions of a blood vessel ligation. Once, between repetitions four and five, each participant received a randomly assigned speed goal or quality goal. Outcomes included time and leak-free ligatures. RESULTS: Eighty students participated. The speed-goal group performed 18% faster on the final repetition than the quality-goal group, with adjusted fold change (FC) 0.82 (95% confidence interval [CI], 0.71, 0.94; p = 0.01). Conversely, the speed-goal group had fewer high-quality (leak-free) ligatures (odds ratio [OR] 0.36 [95% CI, 0.22, 0.58; p < 0.001]). For the quality-goal group, leaky ligatures took longer post-intervention than leak-free ligatures (FC 1.09 [95% CI, 1.02, 1.17; p = 0.01]), whereas average times for leaky and leak-free ligatures were similar for the speed-goal group (FC 0.97 [95% CI, 0.91, 1.04; p = 0.38]). For a given performance time, the speed-goal group had more leaks post-intervention than the quality-goal group (OR 3.35 [95% CI, 1.58, 7.10; p = 0.002]). CONCLUSIONS: Speed and quality goals promote different learning processes and outcomes among novices. Use of both speed and quality goals may facilitate more effective and efficient learning.


Subject(s)
Goals , Quality of Health Care , Vascular Surgical Procedures/education , Vascular Surgical Procedures/standards , Adolescent , Blood Vessels , Clinical Competence , Female , Humans , Learning , Male , Schools , Simulation Training , Students , Task Performance and Analysis , Time , Treatment Outcome
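The fold changes (FC) above compare task times on a ratio scale. A common way to obtain such an estimate, sketched below with simulated times, is to analyze times on the log scale and exponentiate the difference in group means; whether the study used exactly this model is an assumption.

```python
import numpy as np

# Simulated final-repetition times (seconds) for the two goal groups.
rng = np.random.default_rng(11)
quality_goal_times = rng.lognormal(mean=np.log(120), sigma=0.3, size=40)
speed_goal_times = rng.lognormal(mean=np.log(100), sigma=0.3, size=40)

# FC < 1 means the speed-goal group was faster on average.
diff_log = np.mean(np.log(speed_goal_times)) - np.mean(np.log(quality_goal_times))
fold_change = np.exp(diff_log)
print(f"Fold change (speed-goal vs quality-goal): {fold_change:.2f}")
```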