Results 1 - 20 of 53
1.
Syst Rev; 13(1): 131, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38745201

ABSTRACT

BACKGROUND: The current paradigm of competency-based medical education and learner-centredness requires learners to take an active role in their training. However, deliberate and planned continual assessment and performance improvement are hindered by the fragmented nature of many medical training programs. Attempts to bridge this continuity gap between supervision and feedback through learner handover have been controversial. Learning plans are an alternate educational tool that helps trainees identify their learning needs and facilitates longitudinal assessment by providing supervisors with a roadmap of their goals. Informed by self-regulated learning theory, learning plans may provide a way to track trainees' progress along their learning trajectory. The purpose of this study is to summarise the literature regarding learning plan use specifically in undergraduate medical education and to explore students' roles in all stages of learning plan development and implementation. METHODS: Following Arksey and O'Malley's framework, a scoping review will be conducted to explore the use of learning plans in undergraduate medical education. Literature searches will be conducted in multiple databases by a librarian with expertise in scoping reviews. Through an iterative process, inclusion and exclusion criteria will be developed and a data extraction form refined. Data will be analysed using quantitative and qualitative content analyses. DISCUSSION: By summarising the literature on learning plan use in undergraduate medical education, this study aims to better understand how to support self-regulated learning at this level of training. The results will inform future scholarly work in competency-based medical education at the undergraduate level and have implications for improving feedback and supporting learners at all levels of competence. SCOPING REVIEW REGISTRATION: Open Science Framework osf.io/wvzbx.


Subject(s)
Education, Medical, Undergraduate; Learning; Education, Medical, Undergraduate/methods; Humans; Clinical Competence; Competency-Based Education/methods
2.
Med Teach; 46(4): 471-485, 2024 04.
Article in English | MEDLINE | ID: mdl-38306211

ABSTRACT

Changes in digital technology, the increasing volume of data collection, and advances in methods have the potential to unleash the value of big data generated through the education of health professionals. Coupled with this potential are legitimate concerns about how data can be used or misused in ways that limit autonomy or equity, or that harm stakeholders. This consensus statement is intended to address these issues by foregrounding the ethical imperatives for engaging with big data as well as the potential risks and challenges. Recognizing the wide and ever-evolving scope of big data scholarship, we focus on foundational issues for framing and engaging in research. We ground our recommendations in the context of big data created through data sharing across and within the stages of the continuum of the education and training of health professionals. Ultimately, the goal of this statement is to support a culture of trust and quality for big data research to deliver on its promises for health professions education (HPE) and the health of society. Based on expert consensus and review of the literature, we report 19 recommendations addressing (1) the framing of scholarship and research, (2) unique ethical practices, (3) governance of data sharing collaborations that engage stakeholders, (4) best practices for data sharing processes, (5) the importance of knowledge translation, and (6) advancing the quality of scholarship through multidisciplinary collaboration. The recommendations were modified and refined based on feedback from the 2022 Ottawa Conference attendees and subsequent public engagement. Adoption of these recommendations can help HPE scholars share data ethically and engage in high-impact big data scholarship, which in turn can help the field meet the ultimate goal: high-quality education that leads to high-quality healthcare.


Subject(s)
Big Data; Health Occupations; Information Dissemination; Humans; Health Occupations/education; Consensus
3.
Article in English | MEDLINE | ID: mdl-38010576

ABSTRACT

First impressions can influence rater-based judgments, but their contribution to rater bias is unclear. Research suggests raters can overcome first impressions in experimental exam contexts with explicit first impressions, but these findings may not generalize to a workplace context with implicit first impressions. The study had two aims: first, to assess whether first impressions affect raters' judgments when workplace performance changes; and second, to determine whether explicitly stating these impressions affects subsequent ratings compared with implicitly formed first impressions. Physician raters viewed six videos in which learner performance either changed (Strong to Weak or Weak to Strong) or remained consistent. Raters were assigned to two groups. Group one (n = 23, Explicit) made a first impression global rating (FIGR), then scored learners using the Mini-CEX. Group two (n = 22, Implicit) scored learners at the end of the video solely with the Mini-CEX. For the Explicit group, in the Strong to Weak condition, the FIGR (M = 5.94) was higher than the Mini-CEX global rating (GR) (M = 3.02, p < .001). In the Weak to Strong condition, the FIGR (M = 2.44) was lower than the Mini-CEX GR (M = 3.96, p < .001). There was no difference between the FIGR and the Mini-CEX GR in the consistent condition (M = 6.61 and M = 6.65, respectively, p = .84). There were no statistically significant differences in any of the conditions when comparing the two groups' Mini-CEX GRs. Therefore, raters adjusted their judgments based on the learners' performances. Furthermore, raters who made their first impressions explicit showed similar rater bias to raters who followed a more naturalistic process.
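To make the paired comparison concrete, the following is a minimal Python sketch, with data simulated to approximate the reported Explicit-group means; the paired t-test is an assumed analysis choice for illustration, not necessarily the procedure used in the study.

    import numpy as np
    from scipy import stats

    # Hypothetical data: each Explicit-group rater (n = 23) gives a
    # first impression global rating (FIGR) and, after viewing the full
    # "Strong to Weak" video, a Mini-CEX global rating (GR).
    rng = np.random.default_rng(42)
    figr = rng.normal(5.94, 0.8, 23)         # reported mean ~ 5.94
    mini_cex_gr = rng.normal(3.02, 0.8, 23)  # reported mean ~ 3.02

    # Ratings are paired within rater, so a paired t-test is one
    # plausible way to test whether the drop in scores is significant.
    t_stat, p_value = stats.ttest_rel(figr, mini_cex_gr)
    print(f"FIGR mean = {figr.mean():.2f}, GR mean = {mini_cex_gr.mean():.2f}, "
          f"t = {t_stat:.2f}, p = {p_value:.4f}")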

4.
J Contin Educ Health Prof; 43(3): 155-163, 2023.
Article in English | MEDLINE | ID: mdl-37638679

ABSTRACT

INTRODUCTION: Evaluation of quality improvement programs shows variable impact on physician performance, often neglecting to examine how implementation varies across contexts and the mechanisms that affect uptake. Realist evaluation enables the generation, refinement, and testing of theories of change by unpacking what works for whom, under what circumstances, and why. This study used realist methods to explore relationships between outcomes, mechanisms (resources and reasoning), and context factors of a national multisource feedback (MSF) program. METHODS: Linked data for 50 physicians were examined to determine relationships between action plan completion status (outcomes); MSF ratings, MSF comments, and prescribing data (resource mechanisms); a report summarizing the conversation between a facilitator and physician (reasoning mechanism); and practice risk factors (context). Working backward from outcomes enabled exploration of similarities and differences in mechanisms and context. RESULTS: The derived model showed that the completion status of plans was influenced by the interaction of resource and reasoning mechanisms, with context mediating the relationships. Two patterns emerged. Physicians who implemented all their plans within six months received feedback with consistent messaging, reviewed data ahead of facilitation, co-constructed plan(s) with the facilitator, and had fewer risks to competence (dyscompetence). Physicians who were unable to implement any plans had data with fewer repeated messages that they did not incorporate into their plans, had plans that were difficult or required involving others and were physician-led rather than co-constructed, and were at higher risk for dyscompetence. DISCUSSION: Evaluation of quality improvement initiatives should examine program outcomes while taking into consideration the interplay of resources, reasoning, and risk factors for dyscompetence.

5.
Med Teach; 45(9): 1054-1060, 2023 09.
Article in English | MEDLINE | ID: mdl-37262177

ABSTRACT

PURPOSE: The transition towards Competency-Based Medical Education at the Cumming School of Medicine was accelerated by the reduced clinical time caused by the COVID-19 pandemic. The purpose of this study was to define a standard protocol for setting Entrustable Professional Activity (EPA) achievement thresholds and to examine their feasibility within the clinical clerkship. METHODS: Achievement thresholds for each of the 12 AFMC EPAs for graduating Canadian medical students were set using sequential rounds of revision by three consecutive groups of stakeholders and evaluation experts. Structured communication was guided by a modified Delphi technique. The feasibility and consequences of these thresholds were then assessed by tracking their completion by the graduating class of 2021. RESULTS: The threshold-setting process produced EPA achievement thresholds ranging from 1 to 8 across the 12 AFMC EPAs. Estimates were stable after the first round for 9 of 12 EPAs. Overall, 96.27% of EPAs were successfully completed by clerkship students despite the shortened clinical period. Feasibility was predicted by the slowing rate of EPA accumulation over time during the clerkship. CONCLUSION: The process described led to consensus on EPA achievement thresholds, and successful completion of the assigned thresholds was feasible within the shortened clerkship.
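The threshold-checking step described in the METHODS lends itself to a short sketch. All EPA names, threshold values, and observation counts below are invented for illustration; only the 1-to-8 threshold range and the class-wide completion figure come from the abstract.

    # Compare one student's logged EPA observations against the set
    # achievement thresholds (study thresholds ranged from 1 to 8).
    thresholds = {"EPA-1": 8, "EPA-2": 4, "EPA-3": 1}   # hypothetical
    student_log = {"EPA-1": 9, "EPA-2": 4, "EPA-3": 0}  # hypothetical

    completed = [epa for epa, required in thresholds.items()
                 if student_log.get(epa, 0) >= required]
    completion_rate = len(completed) / len(thresholds)
    # The study reported 96.27% completion across the graduating class.
    print(f"{completion_rate:.0%} of EPAs completed")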


Subject(s)
COVID-19; Internship and Residency; Students, Medical; Humans; Pandemics; Canada; Clinical Competence; COVID-19/epidemiology; Competency-Based Education/methods
6.
Can Med Educ J; 13(4): 62-67, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36091740

ABSTRACT

Assessment drives learning. However, when it comes to high-stakes examinations (e.g., for licensure or certification), these assessments of learning may be seen by some as unnecessary hurdles. Licensing clinical skills assessments in particular have come under fire over the years. Recently, assessments such as the Medical Council of Canada Qualifying Examination Part II, a clinical skills objective structured clinical examination, have been permanently cancelled. The authors explore potential consequences of this cancellation, including those that are inadvertent and undesirable, as well as next steps for clinical skills assessment.



7.
Med Teach; 44(6): 672-678, 2022 06.
Article in English | MEDLINE | ID: mdl-35021934

ABSTRACT

INTRODUCTION: As competency-based curricula receive increasing attention in postgraduate medical education, Entrustable Professional Activities (EPAs) are gaining in popularity. The aim of this survey was to determine the use of EPAs in anesthesiology training programs across Europe and North America. METHODS: A survey was developed and distributed to anesthesiology residency training program directors in Switzerland, Germany, Austria, the Netherlands, the USA, and Canada. A convergent-design mixed-methods approach was used to analyze both quantitative and qualitative data. RESULTS: The survey response rate was 38% (108 of 284). Seven percent of respondents used EPAs for making entrustment decisions, and 53% of institutions had not implemented any specific system for making such decisions. The majority of respondents agreed that EPAs should become an integral part of resident training in anesthesiology because they are universal and easy to use. CONCLUSION: Although several countries are adopting or planning to adopt EPAs, and national societies recommend them as a framework for competency-based programs, few anesthesiology training programs yet use EPAs to make "competence" decisions, and over half of responding programs have no specific system for making entrustment decisions.


Subject(s)
Anesthesiology; Internship and Residency; Anesthesiology/education; Clinical Competence; Competency-Based Education/methods; Curriculum; Humans; Surveys and Questionnaires
8.
J Contin Educ Health Prof; 42(4): 243-248, 2022 10 01.
Article in English | MEDLINE | ID: mdl-34609355

ABSTRACT

INTRODUCTION: A new multisource feedback (MSF) program was specifically designed to support physician quality improvement (QI) around the CanMEDS roles of Collaborator, Communicator, and Professional. Quantitative ratings and qualitative comments are collected from a sample of physician colleagues, co-workers, and patients. These data are supplemented with self-ratings and given back to physicians in individualized reports. Each physician reviews the report with a trained feedback facilitator and creates one to three action plans for QI. This study explores how the content of the four aforementioned MSF program components supports the elicitation and translation of feedback into a QI plan for change. METHODS: Data included survey items, rater comments, a portion of facilitator reports, and action plan components for 159 physicians. Word frequency queries were used to identify common words and explore relationships among data sources. RESULTS: Overlap between high-frequency words in survey items and rater comments was substantial. The language used to describe goals in physician action plans was highly related to respondent comments, but less so to survey items. High-frequency words in facilitator reports related heavily to action plan content. DISCUSSION: All components of the program relate to one another, indicating that each plays a part in the process. Patterns of overlap suggest unique functions performed by program components. This demonstration of coherence across the program's components is one piece of evidence supporting the program's validity.
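A minimal sketch of the word-frequency-overlap idea is shown below; the texts, stop-word list, and function are invented for illustration, and the study's actual queries may have been run in dedicated qualitative-analysis software.

    import re
    from collections import Counter

    # Count frequent words in two data sources and report the overlap.
    STOP_WORDS = {"the", "and", "a", "to", "of", "with", "is", "very", "more"}

    def frequent_words(text, k=20):
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(w for w in words if w not in STOP_WORDS)
        return {word for word, _ in counts.most_common(k)}

    survey_items = "communicates clearly with patients and respects colleagues"
    rater_comments = "could communicate more clearly and show respect with patients"

    # Shared high-frequency words are a crude signal of overlap between sources.
    print(frequent_words(survey_items) & frequent_words(rater_comments))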


Subject(s)
Clinical Competence; Physicians; Humans; Feedback; Surveys and Questionnaires; Quality Improvement
9.
Acad Med; 97(3): 436-443, 2022 03 01.
Article in English | MEDLINE | ID: mdl-34380930

ABSTRACT

PURPOSE: Physicians are expected to provide compassionate, error-free care while navigating systemic challenges and organizational demands. Many are burning out. While organizations are scrambling to address the burnout crisis, physicians often resist interventions aimed at enhancing their wellness and building their resilience. The purpose of this research was to empirically study this phenomenon. METHOD: Constructivist grounded theory was used to inform the iterative data collection and analysis process. In spring 2018, 22 faculty physicians working in Canada participated in semistructured interviews to discuss their experiences of wellness and burnout, their perceptions of wellness initiatives, and how their experiences and perceptions influence their uptake of the rapidly proliferating strategies aimed at nurturing their resilience. Themes were identified using constant comparative analysis. RESULTS: Participants suggested that the values of compassion espoused by health care organizations do not extend to physicians, and they described feeling dehumanized by professional values steeped in an invincibility myth in which physicians are expected to be "superhuman" and "sacrifice everything" for medicine. Participants described that professional values and organizational norms impeded work-life balance, hindered personal and professional fulfillment, and discouraged disclosure of struggles. In turn, participants seemed to resist wellness and resilience-building interventions focused on fixing individuals rather than broader systemic, organizational, and professional issues. Participants perceived that efforts aimed at building individual resilience are futile without changes in professional values and sustained organizational support. CONCLUSIONS: Findings suggest that professional and organizational norms and expectations trigger feelings of dehumanization for some physicians. These feelings likely exacerbate burnout and may partly explain physicians' resistance to resilience-building strategies. Mitigating burnout and developing and sustaining a resilient physician workforce will require both individual resistance to problematic professional values and an institutional commitment to creating a culture of compassion for patients and physicians alike.


Subject(s)
Burnout, Professional; Medicine; Physicians; Burnout, Professional/prevention & control; Burnout, Psychological; Humans; Work-Life Balance
10.
Acad Med; 97(5): 747-757, 2022 05 01.
Article in English | MEDLINE | ID: mdl-34753858

ABSTRACT

PURPOSE: Progress testing is an increasingly popular form of assessment in which a comprehensive test is administered to learners repeatedly over time. To inform potential users, this scoping review aimed to document barriers, facilitators, and potential outcomes of the use of written progress tests in higher education. METHOD: The authors followed Arksey and O'Malley's scoping review methodology to identify and summarize the literature on progress testing. They searched 6 databases (Academic Search Complete, CINAHL, ERIC, Education Source, MEDLINE, and PsycINFO) on 2 occasions (May 22, 2018, and April 21, 2020) and included articles written in English or French and pertaining to written progress tests in higher education. Two authors screened articles for the inclusion criteria (90% agreement), then data extraction was performed by pairs of authors. Using a snowball approach, the authors also screened additional articles identified from the included reference lists. They completed a thematic analysis through an iterative process. RESULTS: A total of 104 articles were included. The majority of progress tests used a multiple-choice and/or true-or-false question format (95, 91.3%) and were administered 4 times a year (38, 36.5%). The most documented source of validity evidence was internal consistency (38, 36.5%). Four major themes were identified: (1) barriers and challenges to the implementation of progress testing (e.g., need for additional resources); (2) established collaboration as a facilitator of progress testing implementation; (3) factors that increase the acceptance of progress testing (e.g., formative use); and (4) outcomes and consequences of progress test use (e.g., progress testing contributes to an increase in knowledge). CONCLUSIONS: Progress testing appears to have a positive impact on learning, and there is significant validity evidence to support its use. Although progress testing is resource- and time-intensive, strategies such as collaboration with other institutions may facilitate its use.
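As a concrete aside on the dual-reviewer screening step, the sketch below computes raw percent agreement (the 90% figure reported) and Cohen's kappa, a chance-corrected statistic the abstract does not report; all screening decisions are invented.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical include/exclude calls (1 = include, 0 = exclude).
    reviewer_a = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
    reviewer_b = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]

    raw = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
    kappa = cohen_kappa_score(reviewer_a, reviewer_b)
    print(f"raw agreement = {raw:.0%}, kappa = {kappa:.2f}")  # 90%, 0.80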


Subject(s)
Delivery of Health Care; Knowledge; Humans
11.
Med Teach; 43(7): 780-787, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34020576

ABSTRACT

Health care revolves around trust. Patients are often in a position that gives them no choice other than to trust the people taking care of them. Educational programs thus have the responsibility to develop physicians who can be trusted to deliver safe and effective care, ultimately making a final decision to entrust trainees to graduate to unsupervised practice. This end-of-training entrustment decision is arguably the most important one, although earlier entrustment decisions, for smaller units of professional practice, also deserve to be scrutinized for their validity. Validity of entrustment decisions implies a defensible argument that can be analyzed in components that together support the decision. According to Kane, building a validity argument is a process designed to support inferences of scoring, generalization across observations, extrapolation to new instances, and implications of the decision. A lack of validity can be caused by inadequate evidence in terms of, according to Messick, content, response process, internal structure (coherence), and relationship to other variables, as well as by misinterpreted consequences. These two leading frameworks (Kane and Messick) in educational and psychological testing can be well applied to summative entrustment decision-making. The authors elaborate on the types of questions that need to be answered to arrive at defensible, well-argued summative decisions regarding performance, providing a grounding for high-quality, safe patient care.


Subject(s)
Internship and Residency; Physicians; Clinical Competence; Competency-Based Education; Decision Making; Humans; Trust
12.
Med Teach; 43(7): 737-744, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33989100

ABSTRACT

With the rapid uptake of entrustable professional activities and entrustment decision-making as an approach in undergraduate and graduate education in medicine and other health professions, there is a risk of confusion in the use of new terminologies. The authors seek to clarify the use of many words related to the concept of entrustment, based on existing literature, with the aim of establishing logical consistency in their use. The list of proposed definitions includes independence, autonomy, supervision, unsupervised practice, oversight, general and task-specific trustworthiness, trust, entrust(ment), entrustable professional activity, entrustment decision, entrustability, entrustment-supervision scale, retrospective and prospective entrustment-supervision scales, and entrustment-based discussion. The authors conclude that a shared understanding of the language around entrustment is critical to strengthen bridges among stages of training and practice, such as undergraduate medical education, graduate medical education, and continuing professional development. Shared language and understanding provide the foundation for consistency in interpretation and implementation across the educational continuum.


Subject(s)
Education, Medical, Undergraduate; Internship and Residency; Clinical Competence; Competency-Based Education; Education, Medical, Graduate; Prospective Studies; Retrospective Studies
13.
Adv Health Sci Educ Theory Pract; 26(3): 1133-1156, 2021 08.
Article in English | MEDLINE | ID: mdl-33566199

ABSTRACT

Understanding which factors can impact rater judgments in assessments is important to ensure quality ratings. One such factor is whether prior performance information (PPI) about learners influences subsequent decision making. The information can be acquired directly, when the rater sees the same learner, or different learners over multiple performances, or indirectly, when the rater is provided with external information about the same learner prior to rating a performance (i.e., learner handover). The purpose of this narrative review was to summarize and highlight key concepts from multiple disciplines regarding the influence of PPI on subsequent ratings, discuss implications for assessment and provide a common conceptualization to inform research. Key findings include (a) assimilation (rater judgments are biased towards the PPI) occurs with indirect PPI and contrast (rater judgments are biased away from the PPI) with direct PPI; (b) negative PPI appears to have a greater effect than positive PPI; (c) when viewing multiple performances, context effects of indirect PPI appear to diminish over time; and (d) context effects may occur with any level of target performance. Furthermore, some raters are not susceptible to context effects, but it is unclear what factors are predictive. Rater expertise and training do not consistently reduce effects. Making raters more accountable, providing specific standards and reducing rater cognitive load may reduce context effects. Theoretical explanations for these findings will be discussed.


Subject(s)
Clinical Competence; Educational Measurement; Humans; Judgment; Observer Variation; Research Personnel
14.
Adv Health Sci Educ Theory Pract; 26(1): 199-214, 2021 03.
Article in English | MEDLINE | ID: mdl-32577927

ABSTRACT

Learner handover (LH), the process of sharing of information about learners between faculty supervisors, allows for longitudinal assessment fundamental in the competency-based education model. However, the potential to bias future assessments has been raised as a concern. The purpose of this study is to determine whether prior performance information such as LH influences the assessment of learners in the clinical context. Between December 2017 and June 2018, forty-two faculty members and final-year residents from the Department of Medicine at the University of Ottawa were assigned to one of three study groups through quasi-randomisation, taking into account gender, speciality and rater experience. In a counter-balanced design, each group received either positive, negative or no LH prior to watching six simulated learner-patient encounter videos. Participants rated each video using the mini-CEX and completed a questionnaire on the raters' general impressions of LH. A significant difference in the mean mini-CEX competency scale scores between the negative (M = 5.29) and positive (M = 5.97) LH groups (P < .001, d = 0.81) was noted. Similar findings were found for the single overall clinical competence ratings. In the post-study questionnaire, 22/28 (78%) of participants had correctly deduced the purpose of the study and 14/28 (50%) felt LH did not influence their assessment. LH influenced mini-CEX scores despite raters' awareness of the potential for bias. These results suggest that LH could influence a rater's performance assessment and careful consideration of the potential implications of LH is required.


Subject(s)
Clinical Competence/standards; Educational Measurement/standards; Internship and Residency/organization & administration; Observer Variation; Adult; Canada; Competency-Based Education; Educational Measurement/methods; Female; Humans; Internship and Residency/standards; Male; Middle Aged; Sex Factors
15.
Can Med Educ J; 11(6): e46-e53, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33349753

ABSTRACT

BACKGROUND: Prior studies have shown that most conference submissions are never published. Understanding factors that facilitate publication may benefit authors. Using data from the Canadian Conference on Medical Education (CCME), our goal was to identify characteristics of conference submissions that predict the likelihood of publication, with a specific focus on the utility of peer-review ratings. METHODS: Two raters extracted study characteristics (scholarship type, methodology, population, sites, institutions) from all oral abstracts from 2011-2015 and peer-review ratings for 2014-2015. Publication data were obtained through online database searches. The impact of variables on publication success was analyzed using logistic regression. RESULTS: In total, 953 oral abstracts from 2011 to 2015 were reviewed. The overall publication rate was 30.5% (291/953). Of the 531 abstracts with peer-review ratings from 2014-2015, 162 (31%) were published. Of the nine variables analyzed, those associated with greater odds of publication were multiple vs. single institutions (odds ratio (OR) = 1.72), postgraduate research vs. others (OR = 1.81), and peer-review ratings (OR = 1.60). Factors with decreased odds of publication were curriculum development (OR = 0.17) and innovation vs. others (OR = 0.22). CONCLUSION: As in other studies, the publication rate of CCME presentations was low. However, peer-review ratings were predictive of publication success, suggesting that ratings could be a useful form of feedback to authors.
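To make the regression analysis concrete, here is a hypothetical Python sketch: the data are randomly generated and only the variable names mirror the study's predictors, so the fitted odds ratios will not match the reported values.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for the 531 rated abstracts.
    rng = np.random.default_rng(0)
    n = 531
    df = pd.DataFrame({
        "published": rng.integers(0, 2, n),            # 0/1 outcome
        "multi_institution": rng.integers(0, 2, n),    # study: OR = 1.72
        "postgrad_research": rng.integers(0, 2, n),    # study: OR = 1.81
        "peer_review_rating": rng.normal(4.0, 1.0, n), # study: OR = 1.60
    })

    # Logistic regression; exponentiated coefficients are odds ratios.
    model = smf.logit(
        "published ~ multi_institution + postgrad_research + peer_review_rating",
        data=df,
    ).fit(disp=0)
    print(np.exp(model.params))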



16.
Acad Med; 94(7): 1050-1057, 2019 07.
Article in English | MEDLINE | ID: mdl-30946129

ABSTRACT

PURPOSE: Learner handover (LH) is the sharing of information about trainees between faculty supervisors. This scoping review aimed to summarize key concepts across disciplines surrounding the influence of prior performance information (PPI) on current performance ratings and the implications for LH in medical education. METHOD: The authors used the Arksey and O'Malley framework to systematically select and summarize the literature. Cross-disciplinary searches were conducted in six databases in 2017-2018 for articles published after 1969. To represent PPI relevant to LH in medical education, eligible studies included within-subject indirect PPI for work-type performance and rating of an individual's current performance. Quantitative and thematic analyses were conducted. RESULTS: Of 24,442 records identified through database searches and 807 through other searches, 23 articles containing 24 studies were included. Twenty-two studies (92%) reported an assimilation effect (current ratings were biased toward the direction of the PPI). Factors modifying the effect of PPI were observed, with larger effects for highly polarized PPI, negative (vs positive) PPI, and early (vs subsequent) performances. Specific standards, rater motivation, and certain rater characteristics mitigated context effects, whereas increased rater processing demands heightened them. Mixed effects were seen with the nature of the performance and with rater expertise and training. CONCLUSIONS: PPI appears likely to influence ratings of current performance, and an assimilation effect is seen with indirect PPI. Whether these findings generalize to medical education is unknown, but they should be considered by educators wanting to implement LH. Future studies should explore PPI in medical education contexts and real-world settings.


Subject(s)
Educational Measurement/standards; Observer Variation; Work Performance/education; Educational Measurement/methods; Humans; Motivation; Time Factors; Work Performance/standards
17.
Acad Med; 94(1): 25-30, 2019 01.
Article in English | MEDLINE | ID: mdl-30113362

ABSTRACT

After many years in the making, an increasing number of postgraduate medical education (PGME) training programs in North America are now adopting a competency-based medical education (CBME) framework based on entrustable professional activities (EPAs) that, in turn, encompass a larger number of competencies and training milestones. Following the lead of PGME, CBME is now being incorporated into undergraduate medical education (UME) in an attempt to improve integration across the medical education continuum and to facilitate a smooth transition from clerkship to residency by ensuring that all graduates are ready for indirect supervision of required EPAs on day one of residency training. The Association of Faculties of Medicine of Canada recently finalized its list of 12 EPAs, which closely parallels the list of 13 EPAs published earlier by the Association of American Medical Colleges, and defines the "core" EPAs that are an expectation of all medical school graduates. In this article, the authors focus on important, practical considerations for the transition to CBME that they feel have not been adequately addressed in the existing literature. They suggest that the transition to CBME should not threaten diversity in UME or require a major curricular upheaval. However, each UME program must make important decisions that will define its version of CBME, including which terminology to use when describing the construct being evaluated, which rating tools and raters to include in the assessment program, and how to make promotion decisions based on all of the available data on EPAs.


Subject(s)
Clinical Competence; Competency-Based Education/organization & administration; Curriculum; Education, Medical, Undergraduate/organization & administration; Educational Measurement/methods; Students, Medical/psychology; Adult; Canada; Female; Humans; Male; North America; Young Adult
18.
Med Teach; 41(5): 569-577, 2019 05.
Article in English | MEDLINE | ID: mdl-30299196

ABSTRACT

Despite the increased emphasis on the use of workplace-based assessment in competency-based education models, there is still an important role for multiple choice questions (MCQs) in the assessment of health professionals. The challenge, however, is to ensure that MCQs are developed in a way that allows educators to derive meaningful information about examinees' abilities. As educators' needs for high-quality test items have evolved, so has our approach to developing MCQs. This evolution has been reflected in a number of ways, including the use of different stimulus formats, the creation of novel response formats, the development of new approaches to problem conceptualization, and the incorporation of technology. The purpose of this narrative review is to provide the reader with an overview of how our understanding of the use of MCQs in the assessment of health professionals has evolved to better measure clinical reasoning and to improve both efficiency and item quality.


Subject(s)
Education, Medical, Undergraduate; Educational Measurement/methods; Cognition; Competency-Based Education; Computer-Assisted Instruction/methods; Humans
19.
BMC Med Educ; 18(1): 302, 2018 Dec 11.
Article in English | MEDLINE | ID: mdl-30537960

ABSTRACT

BACKGROUND: Physicians in training must achieve a high degree of proficiency in performing physical examinations and must strive to become experts in the field. Concerns are emerging about physicians' abilities to perform these basic skills, which are essential for clinical decision making. Learning at the bedside has the potential to support skill acquisition through deliberate practice. Previous skills improvement programs, targeted at teaching physical examinations, have been successful at increasing the frequency of performing and teaching physical examinations. It remains unclear what barriers might persist after such program implementation. This study explores residents' and physicians' perceptions of physical examination teaching at the bedside following the implementation of a new structured bedside curriculum: what are the potentially persisting barriers, and what solutions might address them? METHODS: The study took a constructivist approach, using qualitative inductive thematic analysis to construct an understanding of the barriers to and facilitators of physical examination teaching in the context of a new bedside curriculum. Participants took part in individual interviews and, subsequently, focus groups. Transcripts were coded and themes were identified. RESULTS: Data analyses yielded three main themes: (1) the culture of teaching physical examination at the bedside is shaped and threatened by a lack of hospital support, physicians' motivation and expertise, and residents' attitudes and dependence on technology; (2) the hospital environment makes bedside teaching difficult because of its chaotic nature, time constraints, and conflicting responsibilities; and (3) structured physical examination curricula create missed opportunities by being restrictive and pose difficulties in identifying patients with findings. CONCLUSIONS: Despite the implementation of a structured bedside curriculum for physical examination teaching, our study suggests that cultural, environmental, and curriculum-related barriers remain important issues to be addressed. Institutions wishing to develop and implement similar bedside curricula should prioritize recruitment of expert clinical teachers, recognizing their time and efforts. Teaching should be delivered in a protected environment, away from clinical duties, and with patients with real findings. Physicians must value the teaching and learning of physical examination skills, with multiple hands-on opportunities for direct role modeling, coaching, observation, and deliberate practice. Ideally, clinical teachers should master the art of combining patient care and educational activities.


Subject(s)
Clinical Competence/standards; Curriculum; Education, Medical, Graduate; Internship and Residency; Physical Examination/standards; Point-of-Care Testing/standards; Adult; Attitude of Health Personnel; Female; Focus Groups; Humans; Male; Qualitative Research
20.
Acad Med; 93(6): 829-832, 2018 06.
Article in English | MEDLINE | ID: mdl-29538109

ABSTRACT

There exists an assumption that improving medical education will improve patient care. While seemingly logical, this premise has rarely been investigated. In this Invited Commentary, the authors propose the use of big data to test this assumption. The authors present a few example research studies linking education and patient care outcomes and argue that using big data may more easily facilitate the process needed to investigate this assumption. The authors also propose that collaboration is needed to link educational and health care data. They then introduce a grassroots initiative, inclusive of universities in one Canadian province and national licensing organizations, working together to collect, organize, link, and analyze big data to study the relationship between pedagogical approaches to medical training and patient care outcomes. While the authors acknowledge the possible challenges and issues associated with harnessing big data, they believe that the benefits outweigh them. There is a need for medical education research to go beyond the outcomes of training to study practice and clinical outcomes as well. Without a coordinated effort to harness big data, policy makers, regulators, medical educators, and researchers are left with sometimes costly guesses and assumptions about what works and what does not. As the social, time, and financial investments in medical education continue to increase, it is imperative to understand the relationship between education and health outcomes.


Subject(s)
Big Data; Education, Medical/statistics & numerical data; Needs Assessment; Outcome Assessment, Health Care/statistics & numerical data; Humans