1.
Perspect Med Educ ; 13(1): 160-168, 2024.
Article En | MEDLINE | ID: mdl-38464960

Introduction: We must ensure, through rigorous assessment, that physicians have the evidence-based medicine (EBM) skills to identify and apply the best available information to their clinical work. However, there is limited guidance on how to assess EBM competency. With a better understanding of their current role in EBM education, Health Sciences Librarians (HSLs), as experts, should be able to contribute to the assessment of medical students' EBM competence. The purpose of this study is to explore the HSLs' perspective on EBM assessment practices, both the current state and potential future activities. Methods: We conducted focus groups with librarians from across the United States to explore their perceptions of assessing EBM competence in medical students. Participants had been trained as raters of EBM competence as part of a novel Objective Structured Clinical Examination (OSCE). This OSCE was only the starting point; the discussion covered current EBM assessment and possibilities for expanded responsibilities at the participants' own institutions. We used a reflexive thematic analysis approach to construct themes from our conversations. Results: We constructed eight themes in four broad categories that influence librarians' ability to engage in effective assessment of EBM: administrative, curricular, medical student, and librarian. Conclusion: Our results inform medical school leadership by pointing out the modifiable factors that enable librarians to be more engaged in conducting effective assessment. They highlight the need for novel tools, like EBM OSCEs, that can address multiple barriers and create opportunities for deeper integration of librarians into assessment processes.


Librarians; Students, Medical; Humans; United States; Evidence-Based Medicine; Curriculum; Focus Groups
2.
Med Teach ; : 1-8, 2024 Mar 15.
Article En | MEDLINE | ID: mdl-38489473

INTRODUCTION: Clinical reasoning skills are essential for decision-making. Current assessment methods are limited when testing clinical reasoning and management of uncertainty. This study evaluates the reliability, validity and acceptability of Practicum Script, an online simulation-based programme, for developing medical students' clinical reasoning skills using real-life cases. METHODS: In 2020, we conducted an international, multicentre pilot study using 20 clinical cases with 2457 final-year medical students from 21 schools worldwide. Psychometric analysis was performed (n = 1502 students completing at least 80% of cases). Classical estimates of reliability for three test domains (hypothesis generation, hypothesis argumentation and knowledge application) were calculated using Cronbach's alpha and McDonald's omega coefficients. Validity evidence was obtained by confirmatory factor analysis (CFA) and measurement alignment (MA). Items from the knowledge application domain were analysed using cognitive diagnostic modelling (CDM). Acceptability was evaluated by an anonymous student survey. RESULTS: Reliability estimates were high with narrow confidence intervals. CFA revealed acceptable goodness-of-fit indices for the proposed three-factor model. CDM analysis demonstrated good absolute test fit and high classification accuracy estimates. Student survey responses showed high levels of acceptability. CONCLUSION: Our findings suggest that Practicum Script is a useful resource for strengthening students' clinical reasoning skills and ability to manage uncertainty.
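Cronbach's alpha, one of the classical reliability estimates named above, is computed from item-level variances and total-score variance. A minimal sketch, assuming toy data; the `cronbach_alpha` helper and scores below are illustrative, not material from the study:

```python
import statistics

def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix.

    scores: one list of k item scores per respondent.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])
    # Sample variance of each item across respondents
    item_vars = [statistics.variance(item) for item in zip(*scores)]
    # Sample variance of each respondent's total score
    total_var = statistics.variance([sum(resp) for resp in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy data: 4 respondents answering 3 items
data = [[2, 3, 3], [4, 4, 5], [3, 3, 4], [5, 5, 5]]
print(round(cronbach_alpha(data), 3))  # → 0.957
```

Values around 0.7-0.8 or above are conventionally read as acceptable internal consistency; the confidence intervals reported in such studies are typically obtained analytically or by bootstrapping.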

4.
Adv Health Sci Educ Theory Pract ; 29(1): 147-172, 2024 Mar.
Article En | MEDLINE | ID: mdl-37347458

There is an expectation that health professions schools respond to priority societal health needs. This expectation is largely based on the underlying assumption that schools are aware of the priority needs in their communities. This paper demonstrates how open-access, pan-national health data can be used to create a reliable health index to assist schools in identifying societal needs and advancing social accountability in health professions education. Using open-access data, a psychometric evaluation was conducted to examine the reliability and validity of the Canadian Health Indicators Framework (CHIF) conceptual model. A non-linear confirmatory factor analysis (CFA) on 67 health indicators at the health-region level (n = 97) was used to assess the model fit of the hypothesized 10-factor model. Reliability analyses using McDonald's omega were conducted, followed by Pearson's correlation coefficients. Findings from the non-linear CFA rejected the original conceptual model structure of the CHIF. Exploratory post hoc analyses were conducted using modification indices and parameter constraints to improve model fit. A final 5-factor multidimensional model demonstrated superior fit, reducing the number of indicators from 67 to 32. The five factors were: Health Conditions (8 indicators); Health Functions (6 indicators); Deaths (5 indicators); Non-Medical Health Determinants (7 indicators); and Community & Health System Characteristics (6 indicators). All factor loadings were statistically significant (p < 0.001) and demonstrated excellent internal consistency (ω > 0.95). Many schools struggle to identify and measure socially accountable outcomes. The process highlighted in this paper and the indices developed serve as starting points to allow schools to leverage open-access data as an initial step in identifying societal needs.


Schools; Social Responsibility; Humans; Psychometrics; Reproducibility of Results; Canada; Health Occupations; Surveys and Questionnaires
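McDonald's omega, the internal-consistency estimate reported above (ω > 0.95), can be derived directly from a standardized one-factor CFA solution. A hedged sketch; the `mcdonald_omega` helper and the loadings are invented for illustration, not the CHIF estimates:

```python
def mcdonald_omega(loadings):
    """Omega for a single factor from standardized loadings.

    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each item's error variance is 1 - loading^2 in a standardized solution.
    """
    s = sum(loadings)
    error = sum(1.0 - l * l for l in loadings)
    return s * s / (s * s + error)

# Illustrative standardized loadings for a 4-indicator subscale
print(round(mcdonald_omega([0.7, 0.8, 0.75, 0.85]), 3))  # → 0.858
```

Unlike Cronbach's alpha, omega does not assume equal loadings across items, which is why it is often preferred when a CFA model is already fitted.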
5.
Med Teach ; 46(1): 140-146, 2024 01.
Article En | MEDLINE | ID: mdl-37463405

High-value care is what patients deserve and what healthcare professionals should deliver. However, it is not what happens much of the time. Quality improvement master Dr. Don Berwick argued more than two decades ago that American healthcare needs an escape fire, which is a new way of seeing and acting in a crisis situation. While coined in the U.S. context, the analogy applies in other Western healthcare contexts as well. Therefore, in this paper, the authors revisit Berwick's analogy, arguing that medical education can, and should, provide the spark for such an escape fire across the globe. They assert that medical education can achieve this by fully embracing competency-based medical education (CBME) as a way to place medicine's focus on the patient. CBME targets training outcomes that prepare graduates to optimize patient care. The authors use the escape fire analogy to argue that medical educators must drop long-held approaches and tools; treat CBME implementation as an adaptive challenge rather than a technical fix; demand genuine, rich discussions and engagement about the path forward; and, above all, center the patient in all they do.


Competency-Based Education; Education, Medical; Humans; Health Personnel; Delivery of Health Care; Health Facilities
6.
Med Teach ; : 1-7, 2023 Oct 02.
Article En | MEDLINE | ID: mdl-37783205

In programmes of assessment with both high- and low-stakes assessments, the inclusion of open-ended long-answer questions in the high-stakes examination can help drive deeper learning among students. However, in larger institutions, this would generate a seemingly insurmountable marking workload. In this study, we use a focused ethnographic approach to explore how such a marking endeavour can be tackled efficiently and pragmatically. In marking parties, examiners come together to individually mark student papers. This study focuses on marking parties for two separate tasks assessing written clinical communication in medical school finals at Southampton, UK. Data collected included field notes from 21.3 h of marking parties; details of the demographics and clinical and educational experience of examiners; examiners' written answers to an open-ended post-marking-party questionnaire; an in-depth interview; and details of the actual marks assigned during the marking parties. In a landscape of examiners who are busy clinicians and rarely interact with each other educationally, marking parties represent a spontaneous and sustainable community of practice, with functions extending beyond the mere marking of exams. These include benchmarking, learning, managing biases and exam development. Despite the intensity of the work, marking parties built camaraderie and were considered fun and motivating.

7.
Patient Relat Outcome Meas ; 14: 193-212, 2023.
Article En | MEDLINE | ID: mdl-37448975

Reliability and measurement error are measurement properties that quantify the influence of specific sources of variation, such as raters, type of machine, or time, on the score of the individual measurement. Several designs can be chosen to assess reliability and measurement error of a measurement. Differences in design are due to specific choices about which sources of variation are varied over the repeated measurements in stable patients, which potential sources of variation are kept stable (ie, restricted), and about whether or not the entire measurement instrument (or measurement protocol) was repeated or only part of it. We explain how these choices determine how intraclass correlation coefficients and standard errors of measurement formulas are built for different designs by using Venn diagrams. Strategies for improving the measurement are explained, and recommendations for reporting the essentials of these studies are described. We hope that this paper will facilitate the understanding and improve the design, analysis, and reporting of future studies on reliability and measurement error of measurements.
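As a concrete illustration of how such design choices feed into the formulas, one common variant, ICC(2,1) (two-way random effects, absolute agreement, single measurement), is built from ANOVA mean squares for subjects, raters, and residual error; an SEM can then be derived from the ICC. A minimal sketch under those assumptions; the function names and toy ratings are hypothetical, and other designs use different formulas, as the paper explains:

```python
import math

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    data: n subjects x k raters (one list of k scores per subject).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    rater_means = [sum(col) / n for col in zip(*data)]
    msr = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in rater_means) / (k - 1)  # raters
    sse = sum((data[i][j] - subj_means[i] - rater_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                                 # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def sem(data, icc):
    """Standard error of measurement: SD of all scores times sqrt(1 - ICC)."""
    scores = [x for row in data for x in row]
    mean = sum(scores) / len(scores)
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (len(scores) - 1))
    return sd * math.sqrt(1.0 - icc)

# Toy design: 6 stable subjects each scored once by 2 raters
ratings = [[9, 8], [6, 7], [8, 8], [7, 6], [10, 9], [6, 7]]
icc = icc_2_1(ratings)
print(round(icc, 2), round(sem(ratings, icc), 2))
```

Restricting a source of variation (e.g. using a single rater throughout) removes its term from the denominator, which is exactly the design dependence the paper's Venn diagrams make visible.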

8.
Acad Med ; 98(9): 1083-1092, 2023 09 01.
Article En | MEDLINE | ID: mdl-37146237

PURPOSE: In health professions education (HPE), the effect of assessments on student motivation for learning and its consequences have been largely neglected. This is problematic because assessments can hamper motivation and psychological well-being. The research questions guiding this review were: How do assessments affect student motivation for learning in HPE? What outcomes does this lead to in which contexts? METHOD: In October 2020, the authors searched PubMed, Embase, APA PsycInfo, ERIC, CINAHL, and Web of Science Core Collection for "assessments" AND "motivation" AND "health professions education/students." Empirical papers or literature reviews investigating the effect of assessments on student motivation for learning in HPE using quantitative, qualitative, or mixed methods from January 1, 2010, to October 29, 2020, were included. The authors chose the realist synthesis method for data analysis to study the intended and unintended consequences of this complex topic. Assessments were identified as stimulating autonomous or controlled motivation using sensitizing concepts from self-determination theory and data on context-mechanism-outcome were extracted. RESULTS: Twenty-four of 15,291 articles were ultimately included. Assessments stimulating controlled motivation seemed to have negative outcomes. An example of an assessment that stimulates controlled motivation is one that focuses on factual knowledge (context), which encourages studying only for the assessment (mechanism) and results in surface learning (outcome). Assessments stimulating autonomous motivation seemed to have positive outcomes. An example of an assessment that stimulates autonomous motivation is one that is fun (context), which through active learning (mechanism) leads to higher effort and better connection with the material (outcome). 
CONCLUSIONS: These findings indicate that students strategically learned what was expected to appear in assessments at the expense of what was needed in practice. Therefore, health professions educators should rethink their assessment philosophy and practices and introduce assessments that are relevant to professional practice and stimulate genuine interest in the content.


Motivation; Students; Humans; Health Occupations/education; Clinical Competence
9.
Diagnosis (Berl) ; 10(3): 249-256, 2023 08 01.
Article En | MEDLINE | ID: mdl-36916145

OBJECTIVES: The organization of medical knowledge is reflected in language and can be studied from the viewpoints of semantics and prototype theory. The purpose of this study is to analyze student verbalizations during an Objective Structured Clinical Examination (OSCE) and correlate them with test scores and final medical degree (MD) scores. We hypothesize that students whose verbalizations are semantically richer and closer to the disease prototype will show better academic performance. METHODS: We conducted a single-center study during a year 6 (Y6) high-stakes OSCE where one probing intervention was included at the end of the exam to capture students' reasoning about one of the clinical cases. Verbalizations were transcribed and coded. An assessment panel categorized verbalizations regarding their semantic value (Weak, Good, Strong). Semantic categories and prototypical elements were compared with OSCE, case-based exam and global MD scores. RESULTS: Students with Semantic 'Strong' verbalizations displayed higher OSCE, case-based exam and MD scores, while the use of prototypical elements was associated with higher OSCE and MD scores. CONCLUSIONS: Semantic competence and verbalizations matching the disease prototype may identify students with better organization of medical knowledge. This work provides empirical groundwork for future research on language analysis to support assessment decisions.


Students, Medical; Humans; Semantics; Pilot Projects; Language; Knowledge
10.
Acad Psychiatry ; 47(2): 134-142, 2023 Apr.
Article En | MEDLINE | ID: mdl-36224504

OBJECTIVE: Entrustable professional activities (EPAs) are used as clinical activities in postgraduate psychiatry training in Australasia. This study aimed to explore psychiatry trainees' perceptions of the impact of EPAs on their motivation and learning. METHODS: A constructivist grounded theory approach was used to conceptualize the impact of EPAs on trainees' motivation and learning. A purposive sample of trainees was recruited from across New Zealand. Semi-structured individual interviews were used for data collection and continued until theoretical saturation was reached. RESULTS: The impact of EPAs on learning was mediated by the trainee's appraisals of subjective control, value, and the costs of engaging with EPAs. When appraisals were positive, EPAs encouraged a focus on particular learning needs and structured learning with the supervisor. However, when appraisals were negative, EPAs encouraged a superficial approach to learning. Trainee appraisals and their subsequent impact on motivation and learning were most affected by EPA granularity, alignment of EPAs with clinical practice, and the supervisor's conscientiousness in their approach to EPAs. CONCLUSIONS: To stimulate learning, EPAs must be valued by both trainees and supervisors as constituting a coherent work-based curriculum that encompasses the key fellowship competencies. If EPAs are to be effective as clinical tasks for learning, ongoing faculty development must be the leading priority.


Education, Medical; Internship and Residency; Humans; Competency-Based Education; Clinical Competence; Curriculum; Learning
11.
Am J Pharm Educ ; 87(3): ajpe9110, 2023 04.
Article En | MEDLINE | ID: mdl-36270661

Objectives. To explore the key factors that influence professional identity construction in fourth-year pharmacy students enrolled in a Doctor of Pharmacy program. Methods. A single-site instrumental case study of current fourth-year pharmacy students from the Leslie Dan Faculty of Pharmacy, University of Toronto, was used. Thirteen students participated in semistructured interviews. Poststructural social identity theories were used to analyze the data and identify themes that influence identity construction in pharmacy students. Results. Data analysis identified five overarching themes that influence pharmacy student professional identity construction: path to pharmacy, curriculum, environment, preceptors, and patient interactions. The Leslie Dan Faculty of Pharmacy curriculum prioritized the health care provider identity, which influenced the students' desire to "become" clinicians. Based on their internalized health care provider identity, they rejected preceptors and practice environments that negatively impacted their ability to embody this identity. Conclusion. The findings of this study suggest that pharmacy students align themselves strongly with health care provider identities at the cost of other potentially relevant identities. Pharmacy education programs may benefit from curricular reforms that incorporate and legitimize multiple pharmacist identities to ensure a strong pharmacy workforce for the future.


Education, Pharmacy; Pharmacy; Students, Pharmacy; Humans; Education, Pharmacy/methods; Social Identification; Curriculum
12.
Med Teach ; 45(4): 433-441, 2023 04.
Article En | MEDLINE | ID: mdl-36306368

Multiple-choice questions (MCQs) suffer from cueing, variable item quality, and an emphasis on testing factual knowledge. This study presents a novel multimodal test containing alternative item types in a computer-based assessment (CBA) format, designated Proxy-CBA. The Proxy-CBA was compared to a standard MCQ-CBA regarding validity, reliability, standard error of measurement (SEM), and cognitive load, using a quasi-experimental crossover design. Biomedical students were randomized into two groups to sit a 65-item formative exam starting with the MCQ-CBA followed by the Proxy-CBA (group 1, n = 38), or the reverse (group 2, n = 35). Subsequently, a questionnaire on perceived cognitive load was administered, answered by 71 participants. Both CBA formats were analyzed according to parameters of Classical Test Theory and the Rasch model. Compared to the MCQ-CBA, the Proxy-CBA had lower raw scores (p < 0.001, η2 = 0.276), higher reliability estimates (p < 0.001, η2 = 0.498), lower SEM estimates (p < 0.001, η2 = 0.807), and lower theta ability scores (p < 0.001, η2 = 0.288). The questionnaire revealed no significant differences between the two CBA tests regarding perceived cognitive load. Compared to the MCQ-CBA, the Proxy-CBA showed increased reliability and a higher degree of validity with similar cognitive load, suggesting its utility as an alternative assessment format.


Educational Measurement; Students, Medical; Humans; Reproducibility of Results; Surveys and Questionnaires; Computers
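The Rasch model used in the analysis above places person ability and item difficulty on a common logit scale, so the probability of a correct response depends only on their difference. A toy sketch of that relationship; the helper names are illustrative, not the analysis code used in the study:

```python
import math

def rasch_p(theta, b):
    """Rasch probability of a correct response:
    P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected raw score for a person across a set of items."""
    return sum(rasch_p(theta, b) for b in difficulties)

print(rasch_p(0.0, 0.0))  # ability equals difficulty → 0.5
print(round(expected_score(1.0, [-1.0, 0.0, 1.0]), 2))
```

In practice, theta and b are estimated jointly from the response matrix (e.g. by conditional or marginal maximum likelihood); the "theta ability scores" compared between the two CBA formats are such estimates.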
13.
Teach Learn Med ; 35(5): 527-536, 2023.
Article En | MEDLINE | ID: mdl-35903923

Phenomenon: Social accountability has become a universal component in medical education. However, medical schools have little guidance for operationalizing and applying this concept in practice. This study explored institutional practices and administrative perceptions of social accountability in medical education. Approach: An online survey was distributed to a purposeful sample of English-speaking undergraduate medical school deans and program directors/leads from 245 institutions in 14 countries. The survey comprised 38 items related to program mission statements, admission processes, curricular content, and educational outcomes. Survey items were developed using previous literature and categorized using a context-input-process-products (CIPP) evaluation model. Exploratory Factor Analysis (EFA) was used to assess the inter-relationship among survey items. Reliability and internal consistency of items were evaluated using McDonald's omega. Findings: Results from 81 medical schools in 14 countries, collected between February and June 2020, are presented. Institutional commonalities of social accountability were observed. However, our findings suggest programs focus predominantly on educational inputs and processes, and not necessarily on outcomes. Findings from our EFA demonstrated excellent internal consistency and reliability. Four factors were extracted: (1) selection and recruitment; (2) institutional mandates; (3) institutional activities; and (4) community awareness, accounting for 71% of the variance. McDonald's omega reliability estimates for subscales ranged from 0.80-0.87. Insights: This study identified common practices of social accountability. While many medical schools expressed an institutional commitment to social accountability, their effects on the community remain unknown and unevaluated. 
Overall, this paper offers programs and educators a psychometrically supported tool to aid in operationalizing and reliably evaluating social accountability.


Education, Medical; Schools, Medical; Humans; Reproducibility of Results; Curriculum; Social Responsibility
14.
Acad Med ; 98(3): 367-375, 2023 03 01.
Article En | MEDLINE | ID: mdl-36351056

PURPOSE: Traditional quality metrics do not adequately represent the clinical work done by residents and, thus, cannot be used to link residency training to health care quality. This study aimed to determine whether electronic health record (EHR) data can be used to meaningfully assess residents' clinical performance in pediatric emergency medicine using resident-sensitive quality measures (RSQMs). METHOD: EHR data for asthma and bronchiolitis RSQMs from Cincinnati Children's Hospital Medical Center, a quaternary children's hospital, between July 1, 2017, and June 30, 2019, were analyzed by ranking residents based on composite scores calculated using raw, unadjusted, and case-mix adjusted latent score models, with lower percentiles indicating lower quality of care and performance. Reliability and associations between the scores produced by the 3 scoring models were compared. Resident and patient characteristics associated with performance in the highest and lowest tertiles and changes in residents' rank after case-mix adjustments were also identified. RESULTS: A total of 274 residents and 1,891 individual encounters of bronchiolitis patients aged 0-1 years, as well as 270 residents and 1,752 individual encounters of asthma patients aged 2-21 years, were included in the analysis. The minimum reliability requirement to create a composite score was met for the asthma data (α = 0.77), but not for bronchiolitis (α = 0.17). The asthma composite scores showed high correlations (r = 0.90-0.99) between raw, latent, and adjusted composite scores. After case-mix adjustments, residents' absolute percentile rank shifted on average 10 percentiles. Residents who dropped by 10 or more percentiles were likely to be more junior, saw fewer patients, cared for less acute and younger patients, or had patients with a longer emergency department stay. 
CONCLUSIONS: For some clinical areas, it is possible to use EHR data, adjusted for patient complexity, to meaningfully assess residents' clinical performance and identify opportunities for quality improvement.


Asthma; Emergency Medicine; Internship and Residency; Pediatric Emergency Medicine; Child; Humans; Quality Indicators, Health Care; Electronic Health Records; Reproducibility of Results; Clinical Competence
15.
Adv Simul (Lond) ; 7(1): 30, 2022 Sep 24.
Article En | MEDLINE | ID: mdl-36153603

BACKGROUND: Systematic reviews on simulation training effectiveness have pointed to the need to adhere to evidence-based instructional design (ID) guidelines. ID guidelines derive from sound cognitive theories and aim to optimize complex learning (integration of knowledge, skills, and attitudes) and learning transfer (application of acquired knowledge and skills in the workplace). The purpose of this study was to explore adherence to ID guidelines in simulation training programs for dealing with postpartum hemorrhage (PPH), a high-risk situation and the leading cause of maternal mortality worldwide. METHODS: A total of 40 raters analyzed simulation training programs as described in 32 articles. The articles were divided into four subsets of seven articles and one subset of four articles. Each subset was judged by seven to ten raters on adherence to ID guidelines. The 5-point Likert rating scale was based on Merrill's First Principles of Instruction and included items relating to key ID features categorized into five subscales: authenticity, activation of prior knowledge, demonstration, application, and integration/transfer. The authors searched for articles published in English between January 2007 and March 2017 in PubMed, ERIC, and Google Scholar and calculated the mean Likert-scale score per subscale and the interrater reliability (IRR). RESULTS: The mean Likert-scale scores calculated for all subscales were < 3.00. For the number of raters used to judge the papers in this study (between 7 and 10), the IRR was found to be excellent for the authenticity and integration/transfer subscales, good-to-excellent for the activation of prior knowledge and application subscales, and fair-to-good for the demonstration subscale. CONCLUSION: The results demonstrate that current simulation training programs for a high-risk situation such as PPH poorly describe adherence to evidence-based ID guidelines.

17.
BMC Med Educ ; 22(1): 567, 2022 Jul 23.
Article En | MEDLINE | ID: mdl-35869477

BACKGROUND: Collaborative learning is a group learning approach in which positive social interdependence within a group is key to better learning performance and future attitudes toward team practice. Recent attempts to replace a face-to-face environment with an online one have been developed using information communication technology. However, this raises the concern that online collaborative learning (OCL) may reduce positive social interdependence. Therefore, this study aimed to compare the degree of social interdependence in OCL with face-to-face environments and clarify aspects that affect social interdependence in OCL. METHODS: We conducted a crossover study comparing online and face-to-face collaborative learning environments in a clinical reasoning class using team-based learning for medical students (n = 124) in 2021. The participants were randomly assigned to two cohorts: Cohort A began in an online environment, while Cohort B began in a face-to-face environment. At the study's midpoint, the two cohorts exchanged the environments as a washout. The participants completed surveys using the social interdependence in collaborative learning scale (SOCS) to measure their perceived positive social interdependence before and after the class. Changes in the mean SOCS scores were compared using paired t-tests. Qualitative data related to the characteristics of the online environment were obtained from the focus groups and coded using thematic analysis. RESULTS: The matched-pair tests of SOCS showed significant progression between pre- and post-program scores in the online and face-to-face groups. There were no significant differences in overall SOCS scores between the two groups. Sub-analysis by subcategory showed significant improvement in boundary (discontinuities among individuals) and means interdependence (resources, roles, and tasks) in both groups, but outcome interdependence (goals and rewards) improved significantly only in the online group. 
Qualitative analysis revealed four major themes affecting social interdependence in OCL: communication, the task-sharing process, perception of other groups, and working facilities. CONCLUSIONS: Students' communication styles differ between face-to-face and online environments, and these various influences equalize social interdependence across the two settings.


Interdisciplinary Placement; Students, Medical; Cross-Over Studies; Focus Groups; Humans; Interdisciplinary Placement/methods; Learning
18.
BMC Med Educ ; 22(1): 409, 2022 May 28.
Article En | MEDLINE | ID: mdl-35643442

BACKGROUND: Programmatic assessment is increasingly being implemented within competency-based health professions education. In this approach, a multitude of low-stakes assessment activities are aggregated into a holistic high-stakes decision on the student's performance. High-stakes decisions need to be of high quality. Part of this quality is whether an examiner perceives saturation of information when making a holistic decision. The purpose of this study was to explore the influence of narrative information on the perception of saturation of information during the interpretative process of high-stakes decision-making. METHODS: In this mixed-method intervention study, the quality of the recorded narrative information (i.e., feedback and reflection) was manipulated within multiple portfolios to investigate its influence on 1) the perception of saturation of information and 2) the examiner's interpretative approach in making a high-stakes decision. Data were collected through surveys, screen recordings of the portfolio assessments, and semi-structured interviews. Descriptive statistics and template analysis were applied to analyze the data. RESULTS: The examiners perceived saturation of information less frequently in the portfolios with low-quality narrative feedback. Additionally, they mentioned consistency of information as a factor that influenced their perception of saturation. Even though they generally had their own idiosyncratic approach to assessing a portfolio, variations arose in response to certain triggers, such as noticeable deviations in the student's performance and in the quality of narrative feedback. CONCLUSION: The perception of saturation of information seemed to be influenced by the quality of the narrative feedback and, to a lesser extent, by the quality of reflection. These results emphasize the importance of high-quality narrative feedback for making robust decisions on portfolios that are expected to be more difficult to assess. 
Furthermore, within these "difficult" portfolios, examiners adapted their interpretative process in reaction to the intervention and other triggers, taking an iterative and responsive approach.


Competency-Based Education; Narration; Competency-Based Education/methods; Feedback; Humans; Surveys and Questionnaires
19.
Med Teach ; 44(8): 928-937, 2022 08.
Article En | MEDLINE | ID: mdl-35701165

INTRODUCTION: Programmatic assessment is an approach aimed at optimizing the learning and decision functions of assessment. It involves a set of key principles and ground rules that are important for its design and implementation. However, despite its intuitive appeal, its implementation remains a challenge. The purpose of this paper is to gain a better understanding of the factors that affect the implementation process of programmatic assessment and how specific implementation challenges are managed across different programs. METHODS: An explanatory multiple-case (collective) approach was used for this study. We identified six medical programs that had implemented programmatic assessment, varying in health professions discipline, level of education, and geographic location. We conducted interviews with a key faculty member from each of the programs and analyzed the data using inductive thematic analysis. RESULTS: We identified two major factors in managing the challenges and complexity of the implementation process: knowledge brokers and a strategic opportunistic approach. Knowledge brokers were the people who drove and designed the implementation process, translating evidence into practice and allowing for real-time management of the complex processes of implementation. These knowledge brokers used a 'strategic opportunistic' or agile approach to recognize new opportunities, secure leadership support, adapt to the context, and take advantage of the unexpected. Engaging in an overall curriculum reform process was a critical factor for successful implementation of programmatic assessment. DISCUSSION: The study contributes to the understanding of the intricacies of the implementation process of programmatic assessment across different institutions. Managing opportunities, adaptive planning, and awareness of context were all critical aspects of thinking strategically and opportunistically in the implementation of programmatic assessment. 
Future research is needed to provide a more in-depth understanding of values and beliefs that underpin the assessment culture of an organization, and how such values may affect implementation.


Leadership; Learning; Faculty; Humans
20.
BMC Med Educ ; 22(1): 262, 2022 Apr 11.
Article En | MEDLINE | ID: mdl-35410217

BACKGROUND: Rubrics are frequently used to assess competencies in outcome-based medical education (OBE). The implementation of assessment systems using rubrics is usually realised through years of involvement in projects with various stakeholders. However, for countries or specialities new to OBE, faster and more simplified processes are required. In March 2019, Japan introduced nine competencies and generic rubrics of competencies for medical residents. We explored the local adaptation of these generic rubrics and its consequences for assessors. METHODS: The study followed three steps. First, we locally adapted the generic rubrics. We then conducted mixed-methods research to explore the effects of this local adaptation. In step two, we examined the correlations between the scores in the locally adapted assessment sheets for supervising doctors and the generic rubrics. In step three, we conducted interviews with supervising doctors. The study was conducted in the General Internal Medicine Department of Nagoya University, Japan. In the first step, doctors in the Medical Education Center and other medical departments, clerks, and residents participated. Supervising doctors in the General Internal Medicine Department participated in the second and third steps. RESULTS: A locally adapted assessment system was developed and implemented in seven months. The scores of the generic rubrics and the adapted assessment tool completed by the supervising doctors showed good correlations for some items but not for others, which were assessed mainly with other tools. Participant interviews revealed that local adaptation decreased their cognitive load, leading to more consistent ratings, increased writing of comments, and greater reflection on instruction. CONCLUSIONS: This adaptation process is a feasible way to begin implementing OBE. Local adaptation has advantages over direct use of generic rubrics.


Education, Medical; Physicians; Clinical Competence; Educational Measurement/methods; Humans; Writing
...