Results 1 - 20 of 53
1.
Ultrasound J ; 16(1): 19, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38443723

ABSTRACT

BACKGROUND: Incorporating ultrasound into the clinical curriculum of undergraduate medical education has been limited by the need for faculty support. Without integration into the clinical learning environment, ultrasound becomes a stand-alone skill that may decline by the time of matriculation into residency. A less time-intensive ultrasound curriculum is needed to preserve skills acquired in preclinical years. We aimed to create a self-directed ultrasound curriculum to support and assess students' ability to acquire ultrasound images and to utilize ultrasound to inform clinical decision-making. METHODS: Third-year students completed the self-directed ultrasound curriculum during their required internal medicine clerkship. Students used Butterfly iQ+ portable ultrasound probes. The curriculum included online modules that focused on clinical application of ultrasound as well as image acquisition technique. Students were graded on image acquisition quality and setting, a patient write-up focused on clinical decision-making, and a multiple-choice quiz. Student feedback was gathered with an end-of-course survey. Faculty time was tracked. RESULTS: One hundred and ten students participated. Students averaged 1.79 (scale 0-2; SD = 0.21) on image acquisition, 78% (SD = 15%) on the quiz, and all students passed the patient write-up. Most reported the curriculum improved their clinical reasoning (72%), learning of pathophysiology (69%), and patient care (55%). Faculty time to create the curriculum was approximately 45 h. Faculty time to grade student assignments was 38.5 h per year. CONCLUSIONS: Students were able to demonstrate adequate image acquisition, use of ultrasound to aid in clinical decision-making, and interpretation of ultrasound pathology with no in-person faculty instruction. Additionally, students reported improved learning of pathophysiology, clinical reasoning, and rapport with patients.
The self-directed curriculum required less faculty time than prior descriptions of ultrasound curricula in the clinical years and could be considered at institutions that have limited faculty support.

2.
Teach Learn Med ; : 1-11, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38258421

ABSTRACT

PHENOMENON: Medical schools must equip future physicians to provide equitable patient care. The best approach, however, is mainly dependent on a medical school's context. Graduating students from our institution have reported feeling ill-equipped to care for patients from "different backgrounds" on the Association of American Medical Colleges' Graduation Questionnaire. We explored how medical students interpret "different patient backgrounds" and what they need to feel prepared to care for diverse patients. APPROACH: We conducted an exploratory qualitative case study using focus groups with 11 Year 2 (MS2) and Year 4 (MS4) medical students at our institution. Focus groups were recorded, transcribed, and coded using thematic analysis. We used Bobbie Harro's cycles of socialization and liberation to understand how the entire medical school experience, not solely the curriculum, informs how medical students learn to interact with all patients. FINDINGS: We organized our findings into four major themes to characterize students' medical education experience when learning to care for patients of different backgrounds: (1) Understandings of different backgrounds (prior to medical school); (2) Admissions process; (3) Curricular socialization; and (4) Co-curricular (or environmental) socialization. We further divided themes 2, 3, and 4 into two subthemes when learning how to care for patients of different backgrounds: (a) the current state and (b) proposed changes. We anticipate that following the proposed changes will help students feel more prepared to care for patients of differing backgrounds. INSIGHTS: Our findings show that preparing medical students to care for diverse patient populations requires a multitude of intentional changes throughout medical students' education.
Using Harro's cycles of socialization and liberation as an analytic lens, we identified multiple places throughout medical students' educational experience that are barriers to learning how to care for diverse populations. We propose changes within medical students' education that build upon each other to adequately prepare students to care for patients of diverse backgrounds. Together, the proposed changes culminate in a systemic shift within an academic institution and require an intentional commitment by administration, faculty, admissions, curriculum, and student affairs.

3.
Acad Med ; 98(11S): S6-S9, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37983391

ABSTRACT

Although the wide-scale disruption precipitated by the COVID-19 pandemic has somewhat subsided, there are many questions about the implications of such disruptions for the road ahead. This year's Research in Medical Education (RIME) supplement may provide a window of insight. Now, more than ever, researchers are poised to question long-held assumptions while reimagining long-established legacies. Themes regarding the boundaries of professional identity, approaches to difficult conversations, challenges of power and hierarchy, intricacies of selection processes, and complexities of learning climates appear to be the most salient and critical to understand. In this commentary, the authors use the relationship between legacies and assumptions as a framework to gain a deeper understanding about the past, present, and future of RIME.


Subject(s)
COVID-19 , Education, Medical , Humans , Pandemics , COVID-19/epidemiology , Social Identification , Learning
4.
Clin Teach ; 20(6): e13623, 2023 12.
Article in English | MEDLINE | ID: mdl-37605795

ABSTRACT

INTRODUCTION: A benefit of a milestone or Entrustable Professional Activity (EPA) assessment framework is the ability to capture longitudinal performance with growth curves using multi-level modelling (MLM). Growth curves can inform curriculum design and individualised learning. Residency programmes have found growth curves to vary by resident and by milestone. Only one study has analysed medical students' growth curves for EPAs. Analysis of EPA growth curves is critical because no change in performance raises concerns for EPAs as an assessment framework. METHODS: Spencer Fox Eccles School of Medicine-University of Utah students' workplace-based assessment ratings for 7 EPAs were captured at 3 time-points in years 3-4 of AY2017-2018 to AY2020-2021. MLM was used to capture EPA growth curves and determine if variation in growth curves was explained by internal medicine (IM) clerkship order. FINDINGS: A curvilinear slope significantly captured 256 students' average ratings over time for EPA1a-history-taking, EPA2-clinical reasoning, EPA3-diagnostics, EPA5-documentation and EPA6-presentation, and a linear slope significantly captured EPA9-teamwork ratings, p ≤ 0.001. Growth curves were steepest for EPA2-clinical reasoning and EPA3-diagnostics. Growth curves varied by students, p < 0.05 for all EPA ratings, but IM clerkship rotation order did not significantly explain the variance, p > 0.05. DISCUSSION: The increase in ratings from Year 3 to Year 4 provides validity evidence for use of EPAs in an assessment framework. Students may benefit from more curriculum/skills practice for EPA2-clinical reasoning and EPA3-diagnostics prior to year 3. Variation in students' growth curves is important for coaching and skill development; a one-size-fits-all approach may not suffice.


Subject(s)
Education, Medical, Undergraduate , Internship and Residency , Students, Medical , Humans , Clinical Competence , Curriculum , Educational Measurement , Competency-Based Education
5.
Hosp Pediatr ; 13(2): 122-134, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36625076

ABSTRACT

OBJECTIVE: To determine if the academic performance of students who worked on a longitudinal inpatient team in the pediatric clerkship differed from students on traditional teams. We hypothesized that working on the longitudinal team would be associated with improved performance. METHODS: We retrospectively identified students who rotated in the pediatric clerkship at a single institution from 2017 through 2021. We used multiple linear and multiple ordered logistic regression to examine whether working on a longitudinal inpatient team in which the majority of students work with the same senior resident and attending for the entire inpatient block and function without interns was associated with improved academic performance. RESULTS: We included data from 463 students, 316 in the longitudinal team group and 147 in the traditional team group. Working on the longitudinal team was associated with a higher inpatient preceptor rating (adjusted mean rating 3, 95% confidence interval [CI] 2.97 to 3.03 vs 2.85, 95% CI 2.81 to 2.90; P = .02; on a scale of 0 to 4) and an increased probability of achieving a higher final grade in the pediatric clerkship (adjusted probability of achieving honors 22%, 95% CI 17% to 28% vs 11%, 95% CI 6% to 16%; P = .003). These differences did not persist in the clerkship immediately after pediatrics. CONCLUSIONS: Compared with a traditional inpatient team, working on a longitudinal team was associated with achieving a higher preceptor rating and final pediatric clerkship grade. Implementing similar models within clinical clerkships may help foster optimal student performance.


Subject(s)
Academic Performance , Clinical Clerkship , Students, Medical , Humans , Child , Retrospective Studies , Inpatients
6.
J Surg Educ ; 80(2): 294-301, 2023 02.
Article in English | MEDLINE | ID: mdl-36266228

ABSTRACT

OBJECTIVE: Surgical clerkships frequently include oral exams to assess students' ability to critically analyze data and utilize clinical judgment during common scenarios. Limited guidance exists for the interpretation of oral exam score validity, thus making improvements difficult to target. We examined the development, administration, and scoring of a clerkship oral exam from a validity evidence framework. DESIGN: This was a retrospective study of a third-year, end-of-clerkship oral exam in obstetrics and gynecology (OBGYN). Validity evidence for content, response process, internal structure, and relationships to other variables was collected and evaluated for 5 versions of the oral exam. SETTING: Albert Einstein College of Medicine, Bronx, New York City. PARTICIPANTS: Participants were 186 third-year medical students who completed the OBGYN clerkship in the academic year 2020 to 2021. RESULTS: The average number of objectives assessed per oral exam version was uniform, but the distribution of questions per Bloom's level of cognition was uneven. Student scores on all questions regardless of Bloom's level of cognition were >87%, and reliability (Cronbach's alpha) of item scores varied from 0.58 to 0.74. There was a moderate, positive correlation (Spearman's rho) between the oral exam scores and national shelf exam scores (0.35). There were low correlations between oral exam scores and (a) clinical performance ratings (0.14) and (b) formal presentation scores (-0.19). CONCLUSIONS: This study provides an example of how to examine the validity of oral exam scores for targeted improvements. Further modifications are needed before using scores for high-stakes decisions. The authors provide recommendations for additional sources of validity evidence to collect in order to better meet the goals of any surgical clerkship oral exam.


Subject(s)
Clinical Clerkship , Gynecology , Obstetrics , Students, Medical , Humans , Gynecology/education , Obstetrics/education , Retrospective Studies , Reproducibility of Results , Educational Measurement , Clinical Competence
7.
Acad Med ; 98(1): 52-56, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36576767

ABSTRACT

PROBLEM: Using pass/fail (P/F) course grades may motivate students to perform well enough to earn a passing grade, giving them a false sense of competence and not motivating them to remediate deficiencies. The authors explored whether adding a not yet pass (NYP) grade to a P/F scale would promote students' mastery orientation toward learning. APPROACH: The authors captured student outcomes and data on time and cost of implementing the NYP grade in 2021 at the University of Utah School of Medicine. One cohort of medical students, who had experienced both P/F and P/NYP/F scales in years 1 and 2, completed an adapted Achievement Goal Questionnaire-Revised (AGQ-R) in fall 2021 to measure how well the P/NYP/F grading scale compared with the P/F scale promoted mastery orientation and performance orientation goals. Students who received an NYP grade provided feedback on the NYP process. OUTCOMES: Students reported that the P/NYP/F scale increased their achievement of both mastery and performance orientation goals, with significantly higher ratings for mastery orientation goals than for performance orientation goals on the AGQ-R (response rate = 124/125 [99%], P ≤ .001, effect size = 0.31). Thirty-eight students received 48 NYP grades in 7 courses during 2021, and 3 (2%) failed a subsequent course after receiving an NYP grade. Most NYP students reported the NYP process enabled them to identify and correct a deficiency (32/36 [89%]) and made them feel supported (28/36 [78%]). The process was time intensive (897 hours total for 48 NYP grades), but no extra funding was budgeted. NEXT STEPS: The findings suggest mastery orientation can be increased with an NYP grade. Implementing a P/NYP/F grading scale for years 1 and/or 2 may help students transition to programmatic assessment or no grading later in medical school, which may better prepare graduates for lifelong learning.


Subject(s)
Goals , Students, Medical , Humans , Schools, Medical , Learning , Motivation
8.
MedEdPORTAL ; 18: 11286, 2022.
Article in English | MEDLINE | ID: mdl-36568035

ABSTRACT

Introduction: Literature suggests that the quality and rigor of health professions education (HPE) research can be elevated if the research is anchored in existing theories and frameworks. This critical skill is difficult for novice researchers to master. We created a workshop to introduce the practical application of theories and frameworks to HPE research. Methods: We conducted two 60- to 75-minute workshops, one in 2019 at an in-person national conference and another in 2021 during an online national education conference. After a brief role-play introduction, participants applied a relevant theory to a case scenario in small groups, led by facilitators with expertise in HPE research. The workshop concluded with a presentation on applying the lessons learned when preparing a scholarly manuscript. We conducted a postworkshop survey to measure self-reported achievement of objectives. Results: Fifty-five individuals participated in the in-person workshop, and approximately 150 people completed the online workshop. Sixty participants (30%) completed the postworkshop survey across both workshops. As a result of participating in the workshop, 80% of participants (32) indicated they could distinguish between frameworks and theories, and 86% (32) could apply a conceptual or theoretical framework to a research question. Strengths of the workshop included the small-group activity, access to expert facilitators, and the materials provided. Discussion: The workshop has been well received by participants and fills a gap in the existing resources available to HPE researchers and mentors. It can be replicated in multiple settings to model the application of conceptual and theoretical frameworks to HPE research.


Subject(s)
Health Occupations , Humans
9.
Med Sci Educ ; 32(5): 1045-1054, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36276764

ABSTRACT

Introduction: Assessment for learning has many benefits, but learners will still encounter high-stakes decisions about their performance throughout training. It is unknown if assessment for learning can be promoted with a combination model where scores from some assessments are factored into course grades and scores from other assessments are not used for course grading. Methods: At the University of Utah School of Medicine, year 1-2 medical students (MS) completed multiple-choice question quiz assessments and final examinations in six systems-based science courses. Quiz and final examination performance counted toward course grades for MS2017-MS2018. Starting with the MS2020 cohort, quizzes no longer counted toward course grades. Quiz, final examination, and Step 1 scores were compared between ungraded quiz and graded quiz cohorts with independent samples t-tests. Student and faculty feedback was collected. Results: Quiz performance was not different for the ungraded and graded cohorts (p = 0.173). Ungraded cohorts scored 4% higher on final examinations than graded cohorts (p ≤ 0.001, d = 0.88). Ungraded cohorts scored above the national average and 11 points higher on Step 1 compared to graded cohorts, who had scored below the national average (p ≤ 0.001, d = 0.64). During the study period, Step 1 scores increased by 2 points nationally. Student feedback was positive, and faculty felt it improved their relationship with students. Discussion: The change to ungraded quizzes did not negatively affect final examination or Step 1 performance, suggesting a combination of ungraded and graded assessments can effectively promote assessment for learning.

10.
Med Sci Educ ; 32(6): 1387-1395, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36277267

ABSTRACT

Purpose: Developing a professional identity requires learners to integrate themselves into the medical profession and take on the role of doctor. The impact of COVID-19 on medical education has been widely investigated, but little attention has been paid to its impact on students' professional identity formation (PIF). The goal of this study was to investigate the impact that the onset of the COVID-19 pandemic had on medical students' PIF. Materials and Methods: An embedded mixed-methods design was utilized. Focus groups were conducted with a subset of year 1-4 students and coded using thematic analysis. Year 1-2 students were surveyed about their professional identity integration in the spring of 2020. Responses were analyzed using descriptive statistics and Wilcoxon signed rank and Mann-Whitney U tests. Results: Qualitative data were organized into six themes that touched on losses and challenges, reflection, and reevaluation of the physician career. Roughly 50% of MS1s and MS2s reported a change in their professional identity integration, but this was not statistically significant. Conclusions: Medical education does not occur in isolation and is influenced by disruptive local and global events. Students perceived challenges when in-person community interaction and hands-on clinical experiences were interrupted. Additionally, students reflected upon their own role and their future career goals. Supplementary Information: The online version contains supplementary material available at 10.1007/s40670-022-01652-4.

11.
J Hosp Med ; 17(3): 176-180, 2022 03.
Article in English | MEDLINE | ID: mdl-35504586

ABSTRACT

Advanced practice providers (APPs) graduate from school with variable hospitalist experience. While hospitalist-specific onboarding is recommended for hospitalist APPs, no standard method currently exists to assess their readiness for practice. We created a 17-item instrument called the Cardin Hospitalist Advanced Practice Provider-Readiness Assessment (CHAPP-RA) to assess APPs' readiness for practice using a milestones-based scale. We piloted CHAPP-RA at a single site where 11 APPs with varied experience were rated by 30 supervising physicians. Supervisors also provided global ratings for overall performance. We investigated the feasibility of CHAPP-RA and collected validity evidence for the interpretation of scores. The mean time to complete one CHAPP-RA was 10.5 min. Supervisors rated novice APPs lower than more experienced APPs, p ≤ .001. CHAPP-RA ratings also correlated strongly with global ratings. CHAPP-RA is feasible to implement and has initial validity evidence.


Subject(s)
Hospitalists , Humans , Pilot Projects
13.
Acad Med ; 96(11S): S39-S47, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34348369

ABSTRACT

PURPOSE: Innovation articles have their own submission category and guidelines in health professions education (HPE) journals, which suggests innovation might be a unique genre of scholarship. Yet, the requirements for innovation submissions vary among journals, suggesting ambiguity about the core content of this type of scholarship. To reduce this ambiguity, the researchers conducted a systematic overview to identify key features of innovation articles and evaluate their consistency in use across journals. Findings from this review may have implications for further development of innovation scholarship within HPE. METHOD: In this systematic overview, conducted in 2020, the researchers identified 13 HPE journals with innovation-type articles and used content analysis to identify key features from author guidelines and publications describing what editors look for in innovation articles. The researchers then audited a sample of 39 innovation articles (3/journal) published in 2019 to determine presence and consistency of 12 innovation features within and across HPE journals. Audit findings informed the researchers' evaluation of innovation as a genre in HPE. RESULTS: Findings show variability of innovation feature presence within and across journals. On average, articles included 7.8 of the 12 innovation features (SD 2.1, range 3-11). The most common features were description of: how the innovation was implemented (92%), a problem (90%), what was new or novel (79%), and data or outcomes (77%). On average, 5.5 (SD 1.5) out of 12 innovation features were consistently used in articles within each journal. CONCLUSIONS: The authors identified common features of innovation article types based on journal guidelines, but there was variability in presence and consistency of these features, suggesting HPE innovations are in an emerging state of genre development. 
The authors discuss potential reasons for variability within this article type and highlight the need for further discussion among authors, editors, and reviewers to improve clarity.


Subject(s)
Diffusion of Innovation , Health Occupations/education , Periodicals as Topic/trends , Publishing/trends , Editorial Policies , Humans
14.
Med Sci Educ ; 31(3): 1009-1014, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33747612

ABSTRACT

There are no program evaluation approaches designed for a crisis, such as the COVID-19 pandemic. It is critical to evaluate the educational impact of COVID-19 to keep administrators informed and guide decision-making. The authors used systems thinking to design an evaluation model. The evaluation results suggest complex interactions between individual- and course-level changes due to COVID-19. Specifically, year 1-2 students found more education metrics lacking relative to year 3-4 students, faculty, and course directors. There was no consensus on the value of similar instructional/assessment adaptations. The evaluation model can be adapted by other medical schools to fit systems-based needs.

15.
J Grad Med Educ ; 13(1): 43-57, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33680301

ABSTRACT

BACKGROUND: In-training examinations (ITEs) are intended for low-stakes, formative assessment of residents' knowledge, but are increasingly used for high-stakes purposes, such as to predict board examination failures. OBJECTIVE: The aim of this review was to investigate the relationship between performance on ITEs and board examination performance across medical specialties. METHODS: A search of the literature for studies assessing the strength of the relationship between ITE and board examination performance from January 2000 to March 2019 was completed. Results were categorized based on the type of statistical analysis used to determine the relationship between ITE performance and board examination performance. RESULTS: Of 1407 articles initially identified, 89 articles underwent full-text review, and 32 articles were included in this review. There was a moderate to strong relationship between ITE and board examination performance, and ITE scores significantly predicted board examination scores in the majority of studies. Performing well on an ITE predicts a passing outcome for the board examination, but there is less evidence that performing poorly on an ITE will result in failing the associated specialty board examination. CONCLUSIONS: There is a moderate to strong correlation between ITE performance and subsequent performance on board examinations. That the predictive value for passing the board examination is stronger than the predictive value for failing calls into question the "common wisdom" that ITE scores can be used to identify "at risk" residents. The graduate medical education community should continue to exercise caution and restraint in using ITE scores for moderate to high-stakes decisions.


Subject(s)
Internship and Residency , Specialty Boards , Clinical Competence , Education, Medical, Graduate , Educational Measurement , Humans
17.
Med Teach ; 43(7): 853-855, 2021 Jul.
Article in English | MEDLINE | ID: mdl-32783676

ABSTRACT

There is widespread agreement that medical education should include multi-source, multi-method, and multi-purpose forms of assessment and thus should move towards cohesive systems of assessment. One possibility that fits comfortably with a system-of-assessment framework is to organize assessments around a competency-based medical education model. However conceptually appealing a competency-based medical education model is, discussions are sparse regarding the details of determining competence (or the pass/fail point) within each competency. In an effort to make discussions more concrete, we put forth three key issues relevant to implementation of competency-based assessment: (1) each competency is measured with multiple assessments, (2) not all assessments produce a score for a competency, as a good portion of assessment in medical school is narrative, and (3) competence decisions re-occur as assessments cumulate. We agree there are a host of other issues to consider, but think the practical, action-oriented issues we set forth will be helpful in putting form into what are now largely abstract discussions.


Subject(s)
Competency-Based Education , Education, Medical , Clinical Competence , Humans
18.
Perspect Med Educ ; 9(6): 343-349, 2020 12.
Article in English | MEDLINE | ID: mdl-32820415

ABSTRACT

INTRODUCTION: Work meaning has gained attention as an important contributor to physician job engagement and well-being, but little is known about how faculty participation in medical school learning communities might influence this phenomenon. Our study goals were to determine how physician faculty members may derive meaning from serving as mentors for longitudinal learning communities of medical students, to understand how that meaning may impact other areas of their work, and to relate our findings to existing literature and theoretical frameworks. METHODS: The research team conducted, recorded, transcribed, and coded 25 semi-structured telephone interviews of faculty mentors from four US medical schools with curricular learning communities. The team used an iterative interview coding process to generate final themes and relate these themes to existing literature. RESULTS: The authors identified five themes of meaning faculty derive from participation as learning community mentors: "I am a better professional," "I am more connected," "I am rejuvenated," "I am contributing," and "I am honored." A sixth theme, "I am harmed," encompassed the negative aspects of the learning community faculty experience. The authors found that their identified themes related closely to the theoretical framework for pathways to meaningful work proposed by Rosso et al. DISCUSSION: The alignment of the themes we identified on the experience of learning community faculty with existing literature on work meaning corroborates the theoretical framework and deepens understanding of beneficial and harmful learning community effects on faculty. As learning communities become increasingly common within medical schools, this understanding may be important for leaders in academic medicine considering potential indirect benefits of this educational model.


Subject(s)
Faculty, Medical/psychology , Interprofessional Relations , Leadership , Students, Medical/psychology , Adult , Faculty, Medical/statistics & numerical data , Female , Humans , Interviews as Topic/methods , Learning , Male , Qualitative Research , Schools, Medical/organization & administration , Schools, Medical/statistics & numerical data , Students, Medical/statistics & numerical data , United States
19.
MedEdPORTAL ; 16: 10911, 2020 06 18.
Article in English | MEDLINE | ID: mdl-32656332

ABSTRACT

Introduction: Reviewing elements of a curriculum, such as courses and clerkships in medical school, is an essential part of the quality improvement process. Yet there is a gap in the literature in terms of actual rubrics for evaluating course quality in medical schools. Methods: This resource describes a course review process and rubric to evaluate course quality: A subcommittee of faculty members and students evaluates goals, content and delivery, assessment, feedback to students, grading, and student feedback for each course with the rubric. Course directors, Curriculum Committee members, and Curriculum Evaluation Subcommittee members were surveyed on their perception of the process. Results: A large majority of Curriculum Committee and Curriculum Evaluation Subcommittee members agreed that the review process was objective (100%), provided an evaluation of course quality (>95%), helped identify areas of improvement/strengths (>91%) and issues/concerns in the curriculum (>95%), helped them become more familiar with the curriculum (>90%), and was a catalyst for changes in a course (>77%). Course/clerkship directors had less agreement that the course review process was a catalyst for curriculum changes (46%) and that the process helped identify areas of improvement for a course (62%). Discussion: This curriculum evaluation process provides a resource for other institutions to use and/or modify for their own course evaluation process. All stakeholders in the review process agreed that the evaluation process was successful in identifying areas that worked and did not work in courses.


Subject(s)
Curriculum , Education, Medical , Humans , Quality Improvement , Schools, Medical