Results 1 - 20 of 36
1.
Med Teach ; 43(7): 853-855, 2021 Jul.
Article in English | MEDLINE | ID: mdl-32783676

ABSTRACT

There is widespread agreement that medical education should include multi-source, multi-method, and multi-purpose forms of assessment and thus should move towards cohesive systems of assessment. One possibility that fits comfortably with a system-of-assessment framework is to organize assessments around a competency-based medical education model. However conceptually appealing a competency-based medical education model is, discussions are sparse regarding the details of determining competence (or the pass/fail point) within each competency. In an effort to make discussions more concrete, we put forth three key issues relevant to implementation of competency-based assessment: (1) each competency is measured with multiple assessments, (2) not all assessments produce a score for a competency, as a good portion of assessment in medical school is narrative, and (3) competence decisions recur as assessments accumulate. We agree there are a host of other issues to consider, but think the practical, action-oriented issues we set forth will be helpful in putting form into what are now largely abstract discussions.


Subjects
Competency-Based Education; Education, Medical; Clinical Competence; Humans
2.
J Gen Intern Med ; 34(6): 929-935, 2019 06.
Article in English | MEDLINE | ID: mdl-30891692

ABSTRACT

BACKGROUND: Feedback is a critical element of graduate medical education. Narrative comments on evaluation forms are a source of feedback for residents. As a shared mental model for performance, milestone-based evaluations may impact narrative comments and resident perception of feedback. OBJECTIVE: To determine if milestone-based evaluations impacted the quality of faculty members' narrative comments on evaluations and, as an extension, residents' perception of feedback. DESIGN: Concurrent mixed methods study, including qualitative analysis of narrative comments and survey of resident perception of feedback. PARTICIPANTS: Seventy internal medicine residents and their faculty evaluators at the University of Utah. APPROACH: Faculty narrative comments from 248 evaluations pre- and post-milestone implementation were analyzed for quality and Accreditation Council for Graduate Medical Education competency by area of strength and area for improvement. Seventy residents were surveyed regarding quality of feedback pre- and post-milestone implementation. KEY RESULTS: Qualitative analysis of narrative comments revealed nearly all evaluations pre- and post-milestone implementation included comments about areas of strength but were frequently vague and not related to competencies. Few evaluations included narrative comments on areas for improvement, but these were of higher quality compared to areas of strength (p < 0.001). Overall resident perception of quality of narrative comments was low and did not change following milestone implementation (p = 0.562) for the 86% of residents (N = 60/70) who completed the pre- and post-surveys. CONCLUSIONS: The quality of narrative comments was poor, and there was no evidence of improved quality following introduction of milestone-based evaluations. Comments on areas for improvement were of higher quality than areas of strength, suggesting an area for targeted intervention. Residents' perception of feedback quality did not change following implementation of milestone-based evaluations, suggesting that in the post-milestone era, internal medicine educators need to utilize additional interventions to improve quality of feedback.


Subjects
Evaluation Studies as Topic; Feedback, Psychological; Internal Medicine/standards; Internship and Residency/standards; Narration; Self Concept; Adult; Female; Humans; Internal Medicine/methods; Internship and Residency/methods; Male; Surveys and Questionnaires/standards
3.
Teach Learn Med ; 31(4): 361-369, 2019.
Article in English | MEDLINE | ID: mdl-30873878

ABSTRACT

Phenomenon: There is an abundance of literature on Entrustable Professional Activities (EPAs) in theory, but there are few studies of EPAs in practice for undergraduate clinical education. In addition, little is known about the degree to which the EPAs are or are not aligned with physician assessors' performance schemas of the clerkship student. Investigating the degree to which physician assessors' performance schemas are already aligned with the activities described by the EPAs is critical for effective workplace assessment design. Approach: We sampled 1,032 written evaluation comments, covering areas of strength (strength) and areas for improvement (improvement), made by 423 physician assessors about clerkship students' performance in academic years 2014-15 and 2015-16 at the University of Utah School of Medicine. Two researchers independently categorized each comment by EPA and/or coded it by non-EPA topic. The proportion of comment types was compared between strength comments and improvement comments with the Wilcoxon signed-rank test. Findings: The most frequently mentioned EPAs in comments were about history gathering/physical exam, differential diagnosis, documentation, presentation, and interprofessional collaboration; few mentioned diagnostic tests, patient handovers, recognition of urgent patient care, and patient safety, and none mentioned orders/prescriptions and informed consent. The most frequent non-EPA topics were about medical knowledge, need to read more, learning attitude, work ethic, professionalism/maturity, and receptiveness to feedback. The proportion of comments aligned with an EPA only, a non-EPA topic only, or both an EPA and a non-EPA topic differed significantly between clerkship students' strength and improvement comments. Insights: Physician assessors' performance schemas for clerkship students were aligned with EPAs to varying degrees depending on the specific EPA and whether describing strength or improvement. Of interest, the frequently mentioned non-EPA comments represented some of the competencies that contribute to effectively performing particular EPAs and are Accreditation Council for Graduate Medical Education (ACGME) core competencies (e.g., medical knowledge, professionalism) used in residency programs. Because physician assessors for undergraduate medical education often also participate in graduate medical education, the frequency of non-EPA topics aligned to ACGME competencies may suggest influence of graduate medical education evaluative frameworks on performance schemas for clerkship students; this could be important when considering implementation of EPAs in undergraduate medical education.


Subjects
Clinical Competence/standards; Employee Performance Appraisal/methods; Students, Medical; Clinical Clerkship; Competency-Based Education; Education, Medical, Graduate; Education, Medical, Undergraduate; Humans
4.
Teach Learn Med ; 28(4): 347-352, 2016.
Article in English | MEDLINE | ID: mdl-27700251

ABSTRACT

This Conversation Starters article presents a selected research abstract from the 2016 Association of American Medical Colleges Western Region Group on Educational Affairs annual spring meeting. The abstract is paired with the integrative commentary of three experts who shared their thoughts stimulated by the needs assessment study. These thoughts explore how the general theoretical mechanisms of transition may be integrated with cognitive load theory in order to design interventions and environments that foster transition.


Subjects
Education, Medical/trends; Communication; Humans; Needs Assessment; Research
5.
Int Urogynecol J ; 24(10): 1615-22, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23575698

ABSTRACT

INTRODUCTION AND HYPOTHESIS: Our aim was to assess the impact of immediate preoperative laparoscopic warm-up using a simulator on intraoperative laparoscopic performance by gynecologic residents. METHODS: Eligible laparoscopic cases performed for benign gynecologic indications were randomized to be performed with or without immediate preoperative warm-up. Residents randomized to warm-up performed a brief set of standardized exercises on a laparoscopic trainer immediately before surgery. Intraoperative performance was scored using previously validated global rating scales. Assessment was made immediately after surgery by attending faculty who were blinded to the warm-up randomization. RESULTS: We randomized 237 residents to 47 minor laparoscopic cases (adnexal/tubal surgery) and 44 to major laparoscopic cases (hysterectomy). Overall, attendings rated upper-level resident performances (postgraduate year [PGY] 3-4) significantly higher on global rating scales than lower-level resident performances (PGY 1-2). Residents who performed warm-up exercises prior to surgery were rated significantly higher on all subscales within each global rating scale, irrespective of the difficulty of the surgery. Most residents felt that performing warm-up exercises helped their intraoperative performance. CONCLUSION: Performing a brief warm-up exercise before a major or minor laparoscopic procedure significantly improved the intraoperative performance of residents irrespective of the difficulty of the case.


Subjects
Clinical Competence; Computer Simulation; Internship and Residency; Laparoscopy/methods; Preoperative Period; Warm-Up Exercise/psychology; Adult; Female; Gynecologic Surgical Procedures/methods; Humans; Hysterectomy; Male; Outcome Assessment, Health Care; Ovariectomy; Sterilization, Tubal; Treatment Outcome
6.
Clin Teach ; 20(6): e13623, 2023 12.
Article in English | MEDLINE | ID: mdl-37605795

ABSTRACT

INTRODUCTION: A benefit of a milestone or Entrustable Professional Activity (EPA) assessment framework is the ability to capture longitudinal performance with growth curves using multi-level modelling (MLM). Growth curves can inform curriculum design and individualised learning. Residency programmes have found growth curves to vary by resident and by milestone. Only one study has analysed medical students' growth curves for EPAs. Analysis of EPA growth curves is critical because no change in performance raises concerns for EPAs as an assessment framework. METHODS: Spencer Fox Eccles School of Medicine-University of Utah students' workplace-based assessment ratings for 7 EPAs were captured at 3 time-points in years 3-4 of AY2017-2018 to AY2020-2021. MLM was used to capture EPA growth curves and determine if variation in growth curves was explained by internal medicine (IM) clerkship order. FINDINGS: A curvilinear slope significantly captured 256 students' average ratings over time for EPA1a-history-taking, EPA2-clinical reasoning, EPA3-diagnostics, EPA5-documentation and EPA6-presentation, and a linear slope significantly captured EPA9-teamwork ratings, p ≤ 0.001. Growth curves were steepest for EPA2-clinical reasoning and EPA3-diagnostics. Growth curves varied by student, p < 0.05 for all EPA ratings, but IM clerkship rotation order did not significantly explain the variance, p > 0.05. DISCUSSION: The increase in ratings from Year 3 to Year 4 provides validity evidence for use of EPAs in an assessment framework. Students may benefit from more curriculum/skills practice for EPA2-clinical reasoning and EPA3-diagnostics prior to year 3. Variation in students' growth curves is important for coaching and skill development; a one-size-fits-all approach may not suffice.


Subjects
Education, Medical, Undergraduate; Internship and Residency; Students, Medical; Humans; Clinical Competence; Curriculum; Educational Measurement; Competency-Based Education
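
The multi-level growth-curve analysis described in entry 6 can be illustrated with a short sketch. The abstract does not state the software used, so the snippet below is only a minimal illustration in Python (statsmodels), with hypothetical column names (student_id, time, rating) standing in for the actual EPA rating data.

```python
# Minimal sketch of a multi-level (mixed-effects) growth-curve model for
# repeated EPA ratings, as described in entry 6. Data and column names are
# hypothetical placeholders, not the study's dataset.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per student per assessment time-point.
ratings = pd.read_csv("epa_ratings.csv")  # columns: student_id, time, rating

# Fixed effects: linear + quadratic time (a curvilinear average trajectory).
# Random effects: intercept and linear slope vary by student.
model = smf.mixedlm(
    "rating ~ time + I(time ** 2)",
    data=ratings,
    groups=ratings["student_id"],
    re_formula="~time",
)
result = model.fit(reml=True)
print(result.summary())
```

A significant quadratic term corresponds to the curvilinear slopes reported for most EPAs, and the random-slope variance quantifies how much growth curves differ across students; a level-2 predictor such as clerkship order could then be added to test whether it explains that variance.
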
7.
Acad Med ; 98(1): 52-56, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36576767

ABSTRACT

PROBLEM: Using pass/fail (P/F) course grades may motivate students to perform well enough to earn a passing grade, giving them a false sense of competence and not motivating them to remediate deficiencies. The authors explored whether adding a not yet pass (NYP) grade to a P/F scale would promote students' mastery orientation toward learning. APPROACH: The authors captured student outcomes and data on time and cost of implementing the NYP grade in 2021 at the University of Utah School of Medicine. One cohort of medical students, who had experienced both P/F and P/NYP/F scales in years 1 and 2, completed an adapted Achievement Goal Questionnaire-Revised (AGQ-R) in fall 2021 to measure how well the P/NYP/F grading scale, compared with the P/F scale, promoted mastery orientation and performance orientation goals. Students who received an NYP grade provided feedback on the NYP process. OUTCOMES: Students reported that the P/NYP/F scale increased their achievement of both mastery and performance orientation goals, with significantly higher ratings for mastery orientation goals than for performance orientation goals on the AGQ-R (response rate = 124/125 [99%], P ≤ .001, effect size = 0.31). Thirty-eight students received 48 NYP grades in 7 courses during 2021, and 3 (2%) failed a subsequent course after receiving an NYP grade. Most NYP students reported the NYP process enabled them to identify and correct a deficiency (32/36 [89%]) and made them feel supported (28/36 [78%]). The process was time intensive (897 hours total for 48 NYP grades), but no extra funding was budgeted. NEXT STEPS: The findings suggest mastery orientation can be increased with an NYP grade. Implementing a P/NYP/F grading scale for years 1 and/or 2 may help students transition to programmatic assessment or no grading later in medical school, which may better prepare graduates for lifelong learning.


Subjects
Goals; Students, Medical; Humans; Schools, Medical; Learning; Motivation
8.
Med Sci Educ ; 32(6): 1387-1395, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36277267

ABSTRACT

Purpose: Developing a professional identity requires learners to integrate themselves into the medical profession and take on the role of doctor. The impact of COVID-19 on medical education has been widely investigated, but little attention has been paid to the impact on students' professional identity formation (PIF). The goal of this study was to investigate the impact that the onset of the COVID-19 pandemic had on medical students' PIF. Materials and Methods: An embedded mixed-methods design was utilized. Focus groups were conducted with a subset of year 1-4 students and coded using thematic analysis. Year 1-2 students were surveyed about their professional identity integration in the spring of 2020. Responses were analyzed using descriptive statistics and Wilcoxon signed-rank and Mann-Whitney U tests. Results: Qualitative data were organized into six themes that touched on losses and challenges, reflection, and reevaluation of the physician career. Roughly 50% of MS1s and MS2s reported a change in their professional identity integration, but this was not statistically significant. Conclusions: Medical education does not occur in isolation and is influenced by disruptive local and global events. Students perceived challenges when in-person community interaction and hands-on clinical experiences were interrupted. Additionally, students reflected upon their own role and their future career goals. Supplementary Information: The online version contains supplementary material available at 10.1007/s40670-022-01652-4.

9.
Med Sci Educ ; 32(5): 1045-1054, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36276764

ABSTRACT

Introduction: Assessment for learning has many benefits, but learners will still encounter high-stakes decisions about their performance throughout training. It is unknown if assessment for learning can be promoted with a combination model where scores from some assessments are factored into course grades and scores from other assessments are not used for course grading. Methods: At the University of Utah School of Medicine, year 1-2 medical students (MS) completed multiple-choice question quiz assessments and final examinations in six systems-based science courses. Quiz and final examination performance counted toward course grades for MS2017-MS2018. Starting with the MS2020 cohort, quizzes no longer counted toward course grades. Quiz, final examination, and Step 1 scores were compared between ungraded quiz and graded quiz cohorts with independent samples t-tests. Student and faculty feedback was collected. Results: Quiz performance was not different for the ungraded and graded cohorts (p = 0.173). Ungraded cohorts scored 4% higher on final examinations than graded cohorts (p ≤ 0.001, d = 0.88). Ungraded cohorts scored above the national average and 11 points higher on Step 1 compared to graded cohorts, who had scored below the national average (p ≤ 0.001, d = 0.64). During the study period, Step 1 scores increased by 2 points nationally. Student feedback was positive, and faculty felt it improved their relationship with students. Discussion: The change to ungraded quizzes did not negatively affect final examination or Step 1 performance, suggesting a combination of ungraded and graded assessments can effectively promote assessment for learning.
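
Entry 9 compares quiz, final-examination, and Step 1 scores between graded-quiz and ungraded-quiz cohorts using independent-samples t-tests and reports Cohen's d. As a minimal sketch (the score arrays below are simulated placeholders, not the study data):

```python
# Sketch of the cohort comparison in entry 9: independent-samples t-test plus
# a pooled-SD Cohen's d. All scores are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
graded = rng.normal(78, 6, 180)    # hypothetical final-exam scores, graded-quiz cohorts
ungraded = rng.normal(82, 6, 190)  # hypothetical final-exam scores, ungraded-quiz cohorts

t_stat, p_value = stats.ttest_ind(ungraded, graded)

# Cohen's d with a pooled standard deviation
n1, n2 = len(ungraded), len(graded)
pooled_sd = np.sqrt(((n1 - 1) * ungraded.var(ddof=1) +
                     (n2 - 1) * graded.var(ddof=1)) / (n1 + n2 - 2))
d = (ungraded.mean() - graded.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {d:.2f}")
```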

10.
MedEdPORTAL ; 18: 11286, 2022.
Article in English | MEDLINE | ID: mdl-36568035

ABSTRACT

Introduction: Literature suggests that the quality and rigor of health professions education (HPE) research can be elevated if the research is anchored in existing theories and frameworks. This critical skill is difficult for novice researchers to master. We created a workshop to introduce the practical application of theories and frameworks to HPE research. Methods: We conducted two 60- to 75-minute workshops, one in 2019 at an in-person national conference and another in 2021 during an online national education conference. After a brief role-play introduction, participants applied a relevant theory to a case scenario in small groups, led by facilitators with expertise in HPE research. The workshop concluded with a presentation on applying the lessons learned when preparing a scholarly manuscript. We conducted a postworkshop survey to measure self-reported achievement of objectives. Results: Fifty-five individuals participated in the in-person workshop, and approximately 150 people completed the online workshop. Sixty participants (30%) completed the postworkshop survey across both workshops. As a result of participating in the workshop, 80% of participants (32) indicated they could distinguish between frameworks and theories, and 86% (32) could apply a conceptual or theoretical framework to a research question. Strengths of the workshop included the small-group activity, access to expert facilitators, and the materials provided. Discussion: The workshop has been well received by participants and fills a gap in the existing resources available to HPE researchers and mentors. It can be replicated in multiple settings to model the application of conceptual and theoretical frameworks to HPE research.


Subjects
Health Occupations; Humans
11.
J Grad Med Educ ; 13(1): 43-57, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33680301

ABSTRACT

BACKGROUND: In-training examinations (ITEs) are intended for low-stakes, formative assessment of residents' knowledge, but are increasingly used for high-stakes purposes, such as to predict board examination failures. OBJECTIVE: The aim of this review was to investigate the relationship between performance on ITEs and board examination performance across medical specialties. METHODS: A search of the literature for studies assessing the strength of the relationship between ITE and board examination performance from January 2000 to March 2019 was completed. Results were categorized based on the type of statistical analysis used to determine the relationship between ITE performance and board examination performance. RESULTS: Of 1407 articles initially identified, 89 articles underwent full-text review, and 32 articles were included in this review. There was a moderate to strong relationship between ITE and board examination performance, and ITE scores significantly predicted board examination scores in the majority of studies. Performing well on an ITE predicts a passing outcome for the board examination, but there is less evidence that performing poorly on an ITE will result in failing the associated specialty board examination. CONCLUSIONS: There is a moderate to strong correlation between ITE performance and subsequent performance on board examinations. That the predictive value for passing the board examination is stronger than the predictive value for failing calls into question the "common wisdom" that ITE scores can be used to identify "at risk" residents. The graduate medical education community should continue to exercise caution and restraint in using ITE scores for moderate- to high-stakes decisions.


Subjects
Internship and Residency; Specialty Boards; Clinical Competence; Education, Medical, Graduate; Educational Measurement; Humans
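
The studies reviewed in entry 11 typically relate ITE scores to board examination performance either by correlation or by predicting a pass/fail outcome. The review itself provides no code; the sketch below, on simulated data, only illustrates what those two common analyses look like.

```python
# Illustration of the two analysis types summarized in entry 11:
# (1) correlating ITE scores with board examination scores, and
# (2) predicting a board pass/fail outcome from ITE scores.
# All data are simulated placeholders.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
ite = rng.normal(60, 10, 200)                 # ITE percent-correct scores
board = 2.0 * ite + rng.normal(200, 15, 200)  # board examination scaled scores
passed = (board > 300).astype(int)            # 1 = passed the board examination

# (1) Strength of the ITE-board relationship
r, p = stats.pearsonr(ite, board)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")

# (2) Probability of passing given an ITE score
clf = LogisticRegression().fit(ite.reshape(-1, 1), passed)
prob = clf.predict_proba(np.array([[70.0]]))[0, 1]
print(f"Predicted probability of passing with an ITE score of 70: {prob:.2f}")
```
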
12.
Acad Med ; 96(11S): S39-S47, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34348369

ABSTRACT

PURPOSE: Innovation articles have their own submission category and guidelines in health professions education (HPE) journals, which suggests innovation might be a unique genre of scholarship. Yet, the requirements for innovation submissions vary among journals, suggesting ambiguity about the core content of this type of scholarship. To reduce this ambiguity, the researchers conducted a systematic overview to identify key features of innovation articles and evaluate their consistency in use across journals. Findings from this review may have implications for further development of innovation scholarship within HPE. METHOD: In this systematic overview, conducted in 2020, the researchers identified 13 HPE journals with innovation-type articles and used content analysis to identify key features from author guidelines and publications describing what editors look for in innovation articles. The researchers then audited a sample of 39 innovation articles (3/journal) published in 2019 to determine presence and consistency of 12 innovation features within and across HPE journals. Audit findings informed the researchers' evaluation of innovation as a genre in HPE. RESULTS: Findings show variability of innovation feature presence within and across journals. On average, articles included 7.8 of the 12 innovation features (SD 2.1, range 3-11). The most common features were description of: how the innovation was implemented (92%), a problem (90%), what was new or novel (79%), and data or outcomes (77%). On average, 5.5 (SD 1.5) out of 12 innovation features were consistently used in articles within each journal. CONCLUSIONS: The authors identified common features of innovation article types based on journal guidelines, but there was variability in presence and consistency of these features, suggesting HPE innovations are in an emerging state of genre development. The authors discuss potential reasons for variability within this article type and highlight the need for further discussion among authors, editors, and reviewers to improve clarity.


Subjects
Diffusion of Innovation; Health Occupations/education; Periodicals as Topic/trends; Publishing/trends; Editorial Policies; Humans
13.
MedEdPORTAL ; 16: 10911, 2020 06 18.
Article in English | MEDLINE | ID: mdl-32656332

ABSTRACT

Introduction: Reviewing elements of a curriculum, such as courses and clerkships in medical school, is an essential part of the quality improvement process. Yet there is a gap in the literature in terms of actual rubrics for evaluating course quality in medical schools. Methods: This resource describes a course review process and rubric to evaluate course quality: A subcommittee of faculty members and students evaluates goals, content and delivery, assessment, feedback to students, grading, and student feedback for each course with the rubric. Course directors, Curriculum Committee members, and Curriculum Evaluation Subcommittee members were surveyed on their perception of the process. Results: A large majority of Curriculum Committee and Curriculum Evaluation Subcommittee members agreed that the review process was objective (100%), provided an evaluation of course quality (>95%), helped identify areas of improvement/strengths (>91%) and issues/concerns in the curriculum (>95%), helped them become more familiar with the curriculum (>90%), and was a catalyst for changes in a course (>77%). Course/clerkship directors had less agreement that the course review process was a catalyst for curriculum changes (46%) and that the process helped identify areas of improvement for a course (62%). Discussion: This curriculum evaluation process provides a resource for other institutions to use and/or modify for their own course evaluation process. All stakeholders in the review process agreed that the evaluation process was successful in identifying areas that worked and did not work in courses.


Subjects
Curriculum; Education, Medical; Humans; Quality Improvement; Schools, Medical
14.
Perspect Med Educ ; 9(6): 343-349, 2020 12.
Article in English | MEDLINE | ID: mdl-32820415

ABSTRACT

INTRODUCTION: Work meaning has gained attention as an important contributor to physician job engagement and well-being, but little is known about how faculty participation in medical school learning communities might influence this phenomenon. Our study goals were to determine how physician faculty members may derive meaning from serving as mentors for longitudinal learning communities of medical students, to understand how that meaning may impact other areas of their work, and to relate our findings to existing literature and theoretical frameworks. METHODS: The research team conducted, recorded, transcribed, and coded 25 semi-structured telephone interviews of faculty mentors from four US medical schools with curricular learning communities. The team used an iterative interview coding process to generate final themes and relate these themes to existing literature. RESULTS: The authors identified five themes of meaning faculty derive from participation as learning community mentors: "I am a better professional," "I am more connected," "I am rejuvenated," "I am contributing," and "I am honored." A sixth theme, "I am harmed," encompassed the negative aspects of the learning community faculty experience. The authors found that their identified themes related closely to the theoretical framework for pathways to meaningful work proposed by Rosso et al. DISCUSSION: The alignment of the themes we identified on the experience of learning community faculty to existing literature on work meaning corroborates the theoretical framework and deepens understanding of beneficial and harmful learning community effects on faculty. As learning communities become increasingly common within medical schools, this understanding may be important for leaders in academic medicine considering potential indirect benefits of this educational model.


Subjects
Faculty, Medical/psychology; Interprofessional Relations; Leadership; Students, Medical/psychology; Adult; Faculty, Medical/statistics & numerical data; Female; Humans; Interviews as Topic/methods; Learning; Male; Qualitative Research; Schools, Medical/organization & administration; Schools, Medical/statistics & numerical data; Students, Medical/statistics & numerical data; United States
15.
J Med Educ Curric Dev ; 6: 2382120519855061, 2019.
Article in English | MEDLINE | ID: mdl-31259252

ABSTRACT

BACKGROUND: Medical schools are increasingly using learning communities (LCs) for clinical skills curriculum delivery despite little research on LCs employed for this purpose. We evaluated an LC model compared with a non-LC model for preclerkship clinical skills curriculum using Kirkpatrick's hierarchy as an evaluation framework. METHODS: The first LC cohort's (N = 101; matriculating Fall 2013) reaction to the LC model was assessed with self-reported surveys. Change in skills and learning transfer to clerkships was measured with objective structured clinical examinations (OSCEs) at the end of years 2 and 3 and with first and last clerkship preceptor evaluations; the LC cohort and the prior cohort (N = 86; matriculating Fall 2012) that received clinical skills instruction in a non-LC format were compared with Mann-Whitney U tests. RESULTS: The LC model for preclerkship clinical skills curriculum was rated as excellent or good by 96% of respondents in Semesters 1 to 3 (N = 95). Across multiple performance domains, 96% to 99% of students were satisfied to very satisfied with their LC faculty preceptors (N varied by item). For the end-of-preclerkship OSCE, the LC cohort scored higher than the non-LC cohort in history gathering (P = .003, d = 0.50), physical examination (P = .019, d = 0.32), and encounter documentation (P ≤ .001, d = 0.47); the non-LC cohort scored higher than the LC cohort in communication (P = .001, d = 0.43). For the end-of-year-3 OSCE, the LC cohort scored higher than the non-LC cohort in history gathering (P = .006, d = 0.50) and encounter note documentation (P = .027, d = 0.24); there was no difference in physical examination or communication scores between cohorts. There was no detectable difference between LC and non-LC student performance on the preceptor evaluation forms at either the beginning or end of the clerkship curriculum. CONCLUSIONS: We observed limited performance improvements for LC compared with non-LC students on the end-of-preclerkship OSCE but not on the clerkship preceptor evaluations. Additional studies of LC models for clinical skills curriculum delivery are needed to further elucidate their impact on the professional development of medical students.

16.
J Med Educ Curric Dev ; 6: 2382120519827890, 2019.
Article in English | MEDLINE | ID: mdl-30923748

ABSTRACT

PURPOSE: Many US medical schools have adopted learning communities to provide a framework for advising and teaching functions. Faculty who participate in learning communities often have additional educator roles. Defining potential conflicts of interest (COIs) among these roles is an important consideration for schools with existing learning communities and those looking to develop them, both for transparency with students and to comply with regulatory requirements. METHODS: A survey was sent to the institutional contact for each of the 42 Learning Communities Institute (LCI) member medical schools to assess faculty opinions about which roles potentially conflict. The survey asked the role of learning community faculty in summative and formative assessment of students and whether schools had existing policies around COIs in medical education. RESULTS: In all, 35 (85%) LCI representatives responded; 30 (86%) respondents agreed or strongly agreed that learning community faculty should be permitted to evaluate their students for formative purposes, while 19 (54%) strongly agreed or agreed that learning community faculty should be permitted to evaluate their students in a way that contributes to a grade; 31 (89%) reported awareness of the accreditation standard ensuring "that medical students can obtain academic counseling from individuals who have no role in making assessment or promotion decisions about them," but only 10 (29%) had a school policy about COIs in education. There was a wide range of responses about which roles potentially conflict with being a learning community faculty member. CONCLUSION: The potential for COIs between learning community faculty and other educator roles concerns faculty at schools with learning communities, but most schools have not formally addressed these concerns.

17.
Anat Sci Educ ; 10(2): 170-175, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27427860

ABSTRACT

The flipped classroom (FC) model has emerged as an innovative solution to improve student-centered learning. However, studies measuring student performance of material in the FC relative to the lecture classroom (LC) have shown mixed results. An aim of this study was to determine if the disparity in results of prior research is due to the level of cognition (low or high) needed to perform well on the outcome, or course assessment. This study tested the hypothesis that (1) students in an FC would perform better than students in an LC on an assessment requiring higher cognition and (2) there would be no difference in performance for an assessment requiring lower cognition. To test this hypothesis, performance on 28 multiple-choice anatomy items that were part of a final examination was compared between two classes of first-year medical students at the University of Utah School of Medicine. Items were categorized as requiring knowledge (low cognition), application, or analysis (high cognition). Thirty hours of anatomy content was delivered in LC format to 101 students in 2013 and in FC format to 104 students in 2014. Mann-Whitney tests indicated FC students performed better than LC students on analysis items, U = 4243.00, P = 0.030, r = 0.19, but there were no differences in performance between FC and LC students for knowledge, U = 5002.00, P = 0.720, or application, U = 4990.00, P = 0.700, items. The FC may benefit retention when students are expected to analyze material. Anat Sci Educ 10: 170-175. © 2016 American Association of Anatomists.


Subjects
Anatomy/education; Cognition; Education, Medical, Undergraduate/methods; Educational Measurement/methods; Problem-Based Learning/methods; Students, Medical/psychology; Teaching; Curriculum; Education, Medical, Undergraduate/classification; Educational Status; Female; Humans; Male; Problem-Based Learning/classification; Schools, Medical; Surveys and Questionnaires; Time Factors; Utah
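
Entry 17 compares item performance between the flipped-classroom and lecture cohorts with Mann-Whitney tests and reports an effect size r. The sketch below reproduces that kind of comparison on simulated data; the normal-approximation formula for r is an assumption, since the abstract does not say how the effect size was computed.

```python
# Sketch of the flipped-classroom (FC) vs. lecture-classroom (LC) comparison in
# entry 17: Mann-Whitney U test with an effect size r = z / sqrt(N).
# Scores are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
fc = rng.normal(0.72, 0.15, 104)  # proportion correct on analysis items, FC cohort
lc = rng.normal(0.66, 0.15, 101)  # proportion correct on analysis items, LC cohort

u_stat, p_value = stats.mannwhitneyu(fc, lc, alternative="two-sided")

# Effect size via the normal approximation of U
n1, n2 = len(fc), len(lc)
z = (u_stat - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r = abs(z) / np.sqrt(n1 + n2)

print(f"U = {u_stat:.0f}, p = {p_value:.3f}, r = {r:.2f}")
```
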
18.
Am J Surg ; 213(2): 325-329, 2017 Feb.
Article in English | MEDLINE | ID: mdl-28007315

ABSTRACT

BACKGROUND: Scores from the NBME Subject Examination in Surgery (Surgery Shelf) positively correlate with United States Medical Licensing Examination Step 1 (Step 1) scores. Based on this relationship, the authors evaluated the predictive value of Step 1 for the Surgery Shelf. METHODS: Step 1 standard scores were substituted for Surgery Shelf standard scores for 395 students in 2012-2014 at one medical school. Linear regression was used to determine how well Step 1 scores predicted Surgery Shelf scores. The percent match between original (with Shelf) and modified (with Step 1) clerkship grades was computed. RESULTS: Step 1 scores significantly predicted Surgery Shelf scores, R2 = 0.42, P < 0.001. For every point increase in Step 1, the Surgery Shelf score increased by 0.30 points. Seventy-seven percent of original grades matched the modified grades. CONCLUSION: Replacing Surgery Shelf scores with Step 1 scores did not have an effect on the majority of final clerkship grades. This observation raises concern over the use of Surgery Shelf scores as a measure of knowledge obtained during the Surgery clerkship.


Subjects
Clinical Clerkship; Educational Measurement; General Surgery/education; Humans; Licensure, Medical; Linear Models; United States
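
Entry 18 reports a simple linear regression in which Step 1 scores predict Surgery Shelf scores (R2 = 0.42, slope of 0.30). A minimal sketch of that regression, using simulated scores rather than the study data:

```python
# Sketch of the simple linear regression in entry 18: predicting NBME Surgery
# Shelf standard scores from USMLE Step 1 scores. Scores are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
step1 = rng.normal(230, 18, 395)              # Step 1 scores
shelf = 0.30 * step1 + rng.normal(6, 6, 395)  # Surgery Shelf standard scores

X = sm.add_constant(step1)   # add the intercept term
fit = sm.OLS(shelf, X).fit()

print(f"R^2 = {fit.rsquared:.2f}")
print(f"Slope = {fit.params[1]:.2f} Shelf points per Step 1 point")
print(f"Predicted Shelf score at Step 1 = 240: {fit.predict(np.array([[1.0, 240.0]]))[0]:.1f}")
```
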
19.
Am J Clin Pathol ; 148(6): 513-522, 2017 Nov 20.
Article in English | MEDLINE | ID: mdl-29165570

ABSTRACT

OBJECTIVES: To determine the impact of systemwide charge display on laboratory utilization. METHODS: This was a randomized controlled trial with a baseline period and an intervention period. Tests were randomized to a control arm or an active arm. The maximum allowable Medicare reimbursement rate was displayed for tests in the active arm during the intervention period. Total volume of tests in the active arm was compared with those in the control arm. Residents were surveyed before and after the intervention to assess charge awareness. RESULTS: Charge display had no effect on order behavior. This result held for patient type (inpatient vs outpatient) and for insurance category (commercial, government, self-pay). Residents overestimated the charges of tests both before and after the intervention. Many residents failed to notice the charge display in the computerized order entry system. CONCLUSIONS: The impact of charge display depends on context. Charge display is not always effective.


Subjects
Academic Medical Centers/statistics & numerical data; Electronic Health Records; Laboratories/economics; Medicare/economics; Practice Patterns, Physicians'/economics; Electronic Health Records/statistics & numerical data; Humans; Insurance/statistics & numerical data; United States
20.
MedEdPORTAL ; 13: 10588, 2017 May 24.
Article in English | MEDLINE | ID: mdl-30800790

ABSTRACT

INTRODUCTION: Prior research has identified seven elements of a good assessment, but the elements have not been operationalized in the form of a rubric to rate assessment utility. It would be valuable for medical educators to have a systematic way to evaluate the utility of an assessment in order to determine if the assessment used is optimal for the setting. METHODS: We developed and refined an assessment utility rubric using a modified Delphi process. Twenty-nine graduate students pilot-tested the rubric in 2016 with hypothetical data from three examinations, and interrater reliability of rubric scores was measured with intraclass correlation coefficients (ICCs). RESULTS: Consensus for all rubric items was reached after three rounds. The resulting assessment utility rubric includes four elements (equivalence, educational effect, catalytic effect, acceptability) with three items each, one element (validity evidence) with five items, and space to provide four feasibility items relating to time and cost. Rater scores had ICC values greater than .75. DISCUSSION: The rubric shows promise in allowing educators to evaluate the utility of an assessment specific to their setting. The medical education field needs to give more consideration to how an assessment drives learning forward, how it motivates trainees, and whether it produces acceptable ranges of scores for all stakeholders.
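
Entry 20 measures interrater reliability of rubric scores with intraclass correlation coefficients (ICCs). The abstract does not say how the ICCs were computed; one common way, sketched below with hypothetical ratings and the pingouin package, is to run an ICC on long-format (assessment, rater, score) data.

```python
# Sketch of an interrater-reliability check like the one in entry 20:
# intraclass correlation coefficients (ICCs) on rubric scores.
# The ratings below are hypothetical placeholders.
import pandas as pd
import pingouin as pg

# Long format: one row per (assessment, rater) pair; 'score' is the rubric total.
scores = pd.DataFrame({
    "assessment": ["exam_A"] * 3 + ["exam_B"] * 3 + ["exam_C"] * 3,
    "rater":      ["r1", "r2", "r3"] * 3,
    "score":      [14, 15, 14, 9, 10, 9, 18, 17, 18],
})

icc = pg.intraclass_corr(data=scores, targets="assessment",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])  # ICC values above .75 indicate good agreement
```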
