Results 1 - 20 of 70
1.
Med Educ ; 57(4): 349-358, 2023 04.
Article in English | MEDLINE | ID: mdl-36454138

ABSTRACT

INTRODUCTION: Engaging learners in continuing medical education (CME) is challenging. Recently, CME courses have transitioned to livestreamed CME, with learners viewing live, in-person courses online. The authors aimed to (1) compare learner engagement and teaching effectiveness in livestreamed versus in-person CME and (2) determine how livestream engagement and teaching effectiveness are associated with (A) interactivity metrics, (B) presentation characteristics and (C) medical knowledge. METHODS: A 3-year, non-randomised study of in-person and livestream CME was performed. The course was in-person in 2018 but transitioned to livestream for 2020 and 2021. Learners completed the Learner Engagement Inventory and Teaching Effectiveness Instrument after each presentation. Both instruments were supported by content, internal structure and relations to other variables validity evidence. Interactivity metrics included learner use of audience response, questions asked by learners and presentation views. Presentation characteristics included use of audience response, use of a pre/post-test format, time of day and words per slide. Medical knowledge was assessed by audience response. A repeated measures analysis of variance (ANOVA) was used for comparisons and a mixed model approach for correlations. RESULTS: A total of 159 learners (response rate 27%) completed questionnaires. Engagement did not significantly differ between in-person and livestream CME (4.56 versus 4.53, p = 0.64; maximum 5 = highly engaged). However, teaching effectiveness scores were higher for in-person compared with livestream CME (4.77 versus 4.71, p = 0.01; maximum 5 = highly effective). For livestreamed courses, learner engagement was associated with presentation characteristics, including use of audience response (yes = 4.57, no = 4.45, p < .0001), use of a pre/post-test (yes = 4.62, no = 4.54, p < .0001) and time of presentation (morning = 4.58, afternoon = 4.53, p = .0002).
Significant associations were not seen for interactivity metrics or medical knowledge. DISCUSSION: Livestreaming may be as engaging as in-person CME. Although teaching effectiveness in livestreaming was lower, this difference was small. CME course planners should consider offering livestream CME while exploring strategies to enhance teaching effectiveness in livestreamed settings.


Subject(s)
Continuing Medical Education, Teaching, Humans, Surveys and Questionnaires
2.
Teach Learn Med ; 32(5): 552-560, 2020.
Article in English | MEDLINE | ID: mdl-32749160

ABSTRACT

Problem: Conferences are the most common form of continuing medical education (CME), but their effect on clinician practice is inconsistent. Reflection is a critical step in the process of practice change among clinicians and may lead to improved outcomes following conference-based CME. However, reflection requires time to process newly learned material, and adequate time for reflection may be noticeably absent during many conference presentations. Intervention: The pause procedure is a 90-second 'pause' during a 30-minute presentation so learners can review and discuss content. The goal of the pause procedure is to stimulate learners' active engagement with newly learned material, which will, in turn, promote learner reflection. Context: Fifty-six presentations at two hospital medicine CME conferences were assigned to the pause procedure or control. Study outcomes, provided by conference participants, were validated reflection scores and commitment-to-change (CTC) statements for each presentation. A post-hoc survey of the intervention group was conducted to assess presenters' experiences with the pause procedure. Impact: A total of 527 conference participants completed presentation evaluations (response rate 72.7%). Presentations incorporating the pause procedure did not lead to higher participant reflection scores (percentage 'top box' score; intervention: 39.2% vs. control: 41.7%, p = 0.40) or participant CTC rates (median [IQR]; intervention: 4.64 [3.04, 10.57] vs. control: 8.16 [5.28, 17.12], p = 0.13) than control presentations. However, the majority of presenters (16 of 17 survey respondents) had never before used the intervention, and little active engagement among learners was noted during the pause procedure. Lessons Learned: Adding the pause procedure to CME presentations did not lead to greater reflection or CTC among clinician learners.
However, presenters had limited experience with the intervention, which may have reduced their fidelity to the educational principles of the pause procedure. Faculty development may be necessary when planning a new educational intervention that is to be implemented by conference presenters.


Subject(s)
Continuing Medical Education, Physicians/psychology, Problem-Based Learning/methods, Congresses as Topic, Humans, Surveys and Questionnaires, Thinking
3.
BMC Med Educ ; 20(1): 403, 2020 Nov 04.
Article in English | MEDLINE | ID: mdl-33148231

ABSTRACT

BACKGROUND: Continuing medical education (CME) often uses passive educational models, including lectures. However, numerous studies have questioned the effectiveness of these less engaging educational strategies. Studies outside of CME suggest that engaged learning is associated with improved educational outcomes; however, measuring participants' engagement can be challenging. We developed and determined the validity evidence for a novel instrument to assess learner engagement in CME. METHODS: We conducted a cross-sectional validation study at a large, didactic-style CME conference. Content validity evidence was established through review of the literature, previously published engagement scales and conceptual frameworks on engagement, along with an iterative process involving experts in the field, to develop an eight-item Learner Engagement Instrument (LEI). Response process validity was established by vetting LEI items for clarity and perceived meaning prior to implementation, as well as by using a well-developed online platform with clear instructions. Internal structure validity evidence was based on factor analysis and internal consistency reliability. Relations to other variables validity evidence was determined by examining associations between LEI scores and previously validated CME Teaching Effectiveness (CMETE) instrument scores. Following each presentation, all participants were invited to complete the LEI and the CMETE. RESULTS: 51 of 206 participants completed the LEI and CMETE (response rate 25%). The correlation between LEI and CMETE overall scores was strong (r = 0.80). Internal consistency reliability for the LEI was excellent (Cronbach's alpha = 0.96). To support internal structure validity, a factor analysis was performed; it revealed a two-dimensional instrument consisting of internal and external engagement domains.
The internal consistency reliabilities were 0.96 for the internal engagement domain and 0.95 for the external engagement domain. CONCLUSION: Engagement, as measured by the LEI, is strongly related to teaching effectiveness. The LEI is supported by robust validity evidence including content, response process, internal structure, and relations to other variables. Given the relationship between learner engagement and teaching effectiveness, identifying more engaging and interactive methods for teaching in CME is recommended.
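Several of these abstracts summarize instrument reliability with Cronbach's alpha (here, 0.96 and 0.95 for the two engagement domains). As a rough illustration of how that coefficient is computed, a minimal sketch using made-up 5-point ratings rather than the study's data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    # items: one list of responses per instrument item,
    # aligned across respondents
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Hypothetical ratings: 3 items x 4 respondents on a 5-point scale
ratings = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 3, 4],
]
print(round(cronbach_alpha(ratings), 2))  # → 0.81
```

Alpha rises as items covary relative to their individual variances; values near 0.9 or above, like those reported here, indicate a highly consistent item set.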


Subject(s)
Continuing Medical Education, Students, Cross-Sectional Studies, Humans, Learning, Reproducibility of Results
4.
Med Teach ; 41(3): 318-324, 2019 03.
Article in English | MEDLINE | ID: mdl-29703093

ABSTRACT

PURPOSE: Experiential learning has been suggested as a framework for planning continuing medical education (CME). We aimed to (1) determine participants' learning styles at traditional CME courses and (2) explore associations between learning styles and participant characteristics. MATERIALS AND METHODS: Cross-sectional study of all participants (n = 393) at two Mayo Clinic CME courses who completed the Kolb Learning Style Inventory and provided demographic data. RESULTS: A total of 393 participants returned 241 surveys (response rate, 61.3%). Among the 143 participants (36.4%) who supplied complete demographic and Kolb data, Kolb learning styles included diverging (45; 31.5%), assimilating (56; 39.2%), converging (8; 5.6%) and accommodating (34; 23.8%). An association existed between learning style and gender (p = 0.02). Among men, the most common learning styles were diverging (23 of 63; 36.5%) and assimilating (30 of 63; 47.6%); among women, they were diverging (22 of 80; 27.5%), assimilating (26 of 80; 32.5%) and accommodating (26 of 80; 32.5%). CONCLUSIONS: Internal medicine and psychiatry CME participants had diverse learning styles. Female participants had more variation in their learning styles than men. Teaching techniques must vary to appeal to all learners. Experiential learning theory sequentially moves a learner from Why? to What? to How? to If? to accommodate learning styles.


Subject(s)
Achievement, Continuing Medical Education/methods, Personal Satisfaction, Adult, Attitude of Health Personnel, Cross-Sectional Studies, Female, Humans, Male, Sex Factors, Surveys and Questionnaires
5.
BMC Med Educ ; 18(1): 123, 2018 Jun 04.
Article in English | MEDLINE | ID: mdl-29866089

ABSTRACT

BACKGROUND: We conducted a prospective validation study to develop a physician assistant (PA) clinical rotation evaluation (PACRE) instrument. The specific aims of this study were to 1) develop a tool to evaluate PA clinical rotations, and 2) explore associations between validated rotation evaluation scores and characteristics of the students and rotations. METHODS: The PACRE was administered to rotating PA students at our institution in 2016. Factor analysis, internal consistency reliability, and associations between PACRE scores and student or rotation characteristics were determined. RESULTS: Of 206 PACRE instruments sent, 124 were returned (60.2% response). Factor analysis supported a unidimensional model with a mean (SD) score of 4.31 (0.57) on a 5-point scale. Internal consistency reliability was excellent (Cronbach α=0.95). PACRE scores were associated with students' gender (P = .01) and rotation specialty (P = .006) and correlated with students' perception of being prepared (r = 0.32; P < .001) and value of the rotation (r = 0.57; P < .001). CONCLUSIONS: This is the first validated instrument to evaluate PA rotation experiences. Application of the PACRE questionnaire could inform rotation directors about ways to improve clinical experiences. The findings of this study suggest that PA students must be adequately prepared to have a successful experience on their rotations. PA programs should consider offering transition courses like those offered in many medical schools to prepare their students for clinical experiences. Future research should explore whether additional rotation characteristics and educational outcomes are associated with PACRE scores.


Subject(s)
Physician Assistants/education, Surveys and Questionnaires, Adult, Factor Analysis, Female, Humans, Male, Physician Assistants/organization & administration, Program Evaluation, Prospective Studies, Reproducibility of Results, Sex Factors, Medical Students, Wisconsin, Young Adult
6.
Acad Psychiatry ; 42(4): 458-463, 2018 Aug.
Article in English | MEDLINE | ID: mdl-28685348

ABSTRACT

OBJECTIVE: Little is known about factors associated with effective continuing medical education (CME) in psychiatry. The authors aimed to validate a method to assess psychiatry CME teaching effectiveness and to determine associations between teaching effectiveness scores and characteristics of presentations, presenters, and participants. METHODS: This cross-sectional study was conducted at the Mayo Clinic Psychiatry Clinical Reviews and Psychiatry in Medical Settings. Presentations were evaluated using an eight-item CME teaching effectiveness instrument, its content based on previously published instruments. Factor analysis, internal consistency and interrater reliabilities, and temporal stability reliability were calculated. Associations were determined between teaching effectiveness scores and characteristics of presentations, presenters, and participants. RESULTS: In total, 364 participants returned 246 completed surveys (response rate, 67.6%). Factor analysis revealed a unidimensional model of psychiatry CME teaching effectiveness. Cronbach α for the instrument was excellent at 0.94. Item mean score (SD) ranged from 4.33 (0.92) to 4.71 (0.59) on a 5-point scale. Overall interrater reliability was 0.84 (95% CI, 0.75-0.91), and temporal stability was 0.89 (95% CI, 0.77-0.97). No associations were found between teaching effectiveness scores and characteristics of presentations, presenters, and participants. CONCLUSIONS: This study provides a new, validated measure of CME teaching effectiveness that could be used to improve psychiatry CME. In contrast to prior research in other medical specialties, CME teaching effectiveness scores were not associated with use of case-based or interactive presentations. This outcome suggests the need for distinctive considerations regarding psychiatry CME; a singular approach to CME teaching may not apply to all medical specialties.


Subject(s)
Brachytherapy/standards, Continuing Medical Education/standards, Psychiatry/education, Teaching/standards, Cross-Sectional Studies, Continuing Medical Education/methods, Humans, Reproducibility of Results
7.
Med Teach ; 39(1): 74-78, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27631895

ABSTRACT

During lectures, a pause procedure (the presenter pauses so students can discuss content) can improve educational outcomes. We aimed to determine whether (1) continuing medical education (CME) presentations with a pause procedure were evaluated more favorably and (2) a pause procedure improved recall. In this randomized controlled intervention study of all participants (N = 214) at the Mayo Clinic Internal Medicine Board Review course, 48 lectures were randomly assigned to an intervention (pause procedure) or control (traditional lecture) group. The pause procedure was a 1-min pause at the middle and end of the presentation. Study outcomes were (1) presentation evaluation instrument scores and (2) number of recalled items per lecture. A total of 214 participants returned 145 surveys (response rate, 68%). Mean presentation evaluation scores were significantly higher for pause procedure than for traditional presentations (70.9% vs 65.8%; 95% CI for the difference, 3.5-6.7; p < .0001). Mean number of rapid recall items was higher for pause procedure presentations (0.68 vs 0.59; 95% CI for the difference, 0.02-0.14; p = .01). In a traditional CME course, presentations with a pause procedure had higher evaluation scores and more content was recalled. The pause procedure could arm CME presenters with an easy technique to improve educational content delivery.


Subject(s)
Continuing Medical Education/methods, Mental Recall, Adult, Age Factors, Female, Humans, Male, Middle Aged, Problem-Based Learning, Sex Factors
8.
Med Teach ; 39(7): 697-703, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28301975

ABSTRACT

Effective medical educators can engage learners through self-reflection. However, little is known about the relationships between teaching effectiveness and self-reflection in continuing medical education (CME). We aimed to determine associations between presenter teaching effectiveness and participant self-reflection in conference-based CME. This cross-sectional study evaluated presenters and participants at a national CME course. Participants provided CME teaching effectiveness (CMETE) ratings and self-reflection scores for each presentation. Overall CMETE and CME self-reflection scores (five-point Likert scale with one as strongly disagree and five as strongly agree) were averaged for each presentation. Correlations were measured among self-reflection, CMETE, and presentation characteristics. In total, 624 participants returned 430 evaluations (response, 68.9%) for the 38 presentations. Correlation between CMETE and self-reflection was medium (Pearson correlation, 0.3-0.5) or large (0.5-1.0) for most presentations (n = 33, 86.9%). Higher mean (SD) CME reflection scores were associated with clinical cases (3.66 [0.12] vs. 3.48 [0.14]; p = 0.003) and audience response (3.66 [0.12] vs. 3.51 [0.14]; p = 0.005). To our knowledge, this is the first study to show a relationship between teaching effectiveness and participant self-reflection in conference-based CME. Presenters should consider using clinical cases and audience response systems to increase teaching effectiveness and promote self-reflection among CME learners.
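The medium (0.3-0.5) and large (0.5-1.0) bands used here are the conventional interpretation of the Pearson correlation coefficient. A minimal sketch of the computation itself, run on hypothetical per-presentation mean scores rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation between paired score lists
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean CMETE and self-reflection scores for five presentations
cmete = [4.2, 4.5, 3.9, 4.8, 4.1]
reflection = [3.5, 3.7, 3.4, 3.9, 3.6]
print(round(pearson_r(cmete, reflection), 2))  # → 0.96
```

A value this close to 1 would fall in the study's "large" band; the sign indicates whether the two ratings move together or in opposition.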


Subject(s)
Continuing Medical Education/methods, Physicians/psychology, Teaching, Cross-Sectional Studies, Continuing Medical Education/standards, Humans, Teaching/standards
9.
BMC Med Educ ; 17(1): 114, 2017 Jul 11.
Article in English | MEDLINE | ID: mdl-28697744

ABSTRACT

BACKGROUND: E-learning, the use of Internet technologies to enhance knowledge and performance, has become a widely accepted instructional approach, yet little is known about its current use in postgraduate medical education. We aimed to determine the utilization of e-learning by United States internal medicine residency programs, program director (PD) perceptions of e-learning, and associations between e-learning use and residency program characteristics. METHODS: We conducted a national survey, in collaboration with the Association of Program Directors in Internal Medicine, of all United States internal medicine residency programs. RESULTS: Of the 368 PDs, 214 (58.2%) completed the e-learning survey. Use of synchronous e-learning at least sometimes, somewhat often, or very often was reported by 85 programs (39.7%); 153 programs (71.5%) used asynchronous e-learning at least sometimes, somewhat often, or very often. Most programs (168; 79%) did not have a budget to integrate e-learning. Mean (SD) scores for PD perceptions of e-learning ranged from 3.01 (0.94) to 3.86 (0.72) on a 5-point scale. The odds of synchronous e-learning use were higher in programs with a budget for its implementation (odds ratio, 3.0 [95% CI, 1.04-8.7]; P = .04). CONCLUSIONS: Residency programs could be better resourced to integrate e-learning technologies. Asynchronous e-learning was used more than synchronous e-learning, which may reflect accommodation of busy resident schedules and duty-hour restrictions. PD perceptions of e-learning are relatively moderate; future research should determine whether PD reluctance to adopt e-learning is based on unawareness of the evidence, perceptions that e-learning is expensive, or judgments about value versus effectiveness.


Subject(s)
Computer-Assisted Instruction, Curriculum, Graduate Medical Education, Internship and Residency, Adult, Attitude of Health Personnel, Graduate Medical Education/standards, Graduate Medical Education/trends, Educational Measurement, Female, Humans, Internship and Residency/organization & administration, Internship and Residency/standards, Internship and Residency/trends, Male, Middle Aged, Problem-Based Learning, Program Evaluation, United States, Workload
10.
J Gen Intern Med ; 31(5): 518-23, 2016 May.
Article in English | MEDLINE | ID: mdl-26902239

ABSTRACT

BACKGROUND: Entrustable professional activities (EPAs) have been developed to assess resident physicians with respect to Accreditation Council for Graduate Medical Education (ACGME) competencies and milestones. Although the feasibility of using EPAs has been reported, we are unaware of previous validation studies on EPAs or of potential associations between EPA quality scores and characteristics of educational programs. OBJECTIVES: Our aim was to validate an instrument for assessing the quality of EPAs used to assess internal medicine residents, and to examine associations between EPA quality scores and features of rotations. DESIGN: This was a prospective content validation study to design an instrument measuring the quality of EPAs written for assessing internal medicine residents. PARTICIPANTS: Residency leadership at Mayo Clinic, Rochester, participated in this study, including the program director, associate program directors and individual rotation directors. INTERVENTIONS: The authors reviewed the salient literature. Items were developed to reflect domains of EPAs useful for assessment, and the instrument underwent further testing and refinement. Each participating rotation director created EPAs that they felt would be meaningful for assessing learner performance in their area. These 229 EPAs were then rated for quality with the QUEPA instrument. MAIN MEASURES: Performance characteristics of the QUEPA are reported. Quality ratings of EPAs were compared by primary ACGME competency, inpatient versus outpatient setting and specialty type. KEY RESULTS: QUEPA tool scores demonstrated excellent reliability (ICC range 0.72 to 0.94). Inpatient-focused EPAs were rated higher than outpatient-focused EPAs (3.88 vs 3.66; p = 0.03). Medical knowledge EPAs scored significantly lower than EPAs assessing other competencies (3.34 vs 4.00; p < 0.0001).
CONCLUSIONS: The QUEPA tool is supported by good validity evidence and may help in rating the quality of EPAs developed by individual programs. Programs should take care when writing EPAs for the outpatient setting or to assess medical knowledge, as these tended to be rated lower.


Subject(s)
Clinical Competence/standards, Graduate Medical Education/standards, Educational Measurement/methods, Accreditation, Educational Measurement/standards, Humans, Internal Medicine/education, Internship and Residency/standards, Minnesota, Prospective Studies, Reproducibility of Results
11.
J Med Internet Res ; 18(9): e244, 2016 09 16.
Article in English | MEDLINE | ID: mdl-27637296

ABSTRACT

BACKGROUND: Most research on how to enhance response rates in physician surveys has been done using paper surveys. Uncertainties remain regarding how to enhance response rates in Internet-based surveys. OBJECTIVE: To evaluate the impact of a low-cost nonmonetary incentive and paper mail reminders (formal letter and postcard) on response rates in Internet-based physician surveys. METHODS: We executed a factorial-design randomized experiment while conducting a nationally representative Internet-based physician survey. We invited 3966 physicians (randomly selected from a commercial database of all licensed US physicians) via email to complete an Internet-based survey. We used 2 randomly assigned email messages: one message offered a book upon survey completion, whereas the other did not mention the book but was otherwise identical. All nonrespondents received several email reminders. Some physicians were further assigned at random to receive 1 reminder via paper mail (either a postcard or a letter) or no paper reminder. The primary outcome of this study was the survey response rate. RESULTS: Of the 3966 physicians who were invited, 451 (11.4%) responded to at least one survey question and 336 (8.5%) completed the entire survey. Of those who were offered a book, 345/2973 (11.6%) responded compared with 106/993 (10.7%) who were not offered a book (odds ratio 1.10, 95% CI 0.87-1.38, P=.42). Regarding the paper mail reminder, 168/1572 (10.7%) letter recipients, 148/1561 (9.5%) postcard recipients, and 69/767 (9.0%) email-only recipients responded (P=.35). The response rate for those receiving letters or postcards was similar (odds ratio 1.14, 95% CI 0.91-1.44, P=.26). CONCLUSIONS: Offering a modest nonmonetary incentive and sending a paper reminder did not improve survey response rate. Further research on how to enhance response rates in Internet-based physician surveys is needed.
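The reported odds ratio for the book incentive can be reproduced from the response counts in the abstract using a standard Wald confidence interval on the log-odds scale:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a responders / b non-responders in group 1,
    # c responders / d non-responders in group 2
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of the log odds ratio
    low = math.exp(math.log(or_) - z * se)
    high = math.exp(math.log(or_) + z * se)
    return or_, low, high

# Counts from the abstract: 345 of 2973 offered a book responded,
# versus 106 of 993 not offered one
or_, low, high = odds_ratio_ci(345, 2973 - 345, 106, 993 - 106)
print(f"OR {or_:.2f}, 95% CI {low:.2f}-{high:.2f}")  # OR 1.10, 95% CI 0.87-1.38
```

Because the interval spans 1.0, the incentive's effect on response is not statistically distinguishable from no effect, matching the abstract's P = .42.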


Subject(s)
Internet, Motivation, Reminder Systems, Adult, Age Factors, Aged, Female, Humans, Male, Middle Aged, Physicians, Research Design, Surveys and Questionnaires
12.
Telemed Rep ; 4(1): 100-108, 2023.
Article in English | MEDLINE | ID: mdl-37283856

ABSTRACT

Background: Use of virtual care delivery increased steeply during the COVID-19 public health emergency (PHE) because payment and coverage restrictions were eased. With the end of the PHE, there is uncertainty regarding continued coverage and payment parity for virtual care services. Methods: On November 8, 2022, Mass General Brigham held the Third Annual Virtual Care Symposium: Demystifying Clinical Appropriateness in Virtual Care and What's Ahead for Pay Parity. Results: In one of the panels, experts from Mayo Clinic, led by Dr. Bart Demaerschalk, discussed key issues related to "Payment and Coverage Parity for Virtual Care and In-Person Care: How Do We Get There?" The discussion centered on current policies around payment and coverage parity for virtual care, including state licensure laws for virtual care delivery and the current evidence base regarding outcomes, costs, and resource utilization associated with virtual care. The panel discussion ended by highlighting next steps targeting policymakers, payers, and industry groups to help strengthen the case for parity. Conclusions: To ensure the continued viability of virtual care delivery, legislators and insurers must address coverage and payment parity between telehealth and in-person visits. This will require a renewed focus on research on the clinical appropriateness, parity, equity and access, and economics of virtual care.

13.
J Gen Intern Med ; 27(4): 425-31, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21948229

ABSTRACT

BACKGROUND: Feedback is essential for improving the skills of continuing medical education (CME) presenters. However, there has been little research on improving the quality of feedback to CME presenters. OBJECTIVES: To validate an instrument for generating balanced and behavior-specific feedback from a national cross-section of participants to presenters at a large internal medicine CME course. DESIGN, SETTING, AND PARTICIPANTS: A prospective, randomized validation study with qualitative data analysis that included all 317 participants at a Mayo Clinic internal medicine CME course in 2009. MEASUREMENTS: An 8-item (5-point Likert scales) CME faculty assessment enhanced study form (ESF) was designed based on literature and expert review. Course participants were randomized to a standard form, a generic study form (GSF), or the ESF. The dimensionality of instrument scores was determined using factor analysis to account for clustered data. Internal consistency and interrater reliabilities were calculated. Associations between overall feedback scores and presenter and presentation variables were identified using generalized estimating equations to account for multiple observations within talk and speaker combinations. Two raters reached consensus on qualitative themes and independently analyzed narrative entries for evidence of balanced and behavior-specific comments. RESULTS: Factor analysis of 5,241 evaluations revealed a unidimensional model for measuring CME presenter feedback. Overall internal consistency (Cronbach alpha = 0.94) and interrater reliability (ICC range 0.88-0.95) were excellent. Feedback scores were associated with presenters' academic rank (mean score): Instructor (4.12), Assistant Professor (4.38), Associate Professor (4.56), Professor (4.70) (p = 0.046).
Qualitative analysis revealed that the ESF generated the highest numbers of balanced comments (GSF = 11, ESF = 26; p = 0.01) and behavior-specific comments (GSF = 64, ESF = 104; p = 0.001). CONCLUSIONS: We describe a practical and validated method for generating balanced and behavior-specific feedback for CME presenters in internal medicine. Our simple method for prompting course participants to give balanced and behavior-specific comments may ultimately provide CME presenters with feedback for improving their presentations.


Subject(s)
Continuing Medical Education/methods, Psychological Feedback, Internal Medicine/education, Teaching/methods, Clinical Competence, Confidence Intervals, Continuing Medical Education/statistics & numerical data, Female, Humans, Internal Medicine/standards, Internal Medicine/statistics & numerical data, Male, Qualitative Research, Statistics as Topic, Statistics, Nonparametric, United States
14.
J Gen Intern Med ; 26(3): 293-8, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20978863

ABSTRACT

BACKGROUND: Critical reflection by faculty physicians on adverse patient events is important for changing physicians' behaviors. However, there is little research regarding physician reflection on quality improvement (QI). OBJECTIVE: To develop and validate a computerized case-based learning system (CBLS) to measure faculty physicians' reflections on adverse patient events. DESIGN: Prospective validation study. PARTICIPANTS: Staff physicians in the Department of Medicine at Mayo Clinic Rochester. MAIN MEASURES: The CBLS was developed by Mayo Clinic information technology, medical education, and QI specialists. The reflection questionnaire, adapted from a previously validated instrument, contained eight items structured on five-point scales. Three cases, representing actual adverse events, were developed based on the most common error types: systems, medication, and diagnostic. In 2009, all Mayo Clinic hospital medicine, non-interventional cardiology, and pulmonary faculty were invited to participate. Faculty reviewed each case, determined the next management step, rated case generalizability and relevance, and completed the reflection questionnaire. Factor analysis and internal consistency reliability were calculated. Associations between reflection scores and characteristics of faculty and patient cases were determined. KEY RESULTS: Forty-four faculty completed 107 case reflections. The CBLS was rated as average to excellent in 95 of 104 (91.3%) completed satisfaction surveys. Factor analysis revealed two levels of reflection: Minimal and High. Internal consistency reliability was very good (overall Cronbach's α = 0.77). Item mean scores ranged from 2.89 to 3.73 on a five-point scale. The overall reflection score was 3.41 (standard deviation 0.64). Reflection scores were positively associated with case generalizability (p = 0.001) and case relevance (p = 0.02).
CONCLUSIONS: The CBLS is a valid method for stratifying faculty physicians' levels of reflection on adverse patient events. Reflection scores are associated with case generalizability and relevance, indicating that reflection improves with pertinent patient encounters. We anticipate that this instrument will be useful in future research on QI among low versus high-reflecting physicians.


Subject(s)
Clinical Competence/standards, Medical Faculty/standards, Medical Errors, Physicians/standards, Program Development/standards, Adult, Female, Humans, Male, Medical Errors/prevention & control, Middle Aged, Program Development/methods, Surveys and Questionnaires/standards
15.
Med Educ ; 45(2): 149-54, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21166692

ABSTRACT

OBJECTIVES: Transformative learning theory supports the idea that reflection on quality improvement (QI) opportunities and the ability to develop successful QI projects may be fundamentally linked. We used validated methods to explore associations between resident doctors' reflections on QI opportunities and the quality of their QI project proposals. METHODS: Eighty-six residents completed written reflections on practice improvement opportunities and developed QI proposals. Two faculty members assessed residents' reflections using the 18-item Mayo Evaluation of Reflection on Improvement Tool (MERIT), and assessed residents' QI proposals using the seven-item Quality Improvement Project Assessment Tool (QIPAT-7). Both instruments have been validated in previous work. Associations between MERIT and QIPAT-7 scores were determined. Internal consistency reliabilities of QIPAT-7 and MERIT scores were calculated. RESULTS: There were no significant associations between MERIT overall and domain scores and QIPAT-7 overall and item scores. The internal consistency of MERIT and QIPAT-7 item groups was acceptable (Cronbach's α 0.76-0.94). CONCLUSIONS: The lack of association between MERIT and QIPAT-7 scores indicates a distinction between resident doctors' skills at reflecting on QI opportunities and their abilities to develop QI projects. These findings suggest that practice-based reflection and QI project development are separate constructs, and that skilful reflection may not predict the ability to design meaningful QI initiatives. Future QI curricula should consider teaching and assessing QI reflection and project development as distinct components.


Subject(s)
Clinical Competence/standards , Internship and Residency/standards , Quality Improvement , Thinking , Cross-Sectional Studies , Humans , Minnesota , Organizational Innovation
16.
Med Educ ; 44(3): 248-55, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20444055

ABSTRACT

OBJECTIVES: Resident reflection on the clinical learning environment is prerequisite to identifying quality improvement (QI) opportunities and demonstrating competence in practice-based learning. However, residents' abilities to reflect on QI opportunities are unknown. Therefore, we developed and determined the validity of the Mayo Evaluation of Reflection on Improvement Tool (MERIT) for assessing resident reflection on QI opportunities. METHODS: The content of MERIT, which consists of 18 items structured on 4-point scales, was based on existing literature and input from national experts. Using MERIT, six faculty members rated 50 resident reflections. Factor analysis was used to examine the dimensionality of MERIT instrument scores. Inter-rater and internal consistency reliabilities were calculated. RESULTS: Factor analysis revealed three factors (eigenvalue; number of items): Reflection on Personal Characteristics of QI (8.5; 7), Reflection on System Characteristics of QI (1.9; 6), and Problem of Merit (1.5; 5). Inter-rater reliability was very good (intraclass correlation coefficient range: 0.73-0.89). Internal consistency reliability was excellent (Cronbach's alpha 0.93 overall and 0.83-0.91 for factors). Item mean scores were highest for Problem of Merit (3.29) and lowest for Reflection on System Characteristics of QI (1.99). CONCLUSIONS: Validity evidence supports MERIT as a meaningful measure of resident reflection on QI opportunities. Our findings suggest that dimensions of resident reflection on QI opportunities may include personal, system and Problem of Merit factors. Additionally, residents may be more effective at reflecting on 'problems of merit' than personal and systems factors.
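The reliability coefficients quoted in this abstract follow standard formulas. As an illustration only, Cronbach's alpha can be computed directly from a matrix of item scores; the sketch below uses hypothetical 4-point ratings, not the MERIT study's data.

```python
# Cronbach's alpha: alpha = (k/(k-1)) * (1 - sum(item variances) / variance(total score))
# where k is the number of items. Hypothetical data for illustration.

def cronbach_alpha(scores):
    """scores: one row per respondent, one column per item."""
    k = len(scores[0])  # number of items

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

ratings = [  # hypothetical ratings on four 4-point items
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
    [3, 3, 4, 4],
]
print(round(cronbach_alpha(ratings), 2))  # prints 0.92
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the benchmark against which the reported 0.83-0.93 range is judged.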


Subject(s)
Clinical Competence/standards , Educational Measurement/methods , Internship and Residency , Factor Analysis, Statistical , Humans , Internship and Residency/methods , Internship and Residency/standards , Quality Control , Reproducibility of Results
17.
Med Educ Online ; 25(1): 1694308, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31747854

ABSTRACT

Background: Industry funding in continuing medical education has been extensively studied in the USA. Although continuing medical education is also a requirement for Chinese physicians, little is known about Chinese physician perceptions of industry support in continuing medical education. Objective: We aim to determine perceptions regarding industry support for CME among Chinese physicians at a large CME course, examine potential associations between Chinese physicians' perceptions and their demographic characteristics, and compare Chinese and US physicians' perceptions of industry support for CME. Design: We performed a cross-sectional survey of physicians at a nephrology continuing medical education conference in China. All participants received a previously published, anonymous survey consisting of 4 items, with questions asked in English and Mandarin Chinese. Responses were compared with those of a previous cohort in the USA. Results: The response rate was 24% (128/541). Most respondents were nephrologists (112/126, 89%), women (91/128, 71%), and aged 20 to 40 years (79/127, 62%). Most respondents preferred industry-supported continuing medical education (84/123, 68%) or had no preference (33/123, 27%). More clinicians than clinical researchers supported industry offsetting costs (76.9% vs 58.3%; P = .03). Almost half of participants (58/125, 46%) stated that industry-supported continuing medical education was biased in support of industry. Compared with US physicians, Chinese physicians were more likely to believe, or had no opinion, that industry-supported courses were biased (67.2% vs 47.0%; P < .001). Conclusions: Chinese continuing medical education participants preferred industry-sponsored continuing medical education and were strongly in favor of industry offsetting costs, but almost half believed that such education was biased in favor of supporting companies. Concern for bias was higher among Chinese than US physicians.
Given participants' concerns, further study examining industry bias in Chinese continuing medical education is recommended. Abbreviations: CME: continuing medical education; US: USA.


Subject(s)
Education, Medical, Continuing , Physicians/psychology , Adult , China , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Surveys and Questionnaires , Young Adult
18.
J Physician Assist Educ ; 31(1): 2-7, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32004252

ABSTRACT

PURPOSE: The purpose of this study was to describe participant characteristics and effective teaching methods at a national continuing medical education (CME) conference on hospital medicine for physician assistants (PAs) and nurse practitioners (NPs). METHODS: In this cross-sectional study, participants provided demographic information and teaching effectiveness scores for each presentation. Associations between teaching effectiveness score and presentation characteristics were determined. RESULTS: In total, 163 of 253 participants (64.4%) completed evaluations of 28 presentations. Many of the participants were younger than 50 years (69.0%), had practiced for fewer than 5 years (41.5%), and worked in nonacademic settings (76.7%). Teaching effectiveness scores were significantly associated with the use of clinical cases (perfect scores for 68.8% of presentations with clinical cases vs. 59.8% without; P = .04). CONCLUSION: Many PAs and NPs at a hospital medicine CME conference were early-career clinicians working in nonacademic settings. Presenters at CME conferences in hospital medicine should consider using clinical cases to improve their teaching effectiveness among PA and NP learners.


Subject(s)
Education, Continuing/organization & administration , Hospital Medicine/education , Nurse Practitioners/education , Physician Assistants/education , Teaching/organization & administration , Adult , Aged , Cross-Sectional Studies , Humans , Learning , Middle Aged , Socioeconomic Factors , Young Adult
19.
Med Educ Online ; 23(1): 1474700, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29768977

ABSTRACT

Continuous quality improvement is a component of professionalism. Maintenance of Certification (MOC) is a mechanism in the USA for physicians to keep current with medical knowledge and contribute to practice improvement. Little is known about primary care physicians' perceptions of the practice improvement (Part IV) components of MOC. We aimed to determine primary care physicians' perceptions of their professional responsibility to participate in Part IV MOC. This was a cross-sectional study of primary care physicians using the American Medical Association Masterfile. We developed a nine-item survey, designed from expert consensus and literature to determine views on Part IV MOC as a professional responsibility. We surveyed 1500 randomly selected primary care physicians via mail from November 2014 to May 2015. The response rate was 42% (627 of 1,500): 47% (273 of 585) were family practitioners and 49% (289 of 585) were internists. Factor analysis revealed a two-factor survey, with five items pertaining to positive views of MOC Part IV and four items pertaining to negative views. Internists were more likely to view MOC Part IV as time consuming (82.0% vs. 70.3%, P = .001), expensive (50.9% vs. 38.8%, P = .004), and not relevant to practice (39.1% vs. 23.8%, P < .001). Family medicine practitioners were more likely to view MOC Part IV as improving patient care (64.5% vs. 48.8%, P < .001) and maintaining professional responsibility (48.7% vs. 32.5%, P < .001). Regardless of specialty, most physicians viewed MOC Part IV as time intensive, not beneficial for career advancement, and not a professional responsibility. Family medicine practitioners demonstrated more positive views of MOC Part IV. The difference between family medicine practitioners and internists could be related to the ABIM MOC controversy. Future changes to practice improvement requirements could focus on limiting time requirements and on clinical relevance. 
ABBREVIATIONS: ABIM: American Board of Internal Medicine; AMA: American Medical Association; CQI: continuous quality improvement; IRB: institutional review board; MOC: Maintenance of Certification; QI: quality improvement.


Subject(s)
Attitude of Health Personnel , Clinical Competence , Physician's Role , Physicians, Primary Care/psychology , Quality Improvement/organization & administration , Adult , Certification , Cross-Sectional Studies , Family Practice , Female , Humans , Internal Medicine , Male , Middle Aged , Quality of Health Care , Residence Characteristics , Time Factors , United States
20.
Acad Med ; 93(3): 471-477, 2018 03.
Article in English | MEDLINE | ID: mdl-28640030

ABSTRACT

PURPOSE: To begin to quantify and understand the use of the flipped classroom (FC), a progressive, effective curricular model, in internal medicine (IM) education in relation to residency program and program director (PD) characteristics. METHOD: The authors conducted a survey that included the Flipped Classroom Perception Instrument (FCPI) in 2015 regarding programs' use and PDs' perceptions of the FC model. RESULTS: Among the 368 IM residency programs, PDs at 227 (61.7%) responded to the survey and 206 (56.0%) completed the FCPI. Regarding how often programs used the FC model, 34 of the 206 PDs (16.5%) reported "never"; 44 (21.4%) reported "very rarely"; another 44 (21.4%) reported "somewhat rarely"; 59 (28.6%) reported "sometimes"; 16 (7.8%) reported "somewhat often"; and 9 (4.4%) reported "very often." The mean FCPI score (standard deviation [SD]) for the in-class application factor (4.11 [0.68]) was higher (i.e., more favorable) than for the preclass activity factor (3.94 [0.65]) (P < .001). FC perceptions (mean [SD]) were higher among younger PDs (≤ 50 years, 4.12 [0.62]; > 50 years, 3.94 [0.61]; P = .04) and women compared with men (4.28 [0.56] vs. 3.91 [0.62]; P < .001). PDs with better perceptions of FCs had higher odds of using FCs (odds ratio, 4.768; P < .001). CONCLUSIONS: Most IM programs use the FC model at least to some extent, and PDs prefer the interactive in-class components over the independent preclass activities. PDs who are women and younger perceived the model more favorably.
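The odds ratio reported above comes from the study's regression model. For intuition about how such a figure is read, the sketch below computes an odds ratio and an approximate 95% confidence interval from a 2x2 table, using hypothetical counts rather than this study's data.

```python
import math

def odds_ratio(a, b, c, d):
    """2x2 table: a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    return (a * d) / (b * c)

def ci_95(a, b, c, d):
    """Approximate 95% CI for the OR via the log-odds standard error (Woolf's method)."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))

# Hypothetical counts: 30 of 40 favorable-perception PDs use the FC model,
# vs. 20 of 60 less-favorable PDs.
print(odds_ratio(30, 10, 20, 40))  # prints 6.0
```

An odds ratio above 1 with a confidence interval excluding 1 indicates that the exposure (here, favorable perception) is associated with higher odds of the outcome (FC use), which is the sense in which the study's 4.768 is interpreted.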


Subject(s)
Faculty, Medical/psychology , Internal Medicine/education , Clinical Competence , Female , Humans , Internship and Residency , Male , Perception , Program Evaluation , Surveys and Questionnaires