Results 1 - 20 of 36
1.
Vaccine ; 36(14): 1823-1829, 2018 03 27.
Article in English | MEDLINE | ID: mdl-29496350

ABSTRACT

PURPOSE: The aims of this study were to evaluate the impact of a novel immunization curriculum based on the Preferred Cognitive Styles and Decision Making Model (PCSDM) on internal medicine (IM) resident continuity clinic patient panel immunization rates, as well as resident immunization knowledge, attitudes, and practices (KAP). METHODS: A cluster-randomized controlled trial was performed among 143 IM residents at Mayo Clinic to evaluate the PCSDM curriculum plus fact-based immunization curriculum (intervention) compared with fact-based immunization curriculum alone (control) on the outcomes of resident continuity clinic patient panel immunization rates for influenza, pneumococcal, tetanus, pertussis, and zoster vaccines. Pre-study and post-study immunization KAP surveys were administered to IM residents. RESULTS: Ninety-nine residents participated in the study; 82 completed both pre-study and post-study surveys. Influenza and pertussis immunization rates improved in both the intervention and control groups, with no significant difference in improvement between the groups. Influenza immunization rates improved significantly, by 33.4% and 32.3% in the intervention and control groups, respectively. The odds of receiving influenza immunization at the end of the study relative to pre-study for the entire study cohort were 4.6 (p < 0.0001). The odds of having received pertussis immunization at the end of the study relative to pre-study for the entire cohort were 1.2 (p = 0.0002). Both groups had significant improvements in immunization knowledge. The intervention group had significant improvements in multiple domains that assessed confidence in counseling patients on immunizations. CONCLUSIONS: Fact-based immunization education was useful in improving IM resident immunization rates for influenza and pertussis.
The PCSDM immunization curriculum did not lead to increases in immunization rates compared with the fact-based curriculum, but it did significantly increase resident confidence in communicating with patients about vaccines.
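The pre/post comparisons above reduce to a simple odds-ratio calculation. A minimal sketch with hypothetical 2x2 counts (illustrative only, not this study's data):

```python
# Hypothetical counts (not from the study): immunization status
# at the post-study and pre-study assessments.
post = {"immunized": 230, "not_immunized": 70}
pre = {"immunized": 115, "not_immunized": 185}

def odds(counts):
    # Odds = events / non-events.
    return counts["immunized"] / counts["not_immunized"]

# Odds ratio: post-study odds relative to pre-study odds.
odds_ratio = odds(post) / odds(pre)
print(round(odds_ratio, 2))  # → 5.29
```

An odds ratio above 1 indicates higher odds of immunization post-study, as in the influenza (4.6) and pertussis (1.2) results reported here.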


Subject(s)
Education, Medical , Immunization , Internship and Residency , Adult , Curriculum , Decision Making , Female , Health Knowledge, Attitudes, Practice , Humans , Male , Vaccination , Vaccination Coverage , Vaccines
2.
J Gen Intern Med ; 31(5): 518-23, 2016 May.
Article in English | MEDLINE | ID: mdl-26902239

ABSTRACT

BACKGROUND: Entrustable professional activities (EPAs) have been developed to assess resident physicians with respect to Accreditation Council for Graduate Medical Education (ACGME) competencies and milestones. Although the feasibility of using EPAs has been reported, we are unaware of previous validation studies on EPAs or of potential associations between EPA quality scores and characteristics of educational programs. OBJECTIVES: Our aim was to validate an instrument for assessing the quality of EPAs used to assess internal medicine residents, and to examine associations between EPA quality scores and features of rotations. DESIGN: This was a prospective content validation study to design an instrument that measures the quality of EPAs written for assessing internal medicine residents. PARTICIPANTS: Residency leadership at Mayo Clinic, Rochester, participated in this study, including the program director, associate program directors, and individual rotation directors. INTERVENTIONS: The authors reviewed the salient literature and developed items to reflect the domains of EPAs useful for assessment. The instrument underwent further testing and refinement. Each participating rotation director created EPAs that they felt would be meaningful for assessing learner performance in their area; these 229 EPAs were then rated with the QUEPA instrument. MAIN MEASURES: Performance characteristics of the QUEPA are reported. Quality ratings of EPAs were compared across the primary ACGME competency addressed, inpatient versus outpatient setting, and specialty type. KEY RESULTS: QUEPA tool scores demonstrated excellent reliability (ICC range 0.72 to 0.94). Inpatient-focused EPAs were rated higher than outpatient-focused EPAs (3.88 vs. 3.66; p = 0.03). Medical knowledge EPAs scored significantly lower than EPAs assessing other competencies (3.34 vs. 4.00; p < 0.0001).
CONCLUSIONS: The QUEPA tool is supported by good validity evidence and may help in rating the quality of EPAs developed by individual programs. Programs should take care when writing EPAs for the outpatient setting or to assess medical knowledge, as these tended to be rated lower.


Subject(s)
Clinical Competence/standards , Education, Medical, Graduate/standards , Educational Measurement/methods , Accreditation , Educational Measurement/standards , Humans , Internal Medicine/education , Internship and Residency/standards , Minnesota , Prospective Studies , Reproducibility of Results
4.
BMC Med Educ ; 15: 76, 2015 Apr 14.
Article in English | MEDLINE | ID: mdl-25889758

ABSTRACT

BACKGROUND: We aimed to explore the influence of a motivationally enhanced instructional design on motivation to learn and on knowledge, hypothesizing that outcomes would be higher for the enhanced instructional format. METHODS: Medicine residents completed four online learning modules on primary care topics. Using a crossover design, learners were randomized to receive two standard and two motivationally enhanced learning modules. Both formats had self-assessment questions, but the enhanced-format questions were framed to place learners in a supervisory/teaching role. Learners received a baseline motivation questionnaire, a short motivation survey before and after each module, and a knowledge posttest. RESULTS: One hundred twenty-seven residents were randomized; 123 (97%) completed at least one knowledge posttest and 119 (94%) completed all four posttests. Across all modules, a one-point increase in the pretest short motivation survey was associated with a 2.1-point increase in posttest knowledge. The change in motivation was significantly higher for the motivationally enhanced format (standard mean change -0.01, enhanced mean change +0.09, difference = 0.10, CI 0.001 to 0.19; p = 0.048). Mean posttest knowledge scores were similar (standard mean 72.8, enhanced mean 73.0, difference = 0.2, CI -1.9 to 2.1; p = 0.90). CONCLUSIONS: The motivationally enhanced instructional format improved motivation more than the standard format, but the impact on knowledge scores was small and not statistically significant. Learners with higher pre-intervention motivation scored better on post-intervention knowledge tests, suggesting that motivation may prove a viable target for future instructional enhancements.


Subject(s)
Computer-Assisted Instruction/methods , Internship and Residency , Motivation , Students, Medical/psychology , Cross-Over Studies , Family Practice/education , Humans , Internal Medicine/education , Self-Assessment , Surveys and Questionnaires
5.
Am J Hosp Palliat Care ; 31(3): 275-80, 2014 May.
Article in English | MEDLINE | ID: mdl-23588577

ABSTRACT

Many primary care providers feel uncomfortable discussing end-of-life care. The aim of this intervention was to assess internal medicine residents' advance care planning (ACP) practices and improve residents' ACP confidence. Residents participated in a facilitated ACP quality improvement workshop, which included an interactive presentation and a chart audit of their own patients. Pre- and post-intervention surveys assessed resident ACP-related confidence. Only 24% of the audited patients had an advance directive (AD), and 28% of the ACP documentation was of no clinical utility. Terminally ill patients were more likely to have an AD (odds ratio 2.8, P < .001). Patients requiring an interpreter were less likely to have participated in ACP. Residents reported significantly improved confidence with ACP and identified important training gaps. Future studies examining the impact on ACP quality are needed.


Subject(s)
Advance Care Planning , Attitude of Health Personnel , Internship and Residency/statistics & numerical data , Adult , Ambulatory Care Facilities , Education , Female , Humans , Male , Medical Audit , Terminal Care
6.
Acad Med ; 89(1): 169-75, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24280856

ABSTRACT

PURPOSE: Questions enhance learning in Web-based courses, but preliminary evidence suggests that too many questions may interfere with learning. The authors sought to determine how varying the number of self-assessment questions affects knowledge outcomes in a Web-based course. METHOD: The authors conducted a randomized crossover trial in one internal medicine and one family medicine residency program between January 2009 and July 2010. Eight Web-based modules on ambulatory medicine topics were developed, with varying numbers of self-assessment questions (0, 1, 5, 10, or 15). Participants completed modules in four different formats each year, with sequence randomly assigned. Participants completed a pretest for half their modules. Outcomes included knowledge, completion time, and module ratings. RESULTS: One hundred eighty residents provided data. The mean (standard error) percent correct knowledge score was 53.2 (0.8) for pretests and 73.7 (0.5) for posttests. In repeated-measures analysis pooling all data, mean posttest knowledge scores were highest for the 10- and 15-question formats (75.7 [1.1] and 74.4 [1.0], respectively) and lower for 0-, 1-, and 5-question formats (73.1 [1.3], 72.9 [1.0], and 72.8 [1.5], respectively); P = .04 for differences across all modules. Modules with more questions generally took longer to complete and were rated higher, although differences were small. Residents most often identified 10 questions as ideal. Posttest knowledge scores were higher for modules that included a pretest (75.4 [0.9] versus 72.2 [0.9]; P = .0002). CONCLUSIONS: Increasing the number of self-assessment questions improves learning until a plateau beyond which additional questions do not add value.


Subject(s)
Computer-Assisted Instruction , Education, Medical, Graduate/methods , Educational Measurement/methods , Family Practice/education , Internal Medicine/education , Internet , Internship and Residency , Adult , Cross-Over Studies , Female , Humans , Male , Minnesota
9.
J Gen Intern Med ; 28(8): 1014-9, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23595923

ABSTRACT

BACKGROUND: There have been recent calls for improved internal medicine outpatient training, yet assessment of clinical and educational variables within existing models is lacking. OBJECTIVE: To assess the impact of clinic redesign from a traditional weekly clinic model to a 50/50 outpatient-inpatient model on clinical and educational outcomes. DESIGN: Pre-intervention and post-intervention study intervals, comparing the 2009-2010 and 2010-2011 academic years. PARTICIPANTS: Ninety-six residents in a Primary Care Internal Medicine site of a large academic internal medicine residency program who provide care for > 13,000 patients. INTERVENTION: Continuity clinic redesign from a traditional weekly clinic model to a 50/50 model characterized by 50% outpatient and 50% inpatient experiences scheduled in alternating 1-month blocks, with twice-weekly continuity clinic during outpatient months and no clinic during inpatient months. MAIN MEASURES: 1) Clinical outcomes (panel size, patient visits, adherence with chronic disease and preventive service guidelines, continuity of care, patient satisfaction, and perceived safety/teamwork in clinic); 2) Educational outcomes (attendance at teaching conference, resident and faculty satisfaction, faculty assessment of resident clinic performance, and residents' perceived preparedness for outpatient management). RESULTS: Redesign was associated with increased mean panel size (120 vs. 137.6; p ≤ 0.001), decreased continuity of care (63% vs. 48% from the provider perspective; 61% vs. 51% from the patient perspective; p ≤ 0.001 for both; team continuity was preserved), decreased missed appointments (12.5% vs. 10.9%; p ≤ 0.01), improved perceived safety and teamwork (3.6 vs. 4.1 on a 5-point scale; p ≤ 0.001), improved mean teaching conference attendance (57.1 vs. 64.4; p ≤ 0.001), improved resident clinic performance (3.6 vs. 3.9 on a 5-point scale; p ≤ 0.001), and little change in other outcomes.
CONCLUSION: Although this model requires further study in other settings, these results suggest that a 50/50 model may allow residents to manage more patients while enhancing the climate of teamwork and safety in the continuity clinic, compared to traditional models. Future work should explore ways to preserve continuity of care within this model.


Subject(s)
Ambulatory Care Facilities/standards , Continuity of Patient Care/standards , Inpatients , Internal Medicine/standards , Internship and Residency/standards , Outpatients , Ambulatory Care Facilities/organization & administration , Clinical Competence/standards , Continuity of Patient Care/organization & administration , Female , Humans , Internal Medicine/methods , Internal Medicine/organization & administration , Internship and Residency/methods , Internship and Residency/organization & administration , Male
10.
Acad Med ; 88(5): 585-92, 2013 May.
Article in English | MEDLINE | ID: mdl-23524923

ABSTRACT

Evidence suggests that teamwork is essential for safe, reliable practice. Creating health care teams able to function effectively in patient-centered medical homes (PCMHs), practices that organize care around the patient and demonstrate achievement of defined quality care standards, remains challenging. Preparing trainees for practice in interprofessional teams is particularly challenging in academic health centers where health professions curricula are largely siloed. Here, the authors review a well-delineated set of teamwork competencies that are important for high-functioning teams and suggest how these competencies might be useful for interprofessional team training and achievement of PCMH standards. The five competencies are (1) team leadership, the ability to coordinate team members' activities, ensure appropriate task distribution, evaluate effectiveness, and inspire high-level performance; (2) mutual performance monitoring, the ability to develop a shared understanding among team members regarding intentions, roles, and responsibilities so as to accurately monitor one another's performance for collective success; (3) backup behavior, the ability to anticipate the needs of other team members and shift responsibilities during times of variable workload; (4) adaptability, the capability of team members to adjust their strategy for completing tasks on the basis of feedback from the work environment; and (5) team orientation, the tendency to prioritize team goals over individual goals, encourage alternative perspectives, and show respect and regard for each team member. Relating each competency to a vignette from an academic primary care clinic, the authors describe potential strategies for improving teamwork learning and applying the teamwork competencies to academic PCMH practices.


Subject(s)
Academic Medical Centers/organization & administration , Clinical Competence , Cooperative Behavior , Interprofessional Relations , Patient Care Team/organization & administration , Patient-Centered Care/organization & administration , Primary Health Care/organization & administration , Academic Medical Centers/standards , Adaptation, Psychological , Communication , Feedback, Psychological , Humans , Leadership , Patient Care Team/standards , Patient-Centered Care/standards , Primary Health Care/standards , Professional Role , United States
11.
J Grad Med Educ ; 5(2): 203-10, 2013 Jun.
Article in English | MEDLINE | ID: mdl-24404261

ABSTRACT

BACKGROUND: Evidence-based practice in education requires high-quality evidence, and many in the medical education community have called for an improvement in the methodological quality of education research. OBJECTIVE: Our aim was to use a valid measure of medical education research quality to highlight the methodological quality of research publications and provide an overview of the recent internal medicine (IM) residency literature. METHODS: We searched MEDLINE and PreMEDLINE to identify English-language articles published in the United States and Canada between January 1, 2010, and December 31, 2011, focusing on IM residency education. Study quality was assessed using the Medical Education Research Study Quality Instrument (MERSQI), which has demonstrated reliability and validity. Qualitative articles were excluded. Articles were ranked by quality score; the top 25% were examined for common themes, and 2 articles within each theme were selected for in-depth presentation. RESULTS: The search identified 731 abstracts, of which 223 articles met our inclusion criteria. The mean (±SD) MERSQI score of the 223 studies included in the review was 11.07 (±2.48). Quality scores were highest for data analysis (2.70) and lowest for study design (1.41) and validity (1.29). The themes identified included resident well-being, duty hours and resident workload, career decisions and gender, simulation medicine, and patient-centered outcomes. CONCLUSIONS: Our review provides an overview of the IM medical education literature for 2010-2011, highlighting 5 themes of interest to the medical education community. Study design and validity are 2 areas where improvements in methodological quality are needed, and authors should consider these when designing research protocols.

12.
J Grad Med Educ ; 5(4): 668-73, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24455021

ABSTRACT

BACKGROUND: The outpatient continuity clinic is an essential component of internal medicine residency programs, yet continuity of patient care in these clinics is suboptimal. Reasons for this discontinuity have been inadequately explored. OBJECTIVE: We sought to assess perceived factors contributing to discontinuity in trainee ambulatory clinics. METHODS: The study encompassed 112 internal medicine residents at a large academic medical center in the Midwest. We conducted 2 hours of facilitated discussion with 18 small groups of residents. Residents were asked to reflect on factors that pose barriers to continuity in their ambulatory practice and potential mechanisms to reduce these barriers. Resident comments were transcribed and inductive analysis was performed to develop themes. We used these themes to derive recommendations for improving continuity of care in a resident ambulatory clinic. RESULTS: Key themes included an imbalance of clinic scheduling that favors access for patients with acute symptoms over continuity, clinic triage scripts that deemphasize continuity, inadequate communication among residents and faculty regarding shared patients, residents' inefficient use of nonphysician care resources, and a lack of shared values between patients and providers regarding continuity of care. CONCLUSIONS: The results offer important information that may be applied in iterative program changes to enhance continuity of care in resident clinics.

13.
Med Educ ; 45(12): 1230-40, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22026751

ABSTRACT

CONTEXT: The Motivated Strategies for Learning Questionnaire (MSLQ) purports to measure motivation using the expectancy-value model. Although it is widely used in other fields, this instrument has received little study in health professions education. OBJECTIVES: The purpose of this study was to evaluate the validity of MSLQ scores. METHODS: We conducted a validity study evaluating the relationships of MSLQ scores to other variables and their internal structure (reliability and factor analysis). Participants included 210 internal medicine and family medicine residents participating in a web-based course on ambulatory medicine at an academic medical centre. Measurements included pre-course MSLQ scores, pre- and post-module motivation surveys, a post-module knowledge test, and post-module Instructional Materials Motivation Survey (IMMS) scores. RESULTS: Internal consistency was universally high for all MSLQ items together (Cronbach's α = 0.93) and for each domain (α ≥ 0.67). Total MSLQ scores showed statistically significant positive associations with post-test knowledge scores. For example, a 1-point rise in total MSLQ score was associated with a 4.4% increase in post-test scores (β = 4.4; p < 0.0001). Total MSLQ scores showed moderately strong, statistically significant associations with several other measures of effort, motivation and satisfaction. Scores on MSLQ domains demonstrated associations that generally aligned with our hypotheses. Self-efficacy and control of learning belief scores demonstrated the strongest domain-specific relationships with knowledge scores (β = 2.9 for both). Confirmatory factor analysis showed a borderline model fit. Follow-up exploratory factor analysis revealed that scores on five factors (self-efficacy, intrinsic interest, test anxiety, extrinsic goals, attribution) demonstrated psychometric and predictive properties similar to those of the original scales. CONCLUSIONS: Scores on the MSLQ are reliable and predict meaningful outcomes. However, the factor structure suggests a simplified model might better fit the empirical data. Future research might consider how assessing and responding to motivation could enhance learning.
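Cronbach's α, the internal-consistency statistic reported throughout these abstracts, is straightforward to compute from an item-response matrix. A minimal sketch with made-up responses (illustrative only, not MSLQ data):

```python
from statistics import variance  # sample variance (ddof = 1)

# Hypothetical item responses: rows = respondents, columns = items,
# on a 1-5 scale. These numbers are invented for illustration.
responses = [
    [5, 4, 5, 4],
    [4, 4, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]

def cronbach_alpha(rows):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = len(rows[0])
    items = list(zip(*rows))                      # transpose to per-item lists
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])  # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(responses), 3))  # → 0.952
```

Values near the 0.93 reported for the full MSLQ arise when items co-vary strongly, as in this synthetic matrix.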


Subject(s)
Internship and Residency/statistics & numerical data , Motivation , Probability Learning , Psychometrics/standards , Students, Medical/psychology , Education, Medical , Female , Humans , Knowledge , Learning , Male , Personal Satisfaction , Psychometrics/methods , Reproducibility of Results , Surveys and Questionnaires
14.
Acad Med ; 86(6): 737-41, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21512373

ABSTRACT

PURPOSE: Residents' reflections on quality improvement (QI) opportunities are poorly understood. The authors used the Mayo Evaluation of Reflection on Improvement Tool (MERIT) to measure residents' reflection scores across three years and to determine associations between reflection scores and resident and adverse patient event characteristics. METHOD: From 2006 to 2009, 48 Mayo Clinic internal medicine residents completed biannual reflections on adverse events and classified event severity and preventability. Faculty assessed residents' reflections using MERIT, which contains 18 Likert-scaled items and measures three factors: personal reflection, systems reflection, and event merit. ANOVA was used to identify changes in MERIT scores across three years of training and among factors, paired t tests were used to identify differences between MERIT factor scores, and generalized estimating equations were used to examine associations between MERIT scores and resident and adverse event characteristics. RESULTS: The residents completed 240 reflections. MERIT reflection scores were stable over time. Individual factor scores differed significantly (P < .0001), with event merit being the highest and systems reflection the lowest. Event preventability was significantly associated with MERIT factor scores and overall scores (β = 0.415; CI = 0.186-0.643; P = .0004). No significant associations between MERIT scores and resident characteristics or event severity were identified. CONCLUSIONS: Residents' reflections on adverse events remained constant over time, were lowest for systems factors, and were associated with adverse event preventability. Future research should explore learners' emphasis on systems aspects of QI and the relationship between QI and event preventability.


Subject(s)
Educational Measurement/methods , Internal Medicine/education , Internship and Residency , Quality Improvement , Risk Management , Self-Assessment , Factor Analysis, Statistical , Humans , Longitudinal Studies , Minnesota , Reproducibility of Results
15.
J Gen Intern Med ; 26(7): 759-64, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21369769

ABSTRACT

BACKGROUND: Individual faculty assessments of resident competency are complicated by inconsistent application of standards, lack of reliability, and the "halo" effect. OBJECTIVE: We determined whether adding faculty group assessments of residents in an ambulatory clinic to individual faculty-on-resident assessments improves reliability and reduces halo effects. DESIGN: This prospective, longitudinal study was performed in the outpatient continuity clinics of a large internal medicine residency program. MAIN MEASURES: Individual faculty-on-resident and group faculty-on-resident assessment scores were used for comparison. KEY RESULTS: Overall mean scores were significantly higher for group than for individual assessments (3.92 ± 0.51 vs. 3.83 ± 0.38, p = 0.0001). Overall inter-rater reliability increased when combining group and individual assessments compared to individual assessments alone (intraclass correlation coefficient, 95% CI = 0.828, 0.785-0.866 vs. 0.749, 0.686-0.804). Inter-item correlations were lower for group (0.49) than for individual (0.68) assessments. CONCLUSIONS: This study demonstrates improved inter-rater reliability and reduced range restriction (halo effect) in resident assessment across multiple performance domains by adding the group assessment method to traditional individual faculty-on-resident assessment. This feasible model could help graduate medical education programs achieve more reliable and discriminating resident assessments.
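The inter-rater reliability figures above are intraclass correlation coefficients. One common variant, the one-way random-effects ICC(1,1), can be sketched from its ANOVA mean squares; the ratings below are hypothetical, and the study's exact ICC model is not specified in this abstract:

```python
from statistics import mean

# Hypothetical ratings: rows = residents (targets), columns = raters,
# on a 1-5 scale. Invented for illustration only.
ratings = [
    [4, 4, 5],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 1],
]

def icc_oneway(rows):
    # ICC(1,1) = (MSB - MSW) / (MSB + (k-1) * MSW), balanced design.
    n, k = len(rows), len(rows[0])
    grand = mean(v for r in rows for v in r)
    # Between-target mean square: spread of row means around the grand mean.
    ms_between = k * sum((mean(r) - grand) ** 2 for r in rows) / (n - 1)
    # Within-target mean square: disagreement among raters on the same target.
    ms_within = sum((v - mean(r)) ** 2 for r in rows for v in r) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(round(icc_oneway(ratings), 3))  # → 0.873
```

Values in the study's reported range (roughly 0.75-0.83) indicate that most score variance comes from differences between residents rather than disagreement between raters.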


Subject(s)
Clinical Competence/standards , Education, Medical, Graduate/methods , Educational Measurement , Internal Medicine/education , Internship and Residency/standards , Peer Group , Analysis of Variance , Education, Medical, Graduate/standards , Humans , Longitudinal Studies , Prospective Studies , Reproducibility of Results
16.
Med Educ ; 45(2): 149-54, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21166692

ABSTRACT

OBJECTIVES: Transformative learning theory supports the idea that reflection on quality improvement (QI) opportunities and the ability to develop successful QI projects may be fundamentally linked. We used validated methods to explore associations between resident doctors' reflections on QI opportunities and the quality of their QI project proposals. METHODS: Eighty-six residents completed written reflections on practice improvement opportunities and developed QI proposals. Two faculty members assessed residents' reflections using the 18-item Mayo Evaluation of Reflection on Improvement Tool (MERIT), and assessed residents' QI proposals using the seven-item Quality Improvement Project Assessment Tool (QIPAT-7). Both instruments have been validated in previous work. Associations between MERIT and QIPAT-7 scores were determined. Internal consistency reliabilities of QIPAT-7 and MERIT scores were calculated. RESULTS: There were no significant associations between MERIT overall and domain scores and QIPAT-7 overall and item scores. The internal consistency of the MERIT and QIPAT-7 item groups was acceptable (Cronbach's α 0.76-0.94). CONCLUSIONS: The lack of association between MERIT and QIPAT-7 scores indicates a distinction between resident doctors' skills at reflecting on QI opportunities and their abilities to develop QI projects. These findings suggest that practice-based reflection and QI project development are separate constructs, and that skilful reflection may not predict the ability to design meaningful QI initiatives. Future QI curricula should consider teaching and assessing QI reflection and project development as distinct components.


Subject(s)
Clinical Competence/standards , Internship and Residency/standards , Quality Improvement , Thinking , Cross-Sectional Studies , Humans , Minnesota , Organizational Innovation
17.
Med Educ ; 44(3): 248-55, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20444055

ABSTRACT

OBJECTIVES: Resident reflection on the clinical learning environment is prerequisite to identifying quality improvement (QI) opportunities and demonstrating competence in practice-based learning. However, residents' abilities to reflect on QI opportunities are unknown. Therefore, we developed and determined the validity of the Mayo Evaluation of Reflection on Improvement Tool (MERIT) for assessing resident reflection on QI opportunities. METHODS: The content of MERIT, which consists of 18 items structured on 4-point scales, was based on existing literature and input from national experts. Using MERIT, six faculty members rated 50 resident reflections. Factor analysis was used to examine the dimensionality of MERIT instrument scores. Inter-rater and internal consistency reliabilities were calculated. RESULTS: Factor analysis revealed three factors (eigenvalue; number of items): Reflection on Personal Characteristics of QI (8.5; 7); Reflection on System Characteristics of QI (1.9; 6); and Problem of Merit (1.5; 5). Inter-rater reliability was very good (intraclass correlation coefficient range: 0.73-0.89). Internal consistency reliability was excellent (Cronbach's alpha 0.93 overall and 0.83-0.91 for factors). Item mean scores were highest for Problem of Merit (3.29) and lowest for Reflection on System Characteristics of QI (1.99). CONCLUSIONS: Validity evidence supports MERIT as a meaningful measure of resident reflection on QI opportunities. Our findings suggest that dimensions of resident reflection on QI opportunities may include personal, system and Problem of Merit factors. Additionally, residents may be more effective at reflecting on 'problems of merit' than personal and systems factors.
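Factor counts like the three reported above are conventionally chosen by retaining factors whose eigenvalues exceed 1 (the Kaiser criterion). A sketch with simulated two-factor data (invented for illustration, not MERIT responses):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate 50 respondents on 6 items forming two correlated clusters
# (items 1-3 load on one latent factor, items 4-6 on another).
base1 = rng.normal(size=(50, 1))
base2 = rng.normal(size=(50, 1))
scores = np.hstack([
    base1 + 0.3 * rng.normal(size=(50, 3)),  # items loading on factor 1
    base2 + 0.3 * rng.normal(size=(50, 3)),  # items loading on factor 2
])

corr = np.corrcoef(scores, rowvar=False)      # 6x6 item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
n_factors = int((eigenvalues > 1).sum())      # Kaiser criterion: keep λ > 1
print(n_factors)
```

With this strongly clustered synthetic data the criterion recovers two factors; real instruments like MERIT also weigh interpretability and scree plots, not the eigenvalue cutoff alone.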


Subject(s)
Clinical Competence/standards , Educational Measurement/methods , Internship and Residency , Factor Analysis, Statistical , Humans , Internship and Residency/methods , Internship and Residency/standards , Quality Control , Reproducibility of Results
18.
Jt Comm J Qual Patient Saf ; 35(10): 497-501, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19886088

ABSTRACT

BACKGROUND: Appropriate and timely communication of test results is an important element of high-quality health care. Patients' preferences regarding and satisfaction with test result notification methods in a primary care practice were evaluated. METHODS: We surveyed 1,458 consecutive patients for whom routine blood tests were performed in the primary care internal medicine division at the Mayo Clinic Rochester (Minnesota) between January and March 2006. RESULTS: Among 888 respondents, test result notification occurred by telephone call (43%), return visit (35%), letter (3%), e-mail (0.1%), or a combination of methods (19%). Most (60%) telephone calls were handled by nurses. Patient preferences for notification method were telephone call (55%), return visit (20%), letter (19%), e-mail (5%), and automated answering mechanism (1%). Among patients who preferred a telephone call, 67% wanted a call from a physician or nurse practitioner. Overall, 44% of patients received results by their preferred method; patients who did not were more likely to be dissatisfied with the communication method than those who did (10% vs. 5%, p = 0.01). A majority of patients were at least somewhat anxious to learn their test results, and patients greatly valued timeliness in test-result notification. DISCUSSION: The results describe primary care patients' preferences for communication from their providers. Disparities exist between current practice and patient preferences in this important care delivery process. A telephone call from a physician or nurse practitioner was used to deliver test results for fewer than half of the patients who preferred to receive their results by this method. Future work should explore reimbursement of patient-preferred options and assess ways to improve resource-conscious test result communication methods.


Subject(s)
Communication , Diagnostic Tests, Routine/psychology , Patient Access to Records , Patient Preference , Primary Health Care/methods , Adult , Electronic Mail , Female , Health Care Surveys , Humans , Male , Middle Aged , Office Visits , Postal Service , Telephone
19.
Acad Med ; 84(10): 1419-25, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19881436

ABSTRACT

PURPOSE: The comparative efficacy of case-based (CB) and non-CB self-assessment questions in Web-based instruction is unknown. The authors sought to compare CB and non-CB questions. METHOD: The authors conducted a randomized crossover trial in the continuity clinics of two academic residency programs. Four Web-based modules on ambulatory medicine were developed in both CB (periodic questions based on patient scenarios) and non-CB (questions matched for content but lacking patient scenarios) formats. Participants completed two modules in each format (sequence randomly assigned). Participants also completed a pretest of applied knowledge for two modules (randomly assigned). RESULTS: For the 130 participating internal medicine and family medicine residents, knowledge scores improved significantly (P < .0001) from pretest (mean: 53.5; SE: 1.1) to posttest (75.1; SE: 0.7). Posttest knowledge scores were similar in CB (75.0; SE: 0.1) and non-CB formats (74.7; SE: 1.1); the 95% CI was -1.6, 2.2 (P = .76). A nearly significant (P = .062) interaction between format and the presence or absence of pretest suggested a differential effect of question format, depending on pretest. Overall, those taking pretests had higher posttest knowledge scores (76.7; SE: 1.1) than did those not taking pretests (73.0; SE: 1.1; 95% CI: 1.7, 5.6; P = .0003). Learners preferred the CB format. Time required was similar (CB: 42.5; SE: 1.8 minutes, non-CB: 40.9; SE: 1.8 minutes; P = .22). CONCLUSIONS: Our findings suggest that, among postgraduate physicians, CB and non-CB questions have similar effects on knowledge scores, but learners prefer CB questions. Pretests influence posttest scores.


Subject(s)
Education, Medical, Continuing/methods , Family Practice/education , Internal Medicine/education , Teaching/methods , Cross-Over Studies , Humans , Internet
20.
Acad Med ; 84(11): 1505-9, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19858805

ABSTRACT

PURPOSE: The Instructional Materials Motivation Survey (IMMS) purports to assess the motivational characteristics of instructional materials or courses using the Attention, Relevance, Confidence, and Satisfaction (ARCS) model of motivation. The IMMS has received little use or study in medical education. The authors sought to evaluate the validity of IMMS scores and compare scores between standard and adaptive Web-based learning modules. METHOD: During the 2005-2006 academic year, 124 internal medicine residents at the Mayo School of Graduate Medical Education (Rochester, Minnesota) were asked to complete the IMMS for two Web-based learning modules. Participants were randomly assigned to use one module that adapted to their prior knowledge of the topic, and one module using a nonadaptive design. IMMS internal structure was evaluated using Cronbach alpha and interdimension score correlations. Relations to other variables were explored through correlation with global module satisfaction and regression with knowledge scores. RESULTS: Of the 124 eligible participants, 79 (64%) completed the IMMS at least once. Cronbach alpha was ≥0.75 for scores from all IMMS dimensions. Interdimension score correlations ranged from 0.40 to 0.80, whereas correlations between IMMS scores and global satisfaction ratings ranged from 0.40 to 0.63 (P<.001). Knowledge scores were associated with Attention and Relevance subscores (P=.033 and .01, respectively) but not with other IMMS dimensions (P≥.07). IMMS scores were similar between module designs (on a five-point scale, differences ranged from 0.0 to 0.15, P≥.33). CONCLUSIONS: These limited data generally support the validity of IMMS scores. Adaptive and standard Web-based instructional designs were similarly motivating. Cautious use and further study of the IMMS are warranted.


Subject(s)
Attitude , Curriculum , Education, Medical , Internal Medicine/education , Internet , Motivation , Personal Satisfaction , Teaching , Data Collection , Educational Measurement , Educational Status , Female , Humans , Male , Regression Analysis , Statistics as Topic