Results 1 - 20 of 139
1.
Teach Learn Med ; 36(1): 99-106, 2024.
Article in English | MEDLINE | ID: mdl-37266979

ABSTRACT

Issue: Efforts to improve medical education often focus on optimizing technical aspects of teaching and learning. However, without considering the connection between the pedagogical-curricular and the foundational, philosophically defined educational aims of medicine and medical education, critical system reform is unlikely. The transformation of medical education requires leaders uniquely prepared to view medicine and medical education critically as they are and as they ought to be, and who have the capacity to lead changes aimed at overcoming the identified gaps. This paper proposes a five-level topology to guide leaders in developing this capacity. Evidence: Without reference to a shared understanding of a larger, more profound philosophical vision of the ideal physician and of the educational process of "becoming" that physician, efforts to change medical education are likely to be incremental and insufficient rather than transformative. Such efforts may lead to frequent pedagogical-curricular reforms, shifting evaluation models, and paradigmatic conflicts in medical education systems across contexts. This paper describes a leadership program meant to develop transformational educational leaders. The leadership program is built on and teaches the five-level topology we describe here. The five levels are 1) Philosophy, 2) Philosophy of Education, 3) Theory of Practice, 4) Implementation, and 5) Evaluation. Implications: The leadership development program exemplifies how the topology can be implemented as a framework to foster transformation in medical education. The topology is captured in the metaphor of the Möbius strip, a continuous, unbroken object, which reflects the ways in which the five levels are inherently connected and inform one another. Medical education leadership requires deeper engagement with paradigmatic thought to transform the field for the future.


Subject(s)
Education, Medical , Physicians , Humans , Leadership , Learning
2.
Teach Learn Med ; 35(4): 436-441, 2023.
Article in English | MEDLINE | ID: mdl-35668557

ABSTRACT

Construct: The construct being assessed is readiness-for-residency of graduating medical students, as measured through two assessment frameworks. Background: Readiness-for-residency of near-graduate medical students should be but is not consistently assessed. To address this, the Association of American Medical Colleges (AAMC), in 2014, identified and described 13 core Entrustable Professional Activities (EPAs), which are tasks that all residents should be able to perform unsupervised upon entering residency. However, the AAMC did not initially provide measurement guidelines or propose standardized assessments. We designed Night-onCall (NOC), an immersive simulation for our near-graduating medical students to assess and address their readiness-for-residency, framed around tasks suggested by the AAMC's core EPAs. In adopting this EPA assessment framework, we began by building upon an established program of competency-based clinical skills assessments, repurposing competency-based checklists to measure components of the EPAs where possible, and designing new checklists, when necessary. This resulted in a blended suite of 14 checklists, which theoretically provide substantive assessment of all 13 core EPAs. In this paper, we describe the consensus-based mapping process conducted to ensure we understood the relationship between competency and EPA-based assessment lenses and could therefore report meaningful feedback on both to transitioning students in the NOC exercise. Approach: Between January-November 2017, five clinician and two non-clinician health professions educators at NYU Grossman School of Medicine conducted a rigorous consensus-based mapping process, which included each rater mapping each of the 310 NOC competency-based checklist items to lists of entrustable behaviors expected of learners according to the AAMC 13 core EPAs. Findings: All EPAs were captured to varying degrees by the 14 NOC checklists (overall Intraclass Correlation Coefficient (ICC) = 0.77). Consensus meetings resolved discrepancies and improved ICC values for three (EPA-9, EPA-10, EPA-12) of the four EPAs that initially showed poor reliability. Conclusions: Findings suggest that with some limitations (e.g., EPA-7 "form clinical questions/retrieve evidence") established competency-based assessments can be repurposed to measure readiness-for-residency through an EPA lens and both can be reported to learners and faculty.
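The consensus-mapping study above reports inter-rater agreement across its seven raters as an overall intraclass correlation coefficient (ICC = 0.77), but does not state which ICC form was used. As a hedged illustration only, the sketch below computes a two-way random-effects, single-rater ICC(2,1) (Shrout and Fleiss) on an invented item-by-rater matrix; the data, matrix dimensions, and rating scale are assumptions, not values from the study.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """Two-way random-effects, single-rater ICC(2,1) (Shrout & Fleiss, 1979).

    ratings: an (n_items x k_raters) matrix, e.g. each row one checklist
    item and each column one rater's score for how strongly that item maps
    onto a given EPA (hypothetical scale).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ss_rows = k * np.sum((row_means - grand) ** 2)   # between-item variation
    ss_cols = n * np.sum((col_means - grand) ** 2)   # between-rater variation
    ss_total = np.sum((ratings - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Invented example: 10 checklist items rated by 7 raters on a 0-2 scale.
rng = np.random.default_rng(0)
example = rng.integers(0, 3, size=(10, 7)).astype(float)
print(f"ICC(2,1) = {icc_2_1(example):.2f}")
```

If the study reported consensus-level rather than single-rater agreement, an average-rater form such as ICC(2,k) would be the appropriate substitute.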

3.
Med Teach ; 45(10): 1140-1147, 2023 10.
Article in English | MEDLINE | ID: mdl-36961759

ABSTRACT

PURPOSE: To describe patterns of clinical communication skills that inform curriculum enhancement and guide coaching of medical students. MATERIALS AND METHODS: Performance data from 1182 consenting third-year medical students in 9 cohorts (2011-2019), on a 17-item Clinical Communication Skills Assessment Tool (CCSAT) completed by trained Standardized Patients as part of an eight-case, high-stakes Comprehensive Clinical Skills Exam (CCSE), were analyzed using latent profile analysis (LPA). Assessment domains included: information gathering (6 items), relationship development (5 items), patient education (3 items), and organization/time management (3 items). LPA clustered learners with similar strengths/weaknesses into profiles based on item response patterns across cases. One-way analysis of variance (ANOVA) tested for significant differences by profile for CCSAT items. RESULTS: Student performance clustered into six profiles in three groups: high-performing (HP1 and HP2-Low Patient Education, 15.7%), average-performing (AP1 and AP2-Interrupters, 40.9%), and lower-performing profiles (LP1-Non-interrupters and LP2, 43.4%), with adequate model fit estimations and similar distribution in each cohort. We identified 3 CCSAT items that discriminated among learners' skill profiles. CONCLUSION: Clinical communication skill performance profiles provide nuanced, benchmarked guidance for curriculum improvement and tailoring of communication skills coaching.
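Latent profile analysis of this kind is usually run in dedicated software (e.g., Mplus or R packages); the study's actual models are not reproduced here. Purely as an illustrative analogue, the sketch below clusters invented item-level scores with a Gaussian mixture model (a close relative of LPA) and then runs the kind of one-way ANOVA used to compare profiles on individual CCSAT items. The sample size and six-profile solution follow the abstract; everything else (the simulated data, covariance structure, and item chosen) is assumed.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Hypothetical stand-in for CCSAT data: 1182 students x 17 items, each item
# treated as a proportion of "well done" ratings across the 8 cases.
scores = rng.beta(a=4, b=2, size=(1182, 17))

# Fit a 6-component Gaussian mixture as a rough analogue of the
# six-profile LPA solution reported in the abstract.
gmm = GaussianMixture(n_components=6, covariance_type="diag", random_state=0)
profiles = gmm.fit_predict(scores)

# One-way ANOVA on a single item across the six profiles, mirroring the
# per-item comparisons used to find discriminating CCSAT items.
groups = [scores[profiles == p, 0] for p in range(6)]
f_stat, p_val = stats.f_oneway(*groups)
print(f"Item 1: F = {f_stat:.1f}, p = {p_val:.3g}")
```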


Subject(s)
Education, Medical, Undergraduate , Students, Medical , Humans , Curriculum , Communication , Clinical Competence
4.
BMC Med Educ ; 22(1): 373, 2022 May 16.
Article in English | MEDLINE | ID: mdl-35578333

ABSTRACT

BACKGROUND: Publicly accessible information regarding imaging procedures is lacking, especially in non-English languages. Biomedical engineering students do not generally have opportunities to practice conveying scientific knowledge to the public. METHODS: As part of a Techniques and Clinical Usage of Medical Imaging Devices course, several biomedical engineering students chose, for extra credit, to create and edit Wikipedia articles in the local language (Hebrew). The goal of this activity was to serve the local community while improving students' abilities and self-perception in reading and reporting scientific knowledge. Following task completion, individual interviews were conducted with the students to assess the impact of the task on student personal development, sense of meaning, and their view of their role in educating the public. RESULTS: Most students considered the task meaningful and impactful on society. Additional academic credit was not perceived as the most important incentive for participating. CONCLUSIONS: Medical and other professional schools should seek to include tasks such as writing Wikipedia articles in their curricula. Educational assignments that integrate academic work, student identity development, and direct community benefit can have a long-term beneficial impact on learners and society.


Subject(s)
Curriculum , Writing , Humans , Students , Technology
5.
J Interprof Care ; 35(2): 193-199, 2021.
Article in English | MEDLINE | ID: mdl-32506976

ABSTRACT

Understanding how previous experiences with interprofessional education and collaboration inform health care provider perspectives is important for developing interprofessional interventions at the graduate level. The purpose of this study was to examine how previous work experiences of graduate level health professions students inform perspectives about interprofessional education and collaboration. Drawing from program evaluation data of two separate graduate level interprofessional education interventions based in primary care and home health care, we conducted a qualitative secondary data analysis of 75 interviews generated by focus groups and individual interviews with graduate students from 4 health professions cadres. Using directed content analysis, the team coded to capture descriptions of interprofessional education or collaboration generated from participants' previous work experiences. Coding revealed 173 discrete descriptions related to previous experiences of interprofessional education or collaboration. Three themes were identified from the analysis that informed participant perspectives: Previous educational experiences (including work-based training); previous work experiences; and organizational factors and interprofessional collaboration. Experiences varied little between professions except when aspects of professional training created unique circumstances. The study reveals important differences between graduate and undergraduate learners in health professions programs that can inform interprofessional education and collaboration intervention design.


Subject(s)
Interprofessional Relations , Students, Health Occupations , Cooperative Behavior , Education, Graduate , Health Occupations , Health Personnel , Humans , Patient Care Team
6.
J Gen Intern Med ; 35(8): 2258-2265, 2020 08.
Article in English | MEDLINE | ID: mdl-32096079

ABSTRACT

BACKGROUND: To ensure a next generation of female leaders in academia, we need to understand the challenges they face and the factors that enable fellowship-prepared women to thrive. We surveyed women graduates of the Robert Wood Johnson Clinical Scholars Program (CSP) from 1976 to 2011 regarding their experiences, insights, and advice to women entering the field. METHODS: We surveyed every CSP woman graduate through 2012 (n = 360) by email and post. The survey, 12 prompts requiring open-text responses, explored current work situation, personal definitions of success, job negotiations, career regrets, feelings about work, and advice for others. Four independent reviewers read overlapping subsets of the de-identified data, iteratively created coding categories, and defined and refined emergent themes. RESULTS: Of the cohort of 360, 108 (30%) responded. The mean age of respondents was 45 (range 32 to 65), 85% are partnered, and 87% have children (average number of children 2.15, range 1 to 5). We identified 11 major code categories and conducted a thematic analysis. Factors common to very satisfied respondents include personally meaningful work, schedule flexibility, spousal support, and collaborative team research. Managing professional-personal balance depended on career stage, clinical specialty, and children's age. Unique to women who completed the CSP prior to 1995 were descriptions of "atypical" paths, with career transitions motivated by discord between work and personal ambitions, and an emphasis on the importance of maintaining relevance and remaining open to opportunities in later life. CONCLUSIONS: Women CSP graduates who stayed in academic medicine are proud to have pursued meaningful work despite challenges and uncertain futures. They thrived by remaining flexible and managing change while remaining true to their values. We likely captured the voices of long-term survivors in academic medicine. Although the transferability of these findings is uncertain, these voices add to the national discussion about retaining clinical researchers and keeping women academics productive and engaged.


Subject(s)
Job Satisfaction , Personal Satisfaction , Career Choice , Child , Fellowships and Scholarships , Female , Happiness , Humans , Research Personnel , Surveys and Questionnaires
7.
BMC Med Educ ; 20(1): 199, 2020 Jun 19.
Article in English | MEDLINE | ID: mdl-32560652

ABSTRACT

BACKGROUND: Medical education research suffers from several methodological limitations, including too many single-institution, small-sample studies, limited access to quality data, and insufficient institutional support. Increasing calls for medical education outcome data and quality improvement research have highlighted a critical need for uniformly clean and easily accessible data. Research registries may fill this gap. In 2006, the Research on Medical Education Outcomes (ROMEO) unit of the Program for Medical Innovations and Research (PrMEIR) at New York University's (NYU) Robert I. Grossman School of Medicine established the Database for Research on Academic Medicine (DREAM). DREAM is a database of routinely collected, de-identified undergraduate (UME, medical school leading up to the Medical Doctor degree) and graduate medical education (GME, residency, also known as post-graduate education leading to eligibility for specialty board certification) outcomes data available, through application, to researchers. Learners are added to our database through annual consent sessions conducted at the start of educational training. Based on this experience, we describe our methods in creating and maintaining DREAM to serve as a guide for institutions looking to build a new medical education registry or scale up an existing one. RESULTS: At present, our UME and GME registries have consent rates of 90% (n = 1438/1598) and 76% (n = 1988/2627), respectively, with a combined rate of 81% (n = 3426/4225); 7% (n = 250/3426) of these learners completed both medical school and residency at our institution. DREAM has yielded a total of 61 individual studies conducted by medical education researchers and a total of 45 academic journal publications. CONCLUSION: We have built a community of practice through the building of DREAM and hope that, by persisting in this work, the full potential of this tool and community will be realized. While researchers with access to the registry have focused primarily on curricular/program evaluation, learner competency assessment, and measure validation, we hope to expand the output of the registry to include patient outcomes by linking learner educational and clinical performance across the UME-GME continuum and into independent practice. Future publications will reflect our efforts in reaching this goal and will highlight the long-term impact of our collaborative work.


Subject(s)
Biomedical Research , Education, Medical, Graduate , Education, Medical, Undergraduate , Evidence-Based Medicine , Program Development , Registries/standards , Humans
8.
J Med Libr Assoc ; 108(2): 219-228, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32256233

ABSTRACT

OBJECTIVE: Evidence-based medicine practices of medical students in clinical scenarios are not well understood. Optimal foraging theory (OFT) is one framework that could be useful in breaking apart information-seeking patterns to determine effectiveness and efficiency of different methods of information seeking. The aims of this study were to use OFT to determine the number and type of resources used in information seeking when medical students answer a clinical question, to describe common information-seeking patterns, and identify patterns associated with higher quality answers to a clinical question. METHODS: Medical students were observed via screen recordings while they sought evidence related to a clinical question and provided a written response for what they would do for that patient based on the evidence that they found. RESULTS: Half (51%) of study participants used only 1 source before answering the clinical question. While the participants were able to successfully and efficiently navigate point-of-care tools and search engines, searching PubMed was not favored, with only half (48%) of PubMed searches being successful. There were no associations between information-seeking patterns and the quality of answers to the clinical question. CONCLUSION: Clinically experienced medical students most frequently relied on point-of-care tools alone or in combination with PubMed to answer a clinical question. OFT can be used as a framework to understand the information-seeking practices of medical students in clinical scenarios. This has implications for both teaching and assessment of evidence-based medicine in medical students.


Subject(s)
Evidence-Based Medicine , Information Seeking Behavior , Students, Medical/psychology , Clinical Competence , Evidence-Based Medicine/methods , Health Literacy , Humans , Students, Medical/statistics & numerical data , Time Factors
9.
J Gen Intern Med ; 34(5): 773-777, 2019 05.
Article in English | MEDLINE | ID: mdl-30993628

ABSTRACT

BACKGROUND: Few programs train residents in recognizing and responding to distressed colleagues at risk for suicide. AIM: To assess interns' ability to identify a struggling colleague, describe resources, and recognize that physicians can and should help colleagues in trouble. SETTING: Residency programs at an academic medical center. PARTICIPANTS: One hundred forty-five interns. PROGRAM DESIGN: An OSCE case was designed to give interns practice and feedback on their skills in recognizing a colleague in distress and recommending the appropriate course of action. Embedded in a patient "sign-out" case, standardized health professionals (SHP) portrayed a resident with depressed mood and an underlying drinking problem. The SHP assessed intern skills in assessing symptoms and directing the resident to seek help. PROGRAM EVALUATION: Interns appreciated the opportunity to practice addressing this situation. Debriefing the case led to productive conversations between faculty and residents on available resources. Interns' skills require further development: while 60% of interns asked about their colleague's emotional state, only one-third screened for depression and just under half explored suicidal ideation. Only 32% directed the colleague to specific resources for his depression (higher among those that checked his emotional state, 54%, or screened for depression, 80%). DISCUSSION: This OSCE case identified varying intern skill levels for identifying and assessing a struggling colleague while also providing experiential learning and supporting a culture of addressing peer wellness.


Subject(s)
Attitude of Health Personnel , Internship and Residency , Adult , Depression/psychology , Education, Medical, Graduate/organization & administration , Female , Help-Seeking Behavior , Humans , Male
10.
BMC Health Serv Res ; 18(1): 47, 2018 01 29.
Article in English | MEDLINE | ID: mdl-29378584

ABSTRACT

BACKGROUND: Obesity is a worldwide epidemic, and its prevalence is even higher among Veterans in the United States than in the general population. Based on our prior research, primary care teams at a Veterans Affairs (VA) hospital do not feel well equipped to deliver effective weight management counseling and often lack sufficient time. Further, effective and intensive lifestyle-based weight management programs (e.g., the VA MOVE! program) are underutilized despite implementation of systematic screening and referral at all VA sites. The 5As behavior change model (Assess, Advise, Agree, Assist, Arrange) is endorsed by the United States Preventive Services Task Force for use in counseling patients about weight management in primary care and is reimbursed by Medicare. In this paper, we describe the iterative development of a technology-assisted intervention designed to provide primary care-based 5As counseling within Patient-Centered Medical Homes without overburdening providers and healthcare teams. METHODS: Thematic analyses of prior formative work (focus groups with patients [n = 54] and key informant interviews with staff [n = 25]) helped to create a technology-assisted, health coaching intervention called Goals for Eating and Moving (GEM). To further develop the intervention, we then conducted two rounds of testing with previous formative study participants (n = 5 for Round 1, n = 5 for Round 2). Each session included usability testing of prototypes of the online GEM tool, pilot testing of 5As counseling by a Health Coach, and a post-session open-ended interview. RESULTS: Three main themes emerged from usability data analyses: participants' emotional responses, tool language, and health literacy. Findings from both rounds of usability testing, pilot testing, and the open-ended interview data were used to finalize protocols for the full intervention in the clinic setting to be conducted with Version 3 of the GEM tool. CONCLUSIONS: The use of qualitative research methods and user-centered design approaches enabled timely detection of salient issues to make iterative improvements to the intervention. Future studies will determine whether this intervention can increase enrollment in intensive weight management programs and promote clinically meaningful weight loss in both Veterans and other patient populations and health systems.


Subject(s)
Directive Counseling/methods , Obesity/prevention & control , Patient Satisfaction/statistics & numerical data , Patient-Centered Care/methods , Primary Health Care , Veterans Health , Veterans , Female , Humans , Male , Middle Aged , Obesity/epidemiology , Outcome and Process Assessment, Health Care , Primary Health Care/organization & administration , Program Development , Program Evaluation , Qualitative Research , Risk Reduction Behavior , United States/epidemiology , User-Computer Interface , Veterans/psychology , Weight Loss
11.
Subst Abus ; 39(4): 476-483, 2018.
Article in English | MEDLINE | ID: mdl-29565782

ABSTRACT

BACKGROUND: We developed and implemented the Substance Abuse Research Education and Training (SARET) program for medical, dental, nursing, and social work students to address the dearth of health professionals pursuing research and careers in substance use disorders (SUD). SARET has 2 main components: (1) a novel online curriculum addressing core SUD research topics, to reach a large number of students; and (2) a mentored summer research experience for in-depth exposure. METHODS: Modules were integrated into the curricula of the lead institution and of 5 external schools. We assessed the number of Web modules completed and their effect on students' interest in SUD research. We also assessed the impact of the mentorship experience on participants' attitudes and early career trajectories, including current involvement in SUD research. RESULTS: Since 2008, over 24,000 modules have been completed by approximately 9700 individuals. In addition to integration of the modules into curricula at the lead institution, all 5 health-professional partner schools integrated at least 1 module, and approximately 5500 modules were completed by individuals outside the lead institution. We found an increase in interest in SUD research after completion of the modules for students in all 4 disciplines. From 2008 to 2015, 76 students completed summer mentorships and 8 students completed year-long mentorships; 13 published in SUD-related journals, 18 presented at national conferences, and 3 are actively engaged in SUD-related research. Mentorship participants reported a positive influence on their attitudes towards SUD-related clinical care, research, and interprofessional collaboration, leading in some cases to changes in career plans. CONCLUSIONS: A modular curriculum that stimulates clinical and research interest in SUD can be successfully integrated into medical, dental, nursing, and social work curricula. The SARET program of mentored research participation fostered early research successes and influenced the career choices of some participants. Longer-term follow-up will enable us to assess the more distal career outcomes of program participants.


Subject(s)
Behavioral Research/education , Career Choice , Education/statistics & numerical data , Health Occupations/education , Program Evaluation , Substance-Related Disorders , Behavioral Research/trends , Curriculum , Education/methods , Education/trends , Health Knowledge, Attitudes, Practice , Health Occupations/statistics & numerical data , Humans , Internet , Mentoring
12.
Med Teach ; 39(3): 255-261, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28033728

ABSTRACT

AIM: To assess the feasibility and utility of measuring baseline professional identity formation (PIF) in a theory-based professionalism curriculum for early medical students. METHODS: All 132 entering students completed the professional identity essay (PIE) and the defining issues test (DIT2). Students received score reports with individualized narrative feedback and wrote a structured reflection after a large-group session in which the PIF construct was reviewed. Analysis of PIEs resulted in assignment of a full or transitional PIF stage (1-5). The DIT2 score reflects the proportion of the time students used universal ethical principles to justify a response to 6 moral dilemma cases. Students' reflections were content analyzed. RESULTS: PIF scores were distributed across stage 2/3, stage 3, stage 3/4, and stage 4. No student scores were in stages 1, 2, 4/5, or 5. The mean DIT2 score was 53% (range 9.7-76.5%); the correlation between PIF stage and DIT score was ρ = 0.18 (p = 0.03). Students who took an analytic approach to the data and demonstrated both awareness that they are novices and anticipation of continued PIF tended to respond more positively to the feedback. CONCLUSIONS: These PIF scores were distributed similarly to those of novice students in other professions. Developmental-theory-based PIF and moral reasoning measures are related. Students reflected on these measures in meaningful ways, suggesting the utility of measuring PIF scores in medical education.
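The correlation reported above (ρ = 0.18, p = 0.03) is a Spearman rank correlation between an ordinal PIF stage and the DIT2 score. As a rough sketch only, the example below codes transitional stages as half-steps (e.g., stage 2/3 as 2.5), generates invented scores for 132 students, and computes the correlation with scipy; the numeric coding of stages and all data values are assumptions, not the study's.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)

# Hypothetical coding: transitional PIF stages mapped to half-steps
# (stage 2/3 -> 2.5, stage 3 -> 3.0, stage 3/4 -> 3.5, stage 4 -> 4.0).
# The actual coding scheme is not stated in the abstract.
stage_levels = np.array([2.5, 3.0, 3.5, 4.0])
pif_stage = rng.choice(stage_levels, size=132, p=[0.2, 0.4, 0.3, 0.1])

# Hypothetical DIT2 scores (percent principled reasoning, 0-100).
dit2 = np.clip(40 + 10 * (pif_stage - 3) + rng.normal(0, 15, 132), 0, 100)

rho, p = spearmanr(pif_stage, dit2)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```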


Subject(s)
Education, Medical, Undergraduate , Professionalism , Social Identification , Students, Medical/psychology , Adolescent , Adult , Feasibility Studies , Female , Humans , Male , Schools, Medical , Young Adult
13.
Ann Surg ; 264(3): 501-7, 2016 09.
Article in English | MEDLINE | ID: mdl-27433908

ABSTRACT

OBJECTIVES: Professionalism education is a vital component of surgical training. This research attempts to determine whether an annual, year-long professionalism curriculum in a large surgical residency can effectively change professionalism attitudes. SUMMARY OF BACKGROUND DATA: The ACGME mandated 6 competencies in 2003. The competencies of Professionalism and Interpersonal/Professional Communication Skills had never been formally addressed in surgical resident education in the past. METHODS: A professionalism curriculum was developed focusing on specific resident professionalism challenges: admitting mistakes, effective communication with colleagues at all levels, delivering the news of an unexpected death, interdisciplinary challenges of working as a team, the cultural challenge of obtaining informed consent through an interpreter, and the stress of surgical practice on the surgeon and their family. These professionalism skills were then evaluated with a 6-station Objective Structured Clinical Examination (OSCE). Identical OSCE scenarios were administered to 2 cohorts of surgical residents: in 2007 (before instituting the professionalism curriculum in 2008) and again in 2014. Surgical residents were rated by trained Standardized Patients according to a behaviorally anchored professionalism criteria checklist. RESULTS: An analysis of variance was conducted with overall OSCE professionalism scores (% well done) as the dependent variable and resident cohort (2007 vs 2014) as the grouping factor. The 2007 residents received a mean score of 38% of professionalism items "well done" (SD 9%) and the 2014 residents received a mean of 59% "well done" (SD 8%). This difference is significant (F = 49.01, P < .001). CONCLUSIONS: Professionalism education has improved surgical resident understanding, awareness, and practice of professionalism in a statistically significant manner from 2007 to 2014. This documented improvement in OSCE performance reflects the value of a professionalism curriculum in the care of the patients we seek to serve.
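For the pre/post comparison above, a one-way ANOVA with two cohorts is algebraically equivalent to an independent-samples t test. The sketch below simulates per-resident overall scores matching the reported cohort means and SDs (38%, SD 9 vs 59%, SD 8) and runs the same style of test with scipy; the cohort sizes are invented, since the abstract does not report them.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-resident overall OSCE professionalism scores
# (percent of checklist items rated "well done"), matching the
# reported cohort means/SDs. Cohort sizes are assumptions.
cohort_2007 = rng.normal(loc=38, scale=9, size=60)
cohort_2014 = rng.normal(loc=59, scale=8, size=60)

f_stat, p_val = stats.f_oneway(cohort_2007, cohort_2014)
print(f"F = {f_stat:.1f}, p = {p_val:.3g}")
# With exactly two cohorts, this one-way ANOVA is equivalent to an
# independent-samples t test (F equals t squared).
```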


Subject(s)
Curriculum , General Surgery/education , Internship and Residency , Professionalism/education , Analysis of Variance , New York
14.
J Gen Intern Med ; 31(8): 846-53, 2016 08.
Article in English | MEDLINE | ID: mdl-27121308

ABSTRACT

BACKGROUND: Interprofessional collaboration (IPC) is essential for quality care. Understanding residents' level of competence is a critical first step to designing targeted curricula and workplace learning activities. In this needs assessment, we measured residents' IPC competence using specifically designed Objective Structured Clinical Exam (OSCE) cases and surveyed residents regarding training needs. METHODS: We developed three cases to capture IPC competence in the context of physician-nurse collaboration. A trained actor played the role of the nurse (Standardized Nurse - SN). The Interprofessional Education Collaborative (IPEC) framework was used to create a ten-item behaviorally anchored IPC performance checklist (scored on a three-point scale: done, partially done, well done) measuring four generic domains: values/ethics; roles/responsibilities; interprofessional communication; and teamwork. Specific skills required for each scenario were also assessed, including teamwork communication (SBAR and CUS) and patient-care-focused tasks. In addition to evaluating IPC skills, the SN assessed communication, history-taking and physical exam skills. IPC scores were computed as percent of items rated well done in each domain (Cronbach's alpha > 0.77). Analyses include item frequencies, comparison of mean domain scores, correlation between IPC and other skills, and content analysis of SN comments and resident training needs. RESULTS: One hundred and seventy-eight residents (of 199 total) completed an IPC case and results are reported for the 162 who participated in our medical education research registry. IPC domain scores were: Roles/responsibilities mean = 37 % well done (SD 37 %); Values/ethics mean = 49 % (SD 40 %); Interprofessional communication mean = 27 % (SD 36 %); Teamwork mean = 47 % (SD 29 %). IPC was not significantly correlated with other core clinical skills. SNs' comments focused on respect and IPC as a distinct skill set. Residents described needs for greater clarification of roles and more workplace-based opportunities structured to support interprofessional education/learning. CONCLUSIONS: The IPC cases and competence checklist are a practical method for conducting needs assessments and evaluating IPC training/curriculum that provides rich and actionable data at both the individual and program levels.
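The checklist scoring described above (percent of items rated "well done" per domain, with internal consistency reported as Cronbach's alpha > 0.77) can be illustrated with a small sketch. The item-to-domain assignment, the simulated response pattern, and the variable names below are all invented; only the 0/1/2 rating scale, the ten-item checklist, and the 162-resident sample size come from the abstract.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total-score variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)

# Simulate correlated 0/1/2 checklist ratings for 162 residents x 10 items:
# a shared "IPC ability" term plus item-level noise, discretized into
# not done (0) / partially done (1) / well done (2).
ability = rng.normal(size=(162, 1))
raw = ability + rng.normal(scale=0.8, size=(162, 10))
cutpoints = np.quantile(raw, [1 / 3, 2 / 3])
items = pd.DataFrame(np.digitize(raw, cutpoints),
                     columns=[f"item_{i + 1}" for i in range(10)])

# Domain score as described in the abstract: percent of that domain's
# items rated "well done". Which items form the teamwork domain is assumed.
teamwork = items[["item_7", "item_8", "item_9", "item_10"]]
teamwork_pct_well_done = (teamwork == 2).mean(axis=1) * 100

print(f"Teamwork: mean {teamwork_pct_well_done.mean():.0f}% well done")
print(f"Cronbach's alpha (10 items): {cronbach_alpha(items):.2f}")
```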


Subject(s)
Clinical Competence/standards , Cooperative Behavior , Internship and Residency/standards , Interprofessional Relations , Nurses/standards , Physicians/standards , Adult , Female , Humans , Internship and Residency/methods , Male , Patient Care Team/standards
15.
BMC Med Inform Decis Mak ; 16(1): 128, 2016 10 05.
Article in English | MEDLINE | ID: mdl-27716279

ABSTRACT

BACKGROUND: Obesity disproportionately affects Latina women, but few targeted, technology-assisted interventions that incorporate tailored health information exist for this population. The Veterans Health Administration (VHA) uses an online weight management tool (MOVE!23) which is publicly available, but was not designed for use in non-VHA populations. METHODS: We conducted a qualitative study to determine how interactions between the tool and other contextual elements impacted task performance when the target Latina users interacted with MOVE!23. We sought to identify and classify specific facilitators and barriers that might inform design changes to the tool and its context of use, and in turn promote usability. Six English-speaking, adult Latinas were recruited from an inner city primary care clinic and a nursing program at a local university in the United States to engage in a "Think-Aloud" protocol while using MOVE!23. Sessions were recorded, transcribed, and coded to identify interactions between four factors that contribute to usability (Tool, Task, User, Context). RESULTS: Five themes influencing usability were identified: Technical Ability and Technology Preferences; Language Confusion and Ambiguity; Supportive Tool Design and Facilitator Guidance; Relevant Examples; and Personal Experience. Features of the tool, task, and other contextual factors failed to fully support participants at times, impeding task completion. Participants interacted with the tool more readily when its language was familiar and content was personally relevant. When faced with ambiguity and uncertainty, they relied on the tool's visual cues and examples, actively sought relevant personal experiences, and/or requested facilitator support. CONCLUSIONS: The ability of our participants to successfully use the tool was influenced by the interaction of individual characteristics with those of the tool and other contextual factors. We identified both tool-specific and context-related changes that could overcome barriers to the use of MOVE!23 among Latinas. Several general considerations for the design of eHealth tools are noted.


Subject(s)
Hispanic or Latino , Medical Informatics Applications , Obesity/therapy , Patient Acceptance of Health Care , Adult , Female , Humans , Middle Aged , Qualitative Research , United States , United States Department of Veterans Affairs , Young Adult
16.
Med Teach ; 38(8): 787-92, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27049798

ABSTRACT

Remediation in medical education, the process of facilitating corrections for physician trainees who are not on course to competence, predictably consumes significant institutional resources. Although remediation is a logical consequence of mandating, measuring, and reporting clinical competence, many program leaders continue to take an unstructured approach toward organizing effective, efficient plans for struggling trainees, almost all of whom will become practicing physicians. The following 12 tips derive from a decade of remediation experience at each of the authors' three institutions. They are informed by the input of a group of 34 interdisciplinary North American experts assembled to contribute to two books on the subject. We intend this summary to guide program leaders to build better remediation systems and to emphasize that developing such systems is an important step toward enabling the transition from time-based to competency-based medical education.


Subject(s)
Competency-Based Education , Education, Medical , Program Development , Students, Medical , Clinical Competence , Guidelines as Topic
17.
Med Teach ; 38(9): 886-96, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26652913

ABSTRACT

AIM: We sought to investigate the number of US medical schools utilizing portfolios, the format of portfolios, information technology (IT) innovations, purpose of portfolios and their ability to engage faculty and students. METHODS: A 21-question survey regarding portfolios was sent to the 141 LCME-accredited, US medical schools. The response rate was 50% (71/141); 47% of respondents (33/71) reported that their medical school used portfolios in some form. Of those, 7% reported the use of paper-based portfolios and 76% use electronic portfolios. Forty-five percent reported portfolio use for formative evaluation only; 48% for both formative and summative evaluation, and 3% for summative evaluation alone. RESULTS: Seventy-two percent developed a longitudinal, competency-based portfolio. The most common feature of portfolios was reflective writing (79%). Seventy-three percent allow access to the portfolio off-campus, 58% allow usage of tablets and mobile devices, and 9% involve social media within the portfolio. Eighty percent and 69% agreed that the portfolio engaged students and faculty, respectively. Ninety-seven percent reported that the portfolios used at their institution have room for improvement. CONCLUSION: While there is significant variation in the purpose and structure of portfolios in the medical schools surveyed, most schools using portfolios reported a high level of engagement with students and faculty.


Subject(s)
Accreditation , Education, Medical, Undergraduate , Formative Feedback , Schools, Medical , Writing , Clinical Competence , Surveys and Questionnaires , United States
18.
J Gen Intern Med ; 30(9): 1363-8, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26173523

ABSTRACT

We have previously proposed that by identifying a set of Educationally Sensitive Patient Outcomes (ESPOs), medical education outcomes research becomes more feasible and likely to provide meaningful guidance for medical education policy and practice. ESPOs are proximal outcomes that are sensitive to provider education, measurable, and linked to more distal health outcomes. Our previous model included Patient Activation and Clinical Microsystem Activation as ESPOs. In this paper, we discuss how Health Literacy, defined as "the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions," is another important ESPO. Between one-third and one-half of all US adults have limited health literacy skills. Providers can be trained to adopt a "universal precautions approach" to addressing patient health literacy, through the acquisition of specific skills (e.g., teachback, "chunking" information, use of plain language written materials) and by learning how to take action to improve the "health literacy environment." While there are several ways to measure health literacy, identifying which measurement tools are most sensitive to provider education is important, but challenging and complex. Further research is needed to test this model and identify additional ESPOs.


Subject(s)
Education, Medical , Health Literacy , Models, Educational , Outcome Assessment, Health Care , Patient-Centered Care , Competency-Based Education , Humans , Quality of Health Care
19.
BMC Fam Pract ; 16: 167, 2015 Nov 14.
Article in English | MEDLINE | ID: mdl-26572125

ABSTRACT

BACKGROUND: Obesity is highly prevalent among Veterans. In the United States, the Veterans Health Administration (VHA) offers a comprehensive weight management program called MOVE!. Yet fewer than 10% of eligible patients ever attend one MOVE! visit. The VHA has a patient-centered medical home (PCMH) model of primary care (PC) called Patient-Aligned Care Teams (PACT) at all Veterans Affairs (VA) Medical Centers. PACT teamlets conduct obesity screening, weight management counseling, and referral to MOVE!. As part of a needs assessment to improve delivery of weight management services, the purpose of this study was to assess PACT teamlet and MOVE! staff: 1) current attitudes and perceptions regarding obesity care; 2) obesity-related counseling practices; 3) experiences with the MOVE! program; and 4) targets for interventions to improve implementation of obesity care in the PC setting. METHODS: We recruited 25 PACT teamlet members from a single VA study site: 11 PC physicians, 5 registered nurses, 5 licensed practical nurses, 1 clerical assistant, and 3 MOVE! staff (2 dietitians, 1 psychologist), for individual interviews using a combination of convenience and snowball sampling. Audio-recorded interviews were professionally transcribed and iteratively coded by two independent reviewers. The analytic process was guided by discourse analysis in order to discover how the participants perceived and provided weight management care and what specific attitudes affected their practices, all as bounded within the organization. RESULTS: Emerging themes included: 1) role perceptions, 2) anticipated outcomes of weight management counseling and programs, and 3) communication and information dissemination. Perceived role among PCPs was influenced by training, whereas personal experience with their own weight management impacted role perception among LPNs/RNs. Attitudes about whether or not they could impact patients' weight outcomes via counseling or referral to MOVE! varied. System-level communication about VHA priorities through electronic health records and time allocation influenced teams to prioritize referral to MOVE! over weight management counseling. CONCLUSION: We found a diversity of attitudes and practices within PACT and identified factors that can enhance the MOVE! program and inform interventions to improve weight management within primary care. Although findings are site-specific, many are supported in the literature and applicable to other VA and non-VA sites with PCMH models of care.


Subject(s)
Disease Management , Overweight/rehabilitation , Patient-Centered Care/methods , Primary Health Care/methods , Qualitative Research , Quality Assurance, Health Care , Veterans Health , Adult , Female , Humans , Male , Middle Aged , Patient Care Team/standards , Primary Health Care/organization & administration , United States , United States Department of Veterans Affairs , Veterans
20.
Teach Learn Med ; 27(4): 366-9, 2015.
Article in English | MEDLINE | ID: mdl-26507993

ABSTRACT

UNLABELLED: SGEA 2015 CONFERENCE ABSTRACT (EDITED). Evaluating Interprofessional Teamwork During a Large-Scale Simulation. Courtney West, Karen Landry, Anna Graham, and Lori Graham. CONSTRUCT: This study investigated the multidimensional measurement of interprofessional (IPE) teamwork as part of large-scale simulation training. BACKGROUND: Healthcare team function has a direct impact on patient safety and quality of care. However, IPE team training has not been the norm. Recognizing the importance of developing team-based collaborative care, our College of Nursing implemented an IPE simulation activity called Disaster Day and invited other professions to participate. The exercise consists of two sessions: one in the morning and another in the afternoon. The disaster scenario is announced just prior to each session, which consists of team building, a 90-minute simulation, and debriefing. Approximately 300 Nursing, Medicine, Pharmacy, Emergency Medical Technician, and Radiology students and over 500 standardized and volunteer patients participated in the Disaster Day event. To improve student learning outcomes, we created 3 competency-based instruments to evaluate collaborative practice in multidimensional fashion during this exercise. APPROACH: A 20-item IPE Team Observation Instrument designed to assess interprofessional teams' attainment of Interprofessional Education Collaborative (IPEC) competencies was completed by 20 faculty and staff observing the Disaster Day simulation. One hundred sixty-six standardized patients completed a 10-item Standardized Patient IPE Team Evaluation Instrument developed from the IPEC competencies and adapted items from the 2014 Henry et al. PIVOT Questionnaire. This instrument assessed the standardized or volunteer patient's perception of the team's collaborative performance. A 29-item IPE Team's Perception of Collaborative Care Questionnaire, also created from the IPEC competencies and divided into 5 categories of Values/Ethics, Roles and Responsibilities, Communication, Teamwork, and Self-Evaluation, was completed by 188 students, including 99 from Nursing, 43 from Medicine, 6 from Pharmacy, and 40 participants who belonged to more than one component, were students at another institution, or did not indicate their institution. The team instrument was designed to assess each team member's perception of how well the team and they themselves met the competencies. Five of the items on the team perceptions questionnaire mirrored items on the standardized patient evaluation: demonstrated leadership practices that led to effective teamwork, discussed care and decisions about that care with the patient, described roles and responsibilities clearly, worked well together to coordinate care, and good/effective communication. RESULTS: Internal consistency reliability of the IPE Team Observation Instrument was 0.80. In 18 of the 20 items, more than 50% of observers indicated the item was demonstrated. Of those, 6 of the items were observed by 50% to 75% of the observers, and the remaining 12 were observed by more than 80% of the observers. Internal consistency reliability of the IPE Team's Perception of Collaborative Care Instrument was 0.95. The mean response score, from 1 (strongly disagree) to 4 (strongly agree), was calculated for each section of the instrument. The overall mean score was 3.57 (SD = .11). Internal consistency reliability of the Standardized Patient IPE Team Evaluation Instrument was 0.87. The overall mean score was 3.28 (SD = .17).
The ratings for the 5 items shared by the standardized patient and team perception instruments were compared using independent sample t tests. Statistically significant differences (p < .05) were present in each case, with the students rating themselves higher on average than the standardized patients did (mean differences between 0.2 and 0.6 on a scale of 1-4). CONCLUSIONS: Multidimensional, competency-based instruments appear to provide a robust view of IPE teamwork; however, challenges remain. Due to the large scale of the simulation exercise, observation-based assessment did not function as well as self- and standardized patient-based assessment. To promote greater variation in observer assessments during future Disaster Day simulations, we plan to adjust the rating scale from "not observed," "observed," and "not applicable" to a 4-point scale and reexamine interrater reliability.
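The final comparison above (students rating themselves higher than standardized patients on the five shared items, with mean differences of 0.2 to 0.6 on the 1-4 scale) used independent-sample t tests. The sketch below reproduces that style of test with scipy on invented ratings for one shared item; the sample sizes follow the abstract (188 students, 166 standardized patients), but the rating values themselves are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical ratings on one shared item ("worked well together to
# coordinate care"), on the 1 (strongly disagree) to 4 (strongly agree)
# scale described in the abstract. Sample sizes follow the abstract;
# the rating distributions are invented.
student_self = np.clip(rng.normal(3.6, 0.4, 188), 1, 4)
standardized_pt = np.clip(rng.normal(3.3, 0.5, 166), 1, 4)

t_stat, p_val = stats.ttest_ind(student_self, standardized_pt)
print(f"mean difference = {student_self.mean() - standardized_pt.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_val:.3g}")
```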


Subject(s)
Education, Medical/methods , Interdisciplinary Communication , Competency-Based Education/organization & administration , Humans , Program Evaluation