1.
Gerontol Geriatr Educ; 44(3): 354-363, 2023 Jul 03.
Article En | MEDLINE | ID: mdl-35377832

As federal partners, the Veterans Health Administration (VA) and the Indian Health Service (IHS) agreed to share resources, such as education. The VA Geriatric Scholars Program, a workforce development program, provides one of its training programs on team-based primary care of elders to clinicians working in IHS and Tribal Health Programs. The practical impact of that training is described. A mixed methods approach was applied to the course's evaluation survey at five clinics in the Northwestern Plains, Southwest, Pacific Coast, and Alaska. Quantitative approaches assessed participants' self-reported intention to improve recognition and assessment of common geriatric syndromes. A qualitative approach applied to open-ended text responses revealed intentions to improve team-based care. Among the 51 respondents in our sample, we found significant improvements in self-reported ability to recognize previously unfamiliar potential risks to elders' health and safety, t(49) = 8.0233, p < .001, as well as increased comfort with conducting geriatric assessments and increased confidence in interprofessional team-based communication. Reported improvements to team-based care included enhanced clinical skills, changes to organizational factors, and the need to train additional employees. This evaluation demonstrates the value of sharing resources among federal partners and its value for participants in IHS and Tribal Health Programs.
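
A minimal sketch of the paired pre/post comparison reported above, using scipy; the ratings below are simulated placeholders, not the study's data.

```python
# Paired-samples t-test on hypothetical pre/post self-ratings (1-5 scale)
# for 50 participants; cf. the t(49) = 8.0233, p < .001 result above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.integers(1, 4, size=50)                        # baseline ratings
post = np.clip(pre + rng.integers(0, 3, size=50), 1, 5)  # post-course ratings

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t({len(pre) - 1}) = {t_stat:.2f}, p = {p_value:.4g}")
```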


Geriatrics , United States Indian Health Service , United States , Humans , Aged , Geriatrics/education , Clinical Competence , Surveys and Questionnaires , Primary Health Care/methods
2.
Appl Clin Inform; 11(4): 528-534, 2020 Aug.
Article En | MEDLINE | ID: mdl-32785904

BACKGROUND: With the increased use of dashboard reporting systems by clinical users to monitor and track patient panels, developers must ensure that the information displays they produce are accurate and intuitive. When evaluating the usability of a clinical dashboard among potential end users, developers often rely on methods such as questionnaires as opposed to other, more time-intensive strategies that incorporate direct observation. OBJECTIVES: Prior to release of the potentially inappropriate medication (PIM) clinical dashboard, designed to facilitate completion of a quality improvement project by clinician scholars enrolled in the Veterans Affairs (VA) workforce development Geriatric Scholars Program (GSP), we evaluated the usability of the system. This article describes the process of usability testing a dashboard reporting system with clinicians using direct observation and think-aloud moderating techniques. METHODS: We developed a structured interview protocol that combines virtual observation, think-aloud moderating techniques, and retrospective questioning about the overall user experience, including use of the System Usability Scale (SUS). Thematic analysis was used to analyze field notes from the interviews of three GSP alumni. RESULTS: Our structured approach to usability testing identified specific functional problems with the dashboard reporting system that were missed by results from the SUS. Usability testing led to overall improvements in the intuitive use of the system, increased data transparency, and clarification of the dashboard's purpose. CONCLUSION: Reliance solely on questionnaires and surveys at the end stages of dashboard development can mask functional problems that will impede proper usage and lead to misinterpretation of results. A structured approach to usability testing during the development phase is an important tool for developers of clinician-friendly systems for displaying easily digested information and tracking outcomes for the purpose of quality improvement.
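
For readers unfamiliar with the SUS mentioned above, the standard scoring rule is easy to sketch; the ten responses below are hypothetical.

```python
# Standard System Usability Scale (SUS) scoring: ten 1-5 Likert items;
# odd-numbered items contribute (response - 1), even-numbered items
# contribute (5 - response), and the sum is scaled by 2.5 to 0-100.
def sus_score(responses):
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i = 0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # -> 82.5
```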


Potentially Inappropriate Medication List , Data Display , Electronic Health Records , Feasibility Studies , Humans , Quality Control , Surveys and Questionnaires , User-Computer Interface
3.
J Physician Assist Educ; 31(1): 23-27, 2020 Mar.
Article En | MEDLINE | ID: mdl-32004253

PURPOSE: This study describes and examines the short- and longer-term impact of a required longitudinal medical Spanish curriculum on physician assistant student preparedness and ability to communicate with patients in Spanish during clinical rotations. METHODS: Fifty-eight preclinical students participated in an 80-hour curriculum delivered weekly over 3 semesters. Teaching followed a framework of second-language acquisition and included structured grammar and medical vocabulary practice with didactic, interactive, and group assignments. Vocabulary and grammar were assessed with quizzes. Oral proficiency was assessed by faculty with Spanish Objective Structured Clinical Examination (OSCE) stations at the midpoint and end of the curriculum using the Interagency Language Roundtable (ILR) scale, a 6-level instrument (immediate outcome). Students self-rated proficiency and confidence and evaluated curriculum effectiveness for preparing them to care for Spanish-speaking patients (longer-term outcomes). RESULTS: All students passed the written and oral quizzes. Faculty-scored ILR verbal proficiency at the OSCEs increased by a mean level of 0.5 over 6 months. Student self-assessed proficiency improved on average by one level from baseline to 24 months later. Students gave high ratings to the curriculum's effectiveness, their preparedness to communicate in Spanish during clinical rotations, their ability to judge when an interpreter was needed, and the importance of medical Spanish to their future practice. CONCLUSIONS: A required integrated longitudinal medical Spanish curriculum was well received. Physician assistant students demonstrated short-term interval progression in Spanish proficiency, with improvements in both faculty and self-rating scores, and readiness to apply the skill to practice. They valued active learning associated with repeated practice with feedback, role playing, and interval assessments throughout the curriculum.


Communication , Multilingualism , Physician Assistants/education , Adult , Curriculum , Educational Measurement/methods , Female , Humans , Language , Male , Young Adult
4.
J Appl Gerontol; 39(7): 770-777, 2020 Jul.
Article En | MEDLINE | ID: mdl-29865902

Caregivers play an important role in the in-home care of community-dwelling older adults living with Alzheimer's disease or related dementias (ADRD); however, many of these caregivers lack training in caring for this vulnerable population. In 2015, we developed and implemented an interactive, community-based, knowledge- and skills-based training program for In-Home Supportive Services (IHSS) caregivers. This report shares the results of a process evaluation of this training program as it evolved over the course of three training sessions in Riverside County, California. Our iterative evaluation process reveals the unique challenges of training and assessing a population of demographically diverse adult learners and provides guidance for those planning to implement similar training in underserved communities. Factors such as reliance on self-reported abilities, language readability level, and test anxiety may have confounded attempts to capture learner feedback and actual knowledge gains from our caregiver training program.


Alzheimer Disease , Home Care Services , Aged , Caregivers , Humans , Staff Development , Workforce
5.
Am J Manag Care; 25(9): 425-430, 2019 Sep.
Article En | MEDLINE | ID: mdl-31518091

OBJECTIVES: The Veterans Affairs (VA) Geriatric Scholars Program (GSP) is a workforce development program to enhance skills and competencies among VA clinicians who provide healthcare for older veterans in VA primary care clinics. An intensive geriatrics didactics (IGD) course is a core element of this professional development program. The objective of this study was to evaluate the impact of completing the IGD course on providers' rates of prescribing definite potentially inappropriate medications (DPIMs) based on Beers Criteria from 2008 to 2016. STUDY DESIGN: We applied a longitudinal interrupted time series design to examine changes in DPIM prescribing rates for GSP participants before and after completing the IGD course. METHODS: The time series was divided into two 12-month periods, representing the preintervention period (ie, 12 months prior to completing the IGD course) and the postintervention period (ie, 12 months after completing the IGD course), and populated with pharmacy dispensing data from the VA's Corporate Data Warehouse. An adjusted slope impact model was developed to estimate the postintervention change in the proportion of the dispensed medications identified as DPIMs. RESULTS: After adjusting for case mix, we observed a statistically significant reduction in the proportion of DPIMs dispensed post-IGD (slope change, 0.994; 95% CI, 0.991-0.997). This change in slope reflects a total decrease of 7971 DPIM dispensings during the postintervention period. This equates to an estimated 24 fewer DPIM dispensings per provider during the postintervention period. CONCLUSIONS: Although the size of the effect was modest, we found that participation in the GSP IGD course reduced prescribing of DPIMs for older veterans.
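
To make the slope-impact analysis concrete, here is a minimal segmented-regression sketch for an interrupted time series; the monthly counts, variable names, and binomial-GLM specification are illustrative assumptions, not the study's actual model.

```python
# Hypothetical monthly DPIM counts: 12 months before and 12 months after
# course completion, with an extra downward slope in the post period.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

months = np.arange(24)
df = pd.DataFrame({
    "month": months,
    "months_post": np.maximum(0, months - 11),  # 0 pre; 1..12 post
    "dpim": 500 - 2 * months - 3 * np.maximum(0, months - 11),
    "total": 10_000,                            # dispensings per month
})
# Binomial GLM on the dispensed proportion; exp(coef) of months_post is a
# per-month slope-change ratio, analogous to the 0.994 reported above.
model = smf.glm("dpim + I(total - dpim) ~ month + months_post",
                data=df, family=sm.families.Binomial()).fit()
print(f"slope-change ratio: {np.exp(model.params['months_post']):.4f}")
```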


Geriatrics/standards , Inappropriate Prescribing/statistics & numerical data , Pharmaceutical Services/standards , Potentially Inappropriate Medication List/standards , Practice Guidelines as Topic , United States Department of Veterans Affairs/standards , Veterans/statistics & numerical data , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , United States
6.
J Physician Assist Educ; 30(3): 168-173, 2019 Sep.
Article En | MEDLINE | ID: mdl-31385903

PURPOSE: This study's aim was to examine the impact of a brief video presentation on clinician knowledge of and attitudes about precepting physician assistant (PA) students. METHODS: In this mixed methods study, we developed a 12-minute video and made presentations to potential preceptors. Change in knowledge and attitudes was assessed with a pre/post survey. We conducted focus groups (FGs) to elicit barriers to and motivators for precepting PA students and to assess the acceptability and impact of the video. RESULTS: Twenty-three preceptors participated in three 45-minute presentations. Participants showed significant knowledge increases on 7 of 10 survey questions. After the presentation, willingness to precept PA students was high. Major FG themes were that teaching is motivating, that preceptors need clarity about PA students' needs and support to teach, that a video presentation is preferred to email, and that the similarity to teaching medical students is viewed as a positive. CONCLUSIONS: A brief in-person video presentation is acceptable and is associated with increased knowledge and comfort in precepting PA students.


Physician Assistants/education , Preceptorship , Video Recording , Adult , Female , Humans , Male , Middle Aged , Physician Assistants/psychology , Preceptorship/organization & administration , Students, Health Occupations/psychology , Students, Health Occupations/statistics & numerical data , Video Recording/methods
7.
J Physician Assist Educ; 29(3): 162-166, 2018 Sep.
Article En | MEDLINE | ID: mdl-30086122

PURPOSE: Whether physician assistant (PA) students' self-assessment or standardized patient (SP) evaluations of students' medical Spanish proficiency accurately reflect their language proficiency is unclear. This study compares PA student and SP ratings with an expert faculty member's rating to determine whether student or SP ratings can be used to evaluate language proficiency. METHODS: Fifty-eight students participated in a single-station Spanish Objective Structured Clinical Examination (OSCE) at the midpoint of a medical Spanish curriculum. Using the Interagency Language Roundtable (ILR)-a 6-point, single-item language proficiency scale previously validated among physicians-PA students and SPs evaluated students' medical Spanish proficiency. Their scores were then compared with the scores derived by an expert faculty rater who had viewed a video of each student-SP encounter. The faculty's score was considered the gold standard. Correlation between scores was calculated using Spearman's rank correlation coefficient. RESULTS: Mean student scores were highest when rated by SPs (M = 3.8, SD = 0.9), followed by self (M = 3.0, SD = 0.9), and then faculty (M = 2.5, SD = 1.2). Spearman's rank correlation coefficient showed a strong positive correlation between students and the expert faculty rater (rs = 0.67, P < .001) and between SPs and the expert faculty rater (rs = 0.72, P < .001). The correlation was stronger for high- than for low-proficiency students. Students' self-rated scores showed significant improvement from baseline to the OSCE. CONCLUSIONS: PA students participating in a medical Spanish curriculum and SPs show good correlation with an expert faculty rater in assessing Spanish proficiency during an OSCE. Standardized patients demonstrate scoring leniency. The ILR has potential for tracking aggregate student progress and curriculum effectiveness. With training, student self-rating could be used for interval assessment of medical Spanish communication.
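
A minimal sketch of the rank-correlation comparison described above; the paired ILR ratings are invented for illustration.

```python
# Spearman correlation between hypothetical student self-ratings and
# expert faculty ratings on the 6-level ILR scale (cf. rs = 0.67 above).
from scipy import stats

student_self = [3, 2, 4, 3, 5, 2, 3, 4, 1, 3]
faculty_expert = [2, 2, 4, 3, 5, 1, 2, 4, 1, 3]

rho, p_value = stats.spearmanr(student_self, faculty_expert)
print(f"rs = {rho:.2f}, p = {p_value:.3f}")
```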


Educational Measurement/methods , Hispanic or Latino , Language , Physician Assistants/education , Self-Assessment , Adult , Clinical Competence , Communication , Female , Humans , Male , Patient Simulation
8.
Am J Pharm Educ; 82(5): 6487, 2018 Jun.
Article En | MEDLINE | ID: mdl-30013246

Objective. To examine concordance between in-room and video faculty ratings of interprofessional behaviors in a standardized team objective structured clinical encounter (TOSCE). Methods. In-room and video-rated student performance scores in an interprofessional 2-station TOSCE were compared using a validated 3-point scale assessing six team competencies. Scores for each student were derived from two in-room faculty members and one faculty member who viewed video recordings of the same team encounter from equivalent visual vantage points. All faculty members received the same rigorous rater training. Paired-sample t-tests were used to compare individual student scores. McNemar's test was used to compare student pass/fail rates to determine the impact of rating modality on performance scores. Results. In-room and video student scores were captured for 12 novice teams (47 students), with each team consisting of students from four professions (medicine, pharmacy, physician assistant, nursing). Video ratings were consistently lower for all competencies and significantly lower for the roles and responsibilities and conflict management competencies. Using a passing criterion of an average score of 2 out of 3 on at least one station, 56% of students passed when rated in-room compared with 20% when rated by video. Conclusion. In-room and video ratings are not equal. Educators should consider scoring discrepancies based on modality when assessing team behaviors.
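
A short sketch of the McNemar comparison of paired pass/fail outcomes; the 2x2 counts below are reconstructed to roughly match the 56% and 20% pass rates above (assuming every video pass was also an in-room pass) and are not the study's table.

```python
# Paired pass/fail outcomes for 47 students under two rating modalities.
# Rows: in-room (pass, fail); columns: video (pass, fail).
from statsmodels.stats.contingency_tables import mcnemar

table = [[9, 17],   # passed in-room: 9 also passed on video, 17 did not
         [0, 21]]   # failed in-room: none passed on video, 21 failed both
result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"statistic = {result.statistic}, p = {result.pvalue:.2g}")
```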


Educational Measurement/methods , Interprofessional Relations , Students, Health Occupations/psychology , Students, Pharmacy/psychology , Communication , Education, Pharmacy/methods , Educational Measurement/statistics & numerical data , Faculty , Humans , Patient Care Team , Pilot Projects
9.
Med Educ Online; 22(1): 1314751, 2017.
Article En | MEDLINE | ID: mdl-28475438

BACKGROUND: There is a need for validated and easy-to-apply behavior-based tools for assessing interprofessional team competencies in clinical settings. The seven-item observer-based Modified McMaster-Ottawa scale was developed for the Team Objective Structured Clinical Encounter (TOSCE) to assess individual and team performance in interprofessional patient encounters. OBJECTIVE: We aimed to improve scale usability for clinical settings by reducing the number of items while maintaining generalizability, and to explore the minimum number of observed cases required to achieve modest generalizability for giving feedback. DESIGN: We administered a two-station TOSCE in April 2016 to 63 students split into 16 newly formed teams, each comprising students from four professions. The stations were of similar difficulty. We trained sixteen faculty to rate two teams each. We examined individual and team performance scores using generalizability (G) theory and principal component analysis (PCA). RESULTS: The seven-item scale shows modest generalizability (.75) with individual scores. PCA revealed multicollinearity and singularity among scale items, and we identified three potential items for removal. Reducing items for individual scores from seven to four (measuring Collaboration, Roles, Patient/Family-centeredness, and Conflict Management) changed scale generalizability from .75 to .73. Performance assessment with two cases is associated with reasonable generalizability (.73). Students in newly formed interprofessional teams show a learning curve after one patient encounter. Team scores from a two-station TOSCE demonstrate low generalizability whether the scale consisted of four (.53) or seven items (.55). CONCLUSION: The four-item Modified McMaster-Ottawa scale for assessing individual performance in interprofessional teams retains the generalizability and validity of the seven-item scale. Observation of students in teams interacting with two different patients provides reasonably reliable ratings for giving feedback. The four-item scale has potential for assessing individual student skills and the impact of IPE curricula in clinical practice settings. ABBREVIATIONS: IPE: Interprofessional education; SP: Standardized patient; TOSCE: Team objective structured clinical encounter.
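
For context on the coefficients above, a minimal decision-study calculation for a persons-by-stations design follows; the variance components are hypothetical values chosen to land near the reported .73.

```python
# Relative generalizability (G) coefficient for a persons x stations design:
# person variance over person variance plus residual (person-by-station)
# error averaged over the number of stations observed.
def g_coefficient(var_person, var_residual, n_stations):
    return var_person / (var_person + var_residual / n_stations)

print(round(g_coefficient(0.40, 0.30, n_stations=2), 2))  # -> 0.73
```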


Clinical Competence/standards , Interprofessional Relations , Patient Care Team/standards , Adult , Faculty , Female , Humans , Male , Quality Indicators, Health Care , Students, Health Occupations
10.
Med Educ Online; 20: 26691, 2015.
Article En | MEDLINE | ID: mdl-26004993

BACKGROUND: Current scales for interprofessional team performance do not provide adequate behavioral anchors for performance evaluation. The Team Observed Structured Clinical Encounter (TOSCE) provides an opportunity to adapt and develop an existing scale for this purpose. We aimed to test the feasibility of using a retooled scale to rate performance in a standardized patient encounter and to assess faculty ability to accurately rate both individual students and teams. METHODS: The 9-point McMaster-Ottawa Scale developed for a TOSCE was converted to a 3-point scale with behavioral anchors. Students from four professions were trained in advance to perform, in teams of four, at three predetermined performance levels, both as individuals and as teams. Blinded faculty raters were trained to use the scale to evaluate individual and team performances. G-theory was used to analyze the ability of faculty to accurately rate individual students and teams using the retooled scale. RESULTS: Sixteen faculty, in groups of four, rated four student teams, each participating in the same TOSCE station. Faculty expressed comfort rating up to four students in a team within a 35-min timeframe. Accuracy of faculty raters varied (38-81% for individuals, 50-100% for teams), with errors in the direction of over-rating individual, but not team, performance. There was no consistent pattern of error for raters. CONCLUSION: The TOSCE can be administered as an evaluation method for interprofessional teams. However, faculty demonstrate a 'leniency error' in rating students, even with prior training using behavioral anchors. To improve consistency, we recommend two trained faculty raters per station.


Educational Measurement/methods , Health Personnel/education , Interprofessional Relations , Patient Care Team/organization & administration , Communication , Cooperative Behavior , Humans , Negotiating , Observer Variation , Patient Simulation , Professional Role
11.
Acad Med; 87(8): 1077-82, 2012 Aug.
Article En | MEDLINE | ID: mdl-22722349

PURPOSE: Scoring clinical assessments in a reliable and valid manner using criterion-referenced standards remains an important issue and directly affects decisions made regarding examinee proficiency. This generalizability study of students' clinical performance examination (CPX) scores examines the reliability of those scores and of their interpretation, particularly according to a newly introduced, "critical actions" criterion-referenced standard and scoring approach. METHOD: The authors applied a generalizability framework to the performance scores of 477 third-year students attending three different medical schools in 2008. The norm-referenced standard included all station checklist items. The criterion-referenced standard included only those items deemed critical to patient care by a faculty panel. The authors calculated and compared variance components and generalizability coefficients for each standard across six common stations. RESULTS: Norm-referenced scores had moderate generalizability (ρ = 0.51), whereas criterion-referenced scores showed low dependability (φ = 0.20). The estimated 63% of measurement error associated with the person-by-station interaction suggests case specificity. Increasing the number of stations on the CPX from 6 to 24, a solution impractical in both cost and time, would still yield only moderate dependability (φ = 0.50). CONCLUSIONS: Although the performance assessment of complex skills, like clinical competence, seems intrinsically valid, careful consideration of the scoring standard and approach is needed to avoid misinterpretation of proficiency. Further study is needed to determine how best to improve the reliability of criterion-referenced scores, whether by implementing changes to the examination structure, the process of standard-setting, or both.
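
The 6-to-24-station projection follows a standard decision-study (Spearman-Brown-style) calculation; a worked sketch consistent with the φ values reported above:

```python
# Project the dependability coefficient (phi) to a new number of stations.
# With phi = .20 at 6 stations, the absolute-error variance is 24x the
# person variance, so even 24 stations yield only phi = .50.
def project_phi(phi_observed, n_observed, n_new):
    error_ratio = n_observed * (1 - phi_observed) / phi_observed
    return 1 / (1 + error_ratio / n_new)

print(project_phi(0.20, n_observed=6, n_new=24))  # -> 0.5
```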


Clinical Competence/standards , Education, Medical, Undergraduate/standards , Educational Measurement/methods , California , Checklist , Diagnosis, Differential , Humans , Medical History Taking , Models, Educational , Patient Simulation , Physical Examination , Physician-Patient Relations , Reference Standards , Reproducibility of Results , Schools, Medical , United States