Results 1 - 4 of 4
1.
BMC Med Educ; 24(1): 185, 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38395858

ABSTRACT

BACKGROUND: The increasing linguistic and cultural diversity in the United States underscores the necessity of enhancing healthcare professionals' cross-cultural communication skills. This study focuses on incorporating interpreter and limited-English proficiency (LEP) patient training into the medical and physician assistant student curriculum, with the aim of improving equitable care provision and addressing the vulnerability of LEP patients to healthcare disparities, including errors and reduced access. Though such training is recognized as crucial, opportunities in medical curricula remain limited.
METHODS: To bridge this gap, a novel initiative was introduced in a medical school, involving second-year students in clinical sessions with actual LEP patients and interpreters. These sessions featured interpreter input, patient interactions, and feedback from interpreters and clinical preceptors. A survey assessed the perspectives of students, preceptors, and interpreters.
RESULTS: Outcomes revealed positive reception of interpreter and LEP patient integration. Students gained confidence in working with interpreters and valued interpreter feedback. Preceptors recognized the sessions' value in preparing students for future clinical interactions.
CONCLUSIONS: This study underscores the importance of involving experienced interpreters in training students for real-world interactions with LEP patients. Early interpreter training enhances students' communication skills and their ability to serve linguistically diverse populations. Further exploration could expand the languages and interpretation modes covered and assess long-term effects on students' clinical performance. By effectively training future healthcare professionals to navigate language barriers and cultural diversity, this research contributes to equitable patient care in diverse communities.


Subject(s)
Physician Assistants; Students, Medical; Humans; United States; Cross-Cultural Comparison; Translating; Communication; Communication Barriers; Physician-Patient Relations
2.
JAMA Intern Med; 183(9): 1028-1030, 2023 Sep 1.
Article in English | MEDLINE | ID: mdl-37459090

ABSTRACT

This study compares performance on free-response clinical reasoning examinations of first- and second-year medical students vs 2 models of a popular chatbot.


Subject(s)
Students, Medical; Humans; Educational Measurement/methods; Physical Examination; Software; Clinical Reasoning
3.
medRxiv; 2023 Mar 29.
Article in English | MEDLINE | ID: mdl-37034742

ABSTRACT

Importance: Studies show that ChatGPT, a general-purpose large language model chatbot, could pass the multiple-choice US Medical Licensing Exams, but the model's performance on open-ended clinical reasoning is unknown.
Objective: To determine whether ChatGPT is capable of consistently meeting the passing threshold on free-response, case-based clinical reasoning assessments.
Design: Fourteen multi-part cases were selected from clinical reasoning exams administered to pre-clerkship medical students between 2019 and 2022. For each case, the questions were run through ChatGPT twice and the responses were recorded. Two clinician educators independently graded each run according to a standardized grading rubric. To further assess the degree of variation in ChatGPT's performance, the analysis was repeated on a single high-complexity case 20 times.
Setting: A single US medical school.
Participants: ChatGPT.
Main Outcomes and Measures: Passing rate of ChatGPT's scored responses and the range in model performance across multiple run-throughs of a single case.
Results: 12 of the 28 ChatGPT exam responses achieved a passing score (43%), with a mean score of 69% (95% CI: 65% to 73%) compared with the established passing threshold of 70%. When given the same case 20 separate times, ChatGPT's performance on that case varied, with scores ranging from 56% to 81%.
Conclusions and Relevance: ChatGPT's ability to achieve a passing performance in nearly half of the cases analyzed demonstrates the need to revise clinical reasoning assessments and to incorporate artificial intelligence (AI)-related topics into medical curricula and practice.
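The Results section above condenses a few simple summary calculations: a pass rate against a 70% threshold, a mean score with a 95% confidence interval, and the score range across repeated runs of one case. The Python sketch below illustrates how such figures can be computed. The score values are hypothetical placeholders, not the study's data, and the confidence interval uses a plain normal approximation, which may differ from the authors' method.

```python
# Minimal sketch (not the study's code): computes the kind of summary
# statistics reported in the abstract from HYPOTHETICAL rubric scores.
import math
import statistics

PASS_THRESHOLD = 0.70  # passing threshold stated in the abstract

# Hypothetical scores for 28 graded responses (14 cases x 2 runs each).
scores = [0.56, 0.61, 0.64, 0.65, 0.66, 0.67, 0.68, 0.68, 0.69, 0.69,
          0.69, 0.70, 0.70, 0.71, 0.72, 0.72, 0.73, 0.73, 0.74, 0.75,
          0.62, 0.63, 0.66, 0.67, 0.71, 0.74, 0.76, 0.81]

n = len(scores)
n_passed = sum(s >= PASS_THRESHOLD for s in scores)
mean = statistics.mean(scores)
sem = statistics.stdev(scores) / math.sqrt(n)
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem  # normal-approx 95% CI

print(f"passed {n_passed}/{n} ({n_passed / n:.0%})")
print(f"mean score {mean:.0%} (95% CI {ci_low:.0%} to {ci_high:.0%})")

# Variation across 20 repeated runs of a single high-complexity case.
single_case_runs = [0.56, 0.58, 0.60, 0.63, 0.65, 0.66, 0.68, 0.69, 0.70,
                    0.71, 0.72, 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78,
                    0.80, 0.81]
print(f"single-case range: {min(single_case_runs):.0%} to {max(single_case_runs):.0%}")
```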

4.
Front Public Health; 9: 706697, 2021.
Article in English | MEDLINE | ID: mdl-34434915

ABSTRACT

Case investigation (CI) and contact tracing (CT) are key to containing the COVID-19 pandemic. Widespread community transmission necessitates a large, diverse workforce with specialized knowledge and skills. The University of California San Francisco and Los Angeles campuses partnered with the California Department of Public Health to rapidly mobilize and train a CI/CT workforce. From April through August 2020, a team of public health practitioners and health educators constructed a training program to enable learners from diverse backgrounds to quickly acquire the competencies necessary to function effectively as CIs and CTs. Between April 27 and May 5, the team undertook a curriculum design sprint by performing a needs assessment, determining relevant goals and objectives, and developing content. The initial four-day curriculum consisted of 13 hours of synchronous live web meetings and 7 hours of asynchronous, self-directed study. Educational content emphasized the principles of COVID-19 exposure and infectious period, isolation and quarantine guidelines, and the importance of prevention and control interventions. A priority was equipping learners with skills in rapport building and health coaching through facilitated web-based small group skill development sessions. The training was piloted among 31 learners and subsequently expanded to an average weekly audience of 520 persons statewide starting May 7, reaching 7,499 unique enrollees by August 31. Capacity to scale and sustain the training program was afforded by the UCLA Extension Canvas learning management system. Content and format were iterated repeatedly based on feedback from learners, facilitators, and public health and community-based partners. It is feasible to rapidly train and deploy a large workforce to perform CI and CT. Interactive, skills-based training with opportunities for practice and feedback is essential to developing independent, high-performing CIs and CTs. Rigorous evaluation will continue to monitor quality measures to improve the training experience and outcomes.


Subject(s)
COVID-19; Contact Tracing; Humans; Pandemics; SARS-CoV-2; San Francisco; Workforce