Results 1 - 20 of 639
1.
Curr Pharm Teach Learn ; 16(11): 102159, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39089218

ABSTRACT

PURPOSE: Objective structured clinical examinations (OSCEs) are a valuable assessment within healthcare education because they give students the opportunity to demonstrate clinical competency, but providing faculty graders can be resource intensive. The purpose of this study was to determine how overall OSCE scores compared between faculty, peer, and self-evaluations within a Doctor of Pharmacy (PharmD) curriculum. METHODS: This study was conducted during the required nonprescription therapeutics course. Seventy-seven first-year PharmD students were included in the study, with 6 faculty members grading 10-15 students each. Each student was evaluated by 3 graders: self, peer, and faculty. All evaluators used the same rubric. The primary endpoint was the comparison of overall scores between groups. Secondary endpoints included interrater reliability and quantification of feedback type by evaluator group. RESULTS: The maximum possible score for the OSCE was 50 points; the mean scores for self, peer, and faculty evaluations were 43.3, 43.5, and 41.7 points, respectively. No statistically significant difference was found between the self and peer raters. However, statistically significant differences were found between self and faculty scores (p = 0.005) and between peer and faculty scores (p < 0.001). When these scores were mapped to a letter grade (A, B, C or less), higher grades showed greater similarity among raters than lower ones. Despite differences in scoring, the interrater reliability, or W score, on overall letter grade was 0.79, which is considered strong agreement. CONCLUSIONS: This study demonstrated that peer and self-evaluation of an OSCE can provide a comparable alternative to traditional faculty grading, especially for higher performing students. However, given the differences in overall grades, this strategy should be reserved for low-stakes assessments and basic skill evaluations.
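The "W score" reported for interrater reliability is typically Kendall's coefficient of concordance. As an illustrative sketch (not the study's actual analysis code, and omitting the tie correction), W can be computed from a raters × students score matrix:

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores):
    """Kendall's coefficient of concordance (no tie correction).

    scores: 2-D array-like, shape (m raters, n subjects).
    Returns W in [0, 1]; 1 means the raters rank subjects identically.
    """
    scores = np.asarray(scores, dtype=float)
    m, n = scores.shape
    # Rank each rater's scores across subjects (ties get average ranks).
    ranks = np.apply_along_axis(rankdata, 1, scores)
    rank_sums = ranks.sum(axis=0)                 # R_j for each subject
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three raters in perfect agreement on five students -> W = 1.0
print(kendalls_w([[41, 43, 45, 47, 49]] * 3))  # 1.0
```

With real rubric scores the matrix would hold one row per evaluator type (self, peer, faculty) and one column per student.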

2.
Stud Health Technol Inform ; 315: 671-672, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39049375

ABSTRACT

This project introduces an innovative virtual reality (VR) training program for student Nurse Practitioners, incorporating advanced 3D modeling, animation, and Large Language Models (LLMs). Designed to simulate realistic patient interactions, the program aims to improve communication, history taking, and clinical decision-making skills in a controlled, authentic setting. This abstract outlines the methods, results, and potential impact of this cutting-edge educational tool on nursing education.


Subjects
Nurse Practitioners, Virtual Reality, Nurse Practitioners/education, Computer-Assisted Instruction/methods, Curriculum, Humans, User-Computer Interface, Nursing Education
3.
Article in English | MEDLINE | ID: mdl-39042360

ABSTRACT

Summative assessments are often underused for feedback, despite being rich with data on students' applied knowledge and clinical and professional skills. To better inform teaching and student support, this study aims to gain insights from summative assessments by profiling students' performance patterns and identifying those students missing the basic knowledge and skills in medical specialities essential for their future career. We use Latent Profile Analysis to classify a senior undergraduate year group (n = 295) based on their performance in an applied knowledge test (AKT) and an OSCE, in which items and stations are pre-classified across five specialities (e.g. Acute and Critical Care, Paediatrics,…). Four distinct groups of students with increasing average performance levels are identified in the AKT, and three such groups in the OSCE. Overall, these two classifications are positively correlated; however, some students do well in one assessment format but not in the other. Importantly, in both the AKT and the OSCE there is a mixed group containing students who have met the required standard to pass and those who have not. This suggests that the conception of a borderline group at the exam level can be overly simplistic. There is little literature relating AKT and OSCE performance in this way, and the paper discusses how our analysis gives placement tutors key insights for providing tailored support to distinct student groups needing remediation. It also gives assessment writers additional information about the performance and difficulty of their items/stations, and gives wider faculty a picture of students' overall performance and performance across specialities.

4.
Cureus ; 16(6): e61564, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38962609

ABSTRACT

INTRODUCTION: Objective Structured Clinical Examinations (OSCEs) are essential assessments for evaluating the clinical competencies of medical students. The COVID-19 pandemic caused a significant disruption in medical education, prompting institutions to adopt virtual formats for academic activities. This study analyzes the feasibility, satisfaction, and experiences of pediatric board candidates and faculty during virtual or electronic OSCE (e-OSCE) training sessions using Zoom video communication (Zoom Video Communications, Inc., San Jose, USA). METHODS: This is a post-event survey assessing the perceptions of faculty and candidates and the perceived advantages and obstacles of e-OSCE. RESULTS: A total of 142 participants were invited to complete a post-event survey, and 105 (73.9%) completed the survey. There was equal gender representation. More than half of the participants were examiners. The overall satisfaction with the virtual e-OSCE was high, with a mean score of 4.7±0.67 out of 5. Most participants were likely to recommend e-OSCE to a friend or colleague (mean score 8.84±1.51/10). More faculty (66.1%) than candidates (40.8%) preferred e-OSCE (P=0.006). CONCLUSION: Transitioning to virtual OSCE training during the pandemic proved feasible, with high satisfaction rates. Further research on virtual training for OSCE in medical education is recommended to optimize its implementation and outcomes.

5.
BMC Med Educ ; 24(1): 801, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39061036

ABSTRACT

BACKGROUND: The administration of performance assessments during the coronavirus disease of 2019 (COVID-19) pandemic posed many challenges, especially for examinations employed as part of certification and licensure. The National Assessment Collaboration (NAC) Examination, an Objective Structured Clinical Examination (OSCE), was modified during the pandemic. The purpose of this study was to gather evidence to support the reliability and validity of the modified NAC Examination. METHODS: The modified NAC Examination was delivered to 2,433 candidates in 2020 and 2021. Cronbach's alpha, decision consistency, and accuracy values were calculated. Validity evidence includes comparisons of scores and sub-scores for demographic groups: gender (male vs. female), type of International Medical Graduate (IMG) (Canadians Studying Abroad (CSA) vs. non-CSA), postgraduate training (PGT) (no PGT vs. PGT), and language of examination (English vs. French). Criterion relationships were summarized using correlations within and between the NAC Examination and the Medical Council of Canada Qualifying Examination (MCCQE) Part I scores. RESULTS: Reliability estimates were consistent with other OSCEs of similar length and with previous NAC Examination administrations. Both total score and sub-score differences for gender were statistically significant. Total score differences by type of IMG and PGT were not statistically significant, but sub-score differences were. Administration language was not statistically significant for either total scores or sub-scores. Correlations were all statistically significant, with some relationships being small or moderate (0.20 to 0.40) or large (> 0.40). CONCLUSIONS: The NAC Examination yields reliable total scores and pass/fail decisions. Expected differences in total scores and sub-scores for defined groups were consistent with previous literature, and internal relationships amongst NAC Examination sub-scores and their external relationships with the MCCQE Part I supported both discriminant and criterion-related validity arguments. Modifications to OSCEs to address health restrictions can be implemented without compromising the overall quality of the assessment. This study outlines some of the validity and reliability analyses for OSCEs that required modifications due to COVID.
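Cronbach's alpha, the reliability estimate calculated above, can be computed directly from an examinees × stations score matrix. A minimal sketch with made-up numbers, not the NAC's scoring pipeline:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: 2-D array-like, shape (n examinees, k items/stations).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each station
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Two perfectly correlated stations -> alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

In practice each row would be one candidate and each column one OSCE station score; alpha rises as stations rank candidates consistently.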


Subjects
COVID-19, Clinical Competence, Educational Measurement, Humans, COVID-19/diagnosis, COVID-19/epidemiology, Reproducibility of Results, Educational Measurement/methods, Male, Female, Clinical Competence/standards, Canada, SARS-CoV-2, Pandemics, Graduate Medical Education/standards, Foreign Medical Graduates/standards
6.
Med Teach ; : 1-9, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976711

ABSTRACT

INTRODUCTION: Ensuring equivalence in high-stakes performance exams is important for patient safety and candidate fairness. We compared inter-school examiner differences within a shared OSCE and the resulting impact on students' pass/fail categorisation. METHODS: The same 6-station formative OSCE ran asynchronously in 4 medical schools, with 2 parallel circuits per school. We compared examiners' judgements using Video-based Examiner Score Comparison and Adjustment (VESCA): examiners scored station-specific comparator videos in addition to 'live' student performances, enabling (1) controlled score comparisons by (a) examiner cohorts and (b) schools, and (2) data linkage to adjust for the influence of examiner cohorts. We calculated the score impact and the change in pass/fail categorisation by school. RESULTS: On controlled video-based comparisons, inter-school variations in examiners' scoring (16.3%) were nearly double within-school variations (8.8%). Students' scores received a median adjustment of 5.26% (IQR 2.87-7.17%). The impact of adjusting for examiner differences on students' pass/fail categorisation varied by school: adjustment reduced the failure rate from 39.13% to 8.70% in school 2 while increasing it from 0.00% to 21.74% in school 4. DISCUSSION: Whilst the formative context may partly account for the differences, these findings raise the question of whether examiners' judgements vary between medical schools. This may benefit from systematic appraisal to safeguard equivalence. VESCA provided a viable method for such comparisons.

7.
BMC Med Educ ; 24(1): 817, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075511

ABSTRACT

CONTEXT: Objective Structured Clinical Examinations (OSCEs) are an increasingly popular evaluation modality for medical students. While the face-to-face interaction allows for more in-depth assessment, it can cause standardization problems. Methods to quantify, limit, or adjust for examiner effects are needed. METHODS: Data originated from 3 OSCEs undergone by 900-student classes of 5th- and 6th-year medical students at Université Paris Cité in the 2022-2023 academic year. Sessions had five stations each, and one of the three sessions was scored by consensus by two raters (rather than one). We report the OSCEs' longitudinal consistency for one of the classes, and staff-related and student variability by session. We also propose a statistical method to adjust for inter-rater variability by deriving a student random effect that accounts for staff-related and station random effects. RESULTS: From the four sessions, a total of 16,910 station scores were collected from 2,615 student sessions, with two of the sessions undergone by the same students and 36, 36, 35, and 20 distinct staff teams per station in each session. Scores showed staff-related heterogeneity (p < 10⁻¹⁵), with staff-level standard errors approximately double those expected by chance. With mixed models, staff-related heterogeneity explained respectively 11.4%, 11.6%, and 4.7% of station score variance (95% confidence intervals 9.5-13.8, 9.7-14.1, and 3.9-5.8) with 1, 1, and 2 raters, suggesting a moderating effect of consensus grading. Student random effects explained a small proportion of variance, respectively 8.8%, 11.3%, and 9.6% (8.0-9.7, 10.3-12.4, and 8.7-10.5), and this low amount of signal meant that student rankings were no more consistent over time with this metric than with average scores (p = 0.45). CONCLUSION: Staff variability impacts OSCE scores as much as student variability; the former can be reduced with dual assessment or adjusted for with mixed models. Both are small compared to unmeasured sources of variability, making them difficult to capture consistently.
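The study fits mixed models to partition score variance; as a simplified illustration of the same idea (not the authors' model), a one-way random-effects intraclass correlation estimates the share of score variance attributable to staff teams:

```python
import numpy as np

def icc_oneway(groups):
    """ICC(1): share of variance attributable to grouping (e.g. staff team).

    groups: list of 1-D arrays, the scores awarded by each staff team.
    """
    k = np.mean([len(g) for g in groups])          # average group size
    grand = np.concatenate(groups).mean()
    # Between-team and within-team mean squares (one-way ANOVA).
    msb = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups) / (len(groups) - 1)
    msw = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups) / \
          sum(len(g) - 1 for g in groups)
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical teams whose mean scores differ far more than their
# within-team spread -> most variance is staff-related.
teams = [np.array([10.0, 10.1, 9.9]),
         np.array([14.0, 14.1, 13.9]),
         np.array([18.0, 18.1, 17.9])]
print(icc_oneway(teams))  # close to 1 -> strong staff effect
```

A full crossed mixed model (students, staff teams, stations as random effects) would be needed to reproduce the variance shares reported above; this sketch only shows the variance-partitioning principle.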


Subjects
Clinical Competence, Educational Measurement, Observer Variation, Medical Students, Humans, Educational Measurement/methods, Educational Measurement/standards, Clinical Competence/standards, Undergraduate Medical Education/standards, Paris, Reproducibility of Results
8.
Curr Pharm Teach Learn ; 16(11): 102152, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39033560

ABSTRACT

INTRODUCTION: In Switzerland, becoming a licensed pharmacist requires passing a federal entry-to-practice exam that includes an Objective Structured Clinical Examination (OSCE). Candidates from the University of Geneva (UNIGE) exhibited a higher failure rate in this part of the examination than candidates from other Swiss institutions. The institution made a specific set of pedagogical changes to a 3-week pharmacy services course, run during the Master's second year, that prepares students for the entry-to-practice OSCE. One key change was a switch from a summative in-classroom OSCE to an online formative OSCE. METHODS: New teaching activities were introduced between the 2019-2020 and 2021-2022 academic years to help students strengthen their patient-facing skills and prepare for the federal OSCE. These online activities consisted of formative OSCEs supplemented with group and individual debriefings, and of 18 h of clinical case simulations reproducing OSCE requirements and assessed with standardized evaluation grids. Failure rates before and after the introduction of these activities were compared, and candidates' perceptions of their usefulness were collected through a questionnaire survey. RESULTS: The UNIGE failure rate decreased from 6.8% in 2018/2019 to 3.3% in 2022 following the implementation of the new teaching activities, and the difference in failure rates between UNIGE and the other institutions became less pronounced in 2022 than in 2018/2019. The redesigned Master's course was highlighted as useful for preparation, with all new activities perceived as beneficial. Questionnaire responses drew attention to challenges faced by UNIGE candidates, including stress management, insufficient information or practical training, and experiences related to quarantine. These insights informed further development of the teaching methods. DISCUSSION: Although the results do not establish a direct link between participation in the new teaching activities and improved performance, they suggest that the initial issue was resolved. Our findings relate to pedagogical concepts such as constructive alignment, formative assessment, and examination anxiety, and generally support the benefits of the online format. CONCLUSION: This study used participatory action research based on mixed methods to address a challenge in pharmacy education. Online teaching activities including formative OSCEs, case simulations, and debriefings were implemented, and improved performance in the entry-to-practice OSCE was subsequently observed. The results highlight the potential of formative, active, and constructively aligned online activities, such as role-playing and case simulation, to enhance patient-facing skills and improve outcomes in summative assessments of these skills.

9.
J Eval Clin Pract ; 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39073068

ABSTRACT

RATIONALE: Objective Structured Clinical Examinations (OSCEs) are widely used for assessing clinical competence, especially in high-stakes environments such as medical licensure. However, the reuse of OSCE cases across multiple administrations raises concerns about parameter stability, known as item parameter drift (IPD). AIMS & OBJECTIVES: This study aims to investigate IPD in reused OSCE cases while accounting for examiner scoring effects using a Many-facet Rasch Measurement (MFRM) model. METHOD: Data from 12 OSCE cases, reused over seven administrations of the Internationally Educated Nurse Competency Assessment Program (IENCAP), were analyzed using the MFRM model. Each case was treated as an item, and examiner scoring effects were accounted for in the analysis. RESULTS: The results indicated that despite accounting for examiner effects, all cases exhibited some level of IPD, with an average absolute IPD of 0.21 logits. Three cases showed positive directional trends. IPD significantly affected score decisions in 1.19% of estimates, at an invariance violation of 0.58 logits. CONCLUSION: These findings suggest that while OSCE cases demonstrate sufficient stability for reuse, continuous monitoring is essential to ensure the accuracy of score interpretations and decisions. The study provides an objective threshold for detecting concerning levels of IPD and underscores the importance of addressing examiner scoring effects in OSCE assessments. The MFRM model offers a robust framework for tracking and mitigating IPD, contributing to the validity and reliability of OSCEs in evaluating clinical competence.
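The continuous monitoring the authors recommend amounts to comparing each case's difficulty (in logits) across administrations against an invariance threshold; the study reports that score decisions were affected at violations of 0.58 logits. A schematic sketch — the case names and difficulty values below are hypothetical, only the threshold comes from the abstract:

```python
# Flag reused OSCE cases whose difficulty has drifted beyond the
# invariance-violation threshold reported in the study (0.58 logits).
THRESHOLD = 0.58  # logits

def flag_drift(baseline, current, threshold=THRESHOLD):
    """Return {case: drift} for cases exceeding the threshold.

    baseline, current: dicts mapping case id -> difficulty in logits,
    e.g. as estimated by an MFRM calibration of each administration.
    """
    return {case: round(current[case] - baseline[case], 2)
            for case in baseline
            if abs(current[case] - baseline[case]) > threshold}

baseline = {"case_A": -0.40, "case_B": 0.10, "case_C": 0.75}
current  = {"case_A": -0.35, "case_B": 0.80, "case_C": 0.70}
print(flag_drift(baseline, current))  # {'case_B': 0.7}
```

Estimating the logit difficulties themselves requires a Rasch/MFRM calibration that also models examiner severity; this sketch only shows the flagging step once those estimates exist.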

10.
BMC Med Educ ; 24(1): 673, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38886698

ABSTRACT

OBJECTIVE: To analyze satisfaction levels and perceptions of developing clinical competencies through the objective structured clinical examination (OSCE), and to explore the experiences, challenges, and suggestions of undergraduate dental students. METHODS: The study adopted a mixed-method convergent design. Quantitative data were collected from 303 participants through surveys evaluating satisfaction with the OSCE. Qualitative insights were gathered through student focus group interviews, from which themes were developed covering various aspects of OSCE assessments. Chi-square tests were performed to assess associations between variables. Data integration involved comparing and contrasting the quantitative and qualitative findings to derive comprehensive conclusions. RESULTS: Satisfaction rates were 69.4% for the organization of OSCE stations and 57.4% for overall effectiveness. A crucial challenge was identified: only 36.7% of students received adequate post-OSCE feedback. Furthermore, half of the students (50%) expressed concerns about the clinical relevance of OSCEs. The study showed significant associations (p < 0.05) between satisfaction levels and both year of study and previous OSCE experience. Focus group interviews revealed diverse perspectives on OSCE assessments: while students appreciated the helpfulness of OSCEs, they raised concerns about time constraints, stress, examiner training, and a perceived lack of clinical relevance. CONCLUSION: Students' concerns about the clinical relevance of OSCEs highlight the need for a more aligned assessment approach. Diverse perspectives on OSCE assessments reveal perceived helpfulness alongside challenges such as lack of feedback, examiner training, time constraints, and mental stress.
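The associations reported above (satisfaction vs. year of study and prior OSCE experience) are chi-square tests of independence on contingency tables. A generic sketch with hypothetical counts, not the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = year of study, cols = satisfied / not.
table = [[40, 60],   # earlier year
         [70, 30]]   # later year
chi2, p, dof, expected = chi2_contingency(table)

print(dof)       # 1 degree of freedom for a 2x2 table
print(p < 0.05)  # True for these counts: satisfaction depends on year
```

`chi2_contingency` also returns the expected counts under independence, which is useful for checking the test's validity (expected cells should generally be ≥ 5).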


Subjects
Clinical Competence, Dental Education, Educational Measurement, Focus Groups, Personal Satisfaction, Dental Students, Humans, Dental Students/psychology, Female, Male, Dental Education/standards, Surveys and Questionnaires, Young Adult, Adult
11.
Front Med (Lausanne) ; 11: 1395466, 2024.
Article in English | MEDLINE | ID: mdl-38903805

ABSTRACT

This study investigated the experience of medical students assessing their cohort peers in a formative clinical assessment. The exercise was designed to give students a formative experience prior to their summative assessment, and to determine what students could learn by being on the "other side of the mark sheet." Students were grateful for the experience, learning both from the assessment practice and from the individual written feedback provided immediately afterwards. They also described how much they learnt from seeing the assessment from the assessor's viewpoint, with many commenting that they learnt more from being the "assessor" than from being the "student" in the process. When asked how they felt about being assessed by their peers, some described the experience as more intimidating and stressful than assessment by clinicians. An interesting aspect of this study is that some findings suggest the students' current learning context affects their attitudes to their peers as assessors. It is possible that the competitive cultural milieu of the teaching hospital environment has a negative effect on medical student collegiality and peer support.

12.
Med Educ Online ; 29(1): 2370617, 2024 Dec 31.
Article in English | MEDLINE | ID: mdl-38934534

ABSTRACT

While the objective structured clinical examination (OSCE) is a globally recognized and effective method for assessing the clinical skills of undergraduate medical students, the latest Ottawa conference on the assessment of competences raised vigorous debates regarding the future of the OSCE and innovations to it. This study aimed to provide a comprehensive view of global research activity on the OSCE over the past decades and to identify clues for its improvement. We performed a bibliometric and scientometric analysis of OSCE papers published until March 2024, including a description of overall scientific productivity, an unsupervised analysis of the main topics, and the international scientific collaborations. A total of 3,224 items were identified in the Scopus database. There was a sudden spike in publications, especially related to virtual/remote OSCEs, from 2020 to 2024. We identified leading journals and countries in terms of numbers of publications and citations. A co-occurrence term network identified three main clusters corresponding to different research topics: two connected clusters related to OSCE performance and reliability, and a third cluster on students' experience, mental health (anxiety), and perception, with few connections to the previous two. Finally, the United States, the United Kingdom, and Canada were identified as leading countries in terms of scientific publications and collaborations in an international network involving other European countries (the Netherlands, Belgium, Italy) as well as Saudi Arabia and Australia; the analysis also revealed a lack of substantial collaboration with Asian countries. Several avenues for improving OSCE research were identified: (i) developing remote OSCEs, with comparative studies between live and remote formats and international recommendations for sharing remote OSCEs between universities and countries; (ii) fostering international collaborative studies with the support of key collaborating countries; and (iii) investigating the relationships between student performance and anxiety.


Subjects
Bibliometrics, Clinical Competence, Undergraduate Medical Education, Educational Measurement, Humans, Educational Measurement/methods, Educational Measurement/standards, Undergraduate Medical Education/standards, Reproducibility of Results, Medical Students/psychology, Medical Students/statistics & numerical data, Biomedical Research/standards
13.
Med Teach ; : 1-6, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38943517

ABSTRACT

PURPOSE OF ARTICLE: This paper explores issues pertinent to the teaching and assessment of clinical skills in the early stages of medical training, aimed at preventing breaches of academic integrity. The drivers for change, the changes themselves, and student perceptions of those changes are described. METHODS: Iterative changes to a summative high-stakes Objective Structured Clinical Examination (OSCE) in an undergraduate medical degree were undertaken in response to perceived or known breaches of assessment security. Initial strategies focused on implementing best-practice teaching and assessment design principles alongside increased examination security. RESULTS: These changes failed to prevent alleged sharing of examination content between students. A subsequent iteration saw a more radical departure from classic OSCE design, with students assessed on equivalent competencies rather than identical items (OSCE stations). This more recent approach was broadly acceptable to students and did not result in detectable breaches of academic integrity. CONCLUSIONS: Ever-increasing assessment security need not be the response to breaches of academic integrity. Using non-identical OSCE items across a cohort, underpinned by constructive alignment of teaching and assessment, may reduce the incentives to breach academic integrity, though face validity is not universal.

14.
JMIR Med Educ ; 10: e47438, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38904482

ABSTRACT

A significant component of Canadian medical education is the development of clinical skills, which the curriculum assesses through an objective structured clinical examination (OSCE). The OSCE assesses skills imperative to good clinical practice, such as patient communication, clinical decision-making, and medical knowledge. Despite the widespread implementation of this examination across academic settings, few preparatory resources exist that cater specifically to Canadian medical students. MonkeyJacket is a novel, open-access, web-based application built to give medical students an accessible and representative tool for clinical skill development for the OSCE and clinical settings. This viewpoint paper presents the development of the MonkeyJacket application and its potential to assist medical students in preparing for clinical examinations and practical settings. Few existing resources are web-based, accessible in terms of cost, specific to the Medical Council of Canada (MCC), and, most importantly, scalable. The goal of this study was to describe the potential utility of the application, particularly its capacity to provide practice and scalable formative feedback to medical students. MonkeyJacket was developed to give Canadian medical students the opportunity to practice their clinical examination skills and receive peer feedback through a centralized platform. The OSCE cases included in the application were developed using the MCC guidelines to ensure their applicability to a Canadian setting. There are currently 75 cases covering 5 specialties: cardiology, respirology, gastroenterology, neurology, and psychiatry. The application allows medical students to practice clinical decision-making skills in real time with their peers through a synchronous platform. Students can practice patient interviewing, clinical reasoning, developing differential diagnoses, and formulating a management plan, and they can receive both qualitative and quantitative feedback. Each clinical case is associated with an assessment checklist that students can access after practice sessions are complete; the checklist promotes personal improvement through peer feedback. The tool provides students with relevant case stems, follow-up questions that probe for differential diagnoses and management plans, assessment checklists, and the ability to review trends in their performance. MonkeyJacket thus promotes clinical skill development for OSCEs and clinical settings, offering medical learners formative, scalable feedback on patient interviewing and clinical reasoning while promoting interinstitutional learning. Widespread use of the application can increase the practice of, and feedback on, clinical skills among medical learners. This will not only benefit the learner; more importantly, it can provide downstream benefits for the most valuable stakeholder in medicine: the patient.


Subjects
Clinical Competence, Internet, Humans, Canada, Educational Measurement/methods, Medical Students, Medical Education/methods, Curriculum
15.
Am J Pharm Educ ; 88(9): 100734, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38944280

ABSTRACT

OBJECTIVE: To identify factors influencing patient responses in potentially sensitive situations that might lead to embarrassment (defined by politeness theory (PT) as positive face-threatening acts [FTAs]) or a sense of imposition (defined by PT as negative FTAs) during Objective Structured Clinical Examinations (OSCEs), and to assess participants' ability to mitigate such situations. METHODS: Nineteen OSCE video recordings of 10 pharmacy trainees interacting with mock patients were examined using the PT framework. All relevant participant speech acts were coded and quantified by type of FTA and the mitigation strategies used. Patient (assessor) responses were classified and quantified as preferred (ie, quick) vs dispreferred (ie, delayed or hesitant) using conversation analysis. The chi-square test was used to identify associations between relevant variables according to predefined hypotheses, using SPSS version 27. RESULTS: A total of 848 FTAs were analyzed. Participants failed to meet patient face needs in 32.4% of positive FTAs, 11.5% of negative FTAs, and 44.4% of combined positive and negative FTAs. Although patients disclosing information about inappropriate lifestyle behaviors (as per the OSCE scripts) expressed these via dispreferred mannerisms, participants were less likely to provide reassurance when patient face needs were challenged in this way (68.2% of these dispreferred responses received no reassuring feedback) than when they were maintained. CONCLUSION: Improving educational programs to cover patient face needs and conversational strategies for handling highly sensitive situations is suggested as a way to equip trainees with the skills to effectively build rapport with patients.

16.
Cureus ; 16(4): e59008, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38800217

ABSTRACT

INTRODUCTION: Medical communication skills are a critical component of clinical medicine and patient satisfaction. Communication skills are difficult to teach and evaluate, necessitating tools that are both effective and efficient. This study presents and validates the 7 Elements Communication Rating Form (7E-CRF), a streamlined, dual-purpose, evidence-based medical communication checklist that functions as both a teaching and an assessment tool. METHOD: The 14-item teaching and assessment tool is described and validated using face, concurrent, and predictive validity indices. The study was conducted with 661 medical students from the West Virginia School of Osteopathic Medicine (WVSOM). Student performance was assessed in year 1 labs, year 2 labs, and year 2 and year 3 objective structured clinical examinations (OSCEs). These internal indices were compared with student performance on the Humanistic Domain of the Comprehensive Osteopathic Medical Licensing Examination (COMLEX) Level 2-Performance Evaluation (PE), a licensure exam previously taken in year 3 or 4 of osteopathic medical school. RESULTS: The evidence of interrater reliability and predictive validity is strong. 7E-CRF scores were compared with performance on the COMLEX Level 2-PE Humanistic Domain, and the 7E-CRF can identify students at a 10-fold increased risk of failure on that domain. CONCLUSIONS: The 7E-CRF integrates instruction and assessment based on a national and international model. Its simplicity, foundation in professional consensus, ease of use, and predictive efficacy make it a highly valuable instrument for medical schools teaching and evaluating competency in medical communication skills.

17.
J Dent Educ ; 2024 May 12.
Article in English | MEDLINE | ID: mdl-38736189

ABSTRACT

PURPOSE: This study evaluates how student performance and perspectives changed when the Objective Structured Clinical Exam (OSCE) assessment system at the Harvard School of Dental Medicine was changed from a composite score to discipline-specific grading. METHODS: The retrospective study population consisted of all students (n = 349) who completed three OSCEs (OSCEs 1, 2, and 3) as part of the predoctoral program during the years 2014-2023. Data on the students' OSCE scores were obtained from the Office of Dental Education, and data on students' race/ethnicity and gender were obtained from their admissions records. RESULTS: The likelihood of a student failing the OSCE increased significantly after the assessment system change, with an adjusted odds ratio of 20.12, while the number of failed subjects per student decreased, with an adjusted mean ratio of 0.48. Students perceived the OSCE as less useful after the change. Independent of the grading change, OSCEs 1 and 2 were seen as more useful than OSCE 3, which is administered in the last year of the Doctor of Dental Medicine program. CONCLUSION: The discipline-specific nature of the new assessment system isolates the actual areas of deficiency and focuses remediation on them, rather than the blanket remediation used previously, so that students can align their learning needs appropriately. Therefore, although the number of failures identified increased, the assessment change yields more directed, actionable information from the OSCE to prepare students to work toward competency standards.
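To make the odds-ratio language concrete: an unadjusted odds ratio for OSCE failure before vs after a grading change is the ratio of the two failure odds from a 2x2 table. The counts below are hypothetical, and the study's reported value (adjusted OR 20.12) came from a regression adjusting for covariates, which this unadjusted sketch does not attempt.

```python
# Hypothetical counts of students failing vs passing the OSCE
# under the new (discipline-specific) and old (composite) systems.
fail_after, pass_after = 40, 140    # hypothetical
fail_before, pass_before = 3, 166   # hypothetical

# Odds of failure under each system, and their ratio.
odds_after = fail_after / pass_after
odds_before = fail_before / pass_before
or_unadjusted = odds_after / odds_before
print(f"unadjusted OR = {or_unadjusted:.2f}")
```

An OR well above 1 here means failure became much more likely after the change; an adjusted OR, as reported in the study, additionally controls for student characteristics such as cohort and demographics.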

18.
GMS J Med Educ ; 41(2): Doc14, 2024.
Article in English | MEDLINE | ID: mdl-38779694

ABSTRACT

Modern medical moulages are becoming increasingly important in simulation-based health professions education. Their lifelikeness matters so that simulation engagement is not disrupted, while their standardization is crucial in high-stakes exams. This report describes in detail how three-dimensional transfers are developed and produced so that educators can develop their own. In addition, evaluation findings and lessons learnt from deploying transfers in summative assessments are shared. Step-by-step instructions are given for the creation and application of transfers, including materials and photographic visualizations. We also examined feedback on 10 exam stations (out of a total of 81) that used self-developed three-dimensional transfers and complemented this with additional lessons learnt. By the time of submission, the authors had successfully developed and deployed over 40 different three-dimensional transfers representing different clinical findings in high-stakes exams, using the techniques explained in this article or variations thereof. Feedback from students and examiners after completing the OSCE is predominantly positive, with lifelikeness the quality most often commented upon. Caveats derived from feedback and our own experience are included. The step-by-step approach reported here can be adapted and replicated by healthcare educators to build their own three-dimensional transfers, widening the scope and the lifelikeness of their simulations. At the same time, we propose that learners should expect this level of lifelikeness so that simulation engagement is not disrupted. Our evaluation of their use in high-stakes assessments suggests they are both useful and accepted.


Subjects
Simulation Training , Humans , Simulation Training/methods , Educational Measurement/methods , Clinical Competence/standards , Skin Diseases , Models, Anatomic , Three-Dimensional Imaging
19.
Rev Neurol (Paris) ; 2024 May 04.
Article in English | MEDLINE | ID: mdl-38705796

ABSTRACT

BACKGROUND: There is little consensus on how to announce a diagnosis of severe chronic disease in neurology. Other medical specialties, such as oncology, have developed assessment methods similar to the Objective Structured Clinical Examination (OSCE) to address this issue. Here we report the implementation of an OSCE focused on the announcement of a chronic disease diagnosis in neurology by residents. OBJECTIVE: We aimed to evaluate the acceptability, feasibility, and validity in routine practice of an OSCE combined with a theoretical course focused on diagnosis announcement in neurology. METHOD: Eighteen neurology residents were prospectively included between 2019 and 2022. First, they answered a questionnaire on their previous level of training in diagnosis announcement. Second, in a practical session with a simulated patient, they made a 15-minute diagnosis announcement followed by 5 minutes of immediate feedback from an expert observer present in the room. The OSCE consisted of 4 different stations with standardized scenarios dedicated to the announcement of multiple sclerosis (MS), Parkinson's disease (PD), Alzheimer's disease (AD), and amyotrophic lateral sclerosis (ALS). Third, in a theory session, expert observers covered the essential theoretical points. All residents and expert observers completed an evaluation of the practical session and the theory session. RESULTS: Residents estimated their previous level of diagnosis announcement training at 3.1/5. The most feared announcements were AD and ALS. The practical session was rated at a mean of 4.1/5 by the residents and 4.8/5 by the expert observers, and the theory session at a mean of 4.7/5 by the residents and 5/5 by the expert observers. After the OSCEs, 11 residents felt more confident about making an announcement. CONCLUSION: This study showed a benefit of using an OSCE to learn how to announce a diagnosis of severe chronic disease in neurology. OSCEs could be used routinely in many departments and appear well suited to residents.

20.
JMIR Med Educ ; 10: e53997, 2024 04 30.
Article in English | MEDLINE | ID: mdl-38693686

ABSTRACT

SaNuRN is a five-year project by the University of Rouen Normandy (URN) and Côte d'Azur University (CAU) consortium to optimize digital health education for medical and paramedical students, professionals, and administrators. The project includes a skills framework, training modules, and teaching resources. By 2027, SaNuRN is expected to train a significant portion of the 400,000 health and paramedical students in France. Our purpose is to give a synopsis of the SaNuRN initiative, emphasizing its novel educational methods and how they will enhance the delivery of digital health education. Our goals include showcasing SaNuRN as a comprehensive program consisting of a proficiency framework, instructional modules, and educational materials, and explaining how SaNuRN is implemented in the participating academic institutions. SaNuRN educates and trains health and paramedical students in digital health. The project results from a cooperative effort between URN and CAU covering four French departments and is based on the French National Referential on Digital Health (FNRDH), which defines the skills and competencies to be acquired and validated by every student in the health, paramedical, and social professions curricula. The SaNuRN team is currently adapting the existing URN and CAU syllabi to the FNRDH and developing short video capsules of 20 to 30 minutes to teach the relevant material. To ensure that the largest possible student population acquires the necessary skills, the project has developed a two-tier system in which facilitators expand the project's educational outreach and support students in learning the material efficiently.
With a focus on real-world scenarios and innovative teaching activities integrating telemedicine devices and virtual professionals, SaNuRN is committed to enabling continuous learning for healthcare professionals in clinical practice. The SaNuRN team introduced new ways of evaluating healthcare professionals by shifting from a knowledge-based to a competency-based evaluation, aligning with the Miller teaching pyramid and using the Objective Structured Clinical Examination and the Script Concordance Test in digital health education. Drawing on the expertise of URN, CAU, and their public health and digital research laboratories and partners, the SaNuRN project is a platform for continuous innovation, including telemedicine training and living labs with virtual and interactive professional activities. SaNuRN provides a comprehensive, personalized 30-hour training package for health and paramedical students, addressing all 70 FNRDH competencies. The program is enhanced with artificial intelligence (AI) and natural language processing (NLP) to create virtual patients and professionals for digital healthcare simulation. SaNuRN teaching materials are open access. The project collaborates with academic institutions worldwide to develop digital health educational material in English and multilingual formats. SaNuRN offers a practical and persuasive training approach that meets current digital health education requirements.


Subjects
Health Education , Distance Education/methods , Distance Education/trends , Forecasting , Health Education/trends , Health Education/methods