Results 1 - 20 of 653
1.
Acad Radiol ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39289096

ABSTRACT

RATIONALE AND OBJECTIVES: Traditional medical student radiology experiences often lack interactivity and fail to replicate the clinical experience of being a radiologist. This study introduces SCRAPS, a novel simulation-based paradigm designed to improve the medical student experience and provide an active learning opportunity as part of the radiology rotation. MATERIALS AND METHODS: SCRAPS uses a consumer-grade laptop, common word-processing software, a free-to-use PACS, and resident instructors to place students in a simulated reading-room environment. Students interpret pre-selected cases, dictate reports, and discuss findings during resident-led debriefing. Sessions lasted 60 to 90 min. Feedback was collected from 120 participating students (23 third-year and 97 fourth-year) via an anonymous survey. RESULTS: Students rated SCRAPS highly for its unique nature, its enjoyability, and the insight it provided into performing clinical radiology tasks, and endorsed it as valuable to their education. CONCLUSION: SCRAPS demonstrates promise for medical student education. It aligns with constructivist educational principles and is relatively easy to implement and adapt to new educational challenges.

2.
Cureus ; 16(8): e66530, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39252737

ABSTRACT

INTRODUCTION: Assessing clinical judgement objectively and economically presents a challenge in academic medicine. The authors developed a situational judgement test (SJT) to measure fourth-year medical students' clinical judgement. METHODS: A knowledge-based, single-best-answer SJT was developed by a panel of subject matter experts (SMEs). The SJT included 30 scenarios, each with five response options ranked ordinally from most to least appropriate. A computer-based format was used, and the SJT was piloted by two cohorts of fourth-year medical students at California University of Science and Medicine in 2022 and 2023 upon completion of an internship preparation course. Subsequently, students completed an optional survey. Evaluated scoring methods included the original ordinal ranking, dichotomous, dichotomous with negative correction, distance from the SME best answer, and distance from the SME best answer squared. RESULTS: The SJT was completed by 142 fourth-year medical students. Cronbach's alpha ranged from 0.39 to 0.85, depending on the scoring method used. The distance-from-SME-best-answer-squared method yielded the highest internal consistency, which was considered acceptable. Using this scoring method, the mean score was 72.89 (SD = 48.32, range = 26-417), and the standard error of measurement was 18.41. Item analysis found that seven (23%) scenarios were of average difficulty, 13 (43%) had a good or satisfactory discrimination index, and nine (30%) had a distractor efficiency of at least 66%. Most students preferred the SJT to a traditional multiple-choice exam (16; 62%) and felt it was an appropriate tool to assess clinical judgement (15; 58%). CONCLUSIONS: The authors developed and piloted an SJT to assess clinical judgement among medical students. Although validation was not achieved, subsequent development of the SJT will focus on expanding the SME concordance panel, improving difficulty and discrimination indices, and conducting parallel-forms reliability and adverse-impact analyses.
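The competing scoring methods this abstract evaluates can be sketched in a few lines. The mapping below is illustrative only: it assumes each scenario's five options carry SME ranks 1 (best) to 5 (least appropriate), and the function name, the -0.25 penalty, and the example responses are invented for the sketch, not taken from the study (note the direction of "better" differs by family: higher for dichotomous, lower for distance).

```python
def score_response(chosen_rank, method):
    """Score one SJT scenario from the SME rank (1 = best ... 5 = worst)
    of the option the candidate chose. Illustrative implementations of
    the scoring families named in the abstract."""
    if method == "dichotomous":            # 1 point only for the best answer
        return 1 if chosen_rank == 1 else 0
    if method == "dichotomous_negative":   # wrong answers penalised (penalty invented)
        return 1 if chosen_rank == 1 else -0.25
    if method == "distance":               # distance from the SME best answer
        return chosen_rank - 1
    if method == "distance_squared":       # squared distance (highest alpha in the study)
        return (chosen_rank - 1) ** 2
    raise ValueError(method)

# A hypothetical candidate's chosen-option ranks across five scenarios:
responses = [1, 2, 1, 4, 3]
total_sq = sum(score_response(r, "distance_squared") for r in responses)
print(total_sq)  # for the distance families, lower totals = closer to SME consensus
```

The squared-distance variant weights large departures from the SME consensus disproportionately, which is one plausible reason it yielded the highest internal consistency.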

3.
BMC Med Educ ; 24(1): 994, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39267024

ABSTRACT

BACKGROUND: Breaking bad news is one of the most difficult aspects of communication in medicine. The objective of this study was to assess the relevance of a novel active learning course on breaking bad news for fifth-year students. METHODS: Students were divided into two groups: Group 1, the intervention group, participated in a multidisciplinary formative discussion workshop on breaking bad news with videos and discussions with a pluri-professional team, concluding with the development of a guide on good practice in breaking bad news through collective intelligence; Group 2, the control group, received no additional training besides the conventional university course. The relevance of discussion-group-based active training was assessed in a summative objective structured clinical examination (OSCE) station, particularly through the students' communication skills. RESULTS: Thirty-one students were included: 17 in Group 1 and 14 in Group 2. The mean (range) score in the OSCE was significantly higher in Group 1 than in Group 2 (10.49 out of 15 (7; 13) vs. 7.80 (4.75; 12.5), respectively; p = 0.0007). The proportion of students assessed by the evaluator to have received additional training in breaking bad news was 88.2% (15 of the 17) in Group 1 and 21.4% (3 of the 14) in Group 2 (p = 0.001). The intergroup differences in the Rosenberg Self-Esteem Scale and Jefferson Scale of Empathy scores were not significant, and neither score was correlated with the students' self-assessed score for success in the OSCE. CONCLUSION: Compared to the conventional course, this new active learning method for breaking bad news was associated with a significantly higher score in a summative OSCE. A longer-term validation study is needed to confirm these exploratory data.


Subjects
Physician-Patient Relations, Problem-Based Learning, Medical Students, Truth Disclosure, Humans, Medical Students/psychology, Female, Male, Communication, Undergraduate Medical Education/methods, Educational Assessment, Clinical Competence
4.
Article in English | MEDLINE | ID: mdl-39235519

ABSTRACT

In healthcare, effective communication in complex situations such as end-of-life conversations is critical for delivering high-quality care. Whether residents learn from communication training with actors depends on whether they are able to select appropriate information, or 'predictive cues', from that learning situation that accurately reflects their own or their peers' performance, and whether they use those cues for ensuing judgement. This study aimed to explore whether prompts can help medical residents improve their use of predictive cues and their judgement of communication skills. First- and third-year Kenyan residents (N = 41) from 8 different specialties were randomly assigned to one of two experimental groups during a mock OSCE assessing advanced communication skills. Residents in the intervention arm received paper predictive-cue prompts, while residents in the control arm received paper regular prompts for self-judgement. In a pre- and post-test, residents' use of predictive cues and the appropriateness of peer judgements were evaluated against a pre-rated video of another resident. The intervention improved the use of predictive cues in both self-judgement and peer judgement. The ensuing accuracy of peer judgements from pre- to post-test only partly improved: no effect of the intervention was found on the overall appropriateness of judgements. However, when analyzing participants' completeness of judgements over the various themes within the consultation, a reduction in inappropriate judgement scores was seen in the intervention group. In conclusion, predictive-cue prompts can help learners concentrate on relevant cues when evaluating communication skills and partly improve monitoring accuracy. Future research should focus on offering prompts more frequently to evaluate whether this increases the effect on monitoring accuracy in communication skills.

5.
BMC Med Educ ; 24(1): 954, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223576

ABSTRACT

BACKGROUND: Near-peer teaching is a popular pedagogical tool; however, many existing models fail to demonstrate benefits in summative OSCE performance. The 3-step deconstructed (3-D) skills near-peer model was recently piloted in undergraduate medicine, showing short-term improvement in formative OSCE performance utilising social constructivist educational principles. This study aims to assess whether 3-D skills model teaching affects summative OSCE grades. METHODS: Seventy-nine third-year medical students attended a formative OSCE event at the University of Glasgow, receiving an additional 3 minutes per station of either 3-D skills teaching or time-equivalent unguided practice. Students' summative OSCE results were compared against the year cohort to establish whether there was any difference in time-delayed summative OSCE performance. RESULTS: The 3-D skills and unguided practice cohorts had comparable demographic data and baseline formative OSCE performance. Both the 3-D skills cohort and the unguided practice cohort achieved significantly higher median station pass rates at summative OSCEs than the rest of the year. This corresponded to one additional station pass in the 3-D skills cohort, which would increase median grade banding from B to A. The improvement in the unguided practice cohort did not achieve educational significance. CONCLUSION: Incorporating the 3-D skills model into a formative OSCE is associated with significantly improved performance at summative OSCEs. This expands on the conflicting literature for formative OSCE sessions, which have shown mixed translation to summative performance, and suggests merit in institutional investment to improve clinical examination skills.


Subjects
Clinical Competence, Undergraduate Medical Education, Educational Assessment, Humans, Undergraduate Medical Education/methods, Case-Control Studies, Medical Students, Female, Male, Educational Models, Peer Group
6.
Article in English | MEDLINE | ID: mdl-39264068

ABSTRACT

OBJECTIVE: The purpose of the present study was to assess the benefits of simulation for advancing knowledge and assisting healthcare staff in optimizing procedures when managing severe pre-eclampsia/eclampsia (sPE/E). METHODS: A randomized educational trial was conducted with two groups: group I received theoretical training, while group II received the same training along with simulation scenarios based on the management of sPE/E. The study involved 199 healthcare providers, including physicians, midwives, skilled birth attendants, and nurses. The study analyzed the percentage of correct answers on both the multiple-choice questions (MCQ) and the objective structured clinical examinations (OSCE) to evaluate theoretical knowledge and clinical skills objectively. RESULTS: Statistically significant differences were found immediately after training between groups I and II, whose mean percentages were 65.0% (±11.2) versus 71.0% (±9.8) (P < 0.001). A statistically significant reduction in the percentage of correct answers was found in both groups between the immediate post-training test and the post-training test at 3 months: 11.6% (±1.3) in group I versus 7.2% (±0.6) in group II. OSCE1 and OSCE2 scores were significantly higher in group II than in group I (P < 0.001). CONCLUSION: Simulation combined with theoretical training appears to be a promising method for advancing knowledge and improving the skills of healthcare providers in their management of sPE/E. Our goal is for this method to be used to reduce real-life maternal mortality in the South Kivu region of the Democratic Republic of Congo.

7.
J Gen Intern Med ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103602

ABSTRACT

BACKGROUND: Workplace violence disproportionately affects healthcare workers, and verbal aggression from patients occurs frequently. While verbal de-escalation is the first-line approach to defusing anger, there is a lack of consistent curricula or robust evaluation in undergraduate medical education. AIM: To develop a medical school curriculum focused on de-escalation skills for adult patients and evaluate its effectiveness with surveys and an objective structured clinical examination (OSCE). SETTING: We implemented this curriculum in the "Get Ready for Residency Bootcamp" of a single large academic institution in 2023. PARTICIPANTS: Forty-four fourth-year medical students. PROGRAM DESCRIPTION: The curriculum consisted of an interactive didactic session focused on our novel CALMER framework, which prioritized six evidence-based de-escalation skills, and a separate standardized patient practice session. PROGRAM EVALUATION: The post-curriculum survey (82% response rate) found a significant increase, from 2.79 to 4.11 out of 5 (p ≤ 0.001), in confidence using verbal de-escalation. Preparedness improved for every skill, and curriculum satisfaction averaged 4.79 out of 5. The OSCE found no differences in skill level between students who received the curriculum and those who did not. DISCUSSION: This evidence-based and replicable de-escalation skills curriculum improves medical student confidence and preparedness in managing agitated patients.

8.
BMC Med Educ ; 24(1): 866, 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39135004

ABSTRACT

BACKGROUND: Clinical practitioners consider frequent causes of disease first rather than expending resources searching for rare conditions. However, it is important to continue investigating when all common illnesses have been ruled out. Undergraduate medical students must acquire the skills to listen and ask relevant questions when seeking a potential diagnosis. METHODOLOGY: Our objective was to determine whether team-based learning (TBL) focused on clinical reasoning in the context of rare diseases combined with video vignettes (intervention) improved the clinical and generic skills of students compared with TBL alone (comparator). We followed a single-center quasi-experimental posttest-only design involving fifth-year medical students. RESULTS: The intervention group (n = 178) had a significantly higher mean overall score on the objective structured clinical examination (OSCE) (12.04 ± 2.54 vs. 11.27 ± 3.16; P = 0.021) and higher mean percentage scores in clinical skills (47.63% vs. 44.63%; P = 0.025) and generic skills (42.99% vs. 40.33%; P = 0.027) than the comparator group (n = 118). Success on the OSCE was significantly associated with the intervention (P = 0.002). CONCLUSIONS: The TBL-with-video-vignettes curriculum was associated with better performance of medical students on the OSCE. The concept presented here may be beneficial to other teaching institutions.


Subjects
Clinical Competence, Curriculum, Undergraduate Medical Education, Educational Assessment, Medical Students, Humans, Undergraduate Medical Education/methods, Female, Male, Video Recording, Problem-Based Learning, Group Processes
9.
Adv Simul (Lond) ; 9(1): 34, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39148109

ABSTRACT

BACKGROUND: Dermatological conditions are a common reason for patients to seek healthcare advice. However, they are often under-represented in Objective Structured Clinical Examinations (OSCEs). Given the visual nature of skin conditions, simulation is well suited to recreating them in assessments such as OSCEs. One technique often used in simulation is moulage: the art and science of using special-effects make-up techniques to replicate a wide range of conditions on Simulated Participants or manikins. However, the contextual nature of OSCEs poses additional challenges compared with using moulage in more general forms of simulation-based education. MAIN BODY: OSCEs are high-stakes assessments and require standardisation across multiple OSCE circuits. In addition, OSCEs tend to have large numbers of candidates, so moulage needs to be durable in this context. Given the need to expand the use of moulage in OSCE stations and the unique challenges that arise in OSCEs, guiding principles are required to inform its use and development. CONCLUSION: Informed by evidence and grounded in experience, this article aims to provide practical tips for health professions education faculty on how best to optimise the use of moulage in OSCEs. We describe the process of designing an OSCE station, with a focus on including moulage. Secondly, we provide a series of important practice points for using moulage in OSCEs, and encourage readers to integrate them into their day-to-day practice.

10.
Curr Pharm Teach Learn ; 16(11): 102159, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39089218

ABSTRACT

PURPOSE: Objective structured clinical examinations (OSCEs) are a valuable assessment within healthcare education, as they provide the opportunity for students to demonstrate clinical competency, but they can be resource-intensive because faculty graders must be provided. The purpose of this study was to determine how overall OSCE scores compared between faculty, peer, and self-evaluations within a Doctor of Pharmacy (PharmD) curriculum. METHODS: This study was conducted during the required nonprescription therapeutics course. Seventy-seven first-year PharmD students were included in the study, with 6 faculty members grading 10-15 students each. Students were evaluated by 3 graders: self, peer, and faculty. All evaluators utilized the same rubric. The primary endpoint of the study was to compare the overall scores between groups. Secondary endpoints included interrater reliability and quantification of feedback type by evaluator group. RESULTS: The maximum possible score for the OSCE was 50 points; the mean scores for self, peer, and faculty evaluations were 43.3, 43.5, and 41.7 points, respectively. No statistically significant difference was found between the self and peer raters. However, statistical significance was found in the comparisons of self versus faculty (p = 0.005) and peer versus faculty (p < 0.001). When these scores were converted to a letter grade (A, B, C or less), higher grades had greater similarity among raters than lower scores. Despite differences in scoring, the interrater reliability (W score) on overall letter grade was 0.79, which is considered strong agreement. CONCLUSIONS: This study demonstrated that peer and self-evaluation of an OSCE provides a comparable alternative to traditional faculty grading, especially for higher-performing students. However, due to differences in overall grades, this strategy should be reserved for low-stakes assessments and basic skill evaluations.
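The "W score" reported here reads as Kendall's coefficient of concordance across the three rater types. A minimal sketch is below; it assumes no tied scores within a rater (so no tie correction is applied), and the example numbers are hypothetical, not the study's data.

```python
import numpy as np

def kendalls_w(scores):
    """Kendall's coefficient of concordance for an (m raters x n subjects)
    score matrix: W = 12*S / (m^2 * (n^3 - n)), where S is the sum of
    squared deviations of the per-subject rank sums from their mean.
    Ties within a rater are assumed absent (no tie correction)."""
    scores = np.asarray(scores, dtype=float)
    m, n = scores.shape
    # convert each rater's raw scores to within-rater ranks (1 = lowest)
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# three raters (self, peer, faculty) scoring four hypothetical students out of 50
scores = [[43, 40, 48, 36],
          [44, 41, 47, 38],
          [41, 46, 45, 35]]
print(round(kendalls_w(scores), 2))
```

W ranges from 0 (no agreement) to 1 (identical rankings), which is why a value of 0.79 on letter grades can be read as strong agreement even when mean scores differ between rater groups.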

11.
BMC Med Educ ; 24(1): 936, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39198877

ABSTRACT

INTRODUCTION: Studies have reported different results for evaluation methods of clinical competency tests. Therefore, this study aimed to design, implement, and evaluate a blended (in-person and virtual) clinical competency examination (CCE) for final-year nursing students. METHODS: This interventional study was conducted over two semesters of 2020-2021 using an educational action research method in the nursing and midwifery faculty. Thirteen faculty members and 84 final-year nursing students were included in the study using a census method. Eight programs and related activities were designed and conducted during the examination process. Students completed the Spielberger Anxiety Inventory before the examination, and both faculty members and students completed the Acceptance and Satisfaction questionnaire. FINDINGS: The analysis of focused group discussions and reflections indicated that the virtual CCE was not capable of adequately assessing clinical skills. Therefore, it was decided that the CCE for final-year nursing students would be conducted using a blended method. The activities required for performing the examination were designed and implemented based on action plans. Anxiety and satisfaction were also evaluated as outcomes of the study. There was no statistically significant difference in overt, covert, and overall anxiety scores between the in-person and virtual sections of the examination (p > 0.05). The mean (SD) acceptance and satisfaction scores for students in the virtual, in-person, and blended sections were 25.49 (4.73), 27.60 (4.70), and 25.57 (4.97), respectively, out of 30 points, with a significant increase in the in-person section compared to the other sections (p = 0.008). The mean acceptance and satisfaction scores for faculty members, out of 33 points, were 30.31 (4.47) in the virtual, 29.86 (3.94) in the in-person, and 30.00 (4.16) in the blended section, with no significant difference between the three sections (p = 0.864).
CONCLUSION: Evaluating nursing students' clinical competency using a blended method was implemented and resolved the obstacle to students' graduation. Therefore, it is suggested that the blended method be used instead of traditional in-person or entirely virtual exams during epidemics or according to conditions, facilities, and human resources. The use of patient simulation, virtual reality, and the development of the necessary virtual and in-person training infrastructure for students is recommended for future research. Furthermore, considering that students' acceptance of traditional in-person exams is higher, it is necessary to develop virtual teaching strategies.


Subjects
Clinical Competence, Educational Assessment, Nursing Students, Humans, Educational Assessment/methods, Nursing Students/psychology, Baccalaureate Nursing Education, Male, Female
12.
SVOA Med Res ; 2(1): 10-18, 2024.
Article in English | MEDLINE | ID: mdl-39144736

ABSTRACT

In March 2020, the University of Hawaii John A. Burns School of Medicine suspended in-person clinical teaching due to the SARS-CoV-2 (COVID) pandemic. During this period, virtual cases, telehealth participation, and online cases were incorporated into medical education. We examined the educational outcomes of third- and fourth-year students through clerkship performance, national standardized test scores, and our local fourth-year OSCE. We found that USMLE Step 2 scores were higher in the COVID-affected group. Patient logs in the COVID-affected group were lower for the internal medicine, family medicine, OBGYN, and psychiatry clerkships. Clerkship performance grades in the COVID-affected group were lower for OBGYN and higher for surgery and psychiatry, but not different in other clerkships. NBME subject-specific examination scores in the COVID-affected group were higher for internal medicine, surgery, family medicine, and psychiatry, but not different in all other specialties. On the fourth-year OSCE, students in the COVID-affected group performed better on note-taking and worse on the physical examination. Future investigations will be needed to explore how our COVID-affected medical students perform in residency and beyond.

13.
Cureus ; 16(7): e65643, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39205707

ABSTRACT

BACKGROUND: Objective structured clinical examinations (OSCEs) are the gold standard of clinical assessment and are used to conduct undergraduate family medicine clinical assessment at King Faisal Specialist Hospital and Research Centre (KFSHRC). Some studies have suggested that simulated patient (SP) ratings could provide a better measure of empathy and communication skills than physician scores. The objective of this study is to further explore the effectiveness of SP ratings in undergraduate OSCE assessments. METHODS: The research employed a mixed-methods approach. Three OSCE assessments for final-year students were selected. Both physicians and SPs evaluated each student, providing global ratings across four domains. The quantitative aspect involved comparing physician and SP scores and assessing their correlation. The qualitative aspect involved interviewing SPs to establish which student behaviours led to higher or lower scores. RESULTS: A moderate correlation was found between physician ratings and SP ratings (r=0.53, p<0.01). Internal consistency of the SP ratings was lower than that of physician scores. SPs considered themselves patient advocates and were keen to give formative feedback. The ability of the trainee to truly listen was a major concern. SP scoring was relatively holistic in nature. CONCLUSIONS: The results demonstrate that SP scores have slightly weaker reliability but are still relevant and offer a completely different perspective, enriching the assessment data. Assessment should take patient or SP perspectives into account and not rely solely on the expert physician. Changing the assessment methods will lead to necessary changes in student approach to the OSCE and improve authenticity and validity.

14.
Nurse Educ Pract ; 80: 104120, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39213838

ABSTRACT

AIM: This research aims to examine the effect of using formative assessment methods in clinical education on students' knowledge, skills, and self-efficacy levels. BACKGROUND: Formative assessment is a method designed to identify areas where students fall short and provide feedback for improvement. Formative assessment and feedback represent fundamental characteristics of quality teaching in higher education and play a decisive role in learning in nursing education. Although educators observe students performing practical tasks during clinical education, evaluation is not performed with a structured checklist. Therefore, just as nursing students are evaluated with "Skill Checklists" in the OSCE exam, there is a need to evaluate nursing skills during patient care in the clinical field. DESIGN: The study was designed as a pre-test post-test randomized controlled experimental study. METHOD: Before the research, both groups filled out the self-efficacy form. The experimental group received formative assessment throughout the course. At the end of the semester, all students were given a skills test and asked to fill out the self-efficacy form again. Finally, a knowledge test was administered to the entire class. RESULTS: The average knowledge score of the experimental group was higher than that of the control group. A statistically significant difference of 16.54 points was found in average skill scores between the groups. Post-tests showed significant differences in skills such as the breathing-cough exercise, basic glycemic measurement, subcutaneous injection, and blood collection. CONCLUSION: The formative assessment method increased nursing students' knowledge, skills, and self-efficacy levels regarding basic nursing skills.

15.
Cureus ; 16(6): e61564, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38962609

ABSTRACT

INTRODUCTION: Objective Structured Clinical Examinations (OSCEs) are essential assessments for evaluating the clinical competencies of medical students. The COVID-19 pandemic caused a significant disruption in medical education, prompting institutions to adopt virtual formats for academic activities. This study analyzes the feasibility, satisfaction, and experiences of pediatric board candidates and faculty during virtual or electronic OSCE (e-OSCE) training sessions using Zoom video communication (Zoom Video Communications, Inc., San Jose, USA). METHODS: This is a post-event survey assessing the perceptions of faculty and candidates and the perceived advantages and obstacles of e-OSCE. RESULTS: A total of 142 participants were invited to complete a post-event survey, and 105 (73.9%) completed the survey. There was equal gender representation. More than half of the participants were examiners. The overall satisfaction with the virtual e-OSCE was high, with a mean score of 4.7±0.67 out of 5. Most participants were likely to recommend e-OSCE to a friend or colleague (mean score 8.84±1.51/10). More faculty (66.1%) than candidates (40.8%) preferred e-OSCE (P=0.006). CONCLUSION: Transitioning to virtual OSCE training during the pandemic proved feasible, with high satisfaction rates. Further research on virtual training for OSCE in medical education is recommended to optimize its implementation and outcomes.

16.
Med Teach ; : 1-9, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976711

ABSTRACT

INTRODUCTION: Ensuring equivalence in high-stakes performance exams is important for patient safety and candidate fairness. We compared inter-school examiner differences within a shared OSCE and the resulting impact on students' pass/fail categorisation. METHODS: The same six-station formative OSCE ran asynchronously in 4 medical schools, with 2 parallel circuits per school. We compared examiners' judgements using Video-based Examiner Score Comparison and Adjustment (VESCA): examiners scored station-specific comparator videos in addition to 'live' student performances, enabling (1) controlled score comparisons by (a) examiner cohort and (b) school, and (2) data linkage to adjust for the influence of examiner cohorts. We calculated the score impact and the change in pass/fail categorisation by school. RESULTS: On controlled video-based comparisons, inter-school variations in examiners' scoring (16.3%) were nearly double within-school variations (8.8%). Students' scores received a median adjustment of 5.26% (IQR 2.87-7.17%). The impact of adjusting for examiner differences on students' pass/fail categorisation varied by school, with adjustment reducing the failure rate from 39.13% to 8.70% (school 2) whilst increasing failure from 0.00% to 21.74% (school 4). DISCUSSION: Whilst the formative context may partly account for differences, these findings query whether variations may exist between medical schools in examiners' judgements. This may benefit from systematic appraisal to safeguard equivalence. VESCA provided a viable method for comparisons.
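The linkage idea behind VESCA can be illustrated with a deliberately simplified sketch: each examiner cohort scores the same comparator videos, a cohort's mean video score minus the overall video mean estimates its stringency or leniency offset, and that offset is removed from the live scores it awarded. The function, the single-offset model, and the numbers are illustrative assumptions, not the published adjustment procedure.

```python
def vesca_adjust(live_scores, video_scores):
    """Simplified VESCA-style linkage (illustrative, not the published method).
    live_scores:  {cohort: [live student score %, ...]}
    video_scores: {cohort: [score % awarded to shared comparator videos, ...]}
    Because every cohort rates the same videos, differences in mean video
    score reflect examiner stringency rather than student ability."""
    overall = sum(sum(v) for v in video_scores.values()) / sum(
        len(v) for v in video_scores.values())
    # per-cohort leniency offset relative to the pooled video mean
    offsets = {c: sum(v) / len(v) - overall for c, v in video_scores.items()}
    # remove each cohort's offset from the live scores it awarded
    return {c: [s - offsets[c] for s in scores]
            for c, scores in live_scores.items()}

videos = {"school_A": [62, 58, 70], "school_B": [54, 50, 62]}  # same 3 videos
live = {"school_A": [68, 55], "school_B": [60, 47]}
print(vesca_adjust(live, videos))
```

In this toy example school A's examiners are 4 points more lenient than school B's, so two pairs of students with different raw scores turn out to be equivalent after adjustment, which mirrors how adjustment can move students across a fixed pass mark.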

17.
BMC Med Educ ; 24(1): 817, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075511

ABSTRACT

CONTEXT: Objective Structured Clinical Examinations (OSCEs) are an increasingly popular evaluation modality for medical students. While the face-to-face interaction allows for more in-depth assessment, it may cause standardization problems. Methods to quantify, limit, or adjust for examiner effects are needed. METHODS: Data originated from 3 OSCEs undergone by 900-student classes of 5th- and 6th-year medical students at Université Paris Cité in the 2022-2023 academic year. Sessions had five stations each, and one of the three sessions was scored by consensus by two raters (rather than one). We report the OSCEs' longitudinal consistency for one of the classes, and staff-related and student variability by session. We also propose a statistical method to adjust for inter-rater variability by deriving a statistical random student effect that accounts for staff-related and station random effects. RESULTS: From the four sessions, a total of 16,910 station scores were collected from 2615 student sessions, with two of the sessions undergone by the same students, and 36, 36, 35 and 20 distinct staff teams per station in each session. Scores showed staff-related heterogeneity (p < 10⁻¹⁵), with staff-level standard errors approximately doubled compared to chance. With mixed models, staff-related heterogeneity explained respectively 11.4%, 11.6%, and 4.7% of station score variance (95% confidence intervals, 9.5-13.8, 9.7-14.1, and 3.9-5.8, respectively) with 1, 1 and 2 raters, suggesting a moderating effect of consensus grading. Student random effects explained a small proportion of variance, respectively 8.8%, 11.3%, and 9.6% (8.0-9.7, 10.3-12.4, and 8.7-10.5), and this low amount of signal resulted in student rankings being no more consistent over time with this metric than with average scores (p = 0.45). CONCLUSION: Staff variability impacts OSCE scores as much as student variability; the former can be reduced with dual assessment or adjusted for with mixed models. Both are small compared to unmeasured sources of variability, making them difficult to capture consistently.
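The "share of variance explained by staff" idea can be sketched with a simple method-of-moments (one-way ANOVA) variance-components estimate. This is far cruder than the study's crossed mixed models (no student or station effects, balanced toy data), and the example scores are invented so that staff teams explain roughly 11% of variance, matching only the order of magnitude reported.

```python
import numpy as np

def staff_variance_share(groups):
    """One-way ANOVA method-of-moments estimate of the share of score
    variance attributable to staff teams. `groups` is a list of equal-length
    score lists, one per staff team (balanced design assumed)."""
    scores = [np.asarray(g, float) for g in groups]
    k = len(scores)                         # number of staff teams
    n = len(scores[0])                      # scores per team (balanced)
    grand = np.concatenate(scores).mean()
    # between-team and within-team mean squares
    msb = n * sum((g.mean() - grand) ** 2 for g in scores) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in scores) / (k * (n - 1))
    var_staff = max((msb - msw) / n, 0.0)   # clip negative estimates to zero
    return var_staff / (var_staff + msw)    # intraclass correlation

# three hypothetical staff teams, four station scores each (out of 20)
teams = [[12.5, 15.5, 9.5, 12.5], [14, 17, 11, 14], [15.5, 18.5, 12.5, 15.5]]
print(round(staff_variance_share(teams), 2))
```

The returned intraclass correlation is the fraction of total score variance sitting between staff teams; consensus grading, as the abstract notes, shrinks exactly this component.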


Subjects
Clinical Competence, Educational Assessment, Observer Variation, Medical Students, Humans, Educational Assessment/methods, Educational Assessment/standards, Clinical Competence/standards, Undergraduate Medical Education/standards, Paris, Reproducibility of Results
18.
Article in English | MEDLINE | ID: mdl-39042360

ABSTRACT

Summative assessments are often underused for feedback, despite being rich in data on students' applied knowledge and clinical and professional skills. To better inform teaching and student support, this study aims to gain insights from summative assessments by profiling students' performance patterns and identifying those students missing the basic knowledge and skills in medical specialities essential for their future career. We use Latent Profile Analysis to classify a senior undergraduate year group (n = 295) based on their performance in an applied knowledge test (AKT) and an OSCE, in which items and stations are pre-classified across five specialities (e.g. Acute and Critical Care, Paediatrics,…). Four distinct groups of students with increasing average performance levels are identified in the AKT, and three such groups in the OSCE. Overall, these two classifications are positively correlated. However, some students do well in one assessment format but not in the other. Importantly, in both the AKT and the OSCE there is a mixed group containing students who have met the required standard to pass and students who have not. This suggests that the conception of a borderline group at the exam level can be overly simplistic. There is little literature relating AKT and OSCE performance in this way, and the paper discusses how our analysis gives placement tutors key insights into providing tailored support for distinct student groups needing remediation. It also gives additional information to assessment writers about the performance and difficulty of their assessment items/stations, and to wider faculty about students' overall performance and their performance across specialities.
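With continuous score indicators, Latent Profile Analysis is equivalent to fitting a finite Gaussian mixture model. The sketch below, on simulated (AKT, OSCE) scores rather than the study's data, shows the basic mechanics: fit a fixed number of profiles and obtain both hard profile assignments and posterior (soft) membership probabilities per student:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical standardized (AKT, OSCE) scores for three latent profiles;
# group sizes sum to n = 295 as in the study, but values are simulated.
low = rng.normal([-1.0, -1.0], 0.4, (80, 2))
mid = rng.normal([0.0, 0.2], 0.4, (130, 2))
high = rng.normal([1.2, 1.0], 0.4, (85, 2))
scores = np.vstack([low, mid, high])

# LPA with continuous indicators = Gaussian mixture; profiles = components.
lpa = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
profiles = lpa.fit_predict(scores)       # hard profile label per student
posterior = lpa.predict_proba(scores)    # soft membership probabilities

print(profiles.shape, posterior.shape)
```

In practice the number of profiles is chosen by comparing fit indices (e.g. BIC) across candidate values rather than fixed in advance, and the "mixed" borderline group the paper describes corresponds to a component straddling the pass mark.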

19.
Curr Pharm Teach Learn ; 16(11): 102152, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39033560

ABSTRACT

INTRODUCTION: In Switzerland, becoming a licensed pharmacist requires passing a federal entry-to-practice examination that includes an Objective Structured Clinical Examination (OSCE). Candidates from the University of Geneva (UNIGE) exhibited a higher failure rate in this part of the examination in comparison to candidates from other Swiss institutions. The institution made a specific set of pedagogical changes to a 3-week pharmacy services course run during the second year of the Master's programme to prepare students for their entry-to-practice OSCE. One key change was a switch from a summative in-classroom OSCE to an online formative OSCE. METHODS: New teaching activities were introduced between the 2019-2020 and 2021-2022 academic years to help students strengthen their patient-facing skills and prepare for the federal OSCE. These online activities consisted of formative OSCEs supplemented with group and individual debriefings, and of 18 h of clinical case simulations reproducing OSCE requirements and assessed with standardized evaluation grids. Failure rates before and after the introduction of these activities were compared, and UNIGE candidates' perceptions of their usefulness were collected through a questionnaire survey. RESULTS: The UNIGE failure rate decreased from 6.8% in 2018/2019 to 3.3% in 2022 following the implementation of the new teaching activities. The difference in failure rates between UNIGE and the other institutions became less pronounced in 2022 compared to 2018/2019. The redesigned Master's course was highlighted as useful for preparation, with all new activities perceived as beneficial. Questionnaire responses brought attention to challenges faced by UNIGE candidates, including stress management, insufficient information or practical training, and experiences related to quarantine. These insights informed further development of teaching methods.
DISCUSSION: Although the results do not establish a direct link between participation in the new teaching activities and increased performance, they suggest that the initial issue was resolved. Our findings relate to pedagogical concepts such as constructive alignment, formative assessment and examination anxiety, and generally support the benefits of the online format. CONCLUSION: This study used participatory action research based on mixed methods to address a challenge in pharmacy education. Online teaching activities including formative OSCEs, case simulations and debriefings were implemented. Improved performance in the entry-to-practice OSCE was subsequently observed. The results highlight the potential of formative, active, and constructively aligned online activities, such as role-playing and case simulation, to enhance patient-facing skills and improve outcomes in summative assessments of these skills.

20.
BMC Med Educ ; 24(1): 801, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39061036

ABSTRACT

BACKGROUND: The administration of performance assessments during the coronavirus disease of 2019 (COVID-19) pandemic posed many challenges, especially for examinations employed as part of certification and licensure. The National Assessment Collaboration (NAC) Examination, an Objective Structured Clinical Examination (OSCE), was modified during the pandemic. The purpose of this study was to gather evidence to support the reliability and validity of the modified NAC Examination. METHODS: The modified NAC Examination was delivered to 2,433 candidates in 2020 and 2021. Cronbach's alpha, decision consistency, and accuracy values were calculated. Validity evidence includes comparisons of scores and sub-scores for demographic groups: gender (male vs. female), type of International Medical Graduate (IMG) (Canadians Studying Abroad (CSA) vs. non-CSA), postgraduate training (PGT) (no PGT vs. PGT), and language of examination (English vs. French). Criterion relationships were summarized using correlations within and between the NAC Examination and the Medical Council of Canada Qualifying Examination (MCCQE) Part I scores. RESULTS: Reliability estimates were consistent with other OSCEs similar in length and previous NAC Examination administrations. Both total score and sub-score differences for gender were statistically significant. Total score differences by type of IMG and PGT were not statistically significant, but sub-score differences were statistically significant. Administration language was not statistically significant for either the total scores or sub-scores. Correlations were all statistically significant with some relationships being small or moderate (0.20 to 0.40) or large (> 0.40). CONCLUSIONS: The NAC Examination yields reliable total scores and pass/fail decisions. 
Expected differences in total scores and sub-scores for defined groups were consistent with previous literature, and internal relationships amongst NAC Examination sub-scores and their external relationships with the MCCQE Part I supported both discriminant and criterion-related validity arguments. Modifications to OSCEs to address health restrictions can be implemented without compromising the overall quality of the assessment. This study outlines some of the validity and reliability analyses for OSCEs that required modifications due to COVID-19.
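The internal-consistency reliability reported here (Cronbach's alpha) has a compact standard formula: alpha = k/(k-1) · (1 − Σ item variances / total-score variance). The sketch below computes it on simulated OSCE data, where a shared "ability" factor induces correlated station scores; the station count, sample size, and loadings are illustrative, not taken from the NAC Examination:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_examinees, n_items) score matrix."""
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(2)

# Hypothetical OSCE: 500 candidates, 10 stations; a common ability factor
# drives the inter-station correlation that alpha summarizes.
ability = rng.normal(0.0, 1.0, (500, 1))
station_scores = 0.7 * ability + rng.normal(0.0, 0.7, (500, 10))

alpha = cronbach_alpha(station_scores)
print(f"alpha = {alpha:.2f}")
```

With these loadings the expected inter-station correlation is 0.5, giving an alpha near 0.9 by the Spearman-Brown relation; real OSCEs of comparable length typically report lower values because stations measure partly distinct skills.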


Subjects
COVID-19 , Clinical Competence , Educational Measurement , Humans , COVID-19/diagnosis , COVID-19/epidemiology , Reproducibility of Results , Educational Measurement/methods , Male , Female , Clinical Competence/standards , Canada , SARS-CoV-2 , Pandemics , Education, Medical, Graduate/standards , Foreign Medical Graduates/standards