Results 1 - 20 of 588
1.
J Gen Intern Med ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103602

ABSTRACT

BACKGROUND: Workplace violence disproportionately affects healthcare workers, and verbal aggression from patients frequently occurs. While verbal de-escalation is the first-line approach to defusing anger, there is a lack of consistent curricula or robust evaluation in undergraduate medical education. AIM: To develop a medical school curriculum focused on de-escalation skills for adult patients and evaluate its effectiveness with surveys and an objective structured clinical examination (OSCE). SETTING: We implemented this curriculum in the "Get Ready for Residency Bootcamp" of a single large academic institution in 2023. PARTICIPANTS: Forty-four fourth-year medical students. PROGRAM DESCRIPTION: The curriculum consisted of an interactive didactic session focused on our novel CALMER framework, which prioritized six evidence-based de-escalation skills, and a separate standardized patient practice session. PROGRAM EVALUATION: The post-curriculum survey (82% response rate) found a significant increase in confidence using verbal de-escalation, from 2.79 to 4.11 out of 5 (p ≤ 0.001). Preparedness improved for every skill, and curriculum satisfaction averaged 4.79 out of 5. The OSCE found no differences in skill level between students who received the curriculum and those who did not. DISCUSSION: This evidence-based and replicable de-escalation skill curriculum improves medical student confidence and preparedness in managing agitated patients.

2.
Article in English | MEDLINE | ID: mdl-39042360

ABSTRACT

Summative assessments are often underused for feedback, despite being rich with data on students' applied knowledge and clinical and professional skills. To better inform teaching and student support, this study aims to gain insights from summative assessments by profiling students' performance patterns and identifying students missing the basic knowledge and skills in medical specialities essential for their future career. We use Latent Profile Analysis to classify a senior undergraduate year group (n = 295) based on their performance in an applied knowledge test (AKT) and an OSCE, in which items and stations are pre-classified across five specialities (e.g. Acute and Critical Care, Paediatrics,…). Four distinct groups of students with increasing average performance levels in the AKT, and three such groups in the OSCE, are identified. Overall, these two classifications are positively correlated. However, some students do well in one assessment format but not in the other. Importantly, in both the AKT and the OSCE there is a mixed group containing students who have met the required standard to pass and those who have not. This suggests that the conception of a borderline group at the exam level can be overly simplistic. There is little literature relating AKT and OSCE performance in this way, and the paper discusses how our analysis gives placement tutors key insights into providing tailored support for distinct student groups needing remediation. It also gives assessment writers additional information about the performance and difficulty of their assessment items/stations, and wider faculty information about students' overall performance and their performance across specialities.

3.
Article in English | MEDLINE | ID: mdl-39235519

ABSTRACT

In healthcare, effective communication in complex situations such as end-of-life conversations is critical for delivering high-quality care. Whether residents learn from communication training with actors depends on whether they can select appropriate information, or 'predictive cues', from that learning situation that accurately reflect their own or their peers' performance, and whether they use those cues for ensuing judgement. This study aimed to explore whether prompts can help medical residents improve their use of predictive cues and their judgement of communication skills. First- and third-year Kenyan residents (N = 41) from 8 different specialties were randomly assigned to one of two experimental groups during a mock OSCE assessing advanced communication skills. Residents in the intervention arm received paper predictive-cue prompts while residents in the control arm received paper regular prompts for self-judgement. In a pre- and post-test, residents' use of predictive cues and the appropriateness of peer judgements were evaluated against a pre-rated video of another resident. The intervention improved the use of predictive cues in both self-judgement and peer judgement. The ensuing accuracy of peer judgements from pre- to post-test only partly improved: no effect of the intervention was found on the overall appropriateness of judgements. However, when analyzing participants' completeness of judgements over the various themes within the consultation, a reduction in inappropriate judgement scores was seen in the intervention group. In conclusion, predictive-cue prompts can help learners concentrate on relevant cues when evaluating communication skills and partly improve monitoring accuracy. Future research should focus on offering prompts more frequently to evaluate whether this increases the effect on monitoring accuracy in communication skills.

4.
Med Teach ; 46(9): 1187-1195, 2024 09.
Article in English | MEDLINE | ID: mdl-38285021

ABSTRACT

PURPOSE: To assess the Consultation And Relational Empathy (CARE) measure as a tool for examiners to assess medical students' empathy during Objective Structured Clinical Examinations (OSCEs), as the best tool for assessing empathy during OSCEs remains unknown. METHODS: We first assessed the psychometric properties of the CARE measure, completed simultaneously by examiners and standardized patients (SPs, either teachers - SPteacher - or civil society members - SPcivil society), for each student, at the end of an OSCE station. We then assessed the qualitative/quantitative agreement between examiners and SPs. RESULTS: We included 129 students, distributed in eight groups, four for each SP type. The CARE measure showed satisfactory psychometric properties in the context of the study, but moderate, and for some items even poor, inter-rater reliability. Considering paired observations, examiners scored lower than SPs (p < 0.001) regardless of the SP type. However, the difference in score was greater when the SP was an SPteacher rather than an SPcivil society (p < 0.01). CONCLUSION: Despite acceptable psychometric properties, the inter-rater reliability of the CARE measure between examiners and SPs was unsatisfactory. The choice of examiner as well as the type of SP seems critical to ensure a fair measure of empathy during OSCEs.
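Inter-rater agreement of this kind (paired examiner and SP ratings of the same students) is commonly summarised with an intraclass correlation. A minimal sketch on toy data (not the study's data or its exact analysis), using the two-way random-effects, absolute-agreement, single-rater form ICC(2,1):

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: (n_subjects, n_raters) array of ratings.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means
    # Two-way ANOVA mean squares: subjects (rows), raters (columns), residual
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
    sse = np.sum((scores - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical examiner-vs-SP pairs (rows = students, cols = two raters):
scores = np.array([[3.0, 4.0], [2.5, 3.5], [4.0, 4.5], [3.5, 4.0]])
print(round(icc2_1(scores), 3))  # → 0.444
```

Because ICC(2,1) measures absolute agreement, a systematic offset between raters (such as examiners scoring consistently lower than SPs, as reported above) lowers the coefficient even when rankings match.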


Subjects
Educational Measurement , Empathy , Patient Simulation , Psychometrics , Medical Students , Humans , Medical Students/psychology , Reproducibility of Results , Male , Female , Educational Measurement/methods , Educational Measurement/standards , Physician-Patient Relations , Clinical Competence/standards , Undergraduate Medical Education
5.
Med Teach ; : 1-9, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976711

ABSTRACT

INTRODUCTION: Ensuring equivalence in high-stakes performance exams is important for patient safety and candidate fairness. We compared inter-school examiner differences within a shared OSCE and the resulting impact on students' pass/fail categorisation. METHODS: The same 6-station formative OSCE ran asynchronously in 4 medical schools, with 2 parallel circuits per school. We compared examiners' judgements using Video-based Examiner Score Comparison and Adjustment (VESCA): examiners scored station-specific comparator videos in addition to 'live' student performances, enabling (1) controlled score comparisons by (a) examiner cohorts and (b) schools, and (2) data linkage to adjust for the influence of examiner cohorts. We calculated the score impact and the change in pass/fail categorisation by school. RESULTS: On controlled video-based comparisons, inter-school variations in examiners' scoring (16.3%) were nearly double within-school variations (8.8%). Students' scores received a median adjustment of 5.26% (IQR 2.87-7.17%). The impact of adjusting for examiner differences on students' pass/fail categorisation varied by school, with adjustment reducing the failure rate from 39.13% to 8.70% (school 2) whilst increasing it from 0.00% to 21.74% (school 4). DISCUSSION: Whilst the formative context may partly account for differences, these findings query whether variations may exist between medical schools in examiners' judgements. This may benefit from systematic appraisal to safeguard equivalence. VESCA provided a viable method for comparisons.
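The core idea of video-based adjustment can be sketched very simply: because every examiner cohort scores the same comparator videos, a cohort's mean offset from the overall video mean estimates its relative stringency, which can then be removed from its 'live' scores. This is a deliberately simplified toy illustration (invented numbers; the study itself uses linked statistical models, not plain mean offsets):

```python
# Each cohort's scores (%) on the SAME shared comparator videos:
video_scores = {
    "cohort_A": [62.0, 70.0, 58.0],
    "cohort_B": [70.0, 78.0, 66.0],  # scores the same videos ~8 points higher
}
# Each cohort's 'live' student scores (%):
live_scores = {
    "cohort_A": [55.0, 61.0],
    "cohort_B": [72.0, 64.0],
}

# Stringency = cohort's mean video score minus the overall video mean.
overall = sum(s for v in video_scores.values() for s in v) / sum(
    len(v) for v in video_scores.values())
stringency = {c: sum(v) / len(v) - overall for c, v in video_scores.items()}

# Remove each cohort's stringency from its live scores.
adjusted = {c: [s - stringency[c] for s in scores]
            for c, scores in live_scores.items()}
print(adjusted)  # cohort_A: [59.0, 65.0], cohort_B: [68.0, 60.0]
```

After adjustment, the hawkish cohort's students gain what their examiners systematically withheld, and vice versa, which is how adjustment can move students across a fixed pass mark, as in the results above.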

6.
Med Teach ; : 1-6, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38943517

ABSTRACT

PURPOSE OF ARTICLE: This paper explores issues pertinent to the teaching and assessment of clinical skills at the early stages of medical training, aimed at preventing academic integrity breaches. The drivers for change, the changes themselves, and student perceptions of those changes are described. METHODS: Iterative changes to a summative high-stakes Objective Structured Clinical Examination (OSCE) assessment in an undergraduate medical degree were undertaken in response to perceived/known breaches of assessment security. Initial strategies focused on implementing best-practice teaching and assessment design principles, in association with increased examination security. RESULTS: These changes failed to prevent alleged sharing of examination content between students. A subsequent iteration saw a more radical deviation from classic OSCE assessment design, with students being assessed on equivalent competencies, not identical items (OSCE stations). This more recent approach was broadly acceptable to students and did not result in detectable breaches of academic integrity. CONCLUSIONS: Ever-increasing degrees of assessment security need not be the response to breaches of academic integrity. Use of non-identical OSCE items across a cohort, underpinned by constructive alignment of teaching and assessment, may mitigate the incentives to breach academic integrity, though face validity is not universal.

7.
Med Teach ; : 1-9, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38635469

ABSTRACT

INTRODUCTION: Whilst rarely researched, the authenticity with which Objective Structured Clinical Exams (OSCEs) simulate practice is arguably critical to making valid judgements about candidates' preparedness to progress in their training. We studied how and why an OSCE gave rise to different experiences of authenticity for different participants under different circumstances. METHODS: We used Realist evaluation, collecting data through interviews/focus groups from participants across four UK medical schools who participated in an OSCE which aimed to enhance authenticity. RESULTS: Several features of OSCE stations (realistic, complex, complete cases, sufficient time, autonomy, props, guidelines, limited examiner interaction, etc.) combined to enable students to project into their future roles, judge and integrate information, consider their actions and act naturally. When this occurred, their performances felt like an authentic representation of their clinical practice. This did not always work: focusing on unavoidable differences from practice, incongruous features, anxiety and preoccupation with examiners' expectations sometimes disrupted immersion, producing inauthenticity. CONCLUSIONS: The perception of authenticity in OSCEs appears to originate from an interaction of station design with individual preferences and contextual expectations. Whilst tentatively suggesting ways to promote authenticity, more understanding is needed of candidates' interaction with simulation and scenario immersion in summative assessment.

8.
Med Teach ; 46(6): 776-781, 2024 06.
Article in English | MEDLINE | ID: mdl-38113876

ABSTRACT

PURPOSE: We evaluated the final-year Psychiatry and Addiction Medicine (PAM) summative Objective Structured Clinical Examination (OSCE) in a four-year graduate medical degree program, over the three years before the COVID-19 pandemic as a baseline comparator and during three years of the pandemic (2020-2022). METHODS: A de-identified analysis of medical student summative OSCE performance, with a comparative review of the 3 years before and each year of the pandemic. RESULTS: Internal reliability in test scores, as measured by R-squared, remained the same or increased following the start of the pandemic. There was a significant increase in mean test scores after the start of the pandemic compared to pre-pandemic for combined OSCE scores for all final-year disciplines, as well as for the PAM role-play OSCEs, but not for the PAM mental state examination OSCEs. CONCLUSIONS: Changing to online OSCEs during the pandemic was related to an increase in scores for some but not all domains of the tests. This is in line with a nascent body of literature on medical teaching and examination following the start of the pandemic. Further research is needed to optimise teaching and examination in a post-pandemic medical school environment.


Subjects
Addiction Medicine , COVID-19 , Educational Measurement , Psychiatry , Medical Students , COVID-19/epidemiology , Humans , Psychiatry/education , Educational Measurement/methods , Addiction Medicine/education , Australia/epidemiology , Medical Students/psychology , Clinical Competence , SARS-CoV-2 , Pandemics , Reproducibility of Results , Distance Education
9.
BMC Med Educ ; 24(1): 817, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075511

ABSTRACT

CONTEXT: Objective Structured Clinical Examinations (OSCEs) are an increasingly popular evaluation modality for medical students. While the face-to-face interaction allows for more in-depth assessment, it may cause standardization problems. Methods to quantify, limit or adjust for examiner effects are needed. METHODS: Data originated from 3 OSCEs undergone by 900-student classes of 5th- and 6th-year medical students at Université Paris Cité in the 2022-2023 academic year. Sessions had five stations each, and one of the three sessions was scored by consensus by two raters (rather than one). We report the OSCEs' longitudinal consistency for one of the classes and staff-related and student variability by session. We also propose a statistical method to adjust for inter-rater variability by deriving a statistical random student effect that accounts for staff-related and station random effects. RESULTS: From the four sessions, a total of 16,910 station scores were collected from 2615 student sessions, with two of the sessions undergone by the same students, and 36, 36, 35 and 20 distinct staff teams in each station for each session. Scores had staff-related heterogeneity (p < 10^-15), with staff-level standard errors approximately doubled compared to chance. With mixed models, staff-related heterogeneity explained respectively 11.4%, 11.6%, and 4.7% of station score variance (95% confidence intervals, 9.5-13.8, 9.7-14.1, and 3.9-5.8, respectively) with 1, 1 and 2 raters, suggesting a moderating effect of consensus grading. Student random effects explained a small proportion of variance, respectively 8.8%, 11.3%, and 9.6% (8.0-9.7, 10.3-12.4, and 8.7-10.5), and this low amount of signal meant that student rankings were no more consistent over time with this metric than with average scores (p = 0.45). CONCLUSION: Staff variability impacts OSCE scores as much as student variability; the former can be reduced with dual assessment or adjusted for with mixed models. Both are small compared to unmeasured sources of variability, making them difficult to capture consistently.
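The "share of variance explained by staff" figures above come from mixed models with several crossed random effects. As a rough illustration of the underlying idea only, here is a one-way random-effects (ANOVA method-of-moments) decomposition on synthetic data: a deliberate simplification of the study's model, with invented parameters and a single grouping factor (examiner team):

```python
import numpy as np

# Synthetic scores: grand mean 12, team stringency sd = 1, residual sd = 2,
# so the true staff share of variance is 1 / (1 + 4) = 0.2.
rng = np.random.default_rng(0)
n_teams, n_per_team = 30, 25
team_effect = rng.normal(0.0, 1.0, n_teams)
scores = np.array([
    12.0 + team_effect[t] + rng.normal(0.0, 2.0)
    for t in range(n_teams) for _ in range(n_per_team)
]).reshape(n_teams, n_per_team)

# One-way ANOVA mean squares: between teams and within teams.
grand = scores.mean()
msb = n_per_team * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_teams - 1)
msw = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (
    n_teams * (n_per_team - 1))

# Method-of-moments estimate of the between-team variance component,
# and the staff share of total variance (clipped at zero).
var_team = max((msb - msw) / n_per_team, 0.0)
share = var_team / (var_team + msw)
print(round(share, 2))  # estimate will be near the true 0.2, but noisy
```

A full analysis would instead fit a mixed model with staff and station random effects jointly, but the variance-share interpretation of the reported percentages is the same.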


Subjects
Clinical Competence , Educational Measurement , Observer Variation , Medical Students , Humans , Educational Measurement/methods , Educational Measurement/standards , Clinical Competence/standards , Undergraduate Medical Education/standards , Paris , Reproducibility of Results
10.
BMC Med Educ ; 24(1): 801, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39061036

ABSTRACT

BACKGROUND: The administration of performance assessments during the coronavirus disease 2019 (COVID-19) pandemic posed many challenges, especially for examinations employed as part of certification and licensure. The National Assessment Collaboration (NAC) Examination, an Objective Structured Clinical Examination (OSCE), was modified during the pandemic. The purpose of this study was to gather evidence to support the reliability and validity of the modified NAC Examination. METHODS: The modified NAC Examination was delivered to 2,433 candidates in 2020 and 2021. Cronbach's alpha, decision consistency, and accuracy values were calculated. Validity evidence includes comparisons of scores and sub-scores for demographic groups: gender (male vs. female), type of International Medical Graduate (IMG) (Canadians Studying Abroad (CSA) vs. non-CSA), postgraduate training (PGT) (no PGT vs. PGT), and language of examination (English vs. French). Criterion relationships were summarized using correlations within and between the NAC Examination and the Medical Council of Canada Qualifying Examination (MCCQE) Part I scores. RESULTS: Reliability estimates were consistent with other OSCEs of similar length and with previous NAC Examination administrations. Both total-score and sub-score differences by gender were statistically significant. Total-score differences by type of IMG and PGT were not statistically significant, but sub-score differences were. The language of administration was not statistically significant for either total scores or sub-scores. Correlations were all statistically significant, with some relationships being small or moderate (0.20 to 0.40) or large (> 0.40). CONCLUSIONS: The NAC Examination yields reliable total scores and pass/fail decisions. Expected differences in total scores and sub-scores for defined groups were consistent with previous literature, and internal relationships amongst NAC Examination sub-scores and their external relationships with the MCCQE Part I supported both discriminant and criterion-related validity arguments. Modifications to OSCEs to address health restrictions can be implemented without compromising the overall quality of the assessment. This study outlines some of the validity and reliability analyses for OSCEs that required modifications due to COVID-19.
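Cronbach's alpha, the reliability coefficient reported above, is computed from the candidates-by-stations score matrix: it compares the sum of per-station variances to the variance of candidates' total scores. A minimal illustrative sketch (toy data, not the NAC's actual analysis):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_candidates, n_stations) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of stations/items
    item_vars = scores.var(axis=0, ddof=1)       # per-station variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of candidates' totals
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Perfectly consistent stations (each candidate ranked identically everywhere):
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Alpha rises when stations covary (consistent candidates) and falls toward zero, or below, when stations disagree, which is why it serves as an internal-consistency check for OSCE total scores.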


Subjects
COVID-19 , Clinical Competence , Educational Measurement , Humans , COVID-19/diagnosis , COVID-19/epidemiology , Reproducibility of Results , Educational Measurement/methods , Male , Female , Clinical Competence/standards , Canada , SARS-CoV-2 , Pandemics , Graduate Medical Education/standards , Foreign Medical Graduates/standards
11.
BMC Med Educ ; 24(1): 954, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223576

ABSTRACT

BACKGROUND: Near-peer teaching is a popular pedagogical tool; however, many existing models fail to demonstrate benefits in summative OSCE performance. The 3-step deconstructed (3-D) skills near-peer model, which draws on social constructivist educational principles, was recently piloted in undergraduate medicine, showing short-term improvement in formative OSCE performance. This study aims to assess whether 3-D skills model teaching affects summative OSCE grades. METHODS: Seventy-nine third-year medical students attended a formative OSCE event at the University of Glasgow, receiving an additional 3 minutes per station of either 3-D skills teaching or time-equivalent unguided practice. Students' summative OSCE results were compared against the year cohort to establish whether there was any difference in time-delayed summative OSCE performance. RESULTS: The 3-D skills and unguided practice cohorts had comparable demographic data and baseline formative OSCE performance. Both the 3-D skills cohort and the unguided practice cohort achieved significantly higher median station pass rates at summative OSCEs than the rest of the year. For the 3-D skills cohort this corresponded to one additional station pass, which would increase the median grade banding from B to A. The improvement in the unguided practice cohort did not achieve educational significance. CONCLUSION: Incorporating the 3-D skills model into a formative OSCE is associated with significantly improved performance at summative OSCEs. This expands on the conflicting literature on formative OSCE sessions, which have shown mixed translation to summative performance, and suggests merit in institutional investment to improve clinical examination skills.


Subjects
Clinical Competence , Undergraduate Medical Education , Educational Measurement , Humans , Undergraduate Medical Education/methods , Case-Control Studies , Medical Students , Female , Male , Educational Models , Peer Group
12.
BMC Med Educ ; 24(1): 866, 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39135004

ABSTRACT

BACKGROUND: Clinical practitioners think of frequent causes of disease first rather than expending resources searching for rare conditions. However, it is important to continue investigating once all common illnesses have been ruled out. Undergraduate medical students must acquire the skills to listen and ask relevant questions when seeking a potential diagnosis. METHODOLOGY: Our objective was to determine whether team-based learning (TBL) focused on clinical reasoning in the context of rare diseases, combined with video vignettes (intervention), improved the clinical and generic skills of students compared with TBL alone (comparator). We followed a single-center quasi-experimental posttest-only design involving fifth-year medical students. RESULTS: The intervention group (n = 178) had a significantly higher mean overall score on the objective structured clinical examination (OSCE) (12.04 ± 2.54 vs. 11.27 ± 3.16; P = 0.021) and higher mean percentage scores in clinical skills (47.63% vs. 44.63%; P = 0.025) and generic skills (42.99% vs. 40.33%; P = 0.027) than the comparator group (n = 118). Success on the OSCE was significantly associated with the intervention (P = 0.002). CONCLUSIONS: The TBL-with-video-vignettes curriculum was associated with better performance of medical students on the OSCE. The concept presented here may be beneficial to other teaching institutions.


Subjects
Clinical Competence , Curriculum , Undergraduate Medical Education , Educational Measurement , Medical Students , Humans , Undergraduate Medical Education/methods , Female , Male , Video Recording , Problem-Based Learning , Group Processes
13.
BMC Med Educ ; 24(1): 179, 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38395807

ABSTRACT

BACKGROUND: Assessments, such as summative structured examinations, aim to verify whether students have acquired the necessary competencies. It is important to familiarize students with the examination format prior to the assessment to ensure that true competency is measured. However, it is unclear whether students can demonstrate their true potential or possibly perform less effectively due to an unfamiliar examination format. Hence, we asked whether a 10-minute active familiarization in the form of a simulation improved medical students' OSCE performance. Next, we wanted to elucidate whether the effect depends on whether the familiarization procedure is active or passive. METHODS: We implemented an intervention consisting of a 10-minute active simulation to prepare students for the OSCE setting. We compared the impact of this intervention on performance to no intervention in 5th-year medical students (n = 1284) from 2018 until 2022. Recently, a passive lecture, in which the OSCE setting is explained without active participation of the students, was introduced as a comparator group. Students who participated in neither the intervention nor the passive lecture formed the control group. OSCE performance between the groups and the impact of gender were assessed using chi-squared tests, nonparametric tests and regression analysis (total n = 362). RESULTS: We found that active familiarization of students (n = 188) yielded significantly better performance compared to the passive comparator (Cohen's d = 0.857, p < 0.001, n = 52) and the control group (Cohen's d = 0.473, p < 0.001, n = 122). In multivariate regression analysis, active intervention remained the only significant variable, with a 2.945-fold increase in the probability of passing the exam (p = 0.018). CONCLUSIONS: A short 10-minute active intervention to familiarize students with the OSCE setting significantly improved student performance. We suggest that curricula should include simulations of the exam setting, in addition to courses that increase knowledge or skills, to mitigate the negative effect of unfamiliarity with the OSCE exam setting on students.


Subjects
Undergraduate Medical Education , Medical Students , Humans , Educational Measurement/methods , Undergraduate Medical Education/methods , Clinical Competence , Physical Examination
14.
BMC Med Educ ; 24(1): 994, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39267024

ABSTRACT

BACKGROUND: Breaking bad news is one of the most difficult aspects of communication in medicine. The objective of this study was to assess the relevance of a novel active learning course on breaking bad news for fifth-year students. METHODS: Students were divided into two groups: Group 1, the intervention group, participated in a multidisciplinary formative discussion workshop on breaking bad news with videos and discussions with a pluri-professional team, concluding with the development of a guide on good practice in breaking bad news through collective intelligence; Group 2, the control group, received no additional training besides the conventional university course. The relevance of discussion-group-based active training was assessed in a summative objective structured clinical examination (OSCE) station, particularly through the students' communication skills. RESULTS: Thirty-one students were included: 17 in Group 1 and 14 in Group 2. The mean (range) score in the OSCE was significantly higher in Group 1 than in Group 2 (10.49 out of 15 (7; 13) vs. 7.80 (4.75; 12.5), respectively; p = 0.0007). The proportion of students assessed by the evaluator to have received additional training in breaking bad news was 88.2% (15 of 17) in Group 1 and 21.4% (3 of 14) in Group 2 (p = 0.001). The intergroup differences in the Rosenberg Self-Esteem Scale and Jefferson Scale of Empathy scores were not significant, and neither score was correlated with the students' self-assessed score for success in the OSCE. CONCLUSION: Compared to the conventional course, this new active learning method for breaking bad news was associated with a significantly higher score in a summative OSCE. A longer-term validation study is needed to confirm these exploratory data.


Subjects
Physician-Patient Relations , Problem-Based Learning , Medical Students , Truth Disclosure , Humans , Medical Students/psychology , Female , Male , Communication , Undergraduate Medical Education/methods , Educational Measurement , Clinical Competence
15.
BMC Med Educ ; 24(1): 673, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38886698

ABSTRACT

OBJECTIVE: To analyze undergraduate dental students' satisfaction levels and perceptions of developing clinical competencies through the objective structured clinical examination (OSCE), and to explore their experiences, challenges, and suggestions. METHODS: The study adopted a mixed-method convergent design. Quantitative data were collected from 303 participants through surveys evaluating satisfaction with the OSCE. Additionally, qualitative insights were gathered through student focus group interviews, and fundamental themes were developed from diverse expressions on various aspects of OSCE assessments. Chi-square tests were performed to assess associations between variables. Data integration involved comparing and contrasting quantitative and qualitative findings to derive comprehensive conclusions. RESULTS: Satisfaction rates included 69.4% for the organization of OSCE stations and 57.4% for overall effectiveness. However, a crucial challenge was identified, with only 36.7% of students receiving adequate post-OSCE feedback. Furthermore, half of the students (50%) expressed concerns about the clinical relevance of OSCEs. The study showed significant associations (p < 0.05) between satisfaction levels and both year of study and previous OSCE experience. Student focus group interviews revealed diverse perspectives on OSCE assessments. While students appreciated the helpfulness of OSCEs, concerns were raised regarding time constraints, stress, examiner training, and the perceived lack of clinical relevance. CONCLUSION: Students' concerns about the clinical relevance of OSCEs highlight the need for a more aligned assessment approach. Diverse perspectives on OSCE assessments reveal perceived helpfulness alongside challenges such as lack of feedback, examiner training, time constraints, and mental stress.


Subjects
Clinical Competence , Dental Education , Educational Measurement , Focus Groups , Personal Satisfaction , Dental Students , Humans , Dental Students/psychology , Female , Male , Dental Education/standards , Surveys and Questionnaires , Young Adult , Adult
16.
BMC Med Educ ; 24(1): 936, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39198877

ABSTRACT

INTRODUCTION: Studies have reported different results for evaluation methods of clinical competency tests. This study therefore aimed to design, implement, and evaluate a blended (in-person and virtual) clinical competency examination (CCE) for final-year nursing students. METHODS: This interventional study was conducted over two semesters of 2020-2021 using an educational action research method in the nursing and midwifery faculty. Thirteen faculty members and 84 final-year nursing students were included in the study using a census method. Eight programs and related activities were designed and conducted during the examination process. Students completed the Spielberger Anxiety Inventory before the examination, and both faculty members and students completed an acceptance and satisfaction questionnaire. FINDINGS: The analysis of focus group discussions and reflections indicated that the virtual CCE could not adequately assess clinical skills. Therefore, it was decided that the CCE for final-year nursing students would be conducted using a blended method. The activities required for performing the examination were designed and implemented based on action plans. Anxiety and satisfaction were also evaluated as outcomes of the study. There was no statistically significant difference in overt, covert, and overall anxiety scores between the in-person and virtual sections of the examination (p > 0.05). The mean (SD) acceptance and satisfaction scores for students in the virtual, in-person, and blended sections were 25.49 (4.73), 27.60 (4.70), and 25.57 (4.97), respectively, out of 30 points, a significant increase in the in-person section compared to the other sections (p = 0.008). The mean acceptance and satisfaction scores for faculty members, out of 33 points, were 30.31 (4.47) for the virtual, 29.86 (3.94) for the in-person, and 30.00 (4.16) for the blended section, with no significant difference between the three sections (p = 0.864).
CONCLUSION: Evaluating nursing students' clinical competency with a blended method was successfully implemented and resolved the problem of delayed student graduation. It is therefore suggested that the blended method be used instead of traditional in-person or entirely virtual exams during epidemics, or depending on conditions, facilities, and human resources. The use of patient simulation, virtual reality, and the development of the necessary virtual and in-person training infrastructure for students is also recommended for future research. Furthermore, considering that students' acceptance of traditional in-person exams is higher, it is necessary to develop virtual teaching strategies.


Subjects
Clinical Competence , Educational Measurement , Students, Nursing , Humans , Educational Measurement/methods , Students, Nursing/psychology , Education, Nursing, Baccalaureate , Male , Female
17.
Rev Neurol (Paris); 180(7): 655-660, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38705796

ABSTRACT

BACKGROUND: There is little consensus on how to announce a diagnosis of severe chronic disease in neurology. Other medical specialties, such as oncology, have developed assessment methods similar to the Objective Structured Clinical Examination (OSCE) to address this issue. Here we report the implementation of an OSCE for residents focused on announcing a diagnosis of chronic disease in neurology. OBJECTIVE: We aimed to evaluate the acceptability, feasibility and validity in routine practice of an OSCE combined with a theoretical course focused on diagnosis announcement in neurology. METHOD: Eighteen neurology residents were prospectively included between 2019 and 2022. First, they answered a questionnaire on their previous level of training in diagnosis announcement. Second, in a practical session with a simulated patient, they made a 15-min diagnosis announcement followed by 5 min of immediate feedback from an expert observer present in the room. The OSCE consisted of four different stations with standardized scenarios dedicated to the announcement of multiple sclerosis (MS), Parkinson's disease (PD), Alzheimer's disease (AD) and amyotrophic lateral sclerosis (ALS). Third, in a theory session, expert observers covered the essential theoretical points. All residents and expert observers completed an evaluation of the practical and theory sessions. RESULTS: Residents rated their previous level of training in diagnosis announcement at 3.1/5. The most feared announcements were AD and ALS. The practical session was rated at a mean of 4.1/5 by the residents and 4.8/5 by the expert observers, and the theory session at a mean of 4.7/5 by the residents and 5/5 by the expert observers. After the OSCEs, 11 residents felt more confident about making an announcement. CONCLUSION: This study showed a benefit of using an OSCE to learn how to announce a diagnosis of severe chronic disease in neurology. OSCEs could be used routinely in many departments and appear well suited to residents.


Subjects
Educational Measurement , Internship and Residency , Nervous System Diseases , Neurology , Humans , Neurology/standards , Neurology/education , Internship and Residency/standards , Nervous System Diseases/diagnosis , Educational Measurement/methods , Chronic Disease , Male , Female , Adult , Clinical Competence/standards , Prospective Studies , Surveys and Questionnaires
18.
HNO; 72(3): 182-189, 2024 Mar.
Article in German | MEDLINE | ID: mdl-38305855

ABSTRACT

BACKGROUND: Due to the COVID-19 pandemic, contact restrictions occurred worldwide, which affected medical schools as well. It was not possible to hold classroom lectures, and teaching content had to be converted to a digital curriculum within a very short time. Conditions for assessments posed an even greater challenge. For example, solutions had to be found for objective structured clinical examinations (OSCE), which were explicitly forbidden in some German states. The aim of this study was to evaluate the feasibility of an OSCE under pandemic conditions. MATERIALS AND METHODS: At the end of the 2020 summer semester, 170 students completed a combined otolaryngology and ophthalmology OSCE. Examinations were held in small groups over the course of 5 days and complied with strict hygiene regulations. The ophthalmology exam was conducted face to face, and the ENT OSCE virtually. Students were asked to rate the OSCE afterwards. RESULTS: Between 106 and 118 students answered each question. Comparing the two formats, about 49% preferred the face-to-face OSCE and 17% the virtual OSCE; 34% found both variants equally good. Overall, the combination of an ENT and ophthalmology OSCE was rated positively. CONCLUSION: It is possible to hold an OSCE even under pandemic conditions. For optimal preparation of the students, it is necessary, among other things, to transform teaching content into a digital curriculum. The combination of an ENT and ophthalmology OSCE was positively evaluated by the students, although the face-to-face OSCE was preferred. The overall high satisfaction of the students confirms the feasibility of a virtual examination given detailed and well-planned preparation.


Subjects
Pandemics , Students, Medical , Humans , Feasibility Studies , Physical Examination , Curriculum , Clinical Competence , Educational Measurement
19.
Eur J Dent Educ; 28(2): 408-415, 2024 May.
Article in English | MEDLINE | ID: mdl-37846196

ABSTRACT

INTRODUCTION: The Objective Structured Clinical Examination (OSCE) is a valid, reliable and reproducible assessment method, traditionally carried out as a live examination but recently also delivered online. The aim was to compare the perceptions of dental students participating in online and live OSCEs using mixed methods. MATERIALS AND METHODS: All Finnish fourth-year undergraduate dental students (n = 172) attended the exam in April 2021. Due to the COVID-19 pandemic, official administrative restrictions on university teaching were still in place in April 2021, and by the time of the national OSCE the pandemic situation varied across the country. Therefore, two of the universities conducted a live OSCE and two an online version. Data were collected after the OSCE using a voluntary anonymous electronic questionnaire with multiple-choice and open-ended questions (response rate 58%). Differences between the OSCE versions were analysed using the Mann-Whitney U test, and open answers with qualitative content analysis. RESULTS: The students considered both types of OSCE good in general. The main differences found concerned adequate time allocation and overall technical implementation, in favour of the live OSCE. While the qualitative analysis revealed exam anxiety as the most often mentioned negative issue, comments were positive overall. CONCLUSION: Variation in the assessments between different question entities seemed to be wider than between the implemented OSCE versions. Time management in the OSCE should be further developed by better managing the assignment of tasks.
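The group comparison this abstract describes, i.e. ordinal questionnaire ratings from a live-OSCE cohort versus an online-OSCE cohort, is exactly the setting for the Mann-Whitney U test. A minimal sketch follows; the ratings below are invented for illustration and are not the study's data.

```python
# Mann-Whitney U test on hypothetical 1-5 Likert ratings of
# "adequate time allocation" from two independent student groups.
from scipy import stats

live_osce = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]
online_osce = [3, 4, 2, 3, 4, 3, 2, 3, 4, 3]

u_stat, p_value = stats.mannwhitneyu(live_osce, online_osce,
                                     alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```

The test is chosen here because Likert responses are ordinal and typically non-normal, so a rank-based comparison is more defensible than a t-test.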


Subjects
COVID-19 , Humans , Pandemics , Students, Dental , Educational Measurement/methods , Clinical Competence , Education, Dental/methods
20.
Article in English | MEDLINE | ID: mdl-37843678

ABSTRACT

Quantitative measures of systematic differences in OSCE scoring across examiners (often termed examiner stringency) can threaten the validity of examination outcomes. Such effects are usually conceptualised and operationalised based solely on checklist/domain scores in a station, and global grades are not often used in this type of analysis. In this work, a large candidate-level exam dataset is analysed to develop a more sophisticated understanding of examiner stringency. Station scores are modelled based on global grades, with each candidate, station and examiner allowed to vary in their ability/stringency/difficulty in the modelling. In addition, examiners are also allowed to vary in how they discriminate across grades; to our knowledge, this is the first time this has been investigated. Results show that examiners contribute strongly to variance in scoring in two distinct ways: via the traditional conception of score stringency (34% of score variance), but also in how they discriminate in scoring across grades (7%). As one might expect, candidate and station account for only a small amount of score variance at the station level once candidate grades are accounted for (3% and 2% respectively), with the remainder being residual (54%). Investigation of impacts on station-level candidate pass/fail decisions suggests that examiner differential stringency effects combine to give false positive (candidates passing in error) and false negative (failing in error) rates in stations of around 5% each, but at the exam level this reduces to 0.4% and 3.3% respectively. This work adds to our understanding of examiner behaviour by demonstrating that examiners can vary in qualitatively different ways in their judgments. For institutions, it emphasises the key message that it is important to sample widely from the examiner pool via sufficient stations to ensure OSCE-level decisions are sufficiently defensible. It also suggests that examiner training should include discussion of global grading, and of the combined effect of scoring and grading on candidate outcomes.
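The stringency component described above can be illustrated with a toy simulation. This is not the authors' grade-based model: it simply builds station scores from candidate, station, examiner-stringency, and residual effects whose variance shares echo the reported proportions, then recovers each examiner's stringency from the encounters they marked. All sample sizes and numbers here are assumptions for the sketch.

```python
# Toy variance-components simulation of examiner stringency in an OSCE.
# Variance shares chosen to echo the paper's reported decomposition:
# ~3% candidate, ~2% station, ~34% examiner stringency, remainder residual.
import numpy as np

rng = np.random.default_rng(1)
n_cand, n_stat, n_exam = 300, 12, 40  # assumed exam dimensions

candidate = rng.normal(0, np.sqrt(0.03), n_cand)
station = rng.normal(0, np.sqrt(0.02), n_stat)
examiner = rng.normal(0, np.sqrt(0.34), n_exam)  # true stringency
# Random examiner assignment for each candidate-station encounter.
exam_for = rng.integers(0, n_exam, (n_cand, n_stat))

scores = (candidate[:, None] + station[None, :]
          + examiner[exam_for]
          + rng.normal(0, np.sqrt(0.61), (n_cand, n_stat)))  # residual

# Crude stringency estimate: each examiner's mean deviation from the
# grand mean over the encounters they marked.
grand = scores.mean()
est = np.array([scores[exam_for == e].mean() - grand
                for e in range(n_exam)])
corr = np.corrcoef(examiner, est)[0, 1]
print("correlation of true vs. estimated stringency:", round(corr, 2))
```

With each examiner marking many encounters, the simple mean-deviation estimate tracks the true stringency closely; in practice the paper's approach additionally conditions on global grades, which this sketch omits.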
