Results 1 - 20 of 567
1.
Med Teach ; : 1-9, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38635469

ABSTRACT

INTRODUCTION: Whilst rarely researched, the authenticity with which Objective Structured Clinical Exams (OSCEs) simulate practice is arguably critical to making valid judgements about candidates' preparedness to progress in their training. We studied how and why an OSCE gave rise to different experiences of authenticity for different participants under different circumstances. METHODS: We used realist evaluation, collecting data through interviews and focus groups with participants across four UK medical schools who took part in an OSCE designed to enhance authenticity. RESULTS: Several features of OSCE stations (realistic, complex, complete cases; sufficient time; autonomy; props; guidelines; limited examiner interaction) combined to enable students to project into their future roles, judge and integrate information, consider their actions and act naturally. When this occurred, their performances felt like an authentic representation of their clinical practice. This did not always work: focusing on unavoidable differences from practice, incongruous features, anxiety and preoccupation with examiners' expectations sometimes disrupted immersion, producing inauthenticity. CONCLUSIONS: The perception of authenticity in OSCEs appears to originate from an interaction of station design with individual preferences and contextual expectations. Whilst tentatively suggesting ways to promote authenticity, we conclude that more understanding is needed of candidates' interaction with simulation and scenario immersion in summative assessment.

2.
Med Teach ; : 1-6, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38943517

ABSTRACT

PURPOSE OF ARTICLE: This paper explores issues pertinent to the teaching and assessment of clinical skills in the early stages of medical training, aimed at preventing breaches of academic integrity. The drivers for change, the changes themselves, and student perceptions of those changes are described. METHODS: Iterative changes to a high-stakes summative Objective Structured Clinical Examination (OSCE) in an undergraduate medical degree were undertaken in response to perceived or known breaches of assessment security. Initial strategies focused on implementing best-practice teaching and assessment design principles, in association with increased examination security. RESULTS: These changes failed to prevent alleged sharing of examination content between students. A subsequent iteration saw a more radical departure from classic OSCE design, with students assessed on equivalent competencies rather than identical items (OSCE stations). This more recent approach was broadly acceptable to students and resulted in no detectable breaches of academic integrity. CONCLUSIONS: Ever-increasing assessment security need not be the response to breaches of academic integrity. Using non-identical OSCE items across a cohort, underpinned by constructive alignment of teaching and assessment, may reduce the incentives to breach academic integrity, though face validity is not universal.

3.
Med Teach ; : 1-9, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976711

ABSTRACT

INTRODUCTION: Ensuring equivalence in high-stakes performance exams is important for patient safety and candidate fairness. We compared inter-school examiner differences within a shared OSCE and the resulting impact on students' pass/fail categorisation. METHODS: The same six-station formative OSCE ran asynchronously in four medical schools, with two parallel circuits per school. We compared examiners' judgements using Video-based Examiner Score Comparison and Adjustment (VESCA): examiners scored station-specific comparator videos in addition to 'live' student performances, enabling (1) controlled score comparisons by (a) examiner cohort and (b) school, and (2) data linkage to adjust for the influence of examiner cohorts. We calculated the score impact and the change in pass/fail categorisation by school. RESULTS: On controlled video-based comparisons, inter-school variation in examiners' scoring (16.3%) was nearly double the within-school variation (8.8%). Students' scores received a median adjustment of 5.26% (IQR 2.87-7.17%). The impact of adjusting for examiner differences on students' pass/fail categorisation varied by school: adjustment reduced the failure rate from 39.13% to 8.70% in school 2 whilst increasing it from 0.00% to 21.74% in school 4. DISCUSSION: Whilst the formative context may partly account for these differences, the findings raise the question of whether examiners' judgements vary between medical schools. This may benefit from systematic appraisal to safeguard equivalence. VESCA provided a viable method for such comparisons.
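The adjustment logic behind VESCA can be illustrated with a minimal sketch (hypothetical data and function name; the published method estimates cohort effects through linear modelling of the linked video scores rather than the simple mean offsets used here):

```python
def adjust_for_cohort(live_scores, video_scores):
    """Remove each examiner-cohort's offset, estimated from shared
    comparator videos, from the live scores that cohort awarded.

    live_scores / video_scores: dicts mapping cohort name to a list
    of percentage scores (hypothetical structure for illustration).
    """
    all_video = [s for scores in video_scores.values() for s in scores]
    grand_mean = sum(all_video) / len(all_video)
    adjusted = {}
    for cohort, scores in live_scores.items():
        # A cohort's offset is how far its comparator-video scores sit
        # from the grand mean of all cohorts' video scores.
        vids = video_scores[cohort]
        offset = sum(vids) / len(vids) - grand_mean
        adjusted[cohort] = [round(s - offset, 2) for s in scores]
    return adjusted
```

For example, a cohort that scored the shared videos 5 points above the grand mean has 5 points subtracted from the live scores it awarded, which is what shifts some candidates across the pass/fail boundary.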

4.
Med Teach ; : 1-9, 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38285021

ABSTRACT

PURPOSE: To assess the Consultation And Relational Empathy (CARE) measure as a tool for examiners to assess medical students' empathy during Objective Structured Clinical Examinations (OSCEs), as the best tool for assessing empathy in OSCEs remains unknown. METHODS: We first assessed the psychometric properties of the CARE measure, completed simultaneously by examiners and standardized patients (SPs, either teachers (SPteacher) or civil-society members (SPcivil society)) for each student at the end of an OSCE station. We then assessed the qualitative and quantitative agreement between examiners and SPs. RESULTS: We included 129 students, distributed into eight groups, four for each SP type. The CARE measure showed satisfactory psychometric properties in the context of the study, but moderate, and for some items even poor, inter-rater reliability. Considering paired observations, examiners scored lower than SPs (p < 0.001) regardless of SP type. However, the difference was greater when the SP was an SPteacher rather than an SPcivil society (p < 0.01). CONCLUSION: Despite acceptable psychometric properties, inter-rater reliability of the CARE measure between examiners and SPs was unsatisfactory. The choice of examiner, as well as the type of SP, seems critical to ensuring a fair measure of empathy during OSCEs.

5.
Med Teach ; 46(6): 776-781, 2024 06.
Article in English | MEDLINE | ID: mdl-38113876

ABSTRACT

PURPOSE: We evaluated the final-year Psychiatry and Addiction Medicine (PAM) summative Objective Structured Clinical Examination (OSCE) in a four-year graduate medical degree program, comparing the three years before the COVID-19 pandemic, as a baseline, with the three pandemic years (2020-2022). METHODS: A de-identified analysis of medical students' summative OSCE performance, with a comparative review of the three pre-pandemic years and each year of the pandemic. RESULTS: Internal reliability of test scores, as measured by R-squared, remained the same or increased following the start of the pandemic. Mean test scores increased significantly after the start of the pandemic compared with pre-pandemic levels for combined OSCE scores across all final-year disciplines, and for the PAM role-play OSCEs, but not for the PAM mental state examination OSCEs. CONCLUSIONS: The change to online OSCEs during the pandemic was associated with an increase in scores for some, but not all, domains of the tests. This is in line with a nascent body of literature on medical teaching and examination following the start of the pandemic. Further research is needed to optimise teaching and examination in a post-pandemic medical school environment.


Subjects
Addiction Medicine, COVID-19, Educational Assessment, Psychiatry, Medical Students, COVID-19/epidemiology, Humans, Psychiatry/education, Educational Assessment/methods, Addiction Medicine/education, Australia/epidemiology, Medical Students/psychology, Clinical Competence, SARS-CoV-2, Pandemics, Reproducibility of Results, Distance Education
6.
BMC Med Educ ; 24(1): 179, 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38395807

ABSTRACT

BACKGROUND: Assessments such as summative structured examinations aim to verify whether students have acquired the necessary competencies. It is important to familiarize students with the examination format beforehand to ensure that true competency is measured; otherwise, students may perform below their true potential simply because the format is unfamiliar. We therefore asked whether a 10-minute active familiarization, in the form of a simulation, improved medical students' OSCE performance, and whether the effect depends on the familiarization being active rather than passive. METHODS: We implemented an intervention consisting of a 10-minute active simulation to prepare students for the OSCE setting, and compared its impact on performance with no intervention in fifth-year medical students (n = 1284) from 2018 to 2022. Recently, a passive lecture, in which the OSCE setting is explained without active participation, was introduced as a comparator group. Students who participated in neither the intervention nor the passive lecture formed the control group. OSCE performance between the groups, and the impact of gender, were assessed using chi-squared tests, nonparametric tests and regression analysis (total n = 362). RESULTS: Active familiarization (n = 188) yielded significantly better performance than the passive comparator (Cohen's d = 0.857, p < 0.001, n = 52) and the control group (Cohen's d = 0.473, p < 0.001, n = 122). In multivariate regression analysis, the active intervention remained the only significant variable, with a 2.945-fold increase in the probability of passing the exam (p = 0.018). CONCLUSIONS: A short 10-minute active intervention to familiarize students with the OSCE setting significantly improved performance. We suggest that curricula include simulations of the exam setting, in addition to courses that build knowledge or skills, to mitigate the negative effect of unfamiliarity with the OSCE setting.
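The effect sizes reported above are Cohen's d; a minimal pooled-standard-deviation implementation (illustrative only, not the study's code) is:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Effect size between two independent groups, using the pooled
    sample standard deviation (the form usually reported as Cohen's d)."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5
```

By the usual rule of thumb, the reported d = 0.857 versus the passive comparator is a large effect and d = 0.473 versus the control a medium one.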


Subjects
Undergraduate Medical Education, Medical Students, Humans, Educational Assessment/methods, Undergraduate Medical Education/methods, Clinical Competence, Physical Examination
7.
BMC Med Educ ; 24(1): 673, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38886698

ABSTRACT

OBJECTIVE: To analyze undergraduate dental students' satisfaction with, and perceptions of developing clinical competencies through, the objective structured clinical examination (OSCE), and to explore their experiences, challenges and suggestions. METHODS: The study adopted a mixed-methods convergent design. Quantitative data were collected from 303 participants through surveys evaluating satisfaction with the OSCE. Qualitative insights were gathered through student focus-group interviews, from which themes were developed covering the various aspects of OSCE assessment. Chi-square tests were performed to assess associations between variables. Data integration involved comparing and contrasting the quantitative and qualitative findings to derive comprehensive conclusions. RESULTS: Satisfaction rates were 69.4% for the organization of OSCE stations and 57.4% for overall effectiveness. A crucial challenge was identified: only 36.7% of students received adequate post-OSCE feedback. Furthermore, half of the students (50%) expressed concerns about the clinical relevance of OSCEs. The study showed significant associations (p < 0.05) between satisfaction levels and year of study as well as previous OSCE experience. The focus-group interviews revealed diverse perspectives: while students appreciated the helpfulness of OSCEs, concerns were raised about time constraints, stress, examiner training, and a perceived lack of clinical relevance. CONCLUSION: Students' concerns about the clinical relevance of OSCEs highlight the need for a more closely aligned assessment approach. Diverse perspectives on OSCE assessment reveal perceived helpfulness alongside challenges such as lack of feedback, examiner training, time constraints and mental stress.


Subjects
Clinical Competence, Dental Education, Educational Assessment, Focus Groups, Personal Satisfaction, Dental Students, Humans, Dental Students/psychology, Female, Male, Dental Education/standards, Surveys and Questionnaires, Young Adult, Adult
8.
Rev Neurol (Paris) ; 2024 May 04.
Article in English | MEDLINE | ID: mdl-38705796

ABSTRACT

BACKGROUND: There is little consensus on how to announce the diagnosis of a severe chronic disease in neurology. Other specialties, such as oncology, have developed assessment methods similar to the Objective Structured Clinical Examination (OSCE) to address this issue. Here we report the implementation of an OSCE for neurology residents focused on announcing the diagnosis of a chronic disease. OBJECTIVE: We aimed to evaluate the acceptability, feasibility and validity in routine practice of an OSCE combined with a theoretical course on diagnosis announcement in neurology. METHOD: Eighteen neurology residents were prospectively included between 2019 and 2022. First, they answered a questionnaire on their previous training in diagnosis announcement. Second, in a practical session with a simulated patient, they made a 15-minute diagnosis announcement followed by 5 minutes of immediate feedback from an expert observer present in the room. The OSCE consisted of four stations with standardized scenarios dedicated to announcing multiple sclerosis (MS), Parkinson's disease (PD), Alzheimer's disease (AD) and amyotrophic lateral sclerosis (ALS). Third, in a theory session, expert observers covered the essential theoretical points. All residents and expert observers completed an evaluation of the practical session and the theory session. RESULTS: Residents rated their previous training in diagnosis announcement at 3.1/5. The most feared announcements were AD and ALS. The practical session was rated at a mean of 4.1/5 by residents and 4.8/5 by expert observers, and the theory session at a mean of 4.7/5 by residents and 5/5 by expert observers. After the OSCEs, 11 residents felt more confident about making an announcement. CONCLUSION: This study showed a benefit of using an OSCE to learn how to announce the diagnosis of a severe chronic disease in neurology. OSCEs could be used in many departments in routine practice and appear well suited to residents.

9.
HNO ; 72(3): 182-189, 2024 Mar.
Article in German | MEDLINE | ID: mdl-38305855

ABSTRACT

BACKGROUND: Due to the COVID-19 pandemic, contact restrictions occurred worldwide, which affected medical schools as well. It was not possible to hold classroom lectures. Teaching contents had to be converted to a digital curriculum within a very short time. Conditions for assessments posed an even greater challenge. For example, solutions had to be found for objective structured clinical examinations (OSCE), which were explicitly forbidden in some German states. The aim of this study was to evaluate the feasibility of an OSCE under pandemic conditions. MATERIALS AND METHODS: At the end of the 2020 summer semester, 170 students completed a combined otolaryngology and ophthalmology OSCE. Examinations were held in small groups over the course of 5 days and complied with strict hygiene regulations. The ophthalmology exam was conducted face to face, and the ENT OSCE virtually. Students were asked to rate the OSCE afterwards. RESULTS: Between 106 and 118 of the students answered the questions. Comparing the face-to-face OSCE with the virtual OSCE, about 49% preferred the face-to-face OSCE and 17% preferred the virtual OSCE; 34% found both variants equally good. Overall, the combination of an ENT and ophthalmology OSCE was rated as positive. CONCLUSION: It is possible to hold an OSCE even under pandemic conditions. For optimal preparation of the students, among other things, it is necessary to transform teaching contents to a digital curriculum. The combination of an ENT and ophthalmology OSCE was positively evaluated by the students, although the face-to-face OSCE was preferred. The overall high satisfaction of the students confirms the feasibility of a virtual examination with detailed and well-planned preparation.


Subjects
Pandemics, Medical Students, Humans, Feasibility Studies, Physical Examination, Curriculum, Clinical Competence, Educational Assessment
10.
Eur J Dent Educ ; 28(2): 408-415, 2024 May.
Article in English | MEDLINE | ID: mdl-37846196

ABSTRACT

INTRODUCTION: The Objective Structured Clinical Examination (OSCE) is a valid, reliable and reproducible assessment method, traditionally carried out as a live examination but recently also delivered online. The aim was to compare the perceptions of dental students participating in online and live OSCEs, using mixed methods. MATERIALS AND METHODS: All Finnish fourth-year undergraduate dental students (n = 172) attended the exam in April 2021. Because of the COVID-19 pandemic, official administrative restrictions on university teaching were still in place, and the pandemic situation varied across the country at the time of the national OSCE. Therefore, two of the universities conducted a live OSCE and two an online version. Data were collected after the OSCE using a voluntary, anonymous electronic questionnaire with multiple-choice and open-ended questions (response rate 58%). Differences between the OSCE versions were analysed using the Mann-Whitney U test, and open answers with qualitative content analysis. RESULTS: Students considered both types of OSCE good in general. The main differences concerned adequate time allocation and overall technical implementation, in favour of the live OSCE. While the qualitative analysis revealed exam anxiety as the most frequently mentioned negative issue, comments were positive overall. CONCLUSION: Variation in the assessments between different question entities seemed to be wider than between the implemented OSCE versions. Time management in the OSCE should be further developed by managing the assignment of tasks.
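The Mann-Whitney U statistic used for the between-version comparisons can be computed directly from its pair-counting definition (an illustrative sketch; a real analysis would use a statistics package and derive p-values from U):

```python
def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y: the number of pairs
    (xi, yj) with xi > yj, counting exact ties as half a pair."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u
```

Because it compares ranks rather than means, the test suits the ordinal questionnaire responses collected here.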


Subjects
COVID-19, Humans, Pandemics, Dental Students, Educational Assessment/methods, Clinical Competence, Dental Education/methods
11.
Article in English | MEDLINE | ID: mdl-37843678

ABSTRACT

Quantitative measures of systematic differences in OSCE scoring across examiners (often termed examiner stringency) can threaten the validity of examination outcomes. Such effects are usually conceptualised and operationalised solely from checklist/domain scores in a station; global grades are not often used in this type of analysis. In this work, a large candidate-level exam dataset is analysed to develop a more sophisticated understanding of examiner stringency. Station scores are modelled based on global grades, with each candidate, station and examiner allowed to vary in their ability/stringency/difficulty in the modelling. In addition, examiners are also allowed to vary in how they discriminate across grades; to our knowledge, this is the first time this has been investigated. Results show that examiners contribute strongly to variance in scoring in two distinct ways: via the traditional conception of score stringency (34% of score variance), but also in how they discriminate in scoring across grades (7%). As one might expect, candidate and station account for only a small amount of station-level score variance once candidate grades are accounted for (3% and 2% respectively), with the remainder residual (54%). Investigation of the impact on station-level pass/fail decisions suggests that examiner differential-stringency effects combine to give false-positive (candidates passing in error) and false-negative (candidates failing in error) rates of around 5% each at station level, reducing at exam level to 0.4% and 3.3% respectively. This work adds to our understanding of examiner behaviour by demonstrating that examiners can vary in qualitatively different ways in their judgements. For institutions, it emphasises the key message that it is important to sample widely from the examiner pool, via sufficient stations, to ensure OSCE-level decisions are defensible. It also suggests that examiner training should include discussion of global grading and of the combined effect of scoring and grading on candidate outcomes.
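The variance percentages quoted above come from a full cross-classified model, but the underlying idea of splitting score variance by examiner can be sketched with a simple one-way sum-of-squares decomposition (illustrative only; the paper's model adjusts simultaneously for candidate, station and grade):

```python
def between_within_shares(scores_by_examiner):
    """One-way split of total score variance into a between-examiner
    share and a residual (within-examiner) share, from a dict mapping
    examiner to the list of scores that examiner awarded."""
    all_scores = [s for v in scores_by_examiner.values() for s in v]
    grand = sum(all_scores) / len(all_scores)
    total_ss = sum((s - grand) ** 2 for s in all_scores)
    # Between-examiner sum of squares: each examiner's mean deviation
    # from the grand mean, weighted by how many scores they awarded.
    between_ss = sum(
        len(v) * (sum(v) / len(v) - grand) ** 2
        for v in scores_by_examiner.values()
    )
    return between_ss / total_ss, 1 - between_ss / total_ss
```

In this crude split, a large between-examiner share plays the role of the stringency variance the paper estimates; the paper's second effect (grade discrimination) has no analogue in a one-way decomposition.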

12.
Adv Health Sci Educ Theory Pract ; 28(2): 519-536, 2023 05.
Article in English | MEDLINE | ID: mdl-36053344

ABSTRACT

The phenomenon of first impression is well researched in social psychology, but less so in the study of OSCEs and the multiple mini-interview (MMI). To explore its bearing on the MMI method, we included a rating of first impression in the MMI for student selection conducted in 2012 at the University Medical Center Hamburg-Eppendorf, Germany (196 applicants, 26 pairs of raters) and analyzed how it related to MMI performance ratings made by (a) the same rater and (b) a different rater. First impression was assessed immediately after an applicant entered the test room. Each MMI task took 5 minutes and was rated subsequently. Internal consistency was α = .71 for first impression and α = .69 for MMI performance. First impression and MMI performance correlated at r = .49. Both measures weakly predicted performance in two OSCEs for communication skills assessed 18 months later. MMI performance did not increment prediction above the contribution of first impression, and vice versa. Prediction was independent of whether the rater who rated first impression also rated MMI performance. The correlation between first impression and MMI performance is in line with the results of corresponding social-psychological studies, showing that judgements based on minimal information moderately predict behavioral measures. It is also in accordance with the notion that raters often blend the specific assessment task outlined in MMI instructions with the self-imposed question of whether a candidate would fit the role of a medical doctor.
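The internal-consistency figures (α = .71, α = .69) are Cronbach's alpha, computable from per-item score lists; a minimal implementation (illustrative, not the study's code) is:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of items; `items` is a list of
    per-item score lists, aligned across the same respondents."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    # Total score per respondent across all items.
    totals = [sum(person) for person in zip(*items)]
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))
```

Alpha approaches 1 when items covary strongly (as in the perfectly parallel items below) and falls toward 0 as items become unrelated.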


Subjects
Communication, School Admission Criteria, Humans, Pilot Projects, Medical Schools, Academic Medical Centers
13.
Adv Health Sci Educ Theory Pract ; 28(1): 27-46, 2023 03.
Article in English | MEDLINE | ID: mdl-35943605

ABSTRACT

Examiners' judgements play a critical role in competency-based assessments such as objective structured clinical examinations (OSCEs). The standardised nature of OSCEs and their alignment with regulatory accountability assure their wide use as high-stakes assessment in medical education. Research into examiner behaviours has predominantly explored the desirable psychometric characteristics of OSCEs, or investigated examiners' judgements from a cognitive rather than a sociocultural perspective. This study applies cultural historical activity theory (CHAT) to address this gap in exploring examiners' judgements in a high-stakes OSCE. Based on the idea that OSCE examiners' judgements are socially constructed and mediated by their clinical roles, the objective was to explore the sociocultural factors that influenced examiners' judgements of student competence and use the findings to inform examiner training to enhance assessment practice. Seventeen semi-structured interviews were conducted with examiners who assessed medical student competence in progressing to the next stage of training in a large-scale OSCE at one Australian university. The initial thematic analysis provided a basis for applying CHAT iteratively to explore the sociocultural factors and, specifically, the contradictions created by interactions between different elements such as examiners and rules, thus highlighting the factors influencing examiners' judgements. The findings indicated four key factors that influenced examiners' judgements: examiners' contrasting beliefs about the purpose of the OSCE; their varying perceptions of the marking criteria; divergent expectations of student competence; and idiosyncratic judgement practices. These factors were interrelated with the activity systems of the medical school's assessment practices and the examiners' clinical work contexts. Contradictions were identified through the guiding principles of multi-voicedness and historicity. 
The exploration of the sociocultural factors that may influence the consistency of examiners' judgements was facilitated by applying CHAT as an analytical framework. Reflecting upon these factors at organisational and system levels generated insights for creating fit-for-purpose examiner training to enhance assessment practice.


Subjects
Medical Education, Medical Students, Humans, Judgment, Australia, Educational Assessment, Clinical Competence
14.
Article in English | MEDLINE | ID: mdl-37851159

ABSTRACT

Objective structured clinical examinations (OSCEs) are widely used to assess medical students' clinical skills. Virtual OSCEs were used in place of in-person OSCEs during the COVID-19 pandemic; however, their reliability has yet to be robustly analyzed. By applying generalizability (G) theory, this study aimed to evaluate the reliability of a hybrid OSCE, which mixed in-person and online methods, and to gain insights into improving OSCE reliability. During the 2020-2021 hybrid OSCEs, one examinee, one rater and a vinyl mannequin for physical examination participated onsite, while a standardized simulated patient (SP) for medical interviewing and a second rater joined online in a virtual breakout room on an audiovisual conferencing system. G-coefficients, and 95% confidence intervals of the borderline score (the border zone, BZ), were calculated under the standard six-station, two-rater, six-item setting. G-coefficients for the in-person (2017-2019) and hybrid (2020-2021) OSCEs were estimated at 0.624, 0.770, 0.782, 0.759 and 0.823, respectively, and the BZ scores at 2.43-3.57, 2.55-3.45, 2.59-3.41, 2.59-3.41 and 2.51-3.49, respectively, on a score range of 1 to 6. Although the hybrid OSCEs showed reliability comparable to the in-person OSCEs, they need further improvement for use as a very high-stakes examination. In addition to increasing clinical vignettes, more proficient online/on-demand raters and/or online SPs for medical interviews could improve reliability. Reliability can also be ensured through supplementary examination and by increasing the number of online raters for the small number of students within the BZs.
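The relative G-coefficient central to this design follows the standard G-theory formula: universe-score (person) variance divided by itself plus relative error, where each person-by-facet interaction component is divided by the number of conditions of that facet. A sketch for a crossed person x rater x item design, with variance components assumed already estimated (e.g. from an ANOVA of the score matrix):

```python
def relative_g(var_p, var_pr, var_pi, var_res, n_raters, n_items):
    """Relative (norm-referenced) G-coefficient for a fully crossed
    person x rater x item design.

    var_p   : person (universe-score) variance component
    var_pr  : person x rater interaction component
    var_pi  : person x item interaction component
    var_res : residual (person x rater x item, error) component
    """
    rel_error = (var_pr / n_raters
                 + var_pi / n_items
                 + var_res / (n_raters * n_items))
    return var_p / (var_p + rel_error)
```

Increasing `n_raters` or `n_items` shrinks the error term, which is why the paper's suggestions (more raters, more vignettes) raise the coefficient.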

15.
Med Teach ; 45(10): 1163-1169, 2023 10.
Article in English | MEDLINE | ID: mdl-37029957

ABSTRACT

INTRODUCTION: Alongside the usual exam-level cut-score requirement, a conjunctive minimum number of stations passed (MNSP) standard is common practice in OSCE-type assessments across some parts of the world. Typically, the MNSP is fixed in advance with little justification and does not vary from one administration to another in a particular setting, which is not congruent with best assessment practice for high-stakes examinations. In this paper, we empirically investigate four methods of setting such a standard in an examinee-centred (i.e. post hoc), criterion-based way that allows the standard to vary appropriately with station and test difficulty. METHODS AND RESULTS: Using many administrations (n = 442) of a single exam (PLAB2 in the UK), we show via mixed modelling that the total number of stations passed per candidate has reliability close to that of the total test score (relative G-coefficients 0.73 and 0.76 respectively). We then argue that calculating the MNSP from the predicted number of stations passed at the 'main' exam-level cut-score (i.e. for the borderline candidate) is conceptually, theoretically and practically preferable among the four approaches considered. Further analysis indicates that this standard does vary from administration to administration but acts in a secondary way, with approximately a quarter of exam-level failures resulting from application of the MNSP standard alone. CONCLUSION: Collectively, this work suggests that the identified approach to setting the MNSP standard is practically feasible and, in many settings, more defensible than a fixed number of stations set in advance.
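The preferred approach, predicting the number of stations passed by a candidate scoring exactly at the exam-level cut-score, can be sketched with a simple least-squares regression (illustrative only; the paper derives its predictions from mixed modelling across many administrations):

```python
def mnsp_at_cutscore(total_scores, stations_passed, cut_score):
    """Predict the number of stations a borderline candidate (one
    scoring exactly at the exam-level cut-score) would pass, via
    simple linear regression of stations passed on total score."""
    n = len(total_scores)
    mx = sum(total_scores) / n
    my = sum(stations_passed) / n
    # Least-squares slope from the cross- and self-products of deviations.
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(total_scores, stations_passed))
    sxx = sum((x - mx) ** 2 for x in total_scores)
    slope = sxy / sxx
    return my + slope * (cut_score - mx)
```

Because the prediction is anchored to that administration's own cut-score and score distribution, the resulting MNSP moves with station and test difficulty rather than being fixed in advance.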


Subjects
Clinical Competence, Educational Assessment, Humans, Reproducibility of Results, Educational Assessment/methods
16.
Med Teach ; 45(2): 212-218, 2023 02.
Article in English | MEDLINE | ID: mdl-36151754

ABSTRACT

OBJECTIVE: Clerkship is crucial for fourth-year medical students before they enter the clinical environment; however, lack of confidence impairs clerks' performance during clinical rotations. We assessed the impact of a formative Objective Structured Clinical Examination (OSCE) with immediate feedback on surgical clerks' self-confidence and clinical competence. METHODS: In this prospective randomized controlled study, 38 fourth-year medical students starting their surgical clerkship were randomly divided into a control group (n = 19) and an OSCE group (n = 19), the latter receiving an extra six-station formative OSCE with immediate feedback on performance prior to the surgical rotation. A self-confidence assessment (SCA) was collected from each participant before the formative OSCE, immediately after it, and one month later. Clinical competence was assessed one month later using a mini-clinical evaluation exercise (mini-CEX) with a case of acute abdominal pain, and direct observation of procedural skills (DOPS) with incision and suturing. RESULTS: SCAs improved significantly in the OSCE group immediately after the training and a month later, compared with the control group. The mini-CEX score was significantly higher in the OSCE group than in the control group, but the DOPS score for incision and suturing was not. CONCLUSION: A formative OSCE with immediate feedback can significantly enhance surgical clerks' self-confidence and their clinical competence in history taking, physical examination and clinical reasoning; however, it did not improve their dexterity in procedural skills.


Subjects
Clinical Clerkship, Clinical Competence, Humans, Educational Assessment, Feedback, Prospective Studies, Physical Examination
17.
Med Teach ; 45(9): 978-983, 2023 09.
Article in English | MEDLINE | ID: mdl-36786837

ABSTRACT

INTRODUCTION: The Ottawa Conference on the Assessment of Competence in Medicine and the Healthcare Professions was first convened in Ottawa in 1985. Since then, what has become known as the Ottawa conference has been held in various locations around the world every two years. It has become an important conference for the assessment community, including researchers, educators, administrators and leaders, to share contemporary knowledge and develop international standards for assessment in medical and health-professions education. METHODS: The Ottawa 2022 conference was held in Lyon, France, in conjunction with the AMEE 2022 conference. A diverse group of international assessment experts was invited to present a symposium at the AMEE conference summarising key concepts from the Ottawa conference; this paper was developed from that symposium. RESULTS AND DISCUSSION: This paper summarises key themes and issues that emerged from the Ottawa 2022 conference. It highlights the importance of the consensus statements and discusses challenges for assessment, such as issues of equity, diversity and inclusion; the shift in emphasis to systems of assessment; the implications of 'big data' and analytics; and the challenge of ensuring that published research and practice are based on contemporary theories and concepts.


Subjects
Medicine , Professional Competence , Humans
18.
Med Teach ; 45(8): 893-905, 2023 08.
Article in English | MEDLINE | ID: mdl-36940135

ABSTRACT

PURPOSE: New emphasis on the assessment of health professions educators' teaching competence has led to greater use of the Objective Structured Teaching Encounter (OSTE). The purpose of this study is to review and further describe the current uses and learning outcomes of the OSTE in health professions education. MATERIALS AND METHODS: PubMed, MEDLINE, and CINAHL (March 2010 to February 2022) were searched for English-language studies describing the use of an OSTE for any educational purpose within health professions education. RESULTS: Of the 29 articles that met the inclusion criteria, over half (17 of 29, 58.6%) were published during or after 2017. Seven studies described OSTE use outside the traditional medical education context; these new contexts included basic sciences, dental, pharmacy, and Health Professions Education program graduates. Eleven articles described novel OSTE content, including leadership skills, emotional intelligence, medical ethics, inter-professional conduct, and a procedural OSTE. There is increasing evidence supporting the use of OSTEs for the assessment of clinical educators' teaching skills. CONCLUSIONS: The OSTE is a valuable tool for the improvement and assessment of teaching within a variety of health professions education contexts. Further study is required to determine the impact of OSTEs on teaching behaviors in real-life contexts.


Subjects
Medical Education , Educational Measurement , Humans , Professional Competence , Clinical Competence , Learning , Teaching
19.
Med Teach ; : 1-2, 2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37566744

ABSTRACT

The Membership of the Royal College of Obstetricians and Gynaecologists (MRCOG) and the European Fellowship in Obstetrics and Gynaecology (EBCOG) exams are both well-renowned specialty qualifications that assess the competency of obstetricians and gynaecologists. In this article, an exam candidate shares his perspective on the changes made during the COVID-19 pandemic. Despite moving to an online format that allowed candidates to take the exam remotely, the MRCOG Part 3 exam maintained its main structure: (1) simulated patient tasks evaluating candidates' interactions with well-trained patients in a tele-interview; and (2) structured discussions with clinical examiners on selected topics. In contrast, EBCOG created a brand new structure suited to the online model, assessing candidates' core clinical skills across broader aspects. Although it is unclear whether online exams will continue in future, this has been a unique experience for candidates during the pandemic.

20.
BMC Med Educ ; 23(1): 803, 2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37885005

ABSTRACT

PURPOSE: Ensuring equivalence of examiners' judgements within distributed objective structured clinical exams (OSCEs) is key to both fairness and validity, but is hampered by the lack of cross-over in the performances which different groups of examiners observe. This study develops a novel method called Video-based Examiner Score Comparison and Adjustment (VESCA) and uses it, for the first time, to compare examiners' scoring across different OSCE sites. MATERIALS/METHODS: Within a summative 16-station OSCE, volunteer students were videoed on each station and all examiners were invited to score station-specific comparator videos in addition to their usual student scoring. The linkage provided through the video scores enabled use of Many Facet Rasch Modelling (MFRM) to compare (1) examiner-cohort and (2) site effects on students' scores. RESULTS: Examiner-cohorts varied by 6.9% in the overall score allocated to students of the same ability. Whilst only a tiny difference was apparent between sites, examiner-cohort variability was greater in one site than the other. Adjusting student scores produced a median change in rank position of 6 places (0.48 deciles); however, 26.9% of students changed their rank position by at least 1 decile. By contrast, only one student's pass/fail classification was altered by score adjustment. CONCLUSIONS: Whilst comparatively limited examiner participation rates may limit interpretation of score adjustment in this instance, this study demonstrates the feasibility of using VESCA for quality assurance purposes in large-scale distributed OSCEs.


Subjects
Educational Measurement , Medical Students , Humans , Educational Measurement/methods , Clinical Competence