Results 1 - 6 of 6
1.
BMC Med Educ ; 23(1): 364, 2023 May 23.
Article in English | MEDLINE | ID: mdl-37217918

ABSTRACT

BACKGROUND: Pandemic disruptions to medical education worldwide resulted in rapid adaptations to clinical skills learning. These adaptations included moving most teaching to the online environment, decreasing the accepted "hands-on" methods of teaching and learning. While studies have shown significant impacts on student confidence in skills acquisition, there is a paucity of assessment outcome studies which would contribute a valuable perspective on whether measurable deficits were incurred. Here, a preclinical (Year 2) cohort was investigated for clinical skills learning impacts that could influence their transition to hospital-based placements. METHODS: A sequential mixed methods approach was used on the Year 2 Medicine cohort, including: focus group discussions with thematic analysis; a survey derived from the themes observed; and a cohort comparison of the clinical skills examination results of the disrupted Year 2 cohort, compared to pre-pandemic cohorts. RESULTS: Students reported experiencing benefits and disadvantages of the shift to online learning, including a decrease in confidence in their skills acquisition. End of year summative clinical assessments showed non-inferior outcomes when compared to previous cohorts for the majority of clinical skills. However, for procedural skills (venepuncture) the disrupted cohort had significantly lower scores compared to a pre-pandemic cohort. CONCLUSIONS: Rapid innovation during the COVID-19 pandemic provided the opportunity to compare online asynchronous hybrid clinical skills learning with the usual practice of face-to-face synchronous experiential learning. In this study, students' reported perceptions and assessment performance data indicate that careful selection of skills suitable for online teaching, supported by timetabled "hands-on" sessions and ample practice opportunities, is likely to provide non-inferior outcomes for clinical skills learning in students about to transition to clinical placements. 
The findings can be used to inform clinical skills curriculum designs that incorporate the virtual environment, and assist with future-proofing skills teaching in the case of further catastrophic disruptions.


Subject(s)
COVID-19 , Students, Medical , Humans , Clinical Competence , Pandemics , COVID-19/epidemiology , Learning
2.
Front Med (Lausanne) ; 9: 844884, 2022.
Article in English | MEDLINE | ID: mdl-35445035

ABSTRACT

Background: During 2020, the COVID-19 pandemic caused worldwide disruption to the delivery of clinical assessments, requiring medical schools to rapidly adjust their design of established tools. Derived from the traditional face-to-face Objective Structured Clinical Examination (OSCE), the virtual OSCE (vOSCE) was delivered online, using a range of school-dependent designs. The quality of these new formats was evaluated remotely through virtual quality assurance (vQA). This study synthesizes the vOSCE and vQA experiences of stakeholders from participating Australian medical schools based on a Quality framework. Methods: This study utilized a descriptive phenomenological qualitative design. Focus group discussions (FGD) were held with 23 stakeholders, including examiners, academics, simulated patients, professional staff, students and quality assurance examiners. The data were analyzed using a theory-driven conceptual Quality framework. Results: The vOSCE was perceived as a relatively fit-for-purpose assessment during pandemic physical distancing mandates. Additionally, the vOSCE was identified as being value-for-money and was noted to provide procedural benefits which led to an enhanced experience for those involved. However, despite being largely delivered fault-free, the current designs are considered limited in the scope of skills they can assess, and thus do not meet the established quality of the traditional OSCE. Conclusions: Whilst virtual clinical assessments are limited in their scope of assessing clinical competency when compared with the traditional OSCE, their integration into programs of assessment does, in fact, have significant potential. Scholarly review of stakeholder experiences has elucidated quality aspects that can inform iterative improvements to the design and implementation of future vOSCEs.

3.
Front Med (Lausanne) ; 9: 825502, 2022.
Article in English | MEDLINE | ID: mdl-35265639

ABSTRACT

The Objective Structured Clinical Examination (OSCE) has been traditionally viewed as a highly valued tool for assessing clinical competence in health professions education. However, as the OSCE typically consists of a large-scale, face-to-face assessment activity, it has been variably criticized over recent years due to the extensive resourcing and relative expense required for delivery. Importantly, due to COVID-pandemic conditions and necessary health guidelines in 2020 and 2021, logistical issues inherent with OSCE delivery were exacerbated for many institutions across the globe. As a result, alternative clinical assessment strategies were employed to gather assessment datapoints to guide decision-making regarding student progression. Now, as communities learn to "live with COVID", health professions educators have the opportunity to consider what weight should be placed on the OSCE as a tool for clinical assessment in the peri-pandemic world. In order to elucidate this timely clinical assessment issue, this qualitative study utilized focus group discussions to explore the perceptions of 23 clinical assessment stakeholders (examiners, students, simulated patients and administrators) in relation to the future role of the traditional OSCE. Thematic analysis of the focus group transcripts revealed four major themes in relation to participants' views on the future of the OSCE vis-à-vis other clinical assessments in this peri-pandemic climate. The identified themes are (a) enduring value of the OSCE; (b) OSCE tensions; (c) educational impact; and (d) the importance of programs of assessment. It is clear that the OSCE continues to play a role in clinical assessments due to its perceived fairness, standardization and ability to yield robust results. However, recent experiences have resulted in a diminishing and refining of its role alongside workplace-based assessments in the new, peri-pandemic programs of assessment.
Future programs of assessment should consider the strategic positioning of the OSCE within the context of utilizing a range of tools when determining students' clinical competence.

5.
Med Teach ; 43(2): 174-181, 2021 02.
Article in English | MEDLINE | ID: mdl-33103522

ABSTRACT

BACKGROUND: The Australian Collaboration for Clinical Assessment in Medicine (ACCLAiM) is a voluntary assessment consortium, involving medical schools nationwide. The aims of ACCLAiM are to benchmark student clinical assessment outcomes and to provide quality assurance (QA) of exit-level Objective Structured Clinical Exams (OSCEs). This study aimed to evaluate the impact of the ACCLAiM QA process for optimising OSCE delivery standards at the member schools using a Community of Practice (CoP) framework. METHODS: A mixed methods sequential explanatory design, involving an online questionnaire and subsequent focus group discussions, was utilised. Questionnaire responses were analysed using descriptive statistics, while thematic analysis was employed for the qualitative data. RESULTS: Data analysis revealed that school-specific OSCE practices had evolved based on QA feedback, as well as a collaborative sharing of expertise consistent with a CoP model. Extending beyond a QA working group for accountability and demonstration of minimum standards, participation in ACCLAiM QA processes is creating a sustainable socio-academic network focused on quality improvement. CONCLUSION: Collaborative QA in clinical assessment creates opportunities for optimising standards in OSCE processes and sharing of resources for OSCE assessments. It also allows for professional development and scholarly engagement in assessment research. These benefits contribute to the existence of an emergent CoP model.


Subject(s)
Clinical Competence , Schools, Medical , Australia , Delivery of Health Care , Educational Measurement , Feedback , Humans
6.
Med Educ ; 55(3): 344-353, 2021 03.
Article in English | MEDLINE | ID: mdl-32810334

ABSTRACT

BACKGROUND: Objective structured clinical examinations (OSCEs) are commonly used to assess the clinical skills of health professional students. Examiner judgement is one acknowledged source of variation in candidate marks. This paper reports an exploration of examiner decision making to better characterise the cognitive processes and workload associated with making judgements of clinical performance in exit-level OSCEs. METHODS: Fifty-five examiners for exit-level OSCEs at five Australian medical schools completed a NASA Task Load Index (TLX) measure of cognitive load and participated in focus group interviews immediately after the OSCE session. Discussions focused on how decisions were made for borderline and clear pass candidates. Interviews were transcribed, coded and thematically analysed. NASA TLX results were quantitatively analysed. RESULTS: Examiners self-reported higher cognitive workload levels when assessing a borderline candidate in comparison with a clear pass candidate. Further analysis revealed five major themes considered by examiners when marking candidate performance in an OSCE: (a) use of marking criteria as a source of reassurance; (b) difficulty adhering to the marking sheet under certain conditions; (c) demeanour of candidates; (d) patient safety, and (e) calibration using a mental construct of the 'mythical [prototypical] intern'. Examiners demonstrated particularly high mental demand when assessing borderline compared to clear pass candidates. CONCLUSIONS: Examiners demonstrate that judging candidate performance is a complex, cognitively difficult task, particularly when performance is of borderline or lower standard. At programme exit level, examiners intuitively want to rate candidates against a construct of a prototypical graduate when marking criteria appear not to describe both what a passing candidate should demonstrate and how they should demonstrate it when completing clinical tasks.
This construct should be shared, agreed upon and aligned with marking criteria to best guide examiner training and calibration. Achieving this integration may improve the accuracy and consistency of examiner judgements and reduce cognitive workload.


Subject(s)
Clinical Competence , Educational Measurement , Australia , Humans , Physical Examination , Schools, Medical