Results 1 - 12 of 12
1.
Br J Clin Pharmacol; 90(2): 493-503, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37793701

ABSTRACT

AIMS: The United Kingdom (UK) Prescribing Safety Assessment (PSA) is a 2-h online assessment of basic competence to prescribe and supervise the use of medicines. It has been undertaken by students and doctors in UK medical and foundation schools for the past decade. This study describes the academic characteristics and performance of the assessment; the longitudinal performance of candidates and schools; stakeholder feedback; and surrogate markers of prescribing safety in UK healthcare practice. METHODS: We reviewed the performance data generated by over 70 000 medical students and 3700 foundation doctors who have participated in the PSA since its inception in 2013. These data were supplemented by Likert-scale and free-text feedback from candidates and a variety of stakeholder groups. Further data on medication incidents, collected by national reporting systems and the regulatory body, are reported with permission. RESULTS: We demonstrate the feasibility, high quality and reliability of an online prescribing assessment, which uniquely provides a measure of prescribing competence against a national standard. Over 90% of candidates pass the PSA on their first attempt, while a minority are identified for further training and assessment. The pass rate shows some variation between institutions and between undergraduate and foundation cohorts. Most respondents to a national survey agreed that the PSA is a useful instrument for assessing prescribing competence, and an independent review has recommended adding the PSA to the Medical Licensing Assessment. Surrogate markers suggest that prescribing safety in practice has improved, temporally associated with the introduction of the PSA, although other factors may also have contributed. CONCLUSIONS: The PSA is a practical and cost-effective way of delivering a reliable national assessment of prescribing competence that has educational impact and is supported by the majority of stakeholders. National systems to identify and report prescribing errors and the harm they cause are needed, so that the impact of educational interventions can be measured.


Subjects
Clinical Competence, Educational Measurement, Humans, Reproducibility of Results, United Kingdom, Feedback, Biomarkers
2.
Clin Med (Lond); 22(6): 590-593, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36427881

ABSTRACT

Successful completion of year 1 of the UK Foundation Programme is a General Medical Council requirement that newly qualified doctors must meet in order to gain full registration with a licence to practise in the UK. We present compelling evidence that both components of the UK Foundation Programme allocation process, the Educational Performance Measure (EPM) and the Situational Judgement Test (SJT), are not fit for purpose. The ranking process drives competitive behaviours among medical students and undermines NHS teamworking values. Furthermore, data from 2013-2020 show that UK minority ethnic students consistently receive significantly lower SJT scores than White students. The current process allocates lower-ranked students, who often need more academic and social support, to undersubscribed regions. This can leave vacancies in less popular regions, ultimately worsening health inequality. A preference-informed allocation process would improve trainee access to support and help retain trainees in underserved regions. We summarise the flaws of the current system and report a potential radical solution.


Subjects
Physicians, Medical Students, Humans, Health Status Disparities, Minority Groups, Ethnicity
3.
BMC Med Educ; 22(1): 708, 2022 Oct 05.
Article in English | MEDLINE | ID: mdl-36199083

ABSTRACT

BACKGROUND: Standard setting for clinical examinations typically uses the borderline regression method to set the pass mark. This method assumes equal intervals between global ratings (GRs) (e.g. Fail, Borderline Pass, Clear Pass, Good and Excellent). However, to the best of our knowledge this assumption has never been tested in the medical literature. We examine whether the assumption of equal intervals between GRs is met, and the potential implications for student outcomes. METHODS: Clinical finals examiners were recruited across two institutions to place the typical 'Borderline Pass', 'Clear Pass' and 'Good' candidate on a continuous slider scale between a typical 'Fail' candidate at point 0 and a typical 'Excellent' candidate at point 1. Results were analysed using one-sample t-tests comparing each interval against the equal-interval value of 0.25. Secondary data analysis was performed on summative assessment scores for 94 clinical stations and 1191 medical student examination outcomes in the final 2 years of study at a single centre. RESULTS: On a scale from 0.00 (Fail) to 1.00 (Excellent), mean examiner GRs for 'Borderline Pass', 'Clear Pass' and 'Good' were 0.33, 0.55 and 0.77 respectively. All four intervals between GRs (Fail-Borderline Pass, Borderline Pass-Clear Pass, Clear Pass-Good, Good-Excellent) differed statistically significantly from the expected value of 0.25 (all p-values < 0.0125). An ordinal linear regression using mean examiner GRs was performed for each of the 94 stations to determine pass marks out of 24. This increased pass marks for all 94 stations compared with the original GR locations (mean increase 0.21), causing one additional fail by overall exam pass mark (out of 1191 students) and 92 additional station fails (out of 11,346 stations). CONCLUSIONS: Although the assumption of equal intervals between GRs across the performance spectrum is not met, and an adjusted regression equation increases station pass marks, the effect on overall exam pass/fail outcomes is modest.
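To make the two analyses concrete, here is a minimal sketch of both steps: the one-sample t-tests of the GR intervals against 0.25, and a simplified borderline regression to derive a station pass mark. The examiner placements and station scores below are invented for illustration; only the mean GR locations (0.33, 0.55, 0.77) come from the abstract.

```python
# Illustrative sketch only: examiner placements and station scores are
# invented; only the mean GR locations come from the abstract.
import numpy as np
from scipy import stats

# Hypothetical slider placements of 'Borderline Pass', 'Clear Pass' and
# 'Good' on a 0 (Fail) to 1 (Excellent) scale, one row per examiner.
placements = np.array([
    [0.30, 0.55, 0.78],
    [0.35, 0.52, 0.75],
    [0.33, 0.58, 0.80],
    [0.31, 0.54, 0.76],
])

# Anchor each row at Fail (0) and Excellent (1), then take the four
# intervals between adjacent global ratings.
anchored = np.hstack([np.zeros((len(placements), 1)),
                      placements,
                      np.ones((len(placements), 1))])
intervals = np.diff(anchored, axis=1)  # shape: (n_examiners, 4)

# One-sample t-test of each interval against the equal-interval value 0.25.
for i in range(intervals.shape[1]):
    t, p = stats.ttest_1samp(intervals[:, i], popmean=0.25)
    print(f"interval {i + 1}: mean={intervals[:, i].mean():.3f}, "
          f"t={t:.2f}, p={p:.4f}")

# Simplified borderline regression: regress mean station scores (out of 24)
# on the GR locations and read off the score predicted at 'Borderline Pass'.
gr_locations = np.array([0.00, 0.33, 0.55, 0.77, 1.00])   # from the abstract
station_scores = np.array([6.0, 12.5, 16.0, 19.5, 23.0])  # hypothetical
fit = stats.linregress(gr_locations, station_scores)
pass_mark = fit.intercept + fit.slope * 0.33
print(f"station pass mark: {pass_mark:.1f} / 24")
```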


Subjects
Clinical Competence, Educational Measurement, Educational Measurement/methods, Humans, Physical Examination, Regression Analysis
4.
Adv Med Educ Pract; 13: 123-127, 2022.
Article in English | MEDLINE | ID: mdl-35173511

ABSTRACT

The General Medical Council's publication 'Outcomes for Graduates' places emphasis on doctors being able to integrate biomedical science, research and scholarship with clinical practice. In response, a new paradigm of assessment was introduced for the intercalated Bachelor of Science program at Imperial College School of Medicine in 2019. This innovative approach involves authentic "active learning" assessments analogous to tasks encountered in a research environment and is intended to test a wider range of applied scientific skills than traditional examinations. Written assessments include a "Letter to the Editor", a scientific abstract, and production of a lay summary. A clinical case study titled "Science in Context" presents a real or virtual patient, with evaluation of current and emerging evidence within that field. Another assessment emulates the academic publishing process: groups submit a literature review and engage in reciprocal peer review of another group's work. A rebuttal letter accompanies the final submission, detailing how feedback was addressed. Scientific presentation skills are developed through tasks including a research proposal pitch, discussion of therapies or diagnostics, and review of a paper. A data management assignment develops skills in generating hypotheses, performing analysis, and drawing conclusions. Finally, students conduct an original research project, which is assessed via a written report in the format of a research paper and an oral presentation involving critical analysis of their project. We aspire to train clinicians who apply scientific principles to critique the evidence base of medical practice and possess the skillset to conduct high-quality research underpinned by the principles of best clinical and academic practice. Assessment drives learning, and active learning has been demonstrated to enhance academic performance and reduce attainment gaps in science education. We therefore believe this strategy will help to shape our students as future scientists and scholars as well as clinical practitioners and professionals.

5.
Anat Sci Educ; 14(3): 385-393, 2021 May.
Article in English | MEDLINE | ID: mdl-33465814

ABSTRACT

In anatomical education, three-dimensional (3D) visualization technology allows active and stereoscopic exploration of anatomy and can easily be adopted into medical curricula alongside traditional 3D teaching methods. However, knowledge is still most often assessed with two-dimensional (2D) paper-and-pencil tests. To address this growing misalignment between learning and assessment, this viewpoint commentary highlights the development of a virtual 3D assessment scenario, together with student and teacher perspectives on the use of this assessment tool: a 10-minute session of anatomical knowledge assessment with real-time interaction between assessor and examinee, both wearing a HoloLens and sharing the same stereoscopic 3D augmented reality model. Additionally, recommendations for future directions, including implementation, validation, logistic challenges, and cost-effectiveness, are provided. Continued collaboration between developers, researchers, teachers, and students is critical to advancing these processes.


Subjects
Anatomy, Anatomy/education, Curriculum, Educational Status, Humans, Three-Dimensional Imaging, Learning
8.
Med Teach; 42(4): 416-421, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31816262

ABSTRACT

Uncertainty is a common and increasingly acknowledged problem in clinical practice. Current single best answer (SBA) style assessments test areas where there is one correct answer, and because the approach to assessment shapes the approach to learning, these exams may poorly prepare our future doctors to handle uncertainty. We therefore need to modify our approach to assessment to emphasize reasoning and introduce the possibility of more than one 'correct' answer. We have developed clinical prioritization questions (CPQs), a novel formative assessment tool in which students prioritize possible responses in order of likelihood. This assessment format was piloted with a group of medical students and evaluated against the more traditional SBA question format in a team-based learning setting. Students reported that they felt ongoing use would help improve their tolerance of uncertainty (p < 0.01). Furthermore, over 80% of students felt that CPQs were more reflective of real-life clinical practice. Group-based discussions were significantly longer when answering CPQs (p < 0.01), suggesting they may promote richer discourse. CPQs may have a role in formative assessment to help equip students with the skills to cope with ambiguity and to strengthen clinical reasoning and decision-making. Institutions may find them more practical to implement than other clinical reasoning assessment tools.
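The abstract does not describe how a CPQ prioritisation is scored. As a purely hypothetical illustration of how agreement between a student's ordering and a model answer could be quantified, the sketch below uses Spearman rank correlation; the diagnoses and orderings are invented.

```python
# Purely hypothetical CPQ scoring scheme: the abstract does not specify how
# prioritisations are scored. Diagnoses and orderings are invented.
from scipy import stats

model_order = ["pulmonary embolism", "pneumonia",
               "acute coronary syndrome", "panic attack"]
student_order = ["pneumonia", "pulmonary embolism",
                 "acute coronary syndrome", "panic attack"]

# Map each response to its rank in the model answer, then measure rank
# agreement between the student's ordering and the model ordering.
model_rank = {dx: i for i, dx in enumerate(model_order)}
student_ranks = [model_rank[dx] for dx in student_order]

rho, _ = stats.spearmanr(range(len(student_order)), student_ranks)
print(f"agreement with model ordering: rho={rho:.2f}")
```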


Subjects
Educational Measurement, Medical Students, Clinical Competence, Humans, Learning, Uncertainty
9.
BMJ Open; 9(9): e032550, 2019 Sep 26.
Article in English | MEDLINE | ID: mdl-31558462

ABSTRACT

OBJECTIVES: The study aimed to compare candidate performance between traditional best-of-five single-best-answer (SBA) questions and very-short-answer (VSA) questions, in which candidates must generate their own answers of between one and five words. The primary objective was to determine whether the mean positive cue rate for SBAs exceeded the null hypothesis guessing rate of 20%. DESIGN: This was a cross-sectional study undertaken in 2018. SETTING: 20 medical schools in the UK. PARTICIPANTS: 1417 volunteer medical students preparing for their final undergraduate medicine examinations (total eligible population across all UK medical schools approximately 7500). INTERVENTIONS: Students completed a 50-question VSA test, followed immediately by the same test in SBA format, using a novel digital exam delivery platform which also facilitated rapid marking of the VSAs. MAIN OUTCOME MEASURES: The main outcome measure was the mean positive cue rate across SBAs: the percentage of students getting the SBA format of a question correct after getting the VSA format incorrect. Internal consistency, item discrimination and the pass rate using Cohen standard setting were also evaluated for VSAs and SBAs, and a cost analysis of marking the VSAs was performed. RESULTS: The study was completed by 1417 students. Mean student scores were 21 percentage points higher for SBAs. The mean positive cue rate was 42.7% (95% CI 36.8% to 48.6%); one-sample t-test against ≤20%: t=7.53, p<0.001. Internal consistency was higher for VSAs than for SBAs, and the median item discrimination was equivalent. The estimated marking cost was £2655 ($3500), requiring 24.5 hours of clinician time (1.25 s per student per question). CONCLUSIONS: SBA questions can give a false impression of students' competence. VSAs appear to have greater authenticity and can provide useful information about students' cognitive errors, helping to improve learning as well as assessment. Electronic delivery and marking of VSAs is feasible and cost-effective.
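The positive cue rate defined above is straightforward to compute from paired response data. The following sketch uses simulated boolean response matrices (the real data are not available here); only the definition, SBA correct after VSA incorrect, and the 20% guessing benchmark come from the abstract.

```python
# Sketch with simulated data: the boolean response matrices are invented;
# only the cue-rate definition and the 20% benchmark come from the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students, n_questions = 1417, 50

# Hypothetical matrices: True where the student answered correctly.
vsa_correct = rng.random((n_students, n_questions)) < 0.55
sba_correct = vsa_correct | (rng.random((n_students, n_questions)) < 0.40)

# Positive cue rate per question: among students who got the VSA wrong,
# the proportion who nevertheless got the cued SBA version right.
vsa_wrong = ~vsa_correct
cue_rate = (sba_correct & vsa_wrong).sum(axis=0) / vsa_wrong.sum(axis=0)
print(f"mean positive cue rate: {cue_rate.mean():.1%}")

# One-sample t-test against the 20% rate expected from guessing among
# five options.
t, p = stats.ttest_1samp(cue_rate, popmean=0.20, alternative="greater")
print(f"t={t:.2f}, p={p:.4g}")
```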


Subjects
Clinical Competence, Undergraduate Medical Education/methods, Educational Measurement/methods, Learning, Medical Schools, Medical Students, Academic Performance, Cross-Sectional Studies, Decision Making, Humans, Knowledge, Surveys and Questionnaires, United Kingdom
10.
Adv Med Educ Pract; 10: 501-506, 2019.
Article in English | MEDLINE | ID: mdl-31372086

ABSTRACT

BACKGROUND: The Prescribing Safety Assessment (PSA) is an online assessment of safe and effective prescribing, taken by final-year UK medical students. To prepare students for the PSA, we used a modified form of team-based learning, team-based revision (TBR), in which students consolidate previously learned prescribing knowledge and skills across a broad range of topics. We evaluated students' response to TBR and their perceptions of team working. METHODS: Eight TBR sessions based on the PSA blueprint were conducted over two days by three faculty members for final-year medical students. During TBR sessions, students worked in small groups, answering multiple-choice questions first individually and then as a group. They subsequently answered open-ended questions in their groups, with answers written on a drug chart to increase authenticity. Students completed surveys using Likert-type items to determine their views on TBR and their confidence in prescribing. RESULTS: The majority of respondents agreed that the sessions were useful preparation both for the PSA (82%) and for Foundation Year 1 (78%), and 92% agreed that using drug charts aided learning. Prescribing confidence increased significantly after TBR (median pre-TBR: 2, post-TBR: 5, p<0.0001). TBR significantly improved attitudes towards "team experience" (p<0.001), "team impact on quality of learning" (p<0.01) and "team impact on clinical reasoning ability" (p<0.001). CONCLUSIONS: Team-based revision is a resource-efficient addition to undergraduate prescribing teaching and can help with preparation for the PSA. A short course of TBR was effective in influencing students' attitudes towards teamwork.
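The abstract reports medians for the paired pre/post confidence ratings but does not name the statistical test used. The sketch below applies one common choice for paired ordinal data, the Wilcoxon signed-rank test, to invented Likert ratings.

```python
# Invented paired Likert ratings (1-5) of prescribing confidence before and
# after TBR; the abstract does not name the test used, so the Wilcoxon
# signed-rank test is shown as one common choice for paired ordinal data.
import numpy as np
from scipy import stats

pre = np.array([2, 1, 3, 2, 2, 3, 1, 2, 2, 3, 2, 2])
post = np.array([5, 4, 5, 4, 5, 5, 4, 4, 5, 5, 4, 5])

print(f"median pre: {np.median(pre):.0f}, median post: {np.median(post):.0f}")

# Paired test on the pre/post differences.
stat, p = stats.wilcoxon(pre, post)
print(f"W={stat:.1f}, p={p:.4g}")
```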

11.
BMC Med Educ; 16(1): 266, 2016 Oct 13.
Article in English | MEDLINE | ID: mdl-27737661

ABSTRACT

BACKGROUND: Single Best Answer (SBA) questions are widely used in undergraduate and postgraduate medical examinations. Selection of the correct answer in SBA questions may be subject to cueing and therefore might not test the student's knowledge. In contrast to this artificial construct, doctors are ultimately required to perform in a real-life setting that does not offer a list of choices. This professional competence can be tested using Short Answer Questions (SAQs), where the student writes the correct answer without prompting from the question. However, SAQs cannot easily be machine-marked and are therefore not feasible as an instrument for testing a representative sample of the curriculum for a large number of candidates. We hypothesised that a novel assessment instrument consisting of very short answer (VSA) questions is a superior test of knowledge to assessment by SBA. METHODS: We conducted a prospective pilot study on one cohort of 266 medical students sitting a formative examination. All students were assessed both by a novel instrument consisting of VSAs and by SBA questions. Both instruments tested the same knowledge base. Using the filter function of Microsoft Excel, examiners reviewed the range of answers provided for each VSA question and accepted correct answers in less than two minutes. Examination results were compared between the two methods of assessment. RESULTS: Students scored more highly on all fifteen SBA questions than on the VSA format of the same questions, despite both examinations requiring the same knowledge base. CONCLUSIONS: Valid assessment of undergraduate and postgraduate knowledge can be improved by the use of VSA questions. Such an approach tests nascent physician ability rather than the ability to pass exams.
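The Excel-filter marking workflow translates naturally into a script. Below is a hedged sketch in pandas: the responses and accepted variants are invented, and the approach, listing the distinct normalised answers per question so each variant is judged once, mirrors rather than reproduces the authors' method.

```python
# Hedged sketch of the review step, translated from the Excel filter workflow
# into pandas; responses and accepted variants are invented for illustration.
import pandas as pd

responses = pd.DataFrame({
    "question": [1, 1, 1, 1, 2, 2, 2],
    "answer": ["amoxicillin", "Amoxicillin ", "amoxycillin",
               "penicillin", "warfarin", "Warfarin", "heparin"],
})

# Normalise, then list the distinct answers per question (as an Excel filter
# would) so each unique variant only needs to be judged once.
responses["normalised"] = responses["answer"].str.strip().str.lower()
print(responses.groupby("question")["normalised"].value_counts())

# Hypothetical marking decisions: which normalised variants count as correct.
accepted = {1: {"amoxicillin", "amoxycillin"}, 2: {"warfarin"}}
responses["correct"] = [
    norm in accepted[q]
    for q, norm in zip(responses["question"], responses["normalised"])
]
print(responses[["question", "answer", "correct"]])
```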


Subjects
Clinical Competence/standards, Graduate Medical Education, Undergraduate Medical Education, Educational Measurement/methods, Medical Students, Curriculum, Educational Measurement/standards, Humans, Pilot Projects, Prospective Studies, Reproducibility of Results