ABSTRACT
BACKGROUND: We investigated whether question format and access to the correct answers affect the pass mark set by standard setters on written examinations. METHODS: Trained educators used the Angoff method to standard-set two 50-item tests with identical vignettes, one in a single best answer question (SBAQ) format (with five answer options) and the other in a very short answer question (VSAQ) format (requiring free-text responses). Half the participants had access to the correct answers and half did not. The data for each group were analysed to determine whether the question format or access to the answers affected the pass mark set. RESULTS: A lower pass mark was set for the VSAQ test than for the SBAQ test by the standard setters who had access to the answers (median difference of 13.85 percentage points, Z = -2.82, p = 0.002). Comparable pass marks were set for the SBAQ test by standard setters with and without access to the correct answers (60.65% and 60.90%, respectively). A lower pass mark was set for the VSAQ test when participants had access to the correct answers (difference in medians -13.75 percentage points, Z = 2.46, p = 0.014). CONCLUSIONS: When given access to the potential correct answers, standard setters appear to appreciate the increased difficulty of VSAQs compared with SBAQs.
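To make the Angoff procedure described above concrete, the sketch below shows how item-level judgements are conventionally aggregated into a pass mark: each standard setter estimates, per item, the probability that a borderline candidate would answer correctly, and the cut score is the mean of those estimates across items and judges. The judge data are hypothetical and not drawn from this study; this is a minimal illustration, not the authors' implementation.

```python
# Minimal Angoff aggregation sketch (hypothetical judgements, not study data).
import statistics

# judgements[judge][item] = estimated probability (0-1) that a borderline
# candidate answers the item correctly.
judgements = [
    [0.60, 0.55, 0.70, 0.45],   # judge 1
    [0.65, 0.50, 0.75, 0.40],   # judge 2
    [0.55, 0.60, 0.65, 0.50],   # judge 3
]

# Mean estimate per item, then mean across items, gives the cut score.
item_means = [statistics.mean(col) for col in zip(*judgements)]
pass_mark = statistics.mean(item_means) * 100  # as a percentage of the test

print(f"Item means: {[round(m, 3) for m in item_means]}")
print(f"Angoff pass mark: {pass_mark:.1f}%")
```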
Subject(s)
Educational Measurement, Educational Measurement/methods, Humans
ABSTRACT
OBJECTIVES: Given the absence of a common passing standard for students at UK medical schools, this paper compares independently set standards for common 'one from five' single-best-answer (multiple-choice) items used in graduation-level applied knowledge examinations and explores potential reasons for any differences. METHODS: A repeated cross-sectional study was conducted. Participating schools were sent a common set of graduation-level items (55 in 2013-2014; 60 in 2014-2015). Items were selected against a blueprint and subjected to a quality review process. Each school employed its own standard-setting process for the common items. The primary outcome was the passing standard set by each medical school for the common items using the Angoff or Ebel method. RESULTS: Of 31 invited medical schools, 22 (71%) participated in 2013-2014 and 30 (97%) in 2014-2015. Schools used a mean of 49 and 53 common items in 2013-2014 and 2014-2015, respectively, representing around one-third of the items in the examinations in which they were embedded. Data from 19 (61%) and 26 (84%) schools, respectively, met the inclusion criteria for comparison of standards. There were statistically significant differences in the passing standards set by schools in both years (effect sizes (f²): 0.041 in 2013-2014 and 0.218 in 2014-2015; both p < 0.001). The interquartile range of standards was 5.7 percentage points in 2013-2014 and 6.5 percentage points in 2014-2015. There was a positive correlation between the relative standards set by schools in the two years (Pearson's r = 0.57, n = 18, p = 0.014). Time allowed per item, method of standard setting and timing of the examination in the curriculum did not have a statistically significant impact on standards. CONCLUSIONS: Independently set standards for common single-best-answer items used in graduation-level examinations vary across UK medical schools. Further work to examine standard-setting processes in more detail is needed to help explain this variability and develop methods to reduce it.
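The effect sizes reported above are Cohen's f² values. Assuming the conventional definition f² = R²/(1 − R²), where R² comes from a model predicting passing standards from school membership, the short sketch below converts between the two quantities; it is illustrative only and not the study's analysis code.

```python
# Hedged sketch: Cohen's f^2 under the conventional definition
# f^2 = R^2 / (1 - R^2). Values around 0.02, 0.15 and 0.35 are usually
# read as small, medium and large effects, respectively.

def cohens_f2(r_squared: float) -> float:
    """Convert a model R^2 into Cohen's f^2."""
    return r_squared / (1.0 - r_squared)

def r2_from_f2(f_squared: float) -> float:
    """Back out the R^2 implied by a Cohen's f^2 value."""
    return f_squared / (1.0 + f_squared)

# Illustrative use with the effect sizes reported above.
for f2 in (0.041, 0.218):
    print(f"f^2 = {f2:.3f} -> R^2 = {r2_from_f2(f2):.3f}")
```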
Subject(s)
Clinical Competence/standards, Education, Medical, Undergraduate/standards, Educational Measurement/methods, Schools, Medical, Students, Medical/statistics & numerical data, Cross-Sectional Studies, Curriculum, Humans, Professional Competence, Reference Standards, United Kingdom
ABSTRACT
In June 2023, National Health Service (NHS) England published a Long-Term Workforce Plan 'to put staffing on a sustainable footing and improve patient care.' The plan falls into three main areas: train, retain and reform. Currently, there are around 7,500 medical school places available annually in England, but it is proposed to increase this to 10,000 by 2028 and to 15,000 by 2031. Five new medical schools were approved in the 2018 expansion and others are preparing applications in anticipation of future expansion. In this article, we discuss what factors might shape a new medical school, ensuring it meets the standards required by the UK regulator (General Medical Council) set out in Promoting Excellence and in Outcomes for Graduates.
ABSTRACT
BACKGROUND: Ethnic minorities account for 34% of critically ill patients with COVID-19 despite constituting 14% of the UK population. Internationally, researchers have called for studies to understand deterioration risk factors to inform clinical risk tool development. METHODS: Multicentre cohort study of hospitalised patients with COVID-19 (n=3671) exploring determinants of health, including Index of Multiple Deprivation (IMD) subdomains, as risk factors for presentation, deterioration and mortality by ethnicity. Receiver operating characteristic curves were plotted for CURB65 and ISARIC4C by ethnicity and the area under the curve (AUC) calculated. RESULTS: Ethnic minorities were hospitalised with higher Charlson Comorbidity Scores than age-, sex- and deprivation-matched controls and from the most deprived quintile of at least one IMD subdomain: indoor living environment (LE), outdoor LE, adult skills, and wider barriers to housing and services. Admission from the most deprived quintile of these deprivation forms was associated with multilobar pneumonia on presentation and ICU admission. AUC did not exceed 0.7 for CURB65 or ISARIC4C in any ethnic group except ISARIC4C among Indian patients (0.83, 95% CI 0.73 to 0.93). Ethnic minority patients presenting with pneumonia and low CURB65 scores (0-1) had higher mortality than White patients (22.6% vs 9.4%; p<0.001); African patients were at highest risk (38.5%; p=0.006), followed by Caribbean (26.7%; p=0.008), Indian (23.1%; p=0.007) and Pakistani patients (21.2%; p=0.004). CONCLUSIONS: Ethnic minorities exhibit higher multimorbidity despite younger age structures, and disproportionate exposure to unscored risk factors including obesity and deprivation. Household overcrowding, air pollution, housing quality and adult skills deprivation are associated with multilobar pneumonia on presentation and ICU admission, both of which are mortality risk factors. Risk tools need to reflect risks that predominantly affect ethnic minorities.
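A minimal sketch of the per-group discrimination analysis described above: the area under the ROC curve for a severity score is computed separately within each ethnic group. The DataFrame and column names (ethnicity, isaric4c, died) are assumptions for illustration, not the study's dataset or code.

```python
# Hedged sketch: AUC of a severity score against a binary outcome,
# computed separately within each ethnic group (hypothetical column names).
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_group(df: pd.DataFrame, score_col: str, outcome_col: str,
                 group_col: str) -> pd.Series:
    """Area under the ROC curve of score_col vs outcome_col, per group_col."""
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g[outcome_col], g[score_col])
    )

# Example usage with assumed columns: ethnicity, isaric4c, died (0/1).
# cohort = pd.read_csv("cohort.csv")
# print(auc_by_group(cohort, score_col="isaric4c",
#                    outcome_col="died", group_col="ethnicity"))
```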
Subject(s)
Air Pollution/analysis, Benchmarking/methods, COVID-19/therapy, Ethnicity, Housing/standards, Patient Admission, Risk Assessment/methods, Age Distribution, Age Factors, Aged, COVID-19/ethnology, Comorbidity, Crowding, Female, Follow-Up Studies, Humans, Male, Middle Aged, Multimorbidity, Risk Factors, SARS-CoV-2, United Kingdom/epidemiology
ABSTRACT
OBJECTIVE: To assess the utility of the novel prescribing very short answer (VSA) question format and its ability to identify the sources of undergraduate prescribing errors, compared with the conventional single best answer (SBA) question format, and to assess the acceptability of machine marking prescribing VSAs. DESIGN: A prospective study involving analysis of data generated from a pilot two-part prescribing assessment. SETTING: Two UK medical schools. PARTICIPANTS: 364 final-year medical students took part. Participation was voluntary. There were no other inclusion or exclusion criteria. OUTCOMES: (1) Time taken to mark and verify VSA questions (acceptability), (2) differences between VSA and SBA scores, (3) performance in the VSA and SBA formats across different subject areas and (4) types of prescribing error made in the VSA format. RESULTS: 18,200 prescribing VSA questions were marked and verified in 91 min. The median percentage score for the VSA test was significantly lower than that for the SBA test (28% vs 64%, p<0.0001). Significantly more prescribing errors were detected in the VSA format than in the SBA format across all domains, notably in prescribing insulin (96.4% vs 50.3%, p<0.0001), fluids (95.6% vs 55%, p<0.0001) and analgesia (85.7% vs 51%, p<0.0001). Of the incorrect VSA responses, 33.1% were due to the medication prescribed, 6.0% due to the dose, 1.4% due to the route and 4.8% due to the frequency. CONCLUSIONS: Prescribing VSA questions represent an efficient tool for providing detailed insight into the sources of significant prescribing errors, which are not identified by SBA questions. This makes the prescribing VSA a valuable formative assessment tool to enhance students' skills in safe prescribing and potentially to reduce prescribing errors.
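The error breakdown above (medication, dose, route, frequency) implies a structured answer key behind the machine marking. The sketch below is a minimal, hypothetical illustration of how a free-text prescribing response could be compared field by field against an accepted answer and the error attributed to one of those four sources; it is not the marking engine used in the study.

```python
# Hypothetical illustration of field-by-field prescribing VSA marking,
# attributing errors to drug, dose, route or frequency.
from dataclasses import dataclass

@dataclass
class Prescription:
    drug: str
    dose: str
    route: str
    frequency: str

def mark(response: Prescription, key: Prescription) -> list[str]:
    """Return the fields on which the response differs from the accepted key."""
    errors = []
    for field in ("drug", "dose", "route", "frequency"):
        if getattr(response, field).strip().lower() != getattr(key, field).strip().lower():
            errors.append(field)
    return errors

# Illustrative example (hypothetical item, not from the assessment).
key = Prescription("paracetamol", "1 g", "oral", "every 6 hours")
student = Prescription("paracetamol", "1 g", "oral", "every 4 hours")
print(mark(student, key) or "correct")  # -> ['frequency']
```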