Exploring the use of Rasch modelling in "common content" items for multi-site and multi-year assessment.
Hope, David; Kluth, David; Homer, Matthew; Dewar, Avril; Goddard-Fuller, Rikki; Jaap, Alan; Cameron, Helen.
Affiliation
  • Hope D; Medical Education Unit, The Chancellor's Building, College of Medicine and Veterinary Medicine, The University of Edinburgh, 49 Little France Crescent, Edinburgh, Scotland, EH16 4SB, UK. david.hope@ed.ac.uk.
  • Kluth D; Medical Education Unit, The Chancellor's Building, College of Medicine and Veterinary Medicine, The University of Edinburgh, 49 Little France Crescent, Edinburgh, Scotland, EH16 4SB, UK.
  • Homer M; Leeds Institute of Medical Education, Leeds School of Medicine, Worsley Building, University of Leeds, Woodhouse, Leeds, LS2 9JT, UK.
  • Dewar A; Medical Education Unit, The Chancellor's Building, College of Medicine and Veterinary Medicine, The University of Edinburgh, 49 Little France Crescent, Edinburgh, Scotland, EH16 4SB, UK.
  • Goddard-Fuller R; Christie Education, The Christie NHS Foundation Trust, Manchester, M20 4BX, UK.
  • Jaap A; Medical Education Unit, The Chancellor's Building, College of Medicine and Veterinary Medicine, The University of Edinburgh, 49 Little France Crescent, Edinburgh, Scotland, EH16 4SB, UK.
  • Cameron H; Aston Medical School, Aston University, 295 Aston Express Way, Birmingham, B4 7ET, UK.
Article in En | MEDLINE | ID: mdl-38977526
ABSTRACT
Rasch modelling is a powerful tool for evaluating item performance, measuring drift in difficulty over time, and comparing students who sat assessments at different times or different sites. Here, we use data from thirty UK medical schools to describe the benefits of Rasch modelling in quality assurance and the barriers to using it. Sixty "common content" multiple-choice items were offered to all UK medical schools in 2016-17, and a further sixty in 2017-18, with five available in both years. Thirty medical schools participated, yielding sixty datasets across the two sessions and 14,342 individual sittings. Schools selected items to embed in written assessments near the end of their programmes. We applied Rasch modelling to evaluate unidimensionality, model fit statistics, and item quality; horizontal equating to compare performance across schools; and vertical equating to compare item performance across time. Of the sixty datasets, three were non-unidimensional and eight violated goodness-of-fit measures. Item-level statistics identified potential improvements in item construction and provided quality assurance. Horizontal equating demonstrated large differences in scores across schools, while vertical equating showed that item characteristics were stable across sessions. Rasch modelling provides significant advantages in model- and item-level reporting compared with classical approaches. However, the complexity of the analysis and the relative scarcity of educators familiar with Rasch modelling must be addressed locally before a programme can benefit. Furthermore, because Rasch modelling is comparatively novel, there is greater ambiguity about how to proceed when a model identifies misfitting or problematic data.
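The core of the approach described above is estimating an item's difficulty on a common logit scale so that items and examinees from different sites can be compared. The abstract does not specify the estimation method used; the sketch below illustrates the idea with the simple PROX (normal approximation) estimate, where an item's difficulty is the negative log-odds of its proportion correct, centred so difficulties sum to zero. The function name and toy response matrix are illustrative, not from the study.

```python
import math

def prox_item_difficulties(responses):
    """Estimate Rasch item difficulties (in logits) with the PROX
    normal-approximation method: difficulty = log((1 - p) / p) for
    each item's proportion correct p, centred to a zero mean so the
    scale origin is the average item difficulty."""
    n_persons = len(responses)
    n_items = len(responses[0])
    diffs = []
    for j in range(n_items):
        p = sum(row[j] for row in responses) / n_persons
        p = min(max(p, 1e-6), 1 - 1e-6)  # guard against 0% or 100% correct
        diffs.append(math.log((1 - p) / p))
    mean = sum(diffs) / n_items
    return [d - mean for d in diffs]

# Toy data: 10 examinees x 3 items (1 = correct). Item 0 is easy
# (8/10 correct), item 1 moderate (5/10), item 2 hard (2/10).
toy = [
    [1, 1, 1], [1, 1, 1], [1, 1, 0], [1, 1, 0], [1, 0, 0],
    [1, 0, 0], [1, 0, 0], [1, 1, 0], [0, 0, 0], [0, 0, 0],
]
print(prox_item_difficulties(toy))  # difficulties ordered easy -> hard
```

In an equating design like the one described, items administered at several schools (horizontal) or in both sessions (vertical) would be estimated on this shared logit scale, so stable difficulty estimates across sessions indicate stable item characteristics.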
Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Adv Health Sci Educ Theory Pract Year of publication: 2024 Document type: Article