Crowdsourcing for assessment items to support adaptive learning.
Tackett, Sean; Raymond, Mark; Desai, Rishi; Haist, Steven A; Morales, Amy; Gaglani, Shiv; Clyman, Stephen G.
Affiliation
  • Tackett S; a Department of Medicine, Johns Hopkins Bayview Medical Center, Baltimore, Maryland.
  • Raymond M; c National Board of Medical Examiners, Philadelphia, PA, USA.
  • Desai R; b Osmosis, Baltimore, MD, USA.
  • Haist SA; c National Board of Medical Examiners, Philadelphia, PA, USA.
  • Morales A; d Stanford University School of Medicine, Palo Alto, CA, USA.
  • Gaglani S; b Osmosis, Baltimore, MD, USA.
  • Clyman SG; c National Board of Medical Examiners, Philadelphia, PA, USA.
Med Teach; 40(8): 838-841, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30096987
PURPOSE: Adaptive learning requires frequent and valid assessments for learners to track progress against their goals. This study determined whether multiple-choice questions (MCQs) "crowdsourced" from medical learners could meet the standards of many large-scale testing programs.

METHODS: Users of a medical education app (Osmosis.org, Baltimore, MD) volunteered to submit case-based MCQs. Eleven volunteers were selected to submit MCQs targeted to second-year medical students. Two hundred MCQs were subjected to duplicate review by a panel of internal medicine faculty, who rated each item for relevance, content accuracy, and quality of response-option explanations. A sample of 121 items was pretested on clinical subject exams completed by a national sample of U.S. medical students.

RESULTS: Seventy-eight percent of the 200 MCQs met faculty reviewer standards based on relevance, accuracy, and quality of explanations. Of the 121 pretested MCQs, 50% met acceptable statistical criteria. The most common reasons for exclusion were that the item was too easy or had a low discrimination index.

CONCLUSIONS: Crowdsourcing can efficiently yield high-quality assessment items that meet rigorous judgmental and statistical criteria. Similar models may be adopted by students and educators to augment item pools that support adaptive learning.
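The statistical screening described in the Results (excluding items that are too easy or have a low discrimination index) rests on two classical item statistics: item difficulty (the proportion of examinees answering correctly) and a discrimination index, commonly the point-biserial correlation between the item score and the total test score. The sketch below illustrates these computations; the example data and the cutoff values are hypothetical, not taken from the study.

```python
import math

def item_stats(item_scores, total_scores):
    """Return (difficulty, point-biserial discrimination) for one item.

    item_scores: 1/0 correctness per examinee; total_scores: total test
    score per examinee (same order).
    """
    n = len(item_scores)
    difficulty = sum(item_scores) / n  # proportion answering correctly
    mean_i = difficulty
    mean_t = sum(total_scores) / n
    cov = sum((i - mean_i) * (t - mean_t)
              for i, t in zip(item_scores, total_scores)) / n
    sd_i = math.sqrt(sum((i - mean_i) ** 2 for i in item_scores) / n)
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in total_scores) / n)
    # Point-biserial reduces to Pearson r on a 0/1 variable.
    discrimination = cov / (sd_i * sd_t) if sd_i and sd_t else 0.0
    return difficulty, discrimination

# Hypothetical data: one item scored across six examinees.
item = [1, 1, 0, 1, 0, 0]
totals = [48, 45, 30, 44, 28, 33]
p, rpb = item_stats(item, totals)

# Illustrative screening rule (thresholds are assumptions, not the
# study's actual criteria): flag items that are too easy or that
# discriminate poorly between strong and weak examinees.
flagged = p > 0.90 or rpb < 0.20
```

In this example the item is of middling difficulty (p = 0.5) and highly discriminating, since the examinees who answered it correctly also earned the highest total scores, so it would pass this illustrative screen.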
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Education, Medical, Undergraduate / Educational Measurement / Formative Feedback Study type: Guideline / Prognostic_studies Limits: Humans Language: En Journal: Med Teach Publication year: 2018 Document type: Article
