Results 1 - 3 of 3
2.
Perspect Med Educ ; 4(5): 244-251, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26350082

ABSTRACT

BACKGROUND: This study investigated the impact of addressing item writing flaws, testing at a low cognitive level, and non-functioning distractors (< 5 % selection frequency) in multiple-choice assessment in preclinical medical education.

METHOD: Multiple-choice questions with too high or too low difficulty (difficulty index < 0.4 or > 0.8) and insufficient discriminatory ability (point-biserial correlation < 0.2) on previous administration were identified. Items in Experimental Subgroup A underwent removal of item writing flaws along with enhancement of tested cognitive level (21 multiple-choice questions), while Experimental Subgroup B underwent replacement or removal of non-functioning distractors (11 multiple-choice questions). A control group of items (Group C) did not undergo any intervention (23 multiple-choice questions).

RESULTS: Post-intervention, the average number of functioning distractors (≥ 5 % selection frequency) per multiple-choice question increased from 0.67 to 0.81 in Subgroup A and from 0.91 to 1.09 in Subgroup B; a statistically significant increase in the number of multiple-choice questions with sufficient point-biserial correlation was also noted. No significant changes were noted in the psychometric characteristics of the control group of items.

CONCLUSION: Correction of item flaws, removal or replacement of non-functioning distractors, and enhancement of tested cognitive level positively impact the discriminatory ability of multiple-choice questions. This helps prevent construct-irrelevant variance from affecting the evidence of validity of scores obtained on multiple-choice questions.
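The item statistics this abstract relies on — the difficulty index (proportion of examinees answering correctly), the point-biserial correlation (correlation of an item's 0/1 score with examinees' total scores), and distractor selection frequency against the 5 % threshold — are all simple to compute from scored response data. The sketch below is illustrative only: the function names (`item_stats`, `distractor_frequencies`) and the toy data are assumptions, not taken from the study.

```python
from math import sqrt

def item_stats(item_correct, totals):
    """Difficulty index and point-biserial correlation for one item.

    item_correct: 0/1 score on this item per examinee.
    totals: each examinee's total test score.
    """
    n = len(item_correct)
    p = sum(item_correct) / n                       # difficulty index (proportion correct)
    mean_t = sum(totals) / n
    sd_t = sqrt(sum((t - mean_t) ** 2 for t in totals) / n)   # population SD of totals
    n_right = sum(item_correct)
    m1 = sum(t for c, t in zip(item_correct, totals) if c) / max(n_right, 1)
    m0 = sum(t for c, t in zip(item_correct, totals) if not c) / max(n - n_right, 1)
    # Point-biserial: (M_correct - M_incorrect) / SD * sqrt(p * q)
    r_pb = (m1 - m0) / sd_t * sqrt(p * (1 - p))
    return p, r_pb

def distractor_frequencies(responses, options="ABCDE"):
    """Share of examinees choosing each option, flagging whether it meets
    the >= 5 % selection frequency the abstract uses for 'functioning'."""
    n = len(responses)
    return {opt: (responses.count(opt) / n, responses.count(opt) / n >= 0.05)
            for opt in options}
```

Under the study's criteria, an item would be flagged for revision when `p < 0.4`, `p > 0.8`, or `r_pb < 0.2`, and a distractor counted as non-functioning when its flag is `False`.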

3.
Acad Med ; 78(8): 844-50, 2003 Aug.
Article in English | MEDLINE | ID: mdl-12915381

ABSTRACT

PURPOSE: To measure the inter-rater reliability of the Sequenced Performance Inventory and Reflective Assessment of Learning (SPIRAL), a 23-item scoring rubric designed to assess first- and second-year students' competencies such as "acquisition of knowledge," "peer teaching and communication skills," and "professional behavior" in a problem-based learning (PBL) curriculum at the University of North Dakota School of Medicine and Health Sciences.

METHOD: In 2001, the authors constructed a 69-item multiple-choice questionnaire consisting of descriptions of "prototypical" (representing real students) PBL student performances. For each of the 23 SPIRAL items, the authors chose competency descriptions of an "emerging," a "developing," and an "advanced" student (69 prototypical students). These descriptions were obtained from narrative comments made by PBL facilitators about real students in their classes in 2000-2001. Seventeen experienced facilitators (PBL tutors) were subsequently asked to rank the 69 prototypical students based on SPIRAL anchor points in the three competency levels. Kendall's coefficient of concordance was used to test inter-rater reliability. Modes were also determined to illustrate the extent to which the ratings of the 17 facilitators aligned with the ratings anticipated by the authors.

RESULTS: Overall, inter-rater scoring for all items combined was considered to be reliable (Kendall's coefficient of concordance W = .75). Inter-rater scoring was also determined for each of the 23 SPIRAL items, with Ws ranging from .97 to .53. Facilitators rated students according to 55 of the 69 SPIRAL descriptors as the researchers anticipated.

CONCLUSIONS: The scoring rubric is reliable. Further criterion-related validation study is warranted.
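Kendall's coefficient of concordance (W), the reliability statistic this abstract reports, measures agreement among m raters each ranking the same n items: W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the items' rank sums from their mean. A minimal sketch, assuming untied ranks (the function name and example data are hypothetical, not from the study):

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m raters ranking n items.

    rankings: list of m lists, each a permutation of the ranks 1..n (no ties).
    Returns W in [0, 1]; 1 means perfect agreement among raters.
    """
    m = len(rankings)
    n = len(rankings[0])
    # Sum of the ranks each item received across all raters
    rank_sums = [sum(rater[i] for rater in rankings) for i in range(n)]
    mean_rs = sum(rank_sums) / n                 # equals m * (n + 1) / 2
    s = sum((rs - mean_rs) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

With 17 facilitators ranking prototypical students, a value such as the reported W = .75 would indicate substantial (though not perfect) concordance; W = 1 only when every rater produces an identical ordering.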


Subjects
Clinical Competence , Education, Medical, Undergraduate , Learning , Observer Variation , Problem-Based Learning , Reproducibility of Results , Educational Measurement , Humans , North Dakota , Schools, Medical