Examiner error in curriculum-based measurement of oral reading.
J Sch Psychol; 52(4): 361-75, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25107409
ABSTRACT
Although curriculum-based measures of oral reading (CBM-R) have strong technical adequacy, there is still reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status. Fit indices indicated that a cross-classified random effects model (CCREM) best fit the data, with measures nested within students, students nested within schools, and examiners crossing schools. Intraclass correlations of the CCREM revealed that roughly 16% of the variance in student CBM-R scores was attributable to examiners. The remaining variance was associated with the measurement level (3.59%), between students (75.23%), and between schools (5.21%). Results were moderated by grade level but not by EL status. The discussion addresses the implications of this error for low-stakes and high-stakes decisions about students, teacher evaluation systems, and hypothesis testing in reading intervention research.
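The variance partition reported above follows the standard intraclass-correlation arithmetic for multilevel models: each level's ICC is its variance component divided by the total variance across all levels. A minimal sketch, using hypothetical variance components chosen so their proportions match the percentages reported in the abstract (the actual fitted components are not given in this record):

```python
# Hypothetical variance components; only their relative sizes matter here.
# They are scaled so the resulting proportions mirror the abstract's figures.
variance_components = {
    "examiners": 16.00,
    "measurement": 3.59,
    "students": 75.23,
    "schools": 5.21,
}

total_variance = sum(variance_components.values())

# ICC for each level = that level's variance / total variance.
icc = {level: v / total_variance for level, v in variance_components.items()}

for level, proportion in icc.items():
    print(f"{level}: {proportion:.1%}")
```

The proportions necessarily sum to 1, which is why the abstract's four percentages account (up to rounding) for all of the modeled variance in CBM-R scores.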
Full text: 1
Collections: 01-internacional
Database: MEDLINE
Main subject: Reading / Educational Assessment / Language Tests
Study type: Prognostic_studies
Limits: Child / Female / Humans / Male
Language: En
Journal: J Sch Psychol
Publication year: 2014
Document type: Article