Improving the Reliability of Student Scores from Speeded Assessments: An Illustration of Conditional Item Response Theory Using a Computer-Administered Measure of Vocabulary.
Read Writ; 28(1): 31-56, 2015 Jan.
Article in English
MEDLINE | ID: mdl-27721568
A growing body of literature suggests that response latency, the amount of time it takes an individual to respond to an item, may be an important factor to consider when using assessment data to estimate an individual's ability. As tests of passage and list fluency are adapted to computer administration, accounting for individual differences in response time becomes an increasingly feasible way to strengthen the precision of individual scores. The present research compared the reliability of scores estimated under classical test theory and standard item response theory with those from a conditional item response model that includes response time as an item parameter. Results indicated that the precision of student ability scores increased by an average of 5% under the conditional item response model, with greater improvements for examinees of average or high ability. Implications for measurement models of speeded assessments are discussed.
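The abstract does not give the model's exact parameterization. As an illustration only, one standard way to condition item responses on response time is van der Linden's hierarchical speed-accuracy framework; the sketch below, including all symbols (U_{ij}, T_{ij}, theta_j, tau_j, a_i, b_i, beta_i, alpha_i), is an assumption rather than the authors' specification:

\[ P(U_{ij} = 1 \mid \theta_j) = \frac{\exp\{a_i(\theta_j - b_i)\}}{1 + \exp\{a_i(\theta_j - b_i)\}} \]
\[ \log T_{ij} \mid \tau_j \sim \mathcal{N}\!\left(\beta_i - \tau_j,\; \alpha_i^{-2}\right), \qquad (\theta_j, \tau_j) \sim \mathcal{N}_2(\boldsymbol{\mu}, \boldsymbol{\Sigma}) \]

Here \(\theta_j\) is examinee ability, \(\tau_j\) is examinee speed, \(a_i\) and \(b_i\) are item discrimination and difficulty, and \(\beta_i\) and \(\alpha_i\) are item time intensity and time discrimination. Because \(\theta_j\) and \(\tau_j\) are correlated through \(\boldsymbol{\Sigma}\), observed response times carry collateral information that sharpens the posterior for \(\theta_j\), which is the general mechanism by which a conditional model can yield precision gains of the kind reported above.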
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Study type: Prognostic_studies
Language: English
Journal: Read Writ
Year: 2015
Document type: Article
Country of affiliation: United States