Adaptive testing for psychological assessment: how many items are enough to run an adaptive testing algorithm?
J Appl Meas; 14(2): 106-17, 2013.
Article in En | MEDLINE | ID: mdl-23816590
Although the principles of adaptive testing were established in the psychometric literature many years ago (e.g., Weiss, 1977), and the practice of adaptive testing is well established in educational assessment, it is not yet widespread in psychological assessment. One obstacle to adaptive psychological testing is a lack of clarity about the number of items needed to run an adaptive algorithm. This study explores the relationship between item bank size, test length, and measurement precision. Simulated adaptive test runs (allowing a maximum of 30 items per person) from an item bank with 10 items per ability level (covering .5 logits, 150 items total) yield a standard error of measurement (SEM) of .47 (.39) after an average of 20 (29) items for 85-93% (64-82%) of the simulated rectangular sample. Expanding the bank to 20 items per level (300 items total) did not improve the algorithm's performance significantly. Even with a small item bank (5 items per ability level, 75 items total), it is possible to reach the same SEM as a conventional test with fewer items, or a better SEM with the same number of items.
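The abstract describes simulated computerized adaptive test (CAT) runs that stop once a target SEM is reached or 30 items have been administered. As a minimal sketch of that kind of simulation, assuming a dichotomous Rasch model, maximum-information item selection, EAP ability estimation, and a hypothetical item bank and stopping rule (these specifics are not taken from the paper), one run could look like this:

```python
import numpy as np

GRID = np.linspace(-4, 4, 81)        # ability grid for EAP estimation (assumed)
PRIOR = np.exp(-0.5 * GRID**2)       # standard normal prior, unnormalized

def rasch_prob(theta, b):
    """P(correct) under the dichotomous Rasch model."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(theta) - b)))

def item_info(theta, b):
    """Fisher information of a Rasch item at ability theta."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def simulate_cat_run(true_theta, bank, max_items=30, target_sem=0.47, rng=None):
    """One simulated adaptive run: administer the most informative unused item,
    re-estimate ability by EAP, and stop when SEM <= target or the item cap is hit."""
    rng = rng or np.random.default_rng()
    posterior = PRIOR.copy()
    theta_hat = 0.0
    available = list(range(len(bank)))
    administered = []
    while len(administered) < max_items:
        # Maximum-information item selection at the current ability estimate.
        best = max(available, key=lambda i: item_info(theta_hat, bank[i]))
        available.remove(best)
        administered.append(best)
        # Simulate the response from the examinee's true ability.
        x = rng.random() < rasch_prob(true_theta, bank[best])
        # Bayesian update of the posterior over the ability grid.
        p = rasch_prob(GRID, bank[best])
        posterior *= p if x else (1.0 - p)
        posterior /= posterior.sum()
        theta_hat = float(np.dot(GRID, posterior))
        # SEM from the test information accumulated so far.
        sem = 1.0 / np.sqrt(sum(item_info(theta_hat, bank[i]) for i in administered))
        if sem <= target_sem:
            break
    return theta_hat, sem, len(administered)

# Hypothetical bank: 10 items at each of 15 levels spaced .5 logits apart (150 items).
bank = np.repeat(np.arange(-3.5, 4.0, 0.5), 10)
est, sem, used = simulate_cat_run(true_theta=1.2, bank=bank)
print(f"theta_hat={est:.2f}  SEM={sem:.2f}  items used={used}")
```

Since SEM = 1/sqrt(test information) and a well-targeted Rasch item contributes at most .25 to the test information, reaching an SEM of .47 requires roughly 18-20 well-targeted items, which is consistent with the average test lengths reported in the abstract.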
Collections: 01-internacional
Database: MEDLINE
Main subject: Psychological Tests / Psychometrics / Algorithms / Data Interpretation, Statistical / Models, Statistical / Sample Size / Educational Measurement
Study type: Prognostic_studies / Risk_factors_studies
Language: En
Journal: J Appl Meas
Publication year: 2013
Document type: Article
Country of affiliation: Austria