1.
Behav Res Methods; 51(2): 507-522, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30478802

ABSTRACT

The validity of studies investigating interventions to enhance fluid intelligence (Gf) depends on the adequacy of the Gf measures administered. Such studies have yielded mixed results, with a suggestion that Gf measurement issues may be partly responsible. The purpose of this study was to develop a Gf test battery comprising tests meeting the following criteria: (a) strong construct validity evidence, based on prior research; (b) reliable and sensitive to change; (c) varying in item types and content; (d) producing parallel tests, so that pretest-posttest comparisons could be made; (e) appropriate time limits; (f) unidimensional, to facilitate interpretation; and (g) appropriate in difficulty for a high-ability population, to detect change. A battery comprising letter, number, and figure series and figural matrix item types was developed and evaluated in three large-N studies (N = 3,067, 2,511, and 801, respectively). Items were generated algorithmically on the basis of proven item models from the literature, to achieve high reliability at the targeted difficulty levels. An item response theory approach was used to calibrate the items in the first two studies and to establish conditional reliability targets for the tests and the battery. On the basis of those calibrations, fixed parallel forms were assembled for the third study, using linear programming methods. Analyses showed that the tests and test battery achieved the proposed criteria. We suggest that the battery as constructed is a promising tool for measuring the effectiveness of cognitive enhancement interventions, and that its algorithmic item construction enables tailoring the battery to different difficulty targets, for even wider applications.


Subjects
Intelligence Tests, Intelligence, Problem Solving, Humans, Reproducibility of Results
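The abstract above describes calibrating items with an item response theory model and then assembling fixed parallel forms with linear programming. The fragment below is a minimal sketch of that general idea, not the authors' actual procedure: it assumes simulated 2PL item parameters and uses the PuLP solver to split an item pool into two forms whose test information functions match at a few target ability levels.

```python
# Sketch: assemble two parallel forms by matching 2PL test information
# at target ability levels via binary integer programming (PuLP).
# Item parameters are simulated for illustration only.
import numpy as np
import pulp

rng = np.random.default_rng(0)
n_items = 40
a = rng.uniform(0.8, 2.0, n_items)   # discrimination
b = rng.normal(1.0, 0.7, n_items)    # difficulty (shifted toward a high-ability target)

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

thetas = np.array([0.5, 1.0, 1.5])   # ability points where the forms should match
I = np.array([[info_2pl(t, a[i], b[i]) for i in range(n_items)] for t in thetas])

prob = pulp.LpProblem("parallel_forms", pulp.LpMinimize)
x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(n_items)]   # 1 = form A, 0 = form B
d = [pulp.LpVariable(f"d_{k}", lowBound=0) for k in range(len(thetas))] # info gap at each theta

prob += pulp.lpSum(d)                     # minimize total information mismatch
prob += pulp.lpSum(x) == n_items // 2     # equal form lengths
for k in range(len(thetas)):
    # Form A information should equal half of the pool information at each theta.
    gap = pulp.lpSum(I[k, i] * x[i] for i in range(n_items)) - 0.5 * I[k].sum()
    prob += gap <= d[k]
    prob += -gap <= d[k]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
form_a = [i for i in range(n_items) if pulp.value(x[i]) > 0.5]
print("Form A items:", form_a)
```

The minimized slack variables bound the absolute information gap at each target ability, which is one common way to express "parallel" forms as a linear objective.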
2.
Psychol Assess; 30(3): 328-338, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28447813

ABSTRACT

Achievement estimates are often based on either number-correct scores or IRT-based ability parameters. Van der Linden (2007) and other researchers (e.g., Fox, Klein Entink, & van der Linden, 2007; Ranger, 2013) have developed psychometric models that allow for joint estimation of speed and item parameters using both response times and response data. This paper presents an application of this type of approach to a battery of four types of fluid reasoning measures administered to a large sample of highly educated examinees. We investigate the extent to which incorporating response times into ability estimates can inform the development of shorter test forms. In addition to exploratory analyses and response time data visualizations, we specifically consider the increase in precision of ability estimates when response time data are added relative to the use of item responses alone. Our findings indicate that there may be instances where test forms can be substantially shortened without any reduction in score reliability when response time information is incorporated into the item response model.


Subjects
Cognition, Psychological Tests, Reaction Time, Adolescent, Adult, Female, Humans, Male, Middle Aged, Psychological Models, Psychometrics, Reproducibility of Results, Young Adult
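The joint response and response-time approach cited above (van der Linden, 2007) pairs an IRT model for accuracy with a lognormal model for response times. The sketch below is an assumed illustration, not the paper's implementation: with known item parameters, it obtains MAP estimates of one examinee's ability (theta) and speed (tau) by maximizing a combined 2PL and lognormal log-likelihood with standard-normal priors.

```python
# Sketch: joint MAP estimation of ability (theta) and speed (tau) from
# responses and response times, in the spirit of the hierarchical model.
# All item parameters and data are simulated for illustration only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n_items = 20
a = rng.uniform(0.8, 2.0, n_items)      # 2PL discrimination
b = rng.normal(0.0, 1.0, n_items)       # 2PL difficulty
alpha = rng.uniform(1.5, 2.5, n_items)  # time discrimination
beta = rng.normal(4.0, 0.3, n_items)    # time intensity (log-seconds)

# Simulate one examinee: responses from the 2PL, log-times from the lognormal model.
theta_true, tau_true = 0.8, 0.3
p_true = 1 / (1 + np.exp(-a * (theta_true - b)))
u = rng.binomial(1, p_true)                       # scored item responses (0/1)
log_t = rng.normal(beta - tau_true, 1 / alpha)    # log response times

def neg_log_post(params):
    theta, tau = params
    p = 1 / (1 + np.exp(-a * (theta - b)))
    ll_resp = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))               # 2PL likelihood
    ll_time = np.sum(norm.logpdf(log_t, loc=beta - tau, scale=1 / alpha))   # lognormal RT likelihood
    prior = norm.logpdf(theta) + norm.logpdf(tau)                           # standard-normal priors
    return -(ll_resp + ll_time + prior)

fit = minimize(neg_log_post, x0=[0.0, 0.0])
print("MAP theta, tau:", fit.x)
```

Because the response-time term contributes information about the person independently of accuracy, the posterior for theta tightens relative to using item responses alone, which is the mechanism behind the shorter-form argument in the abstract.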