Results 1 - 10 of 10
1.
Front Psychol ; 10: 2930, 2019.
Article in English | MEDLINE | ID: mdl-31998189

ABSTRACT

The present study investigated the invariance of a diagnostic classification model (DCM) for reading comprehension across gender. In contrast to models with continuous traits, diagnostic classification models report mastery of a finite set of latent attributes, e.g., vocabulary or syntax in the reading context, and thus allow fine-grained feedback to be provided to learners and instructors. The generalized deterministic, noisy "and" gate (G-DINA) model was fit to the item responses of 1000 male and female individuals to a high-stakes reading comprehension test. The G-DINA model requires only minimal assumptions about the relationship between latent attribute profiles and item-specific response probabilities; that is, it can represent both compensatory and non-compensatory relationships between latent attributes and response probabilities. Item parameters were compared across the two samples, and only a small number of them differed statistically between the two groups, corroborating the result of a formal measurement invariance test based on the multigroup G-DINA model. Neither the correlations between latent attributes nor the mastery probabilities of any of the attributes differed significantly across the two groups. Model selection at the item level showed that, of the 18 items requiring multiple attributes, 16 picked different rules (DCMs) across the groups. While this seems to suggest that the relationship among the attributes of reading comprehension differs between the two groups, a closer inspection of the rules picked by the items showed that in almost all cases the relationships were very similar: if a compensatory DCM was suggested by the G-DINA framework for an item in the female group, a model belonging to the same family resulted for the male group.
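
For reference, and in generic notation not taken from the article itself, the identity-link G-DINA item response function for an item j measuring K_j^* attributes can be written as

\[
P\bigl(X_{lj}=1 \mid \boldsymbol{\alpha}_{lj}^{*}\bigr)
  = \delta_{j0}
  + \sum_{k=1}^{K_j^{*}} \delta_{jk}\,\alpha_{lk}
  + \sum_{k=1}^{K_j^{*}-1}\ \sum_{k'=k+1}^{K_j^{*}} \delta_{jkk'}\,\alpha_{lk}\alpha_{lk'}
  + \cdots
  + \delta_{j12\cdots K_j^{*}} \prod_{k=1}^{K_j^{*}} \alpha_{lk},
\]

where \delta_{j0} is the baseline probability, the \delta_{jk} are main effects of mastering attribute k, and the remaining terms are interaction effects. Constraining subsets of these parameters yields the reduced models (e.g., DINA, DINO, A-CDM) that the item-level model selection described above chooses between for each group.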

2.
Eur J Investig Health Psychol Educ ; 10(1): 59-69, 2019 Aug 02.
Article in English | MEDLINE | ID: mdl-34542469

ABSTRACT

In this study, we examined the psychometric properties of the Persian translation of the Children's Test Anxiety Scale (CTAS) using the Rasch rating scale model. In the first step, rating scale diagnostics revealed that the thresholds were disordered. To remedy this problem, two categories were collapsed, and a three-point rating scale structure turned out to have optimal properties. Principal component analysis (PCA) of standardized residuals showed that the scale is not unidimensional. Since the scale is designed to measure three distinct dimensions of test anxiety, we fitted a correlated three-dimensional Rasch model. A likelihood ratio test showed that the three-dimensional model fits significantly better than a unidimensional model, and principal component analysis of standardized residuals indicated that the three subscales are unidimensional. Infit and outfit statistics indicated that one item misfitted the model in all the analyses. Moderate correlations between the dimensions were evidence of the distinctness of the subscales and of the justifiability of the multidimensional structure for the scale. Criterion-related evidence was provided by correlating the scale with the Spence Children's Anxiety Scale (SCAS); the pattern of correlations provided evidence of convergent-discriminant validity. Findings suggest that a three-dimensional instrument with a 3-point Likert scale works best in the Persian language.
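
For context, in generic notation not drawn from the article, the Rasch rating scale model gives the probability that person n responds in category x (x = 0, ..., M) of item i as

\[
P(X_{ni}=x) \;=\;
\frac{\exp\!\Bigl(\sum_{k=0}^{x}\bigl[\theta_n - (\beta_i + \tau_k)\bigr]\Bigr)}
     {\sum_{m=0}^{M}\exp\!\Bigl(\sum_{k=0}^{m}\bigl[\theta_n - (\beta_i + \tau_k)\bigr]\Bigr)},
\qquad \tau_0 \equiv 0,
\]

where the thresholds \tau_k are shared by all items. "Disordered thresholds" means the estimated \tau_k do not increase with k, which is the problem that collapsing two categories into a three-point structure is meant to remedy.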

3.
Percept Mot Skills ; 126(1): 70-86, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30501376

ABSTRACT

The d2 test is a cancellation test that measures attention, visual scanning, and processing speed, and it is the most frequently used test of attention in Europe. Although it has been validated using factor analytic techniques and correlational analyses, its fit to item response theory models has not been examined. We evaluated the fit of the d2 test to the Rasch Poisson Counts Model (RPCM) by examining the fit of six different scoring techniques. Only two scoring techniques (concentration performance scores and total number of characters canceled) fit the RPCM. The individual items fit the RPCM, with negligible differential item functioning across sex. A graphical model check and a likelihood ratio test confirmed the overall fit of the two scoring techniques to the RPCM.
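
As a minimal sketch, under one common parameterization that is assumed here rather than quoted from the article, the Rasch Poisson Counts Model treats the count X_{vi} of person v on scoring unit i (e.g., characters canceled) as

\[
X_{vi} \sim \mathrm{Poisson}(\lambda_{vi}), \qquad \lambda_{vi} = \exp(\theta_v - \beta_i),
\]

where \theta_v is the person parameter and \beta_i the difficulty of the scoring rule; the multiplicative form \lambda_{vi} = \xi_v\,\epsilon_i is equivalent after reparameterization.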


Subject(s)
Attention/physiology, Psychological Models, Statistical Models, Neuropsychological Tests/statistics & numerical data, Psychometrics/statistics & numerical data, Psychomotor Performance/physiology, Adult, Female, Humans, Male, Psychometrics/methods
5.
Front Psychol ; 8: 897, 2017.
Article in English | MEDLINE | ID: mdl-28611721

ABSTRACT

The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and for validating construct theories. The plausibility of the construct model, summarized in a matrix of weights known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of the LLTM with the fit of the Rasch model (RM) using a likelihood ratio (LR) test and (2) examining the correlation between the Rasch model item parameters and the LLTM-reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, the LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for its magnitude. In this article we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid, then the correlation between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than the correlations derived from simulated weight matrices.
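
For reference, in generic notation not taken from the article, the LLTM retains the Rasch item response function but constrains each item difficulty to a weighted sum of basic parameters:

\[
P(X_{vi}=1 \mid \theta_v) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)},
\qquad
\beta_i = \sum_{k=1}^{K} q_{ik}\,\eta_k + c,
\]

where q_{ik} is the Q-matrix (weight matrix) entry indicating how strongly cognitive operation k is involved in item i, \eta_k is the difficulty contribution of that operation, and c is a normalization constant. The proposed benchmark compares the correlation between the Rasch-based \beta_i and the LLTM-reconstructed \beta_i under the theoretical Q-matrix with the distribution of the same correlation under randomly simulated Q-matrices.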

6.
Psicol Reflex Crit ; 30(1): 16, 2017 Aug 08.
Article in English | MEDLINE | ID: mdl-32026973

ABSTRACT

Baddeley's grammatical reasoning test is a quick and efficient measure of fluid reasoning that is commonly used in research on cognitive abilities and on the impact of stressors and environmental factors on cognitive performance. The test, however, is verbal and can only be used with native speakers of English. In this study, we adapted the test for use in the Persian language, using a different pair of verbs and geometrical shapes instead of English letters. The adapted test had high internal consistency and retest reliability estimates. It also had an excellent fit to a one-factor confirmatory factor model and correlated acceptably with other measures of fluid intelligence and with participants' grade point average (GPA).
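
As a sketch of the measurement model implied by a one-factor confirmatory factor analysis, in generic notation not taken from the article, each item or subscore x_j is regressed on a single fluid-reasoning factor \eta:

\[
x_j = \nu_j + \lambda_j \eta + \varepsilon_j, \qquad \eta \sim N(0,1), \quad \varepsilon_j \sim N(0,\psi_j),
\]

with uncorrelated residuals \varepsilon_j; adequate fit of this model supports reporting a single score for the adapted test.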

7.
Psicológica (Valencia, Ed. impr.) ; 38(1): 93-109, 2017. tab, graf
Article in English | IBECS | ID: ibc-161214

ABSTRACT

In large-scale multiple-choice (MC) tests, alternate forms of a test may be developed to prevent cheating by changing the order of the items or the position of the response options. The assumption is that, since the content of the test forms is the same, the order of the items or the positions of the response options have no effect on item difficulty or on other psychometric characteristics of the test. The purpose of the present investigation was to model the difficulty of the options' positions (a, b, c, and d) in a high-stakes MC test using the linear logistic test model (Fischer, 1973). Findings revealed that the options' positions differ only very slightly in difficulty, and that as the position of the correct option moves toward the end of the set of response options, the item becomes slightly more difficult. (AU)
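
One way a weight matrix could encode the position of the keyed option for such an analysis, offered purely as an illustration and not as the design actually used in the study, is a dummy coding of the positions a-d:

\[
\beta_i = \sum_{p \in \{a,b,c,d\}} w_{ip}\,\eta_p + c,
\qquad
w_{ip} =
\begin{cases}
1 & \text{if the correct option of item } i \text{ is in position } p,\\
0 & \text{otherwise,}
\end{cases}
\]

so that the estimated \eta_p quantify how much each option position contributes to item difficulty.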




Subject(s)
Humans, Male, Female, Psychotherapy/methods, Psychometrics/methods, Students/psychology, Students/statistics & numerical data, Logistic Models, Psychological Tests/statistics & numerical data, Data Analysis/statistics & numerical data
8.
Psicol. reflex. crit ; 30: 16, 2017. tab
Article in English | LILACS, Index Psicología - Revistas | ID: biblio-909842

ABSTRACT

Baddeley's grammatical reasoning test is a quick and efficient measure of fluid reasoning that is commonly used in research on cognitive abilities and on the impact of stressors and environmental factors on cognitive performance. The test, however, is verbal and can only be used with native speakers of English. In this study, we adapted the test for use in the Persian language, using a different pair of verbs and geometrical shapes instead of English letters. The adapted test had high internal consistency and retest reliability estimates. It also had an excellent fit to a one-factor confirmatory factor model and correlated acceptably with other measures of fluid intelligence and with participants' grade point average (GPA). (AU)


Subject(s)
Humans, Male, Female, Adult, Cognition, Linguistics, Psychometrics, Reproducibility of Results, Surveys and Questionnaires, Translations
9.
Psicológica (Valencia, Ed. impr.) ; 37(1): 85-104, 2016. tab
Article in English | IBECS | ID: ibc-148722

ABSTRACT

In this study, the magnitudes of local dependence generated by cloze test items and by reading comprehension items were compared, and their impact on parameter estimates and test precision was investigated. An advanced English-as-a-foreign-language reading comprehension test containing three reading passages and a cloze test was analyzed with a two-parameter logistic testlet response model and a two-parameter logistic item response model. Results showed that the cloze test produced substantially higher magnitudes of local dependence than the reading items, although the local dependence produced by the reading items was not negligible either. Further analyses demonstrated that while even substantial testlet effects do not affect parameter estimates, they do influence test reliability and information. Implications of the research for foreign language proficiency testing, where testlets are regularly used, are discussed. (AU)
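
For reference, in generic notation not drawn from the article, the two-parameter logistic testlet response model extends the 2PL by adding a person-specific testlet effect:

\[
\operatorname{logit} P(X_{ij}=1) = a_j\bigl(\theta_i - b_j - \gamma_{i\,d(j)}\bigr),
\qquad \gamma_{i\,d(j)} \sim N\bigl(0, \sigma^2_{d(j)}\bigr),
\]

where d(j) indexes the testlet (reading passage or cloze test) containing item j. The testlet variance \sigma^2_{d(j)} quantifies the local dependence within that testlet, and setting all \gamma to zero recovers the standard 2PL item response model.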




Subject(s)
Humans, Male, Female, Comprehension/ethics, Comprehension/physiology, Speech/physiology, Surveys and Questionnaires/classification, Engineering/methods, Linguistics/education, Educational Psychology/education, Comprehension/classification, Speech/classification, Surveys and Questionnaires, Engineering/classification, Linguistics/trends, Educational Psychology/methods
10.
Psychol Rep ; 114(2): 315-25, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24897892

ABSTRACT

The validity and psychometric properties of a new Persian adaptation of the Foreign Language Reading Anxiety Scale were investigated. The scale was translated into Persian and administered to 160 undergraduate students (131 women, 29 men; M age = 23.4 yr., SD = 4.3). Rasch model analysis of the scale's original 20 items revealed that the data did not fit the partial credit model. Principal components analysis identified three factors: one related to feelings of anxiety about reading, a second reflecting the reverse-worded items, and a third related to general ideas about reading in a foreign language. In a re-analysis, the 12 items that loaded on the first factor showed a good fit to the partial credit model.
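
For context, in generic notation not taken from the article, the partial credit model gives the probability that person n obtains score x on item i with categories 0, ..., m_i as

\[
P(X_{ni}=x) \;=\;
\frac{\exp\!\Bigl(\sum_{k=0}^{x}(\theta_n - \delta_{ik})\Bigr)}
     {\sum_{h=0}^{m_i}\exp\!\Bigl(\sum_{k=0}^{h}(\theta_n - \delta_{ik})\Bigr)},
\qquad \sum_{k=0}^{0}(\theta_n - \delta_{ik}) \equiv 0,
\]

where the step parameters \delta_{ik} are item-specific, unlike in the rating scale model where the thresholds are shared across items.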


Subject(s)
Anxiety/diagnosis, Multilingualism, Reading, Adult, Female, Humans, Male, Principal Component Analysis, Psychiatric Rating Scales, Psychometrics/instrumentation, Reproducibility of Results, Surveys and Questionnaires, Young Adult