Results 1 - 6 of 6
1.
Behav Res Methods; 54(1): 54-74, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34100201

ABSTRACT

A statistical procedure is assumed to produce comparable results across programs. Using the case of an exploratory factor analysis procedure (principal axis factoring [PAF] with promax rotation), we show that this assumption is not always justified. Procedures with equal names are sometimes implemented differently across programs: a jingle fallacy. Focusing on two popular statistical analysis programs, we indeed discovered a jingle jungle for the above procedure: both PAF and promax rotation are implemented differently in the psych R package and in SPSS. Based on analyses with 247 real and 216,000 simulated data sets implementing 108 different data structures, we show that these differences in implementation can result in fairly different factor solutions for a variety of data structures. Differences in the solutions for real data sets ranged from negligible to very large, with 42% displaying at least one different indicator-to-factor correspondence. A simulation study revealed systematic differences in accuracy between implementations and large variation between data structures, with small numbers of indicators per factor, high factor intercorrelations, and weak factors resulting in the lowest accuracies. Moreover, although no single combination of settings was superior for all data structures, we identified implementations of PAF and promax rotation that maximize performance on average. We recommend that researchers use these implementations as the best way through the jungle, discuss model averaging as a potential alternative, and highlight the importance of adhering to best practices of scale construction.
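To make the procedure under comparison concrete, the following is a minimal sketch of principal axis factoring with promax rotation in the psych R package. The data frame `items`, the number of factors, and the settings shown are illustrative placeholders, not the study's materials, and psych's defaults need not match those of SPSS.

```r
# Minimal sketch: principal axis factoring (PAF) with promax rotation
# using the psych R package. `items` is a hypothetical data frame of
# indicator variables; nfactors = 3 is illustrative.
library(psych)

# One option for deciding how many factors to retain
fa.parallel(items, fm = "pa", fa = "fa")

# PAF with promax rotation; defaults such as the maximum number of
# iterations or the handling of communalities need not match SPSS,
# which is the kind of implementation difference the study documents.
efa <- fa(items, nfactors = 3, fm = "pa", rotate = "promax", max.iter = 50)
print(efa$loadings, cutoff = 0.30)
```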


Subject(s)
Factor Analysis, Computer Simulation, Humans, Psychometrics/methods
2.
J Intell; 10(4), 2022 Nov 21.
Article in English | MEDLINE | ID: mdl-36412792

ABSTRACT

Autistic individuals often show impairments in cognitive and developmental domains beyond the core symptoms of lower social communication skills and restricted, repetitive behaviors. Consequently, the assessment of cognitive and developmental functions constitutes an essential part of the diagnostic evaluation. Yet, evidence on the differential validity of the intelligence and developmental tests commonly used with autistic individuals varies widely. In the current study, we investigated the cognitive (i.e., intelligence, executive functions) and developmental (i.e., psychomotor skills, social-emotional skills, basic skills, motivation and attitude, participation during testing) functions of autistic and non-autistic children and adolescents using the Intelligence and Development Scales-2 (IDS-2). We compared 43 autistic (Mage = 12.30 years) and 43 non-autistic (Mage = 12.51 years) participants who were matched for age, sex, and maternal education. Compared to the control sample, autistic participants showed significantly lower mean values in the developmental functions of psychomotor skills, language skills, and the evaluation of participation during testing. Our findings highlight that, on the IDS-2, autistic individuals show impairments particularly in motor and language skills, which therefore merit consideration in autism treatment in addition to the core symptoms and the individuals' intellectual functioning. Moreover, our findings indicate that motor skills in particular may be neglected in autism diagnosis and deserve more attention. Nonsignificant group differences in social-emotional skills could have been due to compensatory effects of average cognitive abilities in our autistic sample.

3.
Assessment; 29(6): 1172-1189, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33794710

ABSTRACT

Research on the comparability of general intelligence composites (GICs) is scarce and has focused exclusively on comparing GICs from different test batteries, revealing limited individual-level comparability. We add to these findings by investigating the group- and individual-level comparability of different GICs within test batteries (i.e., internal score comparability), thereby minimizing transient error and ruling out between-battery variance completely. We (a) determined the magnitude of intraindividual IQ differences, (b) investigated their impact on external validity, (c) explored possible predictors of these differences, and (d) examined ways to deal with incomparability. Results are based on the standardization samples of three intelligence test batteries, spanning from early childhood to late adulthood. Despite high group-level comparability, individual-level comparability was often unsatisfactory, especially toward the tails of the IQ distribution. This limited comparability has consequences for external validity, as GICs were differentially related to, and often less predictive of, school grades for individuals with large IQ differences. Of several predictors, only IQ level and age were systematically related to comparability. Consequently, the findings challenge the use of overall internal consistencies for confidence intervals and suggest using confidence intervals based on test-retest reliabilities or on age- and IQ-specific internal consistencies for clinical interpretation. Implications for test construction and application are discussed.
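The recommendation to base confidence intervals on test-retest reliabilities rather than overall internal consistencies can be illustrated with the standard error of measurement. A minimal sketch on the IQ metric (M = 100, SD = 15), with an illustrative reliability value that is not taken from the article:

```r
# Minimal sketch: 95% confidence interval for an observed IQ score via
# the standard error of measurement (SEM). The reliability value is
# illustrative only; the point is that the chosen reliability
# (e.g., test-retest or an age-/IQ-specific internal consistency)
# determines the width of the interval.
iq_obs <- 112                      # observed composite IQ
sd_iq  <- 15                       # SD of the IQ metric
rel    <- 0.90                     # assumed reliability coefficient

sem <- sd_iq * sqrt(1 - rel)       # standard error of measurement
ci  <- iq_obs + c(-1, 1) * 1.96 * sem
round(ci, 1)                       # lower reliability -> wider interval
```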


Subject(s)
Intelligence, Schools, Adult, Child, Preschool, Humans, Intelligence Tests, Reproducibility of Results
4.
Assessment; 28(2): 327-352, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32613844

ABSTRACT

The latent factor structure of the German Wechsler Intelligence Scale for Children-Fifth Edition (German WISC-V) was examined using complementary hierarchical exploratory factor analyses (EFAs) with Schmid-Leiman transformation and confirmatory factor analyses (CFAs), covering all models reported in the German WISC-V Technical Manual as well as rival bifactor models, based on the standardization sample (N = 1,087) correlation matrix of the 15 primary and secondary subtests. EFA results did not support a fifth factor (Fluid Reasoning). Instead, EFA supported a four-factor model with a dominant general intelligence (g) factor, resembling the WISC-IV. CFA results indicated that the best representation was a bifactor model with four group factors, complementing the EFA results. The present EFA and CFA results replicated other independent analyses of standardization and clinical samples of the United States and international versions of the WISC-V and indicated primary, if not exclusive, interpretation of the Full Scale IQ as an estimate of g.
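As a rough illustration of the two complementary analyses named in the abstract, the sketch below runs a Schmid-Leiman transformation on an exploratory solution with the psych package and a bifactor CFA with the lavaan package. The correlation matrix, sample size, and subtest/factor names are hypothetical placeholders, not the German WISC-V variables or the models from the Technical Manual.

```r
# Minimal sketch under assumed names: `cor_mat` is a subtest correlation
# matrix and `n_obs` its sample size; subtests s1-s12 and group factors
# F1-F4 are placeholders.
library(psych)
library(lavaan)

# Hierarchical EFA: extract group factors, then apply the Schmid-Leiman
# transformation to obtain general-factor and group-factor loadings.
sl <- schmid(cor_mat, nfactors = 4, fm = "pa", n.obs = n_obs)
print(sl$sl)

# Bifactor CFA: every subtest loads on a general factor g plus one
# group factor; orthogonal = TRUE keeps all latent factors uncorrelated.
model <- '
  g  =~ s1 + s2 + s3 + s4 + s5 + s6 + s7 + s8 + s9 + s10 + s11 + s12
  F1 =~ s1 + s2 + s3
  F2 =~ s4 + s5 + s6
  F3 =~ s7 + s8 + s9
  F4 =~ s10 + s11 + s12
'
fit <- cfa(model, sample.cov = cor_mat, sample.nobs = n_obs,
           orthogonal = TRUE, std.lv = TRUE)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```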


Subject(s)
Intelligence, Child, Factor Analysis, Humans, Psychometrics, Reproducibility of Results, Wechsler Scales
5.
J Sch Psychol; 88: 101-117, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34625207

ABSTRACT

A significant body of research has demonstrated that IQs obtained from different intelligence tests correlate substantially at the group level. Yet, there is minimal research investigating whether different intelligence tests yield comparable results for individuals. Examining this issue is paramount given that high-stakes decisions are based on individual test results. Consequently, we investigated whether seven current and widely used intelligence tests yielded comparable results for individuals between the ages of 4 and 20 years. Results mostly indicated substantial correlations between tests, although several significant mean differences at the group level were identified. Results on individual-level comparability indicated that the interpretation of exact IQ scores cannot be empirically supported, as the 95% confidence intervals could not be reliably replicated with different intelligence tests. Similar patterns also appeared for the individual-level comparability of nonverbal and verbal intelligence factor scores. Furthermore, the nominal level of intelligence systematically predicted IQ differences between tests, with above- and below-average IQ scores associated with larger differences than average IQ scores. Analyses based on continuous data confirmed that differences appeared to increase toward the above-average IQ score range. These findings are critical because these are the ranges in which diagnostic questions most often arise in practice. Implications for test interpretation and test construction are discussed.
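The contrast between group-level and individual-level comparability can be sketched in a few lines of R; the two score vectors and the 10-point threshold are hypothetical illustrations, not the study's data or criteria.

```r
# Minimal sketch: group- vs individual-level comparability of two IQ
# scores per person. `iq_test_a` and `iq_test_b` are hypothetical
# numeric vectors of equal length (one pair of scores per examinee).

# Group level: association and mean difference between the two tests
cor(iq_test_a, iq_test_b)
t.test(iq_test_a, iq_test_b, paired = TRUE)

# Individual level: distribution of absolute within-person differences,
# e.g., the share of examinees whose IQs differ by 10 or more points
diffs <- abs(iq_test_a - iq_test_b)
summary(diffs)
mean(diffs >= 10)
```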


Subject(s)
Intelligence, Adolescent, Adult, Child, Child, Preschool, Humans, Intelligence Tests, Young Adult
6.
Assessment; 27(8): 1853-1869, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31023061

ABSTRACT

The factor structure of the intelligence and scholastic skills domains of the Intelligence and Development Scales-2 was examined using exploratory factor analyses with the standardization and validation sample (N = 2,030, aged 5 to 20 years). Results partly supported the seven proposed intelligence group factors. However, the theoretical factors Visual Processing and Abstract Reasoning, as well as Verbal Reasoning and Long-Term Memory, collapsed, resulting in a five-factor structure for intelligence. Adding the three scholastic skills subtests resulted in an additional Reading/Writing factor, with Logical-Mathematical Reasoning loading on Abstract Visual Reasoning and showing the highest general factor loading. A data-driven separation of intelligence and scholastic skills is thus not evident. Omega reliability estimates based on Schmid-Leiman transformations revealed a strong general factor that accounted for most of the true-score variance both overall and at the group-factor level. The possible usefulness of factor scores is discussed.
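Omega estimates derived from a Schmid-Leiman transformation, which separate general-factor from group-factor variance, can be obtained with psych::omega. A minimal sketch, assuming a hypothetical data frame `subtests` of subtest scores; the choice of five group factors merely mirrors the five-factor intelligence structure described above:

```r
# Minimal sketch: omega reliability estimates from a Schmid-Leiman
# solution with the psych package. `subtests` is a hypothetical data
# frame of subtest scores; nfactors = 5 is illustrative.
library(psych)

om <- omega(subtests, nfactors = 5, fm = "pa", plot = FALSE)
om$omega_h     # omega-hierarchical: variance due to the general factor
om$omega.tot   # omega-total: all reliable variance
om$schmid$sl   # Schmid-Leiman loadings (general + group factors)
```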


Subject(s)
Intelligence, Factor Analysis, Humans, Psychometrics, Reproducibility of Results, Wechsler Scales