Results 1 - 3 of 3
1.
J Appl Psychol; 104(12): 1514-1534, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31094540

ABSTRACT

The stereotype threat literature primarily comprises lab studies, many of which involve features that would not be present in high-stakes testing settings. We meta-analyze the effect of stereotype threat on cognitive ability tests, focusing on both laboratory and operational studies with features likely to be present in high-stakes settings. First, we examine three study features (the cognitive ability test metric, the strength of the stereotype threat cue activation, and the type of nonthreat control group) and conduct a focal analysis removing conditions that would not be present in high-stakes settings. We also take into account a previously unrecognized methodological error in how data are analyzed in studies that control for scores on a prior cognitive ability test, an error that produced biased estimates of stereotype threat. The focal sample, which restricts the database to samples using operational testing-relevant conditions, displayed a threat effect of d = -.14 (k = 45, N = 3,532, SDδ = .31). Second, we present a comprehensive meta-analysis of stereotype threat. Third, we examine a small subset of studies conducted in operational test settings and studies using motivational incentives, which yielded d-values ranging from .00 to -.14. Fourth, we subject the meta-analytic database to tests of publication bias, finding nontrivial evidence of publication bias. Overall, the results indicate that the stereotype threat effect that can be experienced on cognitive ability tests in operational scenarios, such as college admissions and employment testing, may range from negligible to small. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
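As a rough illustration of the pooling described above, the following minimal Python sketch runs a DerSimonian-Laird random-effects meta-analysis of standardized mean differences (d), with an Egger-style small-study regression as a simple publication bias check. The study entries are hypothetical placeholders, not the paper's data.

import numpy as np

# Hypothetical (d, n_threat, n_control) study entries -- illustrative only
studies = [(-0.20, 40, 40), (-0.05, 60, 60), (-0.30, 35, 35), (0.02, 80, 80)]
d = np.array([s[0] for s in studies])
n1 = np.array([s[1] for s in studies])
n2 = np.array([s[2] for s in studies])

# Large-sample sampling variance of d (Hedges & Olkin)
v = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

# Fixed-effect pooling and the Q heterogeneity statistic
w = 1 / v
d_fe = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fe) ** 2)

# DerSimonian-Laird estimate of between-study variance tau^2
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(d) - 1)) / c)

# Random-effects pooled effect and its standard error
w_re = 1 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"random-effects d = {d_re:.3f} (SE = {se_re:.3f}), tau^2 = {tau2:.3f}")

# Egger-style check: regress standardized effects on precision;
# an intercept far from zero suggests funnel-plot asymmetry
precision = 1 / np.sqrt(v)
slope, intercept = np.polyfit(precision, d / np.sqrt(v), 1)
print(f"Egger intercept = {intercept:.3f}")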


Subject(s)
Educational Measurement/methods, Personnel Selection/methods, Stereotyping, Cues (Psychology), Humans
2.
J Appl Psychol; 102(10): 1435-1447, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28530416

ABSTRACT

Separate meta-analyses of the cognitive ability and assessment center (AC) literatures report higher criterion-related validity for cognitive ability tests in predicting job performance. We instead focus on 17 samples in which both AC and ability scores were obtained for the same examinees and used to predict the same criterion, thereby controlling for differences in job type and in criteria that may have affected prior conclusions. In contrast to Schmidt and Hunter's (1998) meta-analysis, which reported mean validity of .51 for ability and .37 for ACs, using random-effects models we found mean validity of .22 for ability and .44 for ACs with comparable corrections for range restriction and measurement error in the criterion. We posit that two factors contribute to the difference in findings: (a) ACs being used on populations already restricted on cognitive ability and (b) the use of less cognitively loaded criteria in AC validation research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
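For readers unfamiliar with the corrections mentioned above, this Python sketch shows the standard textbook versions: disattenuation for criterion unreliability and Thorndike's Case II correction for direct range restriction. The numeric inputs are illustrative assumptions, not values from the study.

import math

def corrected_validity(r_obs, ryy, u):
    """Correct an observed validity coefficient.

    r_obs: observed predictor-criterion correlation in the restricted sample
    ryy:   reliability of the criterion measure
    u:     restriction ratio (restricted SD / unrestricted SD of the predictor)
    """
    # Disattenuate for measurement error in the criterion only
    r = r_obs / math.sqrt(ryy)
    # Thorndike Case II correction for direct range restriction
    U = 1 / u
    return (U * r) / math.sqrt(1 + (U ** 2 - 1) * r ** 2)

# Illustrative inputs: observed r = .25, criterion reliability .52, u = .70
print(f"corrected validity = {corrected_validity(0.25, 0.52, 0.70):.3f}")

Applying the same corrections to both predictor types is what makes the ability and AC validities comparable in an analysis like the one described.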


Subject(s)
Aptitude Tests/statistics & numerical data, Aptitude/physiology, Cognition/physiology, Personnel Selection/statistics & numerical data, Work Performance/statistics & numerical data, Adult, Humans
3.
J Appl Psychol; 102(10): 1421-1434, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28530417

ABSTRACT

It is common to add a predictor to a selection system with the goal of increasing criterion-related validity. Research on the incremental validity of a second predictor is generally based on forming a regression-weighted composite of the predictors. In practice, however, predictors are commonly used in ways other than regression-weighted composites, and we examine the robustness of incremental validity findings to two such alternatives: unit weighting and multiple hurdles. We show that there are settings in which the incremental value of a second predictor disappears, and the second predictor can even produce lower validity than the first predictor alone, when these alternatives to regression weighting are used. First, we examine conditions under which unit weighting negates the gain in predictive power attainable via regression weights. Second, we revisit Schmidt and Hunter's (1998) summary of the incremental validity of predictors over cognitive ability, evaluating whether the reported incremental value of a second predictor differs when predictors are unit weighted rather than regression weighted. Third, we analyze data from the published literature to discern how frequently unit weighting might affect conclusions about whether there is value in adding a second predictor. Finally, we shift from unit weighting to multiple-hurdle selection, examining conditions under which conclusions about incremental validity differ when regression weighting is replaced by multiple-hurdle selection. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
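The contrast the abstract draws can be illustrated with a small simulation. The Python sketch below (assumed correlations and simulated data; not the paper's analysis) compares the criterion validity of a regression-weighted composite against a unit-weighted composite, and compares mean criterion performance under a two-stage multiple-hurdle rule versus top-down composite selection at the same selection ratio.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Assumed population correlations among predictor x1, predictor x2, and criterion y
R = np.array([[1.0, 0.3, 0.5],
              [0.3, 1.0, 0.3],
              [0.5, 0.3, 1.0]])
x1, x2, y = rng.multivariate_normal(np.zeros(3), R, size=n).T

def validity(score):
    return np.corrcoef(score, y)[0, 1]

# Regression weights implied by the assumed correlation matrix
b = np.linalg.solve(R[:2, :2], R[:2, 2])
print(f"regression-weighted composite: r = {validity(b[0]*x1 + b[1]*x2):.3f}")
print(f"unit-weighted composite:       r = {validity(x1 + x2):.3f}")

# Multiple hurdles at an overall 10% selection ratio:
# pass the top 50% on x1, then take the top 20% of survivors on x2
pass1 = x1 >= np.quantile(x1, 0.50)
cut2 = np.quantile(x2[pass1], 0.80)
hurdle = pass1 & (x2 >= cut2)

# Top-down selection on the unit-weighted composite at the same 10% ratio
comp = x1 + x2
topdown = comp >= np.quantile(comp, 0.90)

print(f"mean y, multiple hurdles:   {y[hurdle].mean():.3f}")
print(f"mean y, composite top-down: {y[topdown].mean():.3f}")

With these particular correlations the unit-weighted composite loses little relative to regression weights; the paper's point is that under other configurations the loss can erase, or even reverse, the second predictor's apparent incremental value.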


Subject(s)
Personnel Selection/statistics & numerical data, Reproducibility of Results, Adult, Humans