1.
Psychol Methods ; 2023 May 11.
Article in English | MEDLINE | ID: mdl-37166853

ABSTRACT

Meaningful interpretations of scores derived from psychological scales depend on the replicability of psychometric properties. Despite this, and unexpected inconsistencies in psychometric results across studies, psychometrics has often been overlooked in the replication literature. In this article, we begin to address replication issues in exploratory factor analysis (EFA). We use a Monte Carlo simulation to investigate methodological choices made throughout the EFA process that have the potential to add heterogeneity to results. Our findings show that critical decision points for EFA include the method for determining the number of factors as well as rotation. The results also demonstrate the relevancy of data characteristics, as some contexts are more susceptible to the effects of methodological choice on the heterogeneity of results. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
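One of the decision points the abstract flags, the method for determining the number of factors, can be illustrated with parallel analysis, a common factor-retention procedure (named here as an illustration; the article does not specify which methods it simulated). The sketch below is a minimal, hypothetical example: it simulates 300 respondents on 6 items with a known two-factor structure, then retains factors whose observed eigenvalues exceed the 95th percentile of eigenvalues from random data. All sample sizes, loadings, and settings are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 300 respondents, 6 items, known 2-factor structure.
n, p = 300, 6
loadings = np.array([
    [0.7, 0.0], [0.8, 0.0], [0.6, 0.0],
    [0.0, 0.7], [0.0, 0.8], [0.0, 0.6],
])
factors = rng.standard_normal((n, 2))
# Scale errors so each item has unit variance (communality + uniqueness = 1).
errors = rng.standard_normal((n, p)) * np.sqrt(1 - (loadings**2).sum(axis=1))
data = factors @ loadings.T + errors

# Parallel analysis: compare observed correlation-matrix eigenvalues to the
# 95th percentile of eigenvalues from random (uncorrelated) data of the
# same dimensions; retain factors whose eigenvalues exceed that threshold.
obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data.T)))[::-1]
reps = 200
rand_eigs = np.empty((reps, p))
for r in range(reps):
    rnd = rng.standard_normal((n, p))
    rand_eigs[r] = np.sort(np.linalg.eigvalsh(np.corrcoef(rnd.T)))[::-1]
threshold = np.percentile(rand_eigs, 95, axis=0)
n_factors = int(np.sum(obs_eig > threshold))
print(n_factors)
```

Swapping this retention rule for another (e.g., an eigenvalue-greater-than-one rule) on the same data is exactly the kind of methodological choice the simulation study examines as a source of heterogeneity in results.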

3.
Educ Psychol Meas ; 82(5): 967-988, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35989729

ABSTRACT

When fitting unidimensional item response theory (IRT) models, the population distribution of the latent trait (θ) is often assumed to be normally distributed. However, some psychological theories would suggest a nonnormal θ. For example, some clinical traits (e.g., alcoholism, depression) are believed to follow a positively skewed distribution where the construct is low for most people, medium for some, and high for few. Failure to account for nonnormality may compromise the validity of inferences and conclusions. Although corrections have been developed to account for nonnormality, these methods can be computationally intensive and have not yet been widely adopted. Previous research has recommended implementing nonnormality corrections when θ is not "approximately normal." This research focused on examining how far θ can deviate from normal before the normality assumption becomes untenable. Specifically, our goal was to identify the type(s) and degree(s) of nonnormality that result in unacceptable parameter recovery for the graded response model (GRM) and 2-parameter logistic model (2PLM).
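The setup the abstract describes, a positively skewed latent trait generating 2PL responses, can be sketched as follows. This is a hypothetical illustration, not the article's simulation design: the skewed θ is built from a standardized chi-square(2) variate (skewness ≈ 2), and the item parameters (a = discrimination, b = difficulty) are made-up values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Positively skewed latent trait: chi-square(2) standardized to mean 0, sd 1.
# Most people are low on the trait, some medium, few high.
n = 5000
theta = (rng.chisquare(df=2, size=n) - 2) / 2

# Illustrative 2PL item parameters (not from the article).
a = np.array([1.0, 1.5, 0.8, 2.0])   # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.0])  # difficulties

# 2PL model: P(X = 1 | theta) = 1 / (1 + exp(-a * (theta - b)))
prob = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
responses = (rng.random((n, len(a))) < prob).astype(int)

print(responses.mean(axis=0))  # observed endorsement rate per item
```

Fitting a model to such data while assuming a normal θ distribution, as standard estimation routines do by default, is the scenario whose consequences for parameter recovery the study investigates.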

4.
Psychol Methods ; 2022 Apr 11.
Article in English | MEDLINE | ID: mdl-35404630

ABSTRACT

Concerns about replication failures can be partially recast as concerns about excessive heterogeneity in research results. Although this heterogeneity is an inherent part of science (e.g., sampling variability; studying different conditions), not all heterogeneity results from unavoidable sources. In particular, the flexibility researchers have when designing studies and analyzing data adds additional heterogeneity. This flexibility has been the topic of considerable discussion in the last decade. Ideas, and corresponding phrases, have been introduced to help unpack researcher behaviors, including researcher degrees of freedom and questionable research practices. Using these concepts and phrases, methodological and substantive researchers have considered how researchers' choices impact statistical conclusions and reduce clarity in the research literature. While progress has been made, inconsistent, vague, and overlapping use of the terminology surrounding these choices has made it difficult to have clear conversations about the most pressing issues. Further refinement of the language conveying the underlying concepts can catalyze further progress. We propose a revised, expanded taxonomy for assessing research and reporting practices. In addition, we redefine several crucial terms in a way that reduces overlap and enhances conceptual clarity, with particular focus on distinguishing practices along two lines: research versus reporting practices and choices involving multiple empirically supported options versus choices known to be subpar. We illustrate the effectiveness of these changes using conceptual and simulated demonstrations, and we discuss how this taxonomy can be valuable to substantive researchers by helping to navigate this flexibility and to methodological researchers by motivating research toward areas of greatest need. (PsycInfo Database Record (c) 2022 APA, all rights reserved).

5.
Assessment ; 28(2): 395-412, 2021 03.
Article in English | MEDLINE | ID: mdl-31786956

ABSTRACT

The Brief Self-Control Scale (BSCS) is a widely used measure of self-control, a construct associated with beneficial psychological outcomes. Several studies have investigated the psychometric properties of the BSCS but have failed to reach consensus. This has resulted in an unstable and ambiguous understanding of the scale and its psychometric properties. The current study sought resolution by implementing scale evaluation approaches guided by modern psychometric literature. Additionally, our goal was to provide a more comprehensive item analysis via the item response theory (IRT) framework. Results from the current study support both unidimensional and multidimensional factor structures for the 13-item version of the BSCS. The addition of an IRT analysis provided a new perspective on item- and test-level functioning. The goal of a more defensible psychometric grounding for the BSCS is to promote greater consistency, stability, and trust in future results.


Subjects
Self-Control , Factor Analysis , Humans , Psychometrics , Reproducibility of Results , Surveys and Questionnaires