Results 1 - 20 of 32
1.
Psychometrika ; 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38963537

ABSTRACT

Wu and Browne (Psychometrika 80(3):571-600, 2015. https://doi.org/10.1007/s11336-015-9451-3; henceforth W&B) introduced the notion of adventitious error to explicitly take into account approximate goodness of fit of covariance structure models (CSMs). Adventitious error supposes that observed covariance matrices are not directly sampled from a theoretical population covariance matrix but from an operational population covariance matrix. This operational matrix is randomly distorted from the theoretical matrix due to differences in study implementations. W&B showed how adventitious error is linked to the root mean square error of approximation (RMSEA) and how the standard errors (SEs) of parameter estimates are augmented. Our contribution is to consider adventitious error as a general phenomenon and to illustrate its consequences. Using simulations, we illustrate that its impact on SEs can be generalized to pairwise relations between variables beyond the CSM context. Using derivations, we conjecture that heterogeneity of effect sizes across studies and overestimation of statistical power can both be interpreted as stemming from adventitious error. We also show that adventitious error, if it occurs, has an impact on the uncertainty of composite measurement outcomes such as factor scores and summed scores. The results of a simulation study show that the impact on measurement uncertainty is rather small, although larger for factor scores than for summed scores. Adventitious error is an assumption about the data-generating mechanism; the notion offers a statistical framework for understanding a broad range of phenomena, including approximate fit, varying research findings, heterogeneity of effects, and overestimates of power.
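The SE-inflation mechanism described above can be sketched with a toy simulation (an illustration in the spirit of adventitious error, not W&B's actual distortion model): each replication draws its operational correlation from a distribution around the theoretical value, which inflates the between-replication variability of sample correlations.

```python
import math
import random

def sample_corr(rho, n, rng):
    """Draw n bivariate-normal pairs with population correlation rho
    and return the sample Pearson correlation."""
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = sum(xs) / n, sum(ys) / n
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cxy / math.sqrt(vx * vy)

def sd(v):
    m = sum(v) / len(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / (len(v) - 1))

rng = random.Random(1)
REPS, N = 2000, 200

# No adventitious error: every replication samples from rho = 0.30.
sd_fixed = sd([sample_corr(0.30, N, rng) for _ in range(REPS)])

# Adventitious error: each replication's *operational* rho is itself
# randomly perturbed around the theoretical value of 0.30.
sd_adv = sd([sample_corr(rng.gauss(0.30, 0.05), N, rng) for _ in range(REPS)])
```

Under these assumptions, the empirical SD of the sample correlation is visibly larger in the perturbed condition than the usual (1 - rho^2)/sqrt(n) value, mirroring the SE augmentation the abstract describes for pairwise relations.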

2.
J Exp Psychol Gen ; 153(4): 1139-1151, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38587935

ABSTRACT

The calculation of statistical power has been taken up as a simple yet informative tool to assist in designing an experiment, particularly in justifying sample size. A difficulty with using power for this purpose is that the classical power formula does not incorporate sources of uncertainty (e.g., sampling variability) that can impact the computed power value, leading to a false sense of precision and confidence in design choices. We use simulations to demonstrate the consequences of adding two common sources of uncertainty to the calculation of power. Sampling variability in the estimated effect size (Cohen's d) can introduce a large amount of uncertainty (e.g., sometimes producing rather flat distributions) in power and sample-size determination. The addition of random fluctuations in the population effect size can cause values of its estimates to take on a sign opposite the population value, making calculated power values meaningless. These results suggest that calculated power values or use of such values to justify sample size add little to planning a study. As a result, researchers should put little confidence in power-based choices when planning future studies. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
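The first source of uncertainty discussed above can be sketched as follows (the function names and the normal-approximation power formula are illustrative assumptions, not the article's code): propagating the sampling variability of a pilot estimate of Cohen's d through the power calculation yields a wide distribution of power values rather than a single number.

```python
import math
import random

def power_two_sample(d, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sided two-sample test of
    standardized mean difference d (illustrative formula)."""
    z_crit = 1.959963984540054  # 97.5th percentile of N(0, 1)
    z = d * math.sqrt(n_per_group / 2) - z_crit
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # Phi(z)

# The classical point calculation looks reassuringly precise.
point_power = power_two_sample(0.5, 64)  # close to the conventional 0.80

# Now propagate the sampling variability of a pilot estimate of d
# (pilot of 20 per group; standard large-sample variance of d-hat).
rng = random.Random(7)
n_pilot = 20
sd_dhat = math.sqrt(2 / n_pilot + 0.5 ** 2 / (4 * n_pilot))
powers = sorted(power_two_sample(rng.gauss(0.5, sd_dhat), 64)
                for _ in range(5000))
spread = powers[4500] - powers[500]  # width of the 10th-90th percentile band
```

With a small pilot, the 10th-90th percentile band of computed power spans most of the unit interval, which is the "rather flat distribution" the abstract refers to.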


Subjects
Uncertainty, Humans, Sample Size
3.
Psychol Methods ; 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38330341

ABSTRACT

Observed scores (e.g., summed scores and estimated factor scores) are assumed to reflect underlying constructs and have many uses in psychological science. Constructs are often operationalized as latent variables (LVs), which are mathematically defined by their relations with manifest variables in an LV measurement model (e.g., common factor model). We examine the performance of several types of observed scores for the purposes of (a) estimating latent scores and classifying people and (b) recovering structural relations among LVs. To better reflect practice, our evaluation takes into account different sources of uncertainty (i.e., sampling error and model error). We review psychometric properties of observed scores based on the classical test theory applied to common factor models, report on a simulation study examining their performance, and provide two empirical examples to illustrate how different scores perform under different conditions of reliability, sample size, and model error. We conclude with general recommendations for using observed scores and discuss future research directions. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
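One piece of the psychometric machinery referenced above can be shown directly (a hypothetical one-factor model; coefficient omega is a standard summed-score reliability, used here as an illustration rather than the article's specific method):

```python
import math

# Hypothetical one-factor model with standardized (unit-variance) items:
# each item = loading * factor + uniqueness.
loadings = [0.7, 0.6, 0.8, 0.5]
uniquenesses = [1 - l ** 2 for l in loadings]

# Coefficient omega: model-based reliability of the summed score.
true_var = sum(loadings) ** 2
total_var = true_var + sum(uniquenesses)
omega = true_var / total_var

# Standard error of measurement of the summed score, in score units:
# its square equals the summed uniqueness variance.
sem_sum = math.sqrt(total_var) * math.sqrt(1 - omega)
```

The identity sem_sum^2 = sum of uniquenesses makes explicit how the measurement uncertainty of a summed score is driven entirely by the unique (non-construct) variance under this model.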

4.
Annu Rev Clin Psychol ; 19: 155-176, 2023 05 09.
Article in English | MEDLINE | ID: mdl-36750263

ABSTRACT

Partialing is a statistical approach researchers use with the goal of removing extraneous variance from a variable before examining its association with other variables. Controlling for confounds through analysis of covariance or multiple regression analysis and residualizing variables for use in subsequent analyses are common approaches to partialing in clinical research. Despite its intuitive appeal, partialing is fraught with undesirable consequences when predictors are correlated. After describing effects of partialing on variables, we review analytic approaches commonly used in clinical research to make inferences about the nature and effects of partialed variables. We then use two simulations to show how partialing can distort variables and their relations with other variables. Having concluded that, with rare exception, partialing is ill-advised, we offer recommendations for reducing or eliminating problematic uses of partialing. We conclude that the best alternative to partialing is to define and measure constructs so that it is not needed.
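The distortion described above is easy to reproduce in a toy setup (illustrative simulation, assuming the "confound" shares construct variance with the predictor, which is exactly the situation the review warns about):

```python
import math
import random

rng = random.Random(3)
n = 5000
# x, c, and y all share a common construct t; c is a putative "confound"
# that in fact carries construct variance.
t = [rng.gauss(0, 1) for _ in range(n)]
x = [ti + rng.gauss(0, 1) for ti in t]
c = [ti + rng.gauss(0, 1) for ti in t]
y = [ti + rng.gauss(0, 1) for ti in t]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    ca = [ai - ma for ai in a]
    cb = [bi - mb for bi in b]
    return (sum(p * q for p, q in zip(ca, cb)) /
            math.sqrt(sum(p * p for p in ca) * sum(q * q for q in cb)))

# Residualize x on c via simple OLS, then correlate the residual with y.
mx, mc = sum(x) / n, sum(c) / n
b = (sum((xi - mx) * (ci - mc) for xi, ci in zip(x, c)) /
     sum((ci - mc) ** 2 for ci in c))
x_res = [xi - b * ci for xi, ci in zip(x, c)]

r_raw = corr(x, y)          # population value 0.50
r_partial = corr(x_res, y)  # population value about 0.29
```

Partialing c out of x strips away construct variance along with the intended "extraneous" variance, so the residualized predictor correlates substantially less with the criterion and no longer represents the original construct.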

5.
Multivariate Behav Res ; 58(3): 543-559, 2023.
Article in English | MEDLINE | ID: mdl-35263213

ABSTRACT

There are several approaches to incorporating uncertainty in power analysis. We review these approaches and highlight the Bayesian-classical hybrid approach that has been implemented in the R package hybridpower. Calculating Bayesian-classical hybrid power circumvents the problem of local optimality in which calculated power is valid if and only if the specified inputs are perfectly correct. hybridpower can compute classical and Bayesian-classical hybrid power for popular testing procedures including the t-test, correlation, simple linear regression, one-way ANOVA (with equal or unequal variances), and the sign test. Using several examples, we demonstrate features of hybridpower and illustrate how to elicit subjective priors, how to determine sample size from the Bayesian-classical approach, and how this approach is distinct from related methods. hybridpower can conduct power analysis for the classical approach, and more importantly, the novel Bayesian-classical hybrid approach that returns more realistic calculations by taking into account local optimality that the classical approach ignores. For users unfamiliar with R, we provide a limited number of RShiny applications based on hybridpower to promote the accessibility of this novel approach to power analysis. We end with a discussion on future developments in hybridpower.


Subjects
Research Design, Bayes Theorem, Sample Size, Linear Models, Uncertainty
6.
Am Psychol ; 77(4): 576-588, 2022.
Article in English | MEDLINE | ID: mdl-35482669

ABSTRACT

Currently, there is little guidance for navigating measurement challenges that threaten construct validity in replication research. To identify common challenges and ultimately strengthen replication research, we conducted a systematic review of the measures used in the 100 original and replication studies from the Reproducibility Project: Psychology (Open Science Collaboration, 2015). Results indicate that it was common for scales used in the original studies to have little or no validity evidence. Our systematic review demonstrates and corroborates evidence that issues of construct validity are sorely neglected in original and replicated research. We identify four measurement challenges replicators are likely to face: a lack of essential measurement information, a lack of validity evidence, measurement differences, and translation. Next, we offer solutions for addressing these challenges that will improve measurement practices in original and replication research. Finally, we close with a discussion of the need to develop measurement methodologies for the next generation of replication research. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subjects
Reproducibility of Results
8.
Pers Soc Psychol Bull ; 48(7): 1105-1117, 2022 07.
Article in English | MEDLINE | ID: mdl-34308722

ABSTRACT

Traditionally, statistical power was viewed as relevant to research planning but not evaluation of completed research. However, following discussions of high false finding rates (FFRs) associated with low statistical power, the assumed level of statistical power has become a key criterion for research acceptability. Yet, the links between power and false findings are not as straightforward as described. Assumptions underlying FFR calculations do not reflect research realities in personality and social psychology. Even granting the assumptions, the FFR calculations identify important limitations to any general influences of statistical power. Limits for statistical power in inflating false findings can also be illustrated through the use of FFR calculations to (a) update beliefs about the null or alternative hypothesis and (b) assess the relative support for the null versus alternative hypothesis when evaluating a set of studies. Taken together, statistical power should be de-emphasized in comparison to current uses in research evaluation.
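The FFR calculations discussed above follow a standard formula, sketched here under the usual point-null/point-alternative assumptions that the article argues rarely reflect research realities:

```python
def false_finding_rate(power, prior_null, alpha=0.05):
    """Proportion of significant results that are false positives under
    the standard FFR setup: a fraction prior_null of tested hypotheses
    are true nulls, the rest are true effects detected with 'power'."""
    p_sig_false = alpha * prior_null
    p_sig_true = power * (1 - prior_null)
    return p_sig_false / (p_sig_false + p_sig_true)

# With even odds on the null, the FFR moves only modestly with power.
ffr_high_power = false_finding_rate(power=0.80, prior_null=0.5)  # ~0.06
ffr_low_power = false_finding_rate(power=0.35, prior_null=0.5)   # 0.125
```

Even at the low power level, the FFR here stays near one in eight, which illustrates the abstract's point that power has limited leverage on false findings once the other inputs are held fixed.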


Subjects
Personality, Social Psychology, Humans
9.
Proc Natl Acad Sci U S A ; 117(24): 13199-13200, 2020 06 16.
Article in English | MEDLINE | ID: mdl-32546627
10.
Psychol Methods ; 24(5): 590-605, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30816728

ABSTRACT

Power analysis serves as the gold standard for evaluating study feasibility and justifying sample size. However, mainstream power analysis is often oversimplified, poorly reflecting complex reality during data analysis. This article highlights the complexities inherent in power analysis, especially when uncertainties present in data analysis are realistically taken into account. We introduce a Bayesian-classical hybrid approach to power analysis, which formally incorporates three sources of uncertainty into power estimates: (a) epistemic uncertainty regarding the unknown values of the effect size of interest, (b) sampling variability, and (c) uncertainty due to model approximation (i.e., models fit data imperfectly; Box, 1979; MacCallum, 2003). To illustrate the nature of estimated power from the Bayesian-classical hybrid method, we juxtapose its power estimates with those obtained from traditional (i.e., classical or frequentist) and Bayesian approaches. We employ an example in lexical processing (e.g., Yap & Seow, 2014) to illustrate underlying concepts and provide accompanying R and Rcpp code for computing power via the Bayesian-classical hybrid method. In general, power estimates become more realistic and much more varied after uncertainties are incorporated into their computation. As such, sample sizes should be determined by assurance (i.e., the mean of the power distribution) and the extent of variability in power estimates (e.g., interval width between 20th and 80th percentiles of the power distribution). We discuss advantages and challenges of incorporating the three stated sources of uncertainty into power analysis and, more broadly, research design. Finally, we conclude with future research directions. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
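The hybrid idea can be sketched as follows (illustrative code, not the authors' R/Rcpp implementation; the prior and the normal-approximation power formula are assumptions): power is averaged over prior draws of the effect size, yielding an assurance value and a power distribution rather than a single number.

```python
import math
import random

def power_two_sample(d, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sided two-sample test."""
    z_crit = 1.959963984540054
    z = d * math.sqrt(n_per_group / 2) - z_crit
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Classical power treats the effect size as known exactly.
classical = power_two_sample(0.4, 100)

# Hybrid-style calculation: place a prior on d reflecting epistemic
# uncertainty, then average power over prior draws ("assurance").
rng = random.Random(11)
draws = sorted(power_two_sample(rng.gauss(0.4, 0.1), 100)
               for _ in range(20000))
assurance = sum(draws) / len(draws)
pct20, pct80 = draws[4000], draws[16000]  # spread of the power distribution
```

Consistent with the abstract, the assurance falls below the classical point value here, and the 20th-80th percentile band shows how much the plausible power values vary once effect-size uncertainty is acknowledged.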


Subjects
Bayes Theorem, Statistical Models, Psychology/methods, Uncertainty, Humans
12.
Front Psychol ; 9: 2104, 2018.
Article in English | MEDLINE | ID: mdl-30459683

ABSTRACT

The linear model often serves as a starting point for applying statistics in psychology. Often, formal training beyond the linear model is limited, creating a potential pedagogical gap because of the pervasiveness of data non-normality. We reviewed 61 recently published undergraduate and graduate textbooks on introductory statistics and the linear model, focusing on their treatment of non-normality. This review identified at least eight distinct methods suggested to address non-normality, which we organize into a new taxonomy according to whether the approach: (a) remains within the linear model, (b) changes the data, and (c) treats normality as informative or as a nuisance. Because textbook coverage of these methods was often cursory, and methodological papers introducing these approaches are usually inaccessible to non-statisticians, this review is designed to be a happy medium. We provide a relatively non-technical review of advanced methods that can address non-normality (and heteroscedasticity), thereby serving as a starting point to promote best practice in the application of the linear model. We also present three empirical examples to highlight distinctions between these methods' motivations and results. The paper also reviews the current state of methodological research in addressing non-normality within the linear modeling framework. It is anticipated that our taxonomy will provide a useful overview and starting place for researchers interested in extending their knowledge of approaches developed to address non-normality from the perspective of the linear model.
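As a minimal example of one "change the data" entry in such a taxonomy (illustrative code, not drawn from the reviewed textbooks), a trimmed mean resists the gross outliers that drag the ordinary mean:

```python
import random

rng = random.Random(5)
# Mostly standard-normal data contaminated by 5% gross outliers.
data = [rng.gauss(10, 1) if rng.random() < 0.05 else rng.gauss(0, 1)
        for _ in range(2000)]

def trimmed_mean(v, prop=0.2):
    """Mean after discarding the lowest and highest prop of observations."""
    s = sorted(v)
    k = int(len(s) * prop)
    core = s[k:len(s) - k]
    return sum(core) / len(core)

raw_mean = sum(data) / len(data)  # pulled toward the contaminating cluster
robust = trimmed_mean(data)       # stays near the center of the bulk
```

Whether one treats the contamination as a nuisance (trim it) or as informative (model it) is exactly the kind of distinction the taxonomy above is meant to organize.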

13.
Psychol Methods ; 23(4): 635-653, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29300097

ABSTRACT

Current concerns regarding the dependability of psychological findings call for methodological developments to provide additional evidence in support of scientific conclusions. This article highlights the value and importance of two distinct kinds of parameter uncertainty, which are quantified by confidence sets (CSs) and fungible parameter estimates (FPEs; Lee, MacCallum, & Browne, 2017); both provide essential information regarding the defensibility of scientific findings. Using the structural equation model, we introduce a general perturbation framework based on the likelihood function that unifies CSs and FPEs and sheds new light on the conceptual distinctions between them. A targeted illustration is then presented to demonstrate the factors which differentially influence CSs and FPEs, further highlighting their theoretical differences. With three empirical examples on initiating a conversation with a stranger (Bagozzi & Warshaw, 1988), posttraumatic growth of caregivers in the context of pediatric palliative care (Cadell et al., 2014), and the direct and indirect effects of spirituality on thriving among youth (Dowling, Gestsdottir, Anderson, von Eye, & Lerner, 2004), we illustrate how CSs and FPEs provide unique information that leads to better informed scientific conclusions. Finally, we discuss the importance of considering information afforded by CSs and FPEs in strengthening the basis of interpreting statistical results in substantive research, conclude with future research directions, and provide example OpenMx code for the computation of CSs and FPEs. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subjects
Statistical Data Interpretation, Statistical Models, Psychology/methods, Uncertainty, Humans
14.
Psychol Methods ; 23(2): 208-225, 2018 Jun.
Article in English | MEDLINE | ID: mdl-28277690

ABSTRACT

Statistical practice in psychological science is undergoing reform, which is reflected in part by strong recommendations for reporting and interpreting effect sizes and their confidence intervals. We present principles and recommendations for research reporting and emphasize the variety of ways effect sizes can be reported. Additionally, we emphasize interpreting and reporting unstandardized effect sizes because of common misconceptions regarding standardized effect sizes, which we elucidate. Effect sizes should directly answer their motivating research questions, be comprehensible to the average reader, and be based on meaningful metrics of their constituent variables. We illustrate our recommendations with empirical examples involving a one-way ANOVA, a categorical variable analysis, an interaction effect in linear regression, and a simple mediation model, emphasizing the interpretation of effect sizes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subjects
Biomedical Research/methods, Statistical Data Interpretation, Statistical Models, Psychology/methods, Humans
15.
Multivariate Behav Res ; 52(5): 533-550, 2017.
Article in English | MEDLINE | ID: mdl-28594582

ABSTRACT

Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
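The profile-likelihood construction reviewed above can be sketched for the simplest case, a binomial proportion (illustrative code, not the article's IRT implementation): the CI collects all parameter values whose likelihood-ratio statistic stays under the chi-square(1) critical value, and, unlike the Wald interval, it respects the parameter's bounds.

```python
import math

def loglik(p, k, n):
    """Binomial log-likelihood (up to a constant) for k successes in n."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

def profile_ci(k, n, crit=3.841458820694124):
    """95% profile-likelihood CI: all p with
    2 * (loglik(p_hat) - loglik(p)) <= chi-square(1) critical value."""
    p_hat = k / n
    target = loglik(p_hat, k, n) - crit / 2
    def bisect(lo, hi, rising):
        # Find the root of loglik(p) = target on a monotone stretch.
        for _ in range(100):
            mid = (lo + hi) / 2
            below = loglik(mid, k, n) < target
            if below == rising:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    return bisect(1e-12, p_hat, True), bisect(p_hat, 1 - 1e-12, False)

def wald_ci(k, n):
    """Large-sample Wald CI for comparison; symmetric around p_hat."""
    p_hat = k / n
    half = 1.959963984540054 * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

pl_lo, pl_hi = profile_ci(8, 10)
wd_lo, wd_hi = wald_ci(8, 10)
```

With 8 successes in 10 trials, the Wald upper limit spills past 1 while the profile-likelihood interval is asymmetric and stays inside (0, 1), the kind of distinction the abstract contrasts between the two approaches.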


Subjects
Confidence Intervals, Psychological Models, Statistical Data Interpretation, Likelihood Functions, Monte Carlo Method
16.
Multivariate Behav Res ; 51(6): 719-739, 2016.
Article in English | MEDLINE | ID: mdl-27754699

ABSTRACT

When statistical models are employed to provide a parsimonious description of empirical relationships, the extent to which strong conclusions can be drawn rests on quantifying the uncertainty in parameter estimates. In multiple linear regression (MLR), regression weights carry two kinds of uncertainty represented by confidence sets (CSs) and exchangeable weights (EWs). Confidence sets quantify uncertainty in estimation whereas the set of EWs quantifies uncertainty in the substantive interpretation of regression weights. As CSs and EWs share certain commonalities, we clarify the relationship between these two kinds of uncertainty about regression weights. We introduce a general framework describing how CSs and the set of EWs for regression weights are estimated from the likelihood-based and Wald-type approach, and establish the analytical relationship between CSs and sets of EWs. With empirical examples on posttraumatic growth of caregivers (Cadell et al., 2014; Schneider, Steele, Cadell & Hemsworth, 2011) and on graduate grade point average (Kuncel, Hezlett & Ones, 2001), we illustrate the usefulness of CSs and EWs for drawing strong scientific conclusions. We discuss the importance of considering both CSs and EWs as part of the scientific process, and provide an Online Appendix with R code for estimating Wald-type CSs and EWs for k regression weights.


Subjects
Linear Models, Multivariate Analysis, Algorithms, Caregivers/psychology, Statistical Data Interpretation, Graduate Education, Educational Status, Humans, Likelihood Functions, Software, Psychological Stress, Uncertainty
17.
Soc Personal Psychol Compass ; 10(3): 150-163, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26985234

ABSTRACT

Mediation analysis is a popular framework for identifying underlying mechanisms in social psychology. In the context of simple mediation, we review and discuss the implications of three facets of mediation analysis: (a) conceptualization of the relations between the variables, (b) statistical approaches, and (c) relevant elements of design. We also highlight the issue of equivalent models that are inherent in simple mediation. The extent to which results are meaningful stem directly from choices regarding these three facets of mediation analysis. We conclude by discussing how mediation analysis can be better applied to examine causal processes, highlight the limits of simple mediation, and make recommendations for better practice.
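The statistical core of simple mediation can be sketched as follows (simulated data and simple-regression slopes for illustration; with a direct x-to-y effect present, the b path would need to be estimated controlling for x):

```python
import random

rng = random.Random(9)
n = 500
# Simulated data consistent with a simple mediation chain x -> m -> y
# (true a = 0.5, true b = 0.5, no direct effect of x on y).
x = [rng.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]
y = [0.5 * mi + rng.gauss(0, 1) for mi in m]

def slope(pred, out):
    """OLS slope of out on a single predictor."""
    mp, mo = sum(pred) / len(pred), sum(out) / len(out)
    num = sum((p - mp) * (o - mo) for p, o in zip(pred, out))
    den = sum((p - mp) ** 2 for p in pred)
    return num / den

# Indirect effect a*b with a percentile bootstrap CI.
ab_hat = slope(x, m) * slope(m, y)
boot = []
for _ in range(1000):
    idx = [rng.randrange(n) for _ in range(n)]
    bx = [x[i] for i in idx]
    bm = [m[i] for i in idx]
    by = [y[i] for i in idx]
    boot.append(slope(bx, bm) * slope(bm, by))
boot.sort()
ci_lo, ci_hi = boot[25], boot[974]  # 95% percentile bootstrap interval
```

Note that an equivalent model with the roles of m and y exchanged would fit these data descriptions equally well, which is the equivalent-models caveat the abstract raises: the statistics alone cannot establish the causal ordering.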

18.
Am J Hosp Palliat Care ; 33(6): 574-84, 2016 Jul.
Article in English | MEDLINE | ID: mdl-26169520

ABSTRACT

The purpose of this study was to identify predictors of preference for hospice care and explore whether the effects of these predictors were moderated by race. METHODS: An analysis of the North Carolina AARP End of Life Survey (N = 3035) was conducted using multinomial logistic modeling to identify predictors of preference for hospice care. Response options included yes, no, or don't know. RESULTS: Fewer Black respondents reported a preference for hospice (63.8% vs 79.2% for White respondents, P < .001). While the proportions of Black and White respondents expressing a clear preference against hospice were nearly equal (4.5% and 4.0%, respectively), Black individuals were nearly twice as likely to report a preference of "don't know" (31.5% vs 16.8%). Gender, race, age, income, knowledge of Medicare coverage of hospice, presence of an advance directive, end-of-life care concerns, and religiosity/spirituality predicted hospice care preference. The effect of religiosity/spirituality, however, was moderated by race: religiosity/spirituality promoted hospice care preference among White respondents but not Black respondents. CONCLUSIONS: Uncertainty about hospice among African Americans may contribute to disparities in utilization. Efforts to improve access to hospice should consider pre-existing preferences for end-of-life care and account for the complex demographic, social, and cultural factors that help shape these preferences.


Subjects
Black or African American/statistics & numerical data, Hospice Care/statistics & numerical data, Medicare/statistics & numerical data, Patient Preference/statistics & numerical data, White People/statistics & numerical data, Advance Directives/statistics & numerical data, Black or African American/psychology, Age Factors, Aged, Aged, 80 and over, Health Knowledge, Attitudes, Practice, Hospice Care/psychology, Humans, Male, Middle Aged, Patient Preference/psychology, Religion, Sex Factors, Socioeconomic Factors, United States, White People/psychology
19.
Psychometrika ; 80(4): 1123-45, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25925009

ABSTRACT

Structural equation models (SEM) are widely used for modeling complex multivariate relationships among measured and latent variables. Although several analytical approaches to interval estimation in SEM have been developed, a comprehensive review of these methods has been lacking. We review the popular Wald-type and lesser-known likelihood-based methods in linear SEM, emphasizing profile likelihood-based confidence intervals (CIs). Existing algorithms for computing profile likelihood-based CIs are described, including two newer algorithms which are extended to construct profile likelihood-based confidence regions (CRs). Finally, we illustrate the use of these CIs and CRs with two empirical examples, and provide practical recommendations on when to use Wald-type CIs and CRs versus profile likelihood-based CIs and CRs. OpenMx example code is provided in an Online Appendix for constructing profile likelihood-based CIs and CRs for SEM.


Subjects
Algorithms, Confidence Intervals, Likelihood Functions, Statistical Models, Computer Simulation, Humans, Psychometrics/statistics & numerical data
20.
Support Care Cancer ; 23(3): 809-18, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25194877

ABSTRACT

PURPOSE: Knowing how to improve the dying experience for patients with end-stage cancer is essential for cancer professionals. However, there is little evidence on the relationship between clinically relevant factors and quality of death. Also, while hospice has been linked with improved outcomes, our understanding of factors that contribute to a "good death" when hospice is involved remains limited. This study (1) identified correlates of a good death and (2) provided evidence on the impact of hospice on quality of death. METHODS: Using data from a survey of US households affected by cancer (N = 930, response rate 51%), we fit regression models with a subsample of 158 respondents who had experienced the death of a family member with cancer. Measures included quality of death (good/bad) and clinically relevant factors including: hospice involvement, symptoms during treatment, whether wishes were followed, provider knowledge/expertise, and compassion. RESULTS: Respondents were 60% female, 89% White, and averaged 57 years old. Decedents were most often a respondent's spouse (46%). While 73% of respondents reported a good death, Hispanics were less likely to experience a good death (p = 0.007). Clinically relevant factors, including hospice, were associated with good death (p < 0.05), with the exception of whether the physician said the cancer was curable or fatal. With adjustments, perception of provider knowledge/expertise was the only clinical factor that remained associated with good death. CONCLUSIONS: Enhanced provider training/communication, referrals to hospice, and greater attention to symptom management may facilitate improved quality of dying. Additionally, the cultural relevance of the concept of a "good death" warrants further research.


Subjects
Death, Family, Hospice Care/standards, Neoplasms/psychology, Neoplasms/therapy, Adult, Aged, Aged, 80 and over, Attitude to Death, Communication, Culture, Data Collection, Family/psychology, Female, Hospices/standards, Humans, Male, Middle Aged, Neoplasms/mortality, Palliative Care/standards, Professional-Family Relations