Results 1 - 20 of 139
1.
J Alzheimers Dis ; 99(1): 321-332, 2024.
Article in English | MEDLINE | ID: mdl-38669544

ABSTRACT

Background: Practice effects on cognitive testing in mild cognitive impairment (MCI) and Alzheimer's disease (AD) remain understudied, especially in how they compare with biomarkers of AD. Objective: The current study sought to add to this growing literature. Methods: Cognitively intact older adults (n = 68), those with amnestic MCI (n = 52), and those with mild AD (n = 45) completed a brief battery of cognitive tests at baseline and again after one week; they also completed a baseline amyloid PET scan, a baseline MRI, and a baseline blood draw to determine APOE ɛ4 status. Results: The intact participants showed significantly higher baseline cognitive scores and larger practice effects than the other two groups on overall composite measures. Those with MCI showed significantly higher baseline scores and larger practice effects than AD participants on the composite. For amyloid deposition, the intact participants had significantly less tracer uptake, whereas MCI and AD participants were comparable. For total hippocampal volumes, all three groups differed significantly in the expected direction (intact > MCI > AD). For APOE ɛ4, the intact participants had significantly fewer copies of ɛ4 than the MCI and AD groups. The effect sizes of the baseline cognitive scores and practice effects were comparable, and they were significantly larger than the effect sizes of the biomarkers in 7 of the 9 comparisons. Conclusion: Baseline cognition and short-term practice effects appear to be sensitive markers of late-life cognitive disorders, as they separated the groups better than commonly used AD biomarkers. Further development of baseline cognition and short-term practice effects as tools for clinical diagnosis, prognosis, and enrichment of clinical trials seems warranted.


Subjects
Alzheimer Disease; Biomarkers; Cognitive Dysfunction; Magnetic Resonance Imaging; Neuropsychological Tests; Positron-Emission Tomography; Humans; Alzheimer Disease/blood; Alzheimer Disease/diagnosis; Alzheimer Disease/diagnostic imaging; Male; Female; Aged; Cognitive Dysfunction/diagnosis; Cognitive Dysfunction/blood; Biomarkers/blood; Aged, 80 and over; Apolipoprotein E4/genetics; Practice, Psychological; Cognition/physiology; Hippocampus/diagnostic imaging; Hippocampus/pathology
2.
Front Bioinform ; 4: 1305969, 2024.
Article in English | MEDLINE | ID: mdl-38390304

ABSTRACT

The rise of research synthesis and systematic reviews over the last 25 years has been aided by a series of software packages providing simple, accessible graphical user interfaces (GUIs) that novice analysts and users find intuitive. Development of many of these packages has been abandoned over time for a variety of reasons, leaving a gap in the software infrastructure available for meta-analysis. To meet the continued demand for a GUI-based meta-analytic system, we have now released MetaWin 3 as free, open-source, multi-platform software. MetaWin 3 is written in Python and was developed from scratch rather than building on the codebase of earlier versions. The code is available on GitHub, with pre-compiled executables for both Windows and macOS available from the MetaWin website. MetaWin includes standardized effect size calculations and exploratory and publication bias analyses, and it supports both simple and complex explanatory models of variation within a meta-analytic framework, including meta-regression, using traditional least-squares/moments estimation.

3.
Res Synth Methods ; 15(3): 500-511, 2024 May.
Article in English | MEDLINE | ID: mdl-38327122

ABSTRACT

Publication selection bias undermines the systematic accumulation of evidence. To assess the extent of this problem, we survey over 68,000 meta-analyses containing over 700,000 effect size estimates from medicine (67,386 meta-analyses / 597,699 estimates), environmental sciences (199 / 12,707), psychology (605 / 23,563), and economics (327 / 91,421). Our results indicate that meta-analyses in economics are the most severely contaminated by publication selection bias, closely followed by meta-analyses in environmental sciences and psychology, whereas meta-analyses in medicine are contaminated the least. After adjusting for publication selection bias, the median probability of the presence of an effect decreased from 99.9% to 29.7% in economics, from 98.9% to 55.7% in psychology, from 99.8% to 70.7% in environmental sciences, and from 38.0% to 29.7% in medicine. The median absolute effect sizes (in terms of standardized mean differences) decreased from d = 0.20 to d = 0.07 in economics, from d = 0.37 to d = 0.26 in psychology, from d = 0.62 to d = 0.43 in environmental sciences, and from d = 0.24 to d = 0.13 in medicine.
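The survey's Bayesian model-averaging machinery is not reproduced here, but a minimal regression-based bias adjustment in the same spirit can be sketched with the metafor package; the simulated small-study effect and the PET-style meta-regression below are illustrative assumptions, not the authors' method.

```r
# PET-style publication-bias adjustment (illustrative sketch, not the
# authors' model-averaging procedure). Requires the metafor package.
library(metafor)

set.seed(1)
k  <- 50
vi <- runif(k, 0.01, 0.2)   # sampling variances of k hypothetical studies
# Simulated small-study effect: observed effects grow with their standard
# error, as publication selection tends to produce (true effect = 0.1).
yi <- rnorm(k, mean = 0.1 + 0.5 * sqrt(vi), sd = sqrt(vi))

naive <- rma(yi, vi)                    # unadjusted random-effects estimate
pet   <- rma(yi, vi, mods = ~ sqrt(vi)) # PET: intercept = bias-adjusted effect

round(c(naive = unname(coef(naive)), adjusted = unname(coef(pet)[1])), 3)
```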


Subjects
Economics; Meta-Analysis as Topic; Psychology; Publication Bias; Humans; Ecology; Research Design; Selection Bias; Probability; Medicine
4.
Entropy (Basel) ; 26(1), 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38275503

ABSTRACT

The paper makes the case that current discussions of replicability and the abuse of significance testing have overlooked a more general contributor to the untrustworthiness of published empirical evidence: the uninformed, recipe-like implementation of statistical modeling and inference. It is argued that this contributes to the untrustworthiness problem in several different ways, including [a] statistical misspecification, [b] unwarranted evidential interpretations of frequentist inference results, and [c] questionable modeling strategies that rely on curve-fitting. What is more, the alternative proposals to replace or modify frequentist testing, including [i] replacing p-values with observed confidence intervals and effect sizes and [ii] redefining statistical significance, will not address the untrustworthiness-of-evidence problem, since they are equally vulnerable to [a]-[c]. The paper calls for distinguishing unduly data-dependent 'statistical results', such as a point estimate, a p-value, or an accept/reject H0 decision, from 'evidence for or against inferential claims'. The post-data severity (SEV) evaluation of accept/reject H0 results converts them into evidence for or against germane inferential claims. These claims can be used to address or elucidate several foundational issues, including (i) statistical vs. substantive significance, (ii) the large-n problem, and (iii) the replicability of evidence. The SEV perspective also sheds light on the impertinence of the proposed alternatives [i]-[ii] and oppugns the alleged arbitrariness of framing H0 and H1, which is often exploited to undermine the credibility of frequentist testing.
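To make the severity idea concrete, here is a minimal numeric sketch (my own notation, not the paper's): for a one-sided Normal test of H0: mu <= mu0 with known sigma, the post-data severity of the claim mu > mu1 after rejecting H0 is the probability that the test statistic would have been smaller than the one observed were mu only mu1.

```r
# Post-data severity for the claim mu > mu1 after rejecting H0: mu <= mu0
# in a one-sided Normal test with known sigma (illustrative sketch).
severity <- function(xbar, mu1, sigma, n) {
  pnorm((xbar - mu1) / (sigma / sqrt(n)))
}

# Example: n = 100, sigma = 10, observed mean 2.5, mu0 = 0, so z = 2.5 and
# H0 is rejected. Severity shows which discrepancies from 0 the data warrant.
sapply(c(0.5, 1, 2, 3), function(mu1) severity(2.5, mu1, 10, 100))
# High severity for mu > 0.5 (~0.98) or mu > 1 (~0.93), but only ~0.31 for
# mu > 3: the same "significant" result licenses some claims and not others.
```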

5.
Perception ; 53(3): 208-210, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38055992

ABSTRACT

The replication crisis has taught us to expect small-to-medium effects in psychological research. But this expectation is based on effect sizes calculated over single variables. Mahalanobis D, the multivariate equivalent of Cohen's d, can enable very large group differences to emerge from a collection of small-to-medium effects (here, reanalysing multivariate datasets from synaesthetes and controls). The use of multivariate effect sizes is not a sleight of hand; it may instead be a truer reflection of the degree of psychological difference between people, one that has been largely underappreciated.
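A minimal sketch of the statistic in question, with simulated data standing in for the synaesthesia datasets: Mahalanobis D standardizes the vector of group mean differences against the pooled covariance matrix, so several modest univariate d values on near-independent variables combine into a much larger D.

```r
# Mahalanobis D: multivariate analogue of Cohen's d (simulated data).
set.seed(42)
p  <- 10                                      # ten variables, small effects each
n  <- 200
g1 <- matrix(rnorm(n * p, mean = 0.0), n, p)
g2 <- matrix(rnorm(n * p, mean = 0.3), n, p)  # univariate d ~ 0.3 per variable

diff   <- colMeans(g2) - colMeans(g1)
pooled <- ((n - 1) * cov(g1) + (n - 1) * cov(g2)) / (2 * n - 2)
D <- sqrt(drop(t(diff) %*% solve(pooled) %*% diff))

round(c(median_univariate_d = median(diff / sqrt(diag(pooled))),
        Mahalanobis_D = D), 2)
# Ten near-independent d ~ 0.3 effects yield D ~ 0.3 * sqrt(10) ~ 0.95.
```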


Subjects
Cognition; Color Perception; Humans; Synesthesia
6.
Qual Life Res ; 33(2): 293-315, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37702809

ABSTRACT

PURPOSE: The objective of this systematic review was to describe the prevalence and magnitude of response shift effects for different response shift methods, populations, study designs, and patient-reported outcome measures (PROMs). METHODS: A literature search was performed in MEDLINE, PsycINFO, CINAHL, EMBASE, Social Science Citation Index, and Dissertations & Theses Global to identify longitudinal quantitative studies that examined response shift using PROMs, published before 2021. The magnitude of each response shift effect (effect size, R-squared, or percentage of respondents with response shift) was ascertained from reported statistical information or as stated in the manuscript. Prevalence and magnitudes of response shift effects were summarized at two levels of analysis (study and effect levels), separately for recalibration and reprioritization/reconceptualization, and for different response shift methods and population, study design, and PROM characteristics. Analyses were conducted twice: (a) including all studies and samples, and (b) including only unrelated studies and independent samples. RESULTS: Of the 150 included studies, 130 (86.7%) detected response shift effects. Of the 4,868 effects investigated, 793 (16.3%) revealed response shift. Effect sizes could be determined for 105 (70.0%) of the studies, covering a total of 1,130 effects, of which 537 (47.5%) indicated response shift. Whereas effect sizes varied widely, most median recalibration effect sizes (Cohen's d) were between 0.20 and 0.30, and median reprioritization/reconceptualization effect sizes rarely exceeded 0.15, across the characteristics examined. Similar results were obtained from unrelated studies. CONCLUSION: The results draw attention to the need to understand variability in response shift results: who experiences response shifts, to what extent, and under which circumstances?


Subjects
Quality of Life; Research Design; Humans; Quality of Life/psychology; Patient Reported Outcome Measures
7.
Behav Modif ; 48(1): 51-74, 2024 01.
Article in English | MEDLINE | ID: mdl-37650389

ABSTRACT

Single-case research is a viable way to obtain evidence for social and psychological interventions at the individual level. Across single-case research studies, various analysis strategies are employed, ranging from visual analysis to the calculation of effect sizes. To calculate effect sizes in studies with few measurements per time period (<40 data points, with a minimum of five data points in each phase), non-parametric indices such as Nonoverlap of All Pairs (NAP) and Tau-U are recommended. However, both indices have restrictions. This article discusses the restrictions of NAP and Tau-U and presents the description, calculation, and benefits of an additional effect size, the Typicality of Level Change (TLC) index. In comparison with NAP and Tau-U, the TLC index is more closely aligned with visual analysis, is not restricted by a ceiling effect, and does not overcompensate for problematic trends in the data. The TLC index is also sensitive to the typicality of an effect. TLC is an important addition that eases the restrictions of current nonoverlap methods when comparing effect sizes between cases and studies.
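For context, the nonoverlap index that TLC is meant to supplement is easy to compute; the sketch below implements NAP for a two-phase (A/B) dataset with invented values (the TLC formula itself is not reproduced here).

```r
# Nonoverlap of All Pairs (NAP) for a two-phase single-case design.
# NAP = (pairs where B > A, with ties counted half) / (all A-B pairs).
nap <- function(phaseA, phaseB) {
  pairs <- outer(phaseB, phaseA, "-")          # every B value minus every A value
  (sum(pairs > 0) + 0.5 * sum(pairs == 0)) / length(pairs)
}

baseline     <- c(3, 4, 3, 5, 4)               # phase A (hypothetical)
intervention <- c(5, 6, 6, 7, 5, 8)            # phase B (hypothetical)
nap(baseline, intervention)                    # 1 = complete nonoverlap; 0.5 = chance
```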


Subjects
Research Design; Humans; Sample Size
8.
J Sport Exerc Psychol ; 45(5): 293-296, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37684013

ABSTRACT

Meta-analysis is a powerful tool in sport and exercise psychology. However, it has a number of pitfalls, some of which lead to ill-advised comparisons and overestimation of effects. The impetus for this research note is a recent systematic review of meta-analyses that examined the correlates of sport performance and fell foul of some of these pitfalls. Although the systematic review potentially has great value for researchers and practitioners alike, it treats effects from correlational and intervention studies as yielding equivalent information, double-counts multiple studies, and uses an effect size for correlational studies (Cohen's d) that represents an extreme contrast of unclear practical relevance. These issues affect the interpretability, bias, and usefulness of the findings. This methodological note explains each pitfall and illustrates the use of an appropriate equivalent effect size for correlational studies (Mathur and VanderWeele's d) to help researchers avoid similar issues in future work.
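The contrast at issue can be made concrete (a sketch under my own assumptions, not the note's worked example): converting a correlation r to a d-type effect size amounts to standardizing the outcome difference implied by a chosen contrast on the predictor, and the conventional formula d = 2r / sqrt(1 - r^2) corresponds to the extreme contrast of groups two predictor SDs apart.

```r
# Converting a correlation r to a d-type effect size for a chosen contrast
# on the predictor (illustrative; the specific contrast recommended by
# Mathur & VanderWeele is an assumption here, not quoted from their paper).
r_to_d <- function(r, contrast_sd = 2) {
  contrast_sd * r / sqrt(1 - r^2)   # standardized outcome difference for
}                                    # groups `contrast_sd` SDs apart on X

r <- 0.30
r_to_d(r, contrast_sd = 2)   # ~0.63: the conventional, extreme 2-SD contrast
r_to_d(r, contrast_sd = 1)   # ~0.31: a more modest 1-SD contrast
```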

9.
Struct Equ Modeling ; 30(4): 672-685, 2023.
Article in English | MEDLINE | ID: mdl-37588162

ABSTRACT

The effect of an independent variable on random slopes in growth modeling with latent variables is conventionally used to examine predictors of change over the course of a study. This tutorial demonstrates that the same effect of a covariate on growth can be obtained by using final-status centering for the parameterization and regressing the random intercepts (or the intercept factor scores) on both the independent variable and a baseline covariate, the framework used to study change with classical regression analysis. Examples illustrate the application of an intercept-focused approach to obtain effect sizes (the unstandardized regression coefficient, the standardized regression coefficient, the squared semipartial correlation, and Cohen's f²) that estimate the same parameters as the respective effect sizes from a classical regression analysis. Moreover, statistical power to detect the effect of the predictor on growth was greater when using random intercepts than when using the conventional random slopes.
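One of the effect sizes named above has a simple closed form worth recording: Cohen's f² for a focal predictor compares the R² of models with and without it. The generic regression sketch below uses simulated data and ordinary lm() fits, not the tutorial's latent-variable code.

```r
# Cohen's f^2 for a focal predictor: (R2_full - R2_reduced) / (1 - R2_full).
set.seed(7)
n <- 300
baseline  <- rnorm(n)                      # baseline covariate
predictor <- rnorm(n)                      # independent variable of interest
outcome   <- 0.5 * baseline + 0.3 * predictor + rnorm(n)

full    <- lm(outcome ~ baseline + predictor)
reduced <- lm(outcome ~ baseline)

r2f <- summary(full)$r.squared
r2r <- summary(reduced)$r.squared
f2  <- (r2f - r2r) / (1 - r2f)
round(f2, 3)   # ~0.09 under this data-generating model
```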

10.
Am J Hum Genet ; 110(8): 1319-1329, 2023 08 03.
Article in English | MEDLINE | ID: mdl-37490908

ABSTRACT

Polygenic scores (PGSs) have emerged as a standard approach to predict phenotypes from genotype data in a wide array of applications, from socio-genomics to personalized medicine. Traditional PGSs assume genotype data to be error-free, ignoring possible errors and uncertainties introduced by genotyping, sequencing, and/or imputation. In this work, we investigate the effects of genotyping error due to low-coverage sequencing on PGS estimation. We leverage SNP array and low-coverage whole-genome sequencing data (lcWGS, median coverage 0.04×) of 802 individuals from the Dana-Farber PROFILE cohort to show that PGS error correlates with sequencing depth (p = 1.2 × 10⁻⁷). We develop a probabilistic approach that incorporates genotype error in PGS estimation to produce well-calibrated PGS credible intervals, and we show that the probabilistic approach increases classification accuracy by up to 6% compared with traditional PGSs that ignore genotyping error. Finally, we use simulations to explore the combined effect of genotyping and effect size errors and their implications for PGS-based risk stratification. Our results illustrate the importance of considering genotyping error as a source of PGS error, especially for cohorts with varying genotyping technologies and/or low-coverage sequencing.
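The general idea of propagating genotype uncertainty into a score can be sketched with invented numbers (a simplification; this is not the authors' calibration procedure): given per-variant genotype posterior probabilities, the PGS's expectation and variance follow from the expected dosages, assuming independence across variants.

```r
# Propagating genotype uncertainty into a polygenic score (toy sketch).
# post[j, g] = posterior probability that the genotype at SNP j is g in {0,1,2};
# beta[j] = GWAS effect size. Independence across SNPs is assumed for simplicity.
set.seed(3)
m    <- 5
beta <- rnorm(m, 0, 0.1)
post <- t(apply(matrix(runif(m * 3), m, 3), 1, function(x) x / sum(x)))

dosage_mean <- post %*% c(0, 1, 2)                  # E[g_j]
dosage_var  <- post %*% c(0, 1, 4) - dosage_mean^2  # Var[g_j]

pgs_mean <- sum(beta * dosage_mean)
pgs_sd   <- sqrt(sum(beta^2 * dosage_var))
c(estimate = pgs_mean,
  lower95  = pgs_mean - 1.96 * pgs_sd,   # crude normal-approximation interval
  upper95  = pgs_mean + 1.96 * pgs_sd)
```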


Subjects
Genomics; Polymorphism, Single Nucleotide; Uncertainty; Genotype; Genomics/methods; Whole Genome Sequencing; Polymorphism, Single Nucleotide/genetics
11.
Front Psychol ; 14: 1185012, 2023.
Article in English | MEDLINE | ID: mdl-37408962

ABSTRACT

Multivariate meta-analysis (MMA) is a powerful statistical technique that can provide more reliable and informative results than traditional univariate meta-analysis, allowing comparisons across outcomes with increased statistical power. However, implementing appropriate statistical methods for MMA can be challenging because of the specific tasks required in data preparation. The metavcov package provides tools for model preparation, data visualization, and missing-data handling that are not available in other accessible software, and it supplies the constructs needed to estimate coefficients with other well-established packages. For model preparation, users can compute effect sizes of various types and their variance-covariance matrices, including correlation coefficients, standardized mean differences, mean differences, log odds ratios, log risk ratios, and risk differences. The package provides a tool for plotting the confidence intervals of the primary studies and the overall estimates. When specific effect sizes are missing, single imputation is available at the model preparation stage; a multiple imputation method is also available for pooling results in a statistically principled manner from models of the user's choice. The package is demonstrated in two real data applications and in a simulation study assessing methods for handling missing data.

12.
J Intell ; 11(6), 2023 May 30.
Article in English | MEDLINE | ID: mdl-37367506

ABSTRACT

Mindset theory assumes that students' beliefs about their intelligence (whether it is fixed or can grow) affect their academic performance. Based on this assumption, mindset theorists have developed growth mindset interventions to teach students that their intelligence or another attribute can be developed, with the goal of improving academic outcomes. Though many papers have reported benefits from growth mindset interventions, others have reported no effects or even detrimental effects. Recently, proponents of mindset theory have called for a "heterogeneity revolution" to understand when growth mindset interventions are effective and when, and for whom, they are not. We sought to examine the whole picture of heterogeneity of treatment effects, including benefits, lack of impact, and potential detriments of growth mindset interventions on academic performance. We used a recently proposed approach that considers persons as effect sizes; this approach can reveal individual-level heterogeneity that is often lost in aggregate data analyses. Across three papers, we find that this approach reveals substantial individual-level heterogeneity unobservable at the group level, with many students and teachers exhibiting mindset and performance outcomes that run counter to the original authors' claims. Understanding and reporting heterogeneity, including benefits, null effects, and detriments, will lead to better guidance for educators and policymakers considering the role of growth mindset interventions in schools.

13.
Am J Hum Genet ; 110(6): 927-939, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37224807

ABSTRACT

Genome-wide association studies (GWASs) have identified thousands of variants for disease risk. These studies have predominantly been conducted in individuals of European ancestries, which raises questions about their transferability to individuals of other ancestries. Of particular interest are admixed populations, usually defined as populations with recent ancestry from two or more continental sources. Admixed genomes contain segments of distinct ancestries that vary in composition across individuals in the population, allowing the same allele to confer disease risk on different ancestral backgrounds. This mosaicism raises unique challenges for GWASs in admixed populations, such as the need to correctly adjust for population stratification. In this work, we quantify the impact on association statistics of differences in estimated allelic effect sizes for risk variants between ancestry backgrounds. Specifically, while the possibility of estimated allelic effect-size heterogeneity by ancestry (HetLanc) can be modeled when performing a GWAS in admixed populations, the extent of HetLanc needed to overcome the penalty from an additional degree of freedom in the association statistic has not been thoroughly quantified. Using extensive simulations of admixed genotypes and phenotypes, we find that controlling for and conditioning effect sizes on local ancestry can reduce statistical power by up to 72%. This finding is especially pronounced in the presence of allele frequency differentiation. We replicate the simulation results using 4,327 African-European admixed genomes from the UK Biobank for 12 traits and find that, for most significant SNPs, HetLanc is not large enough for GWASs to benefit from modeling heterogeneity in this way.
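The trade-off being quantified can be sketched schematically (toy data and variable names are my own): the 2-degree-of-freedom model fits ancestry-specific allele dosages, and comparing it with the pooled 1-degree-of-freedom model shows the cost of the extra parameter when effects are in fact homogeneous.

```r
# Schematic of modeling effect-size heterogeneity by local ancestry
# (HetLanc-style); the dosage split below is a deliberately crude toy.
set.seed(11)
n    <- 5000
anc  <- rbinom(n, 2, 0.5)        # copies of ancestry-A background at the locus
g    <- rbinom(n, 2, 0.2)        # total risk-allele dosage
g_a  <- pmin(g, anc)             # dosage carried on ancestry-A background
g_b  <- g - g_a                  # dosage carried on ancestry-B background
y    <- 0.05 * g + rnorm(n)      # homogeneous true effect across ancestries

pooled <- lm(y ~ g)              # 1-df genotype test
bydose <- lm(y ~ g_a + g_b)      # 2-df ancestry-specific effects
anova(pooled, bydose)            # the extra df buys nothing when effects are equal
```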


Subjects
Genetics, Population; Genome-Wide Association Study; Humans; Genome-Wide Association Study/methods; Gene Frequency/genetics; Genotype; Phenotype; Polymorphism, Single Nucleotide/genetics
14.
Child Abuse Negl ; 139: 106095, 2023 05.
Article in English | MEDLINE | ID: mdl-36989983

ABSTRACT

Scholarly journals increasingly request that authors include effect size (ES) estimates when reporting statistical results. However, there is little guidance on how authors should interpret ESs. Consequently, some authors provide no ES interpretation or, when interpretations are provided, fail to use appropriate reference groups, relying instead on the ES benchmarks suggested by Cohen (1988). After discussing the most commonly used ES estimates, we describe the method Cohen (1962) used to develop ES benchmarks (i.e., small, medium, and large) for use in power analyses and the limitations associated with these benchmarks. Next, we establish general benchmarks for family violence (FV) research. That is, we followed Cohen's approach to establishing his original ES benchmarks, using FV research published in 2021 in Child Abuse & Neglect, which produced a medium ES (d = 0.354) smaller than Cohen's recommended medium ES (d = 0.500). Then, we examined the ESs in different subspecialty areas of FV research to provide benchmarks for contextualized FV ESs and to provide information that can be used to conduct power analyses when planning future FV research. Finally, some of the challenges of developing ES benchmarks in any scholarly discipline are discussed. For professionals who are not well informed about ESs, the present review is designed to increase their understanding of ESs and of what ES benchmarks do (and do not) tell them about the meaningfulness of FV research findings.
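Cohen's 1962 approach, as described above, amounts to taking quantiles of the observed effects in a literature; a minimal sketch with invented d values might look like this.

```r
# Deriving empirical small/medium/large benchmarks from a literature's
# observed effects, in the spirit of Cohen (1962). These d values are invented.
observed_d <- c(0.05, 0.10, 0.18, 0.22, 0.25, 0.30, 0.35, 0.41,
                0.48, 0.55, 0.63, 0.80, 0.95, 1.10)

# 25th/50th/75th percentiles as field-specific small/medium/large benchmarks.
quantile(abs(observed_d), probs = c(0.25, 0.50, 0.75))
```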


Subjects
Child Abuse; Domestic Violence; Humans; Child; Benchmarking
15.
BMC Bioinformatics ; 24(1): 48, 2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36788550

ABSTRACT

BACKGROUND: An appropriate sample size is essential for obtaining a precise and reliable outcome from a study. In machine learning (ML), studies with inadequate samples suffer from overfitting and have a lower probability of producing true effects, while increasing the sample size improves prediction accuracy but may yield no meaningful change beyond a certain point. Existing statistical approaches that use the standardized mean difference, effect size, and statistical power to determine sample size are potentially biased due to miscalculations or missing experimental details. This study aims to design criteria for evaluating sample size in ML studies. We examined the average and grand effect sizes and the performance of five ML methods using simulated datasets and three real datasets to derive criteria for sample size. We systematically increased the sample size, starting from 16, by random sampling, and examined the impact of sample size on classifier performance and on both effect sizes. Tenfold cross-validation was used to quantify accuracy. RESULTS: The results demonstrate that effect sizes and classification accuracies increase, and the variances in effect sizes shrink, as samples are added when the dataset discriminates well between the two classes. By contrast, indeterminate datasets had poor effect sizes and classification accuracies, which did not improve with increasing sample size in either the simulated or the real datasets. A good dataset exhibited a significant difference between the average and grand effect sizes. We derived two criteria from these findings to assess a chosen sample size by combining the effect size and the ML accuracy. A sample size is considered suitable when it yields an adequate effect size (≥ 0.5) and ML accuracy (≥ 80%). Beyond such a sample size, adding samples brings little benefit, as it does not significantly change the effect size or the accuracy, so stopping there gives a good cost-benefit ratio. CONCLUSION: We believe these practical criteria can serve as a reference for both authors and editors in evaluating whether a selected sample size is adequate for a study.
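A schematic of the stopping rule implied by these criteria, with simulated two-class data, a logistic classifier, and a simple holdout in place of the authors' tenfold cross-validation; the thresholds (d >= 0.5, accuracy >= 80%) are taken from the abstract.

```r
# Check effect size and classifier accuracy as n grows; flag when both
# criteria from the abstract (d >= 0.5, accuracy >= 0.80) are met.
set.seed(5)
cohens_d <- function(a, b) {
  sp <- sqrt(((length(a) - 1) * var(a) + (length(b) - 1) * var(b)) /
             (length(a) + length(b) - 2))
  abs(mean(a) - mean(b)) / sp
}

for (n in c(16, 32, 64, 128, 256)) {
  x0 <- rnorm(n, 0); x1 <- rnorm(n, 2)          # two classes, true d = 2
  d  <- cohens_d(x0, x1)
  dat  <- data.frame(x = c(x0, x1), y = rep(0:1, each = n))
  test <- sample(nrow(dat), nrow(dat) %/% 5)    # holdout, not 10-fold CV
  fit  <- glm(y ~ x, binomial, data = dat[-test, ])
  acc  <- mean((predict(fit, dat[test, ], type = "response") > 0.5)
               == dat$y[test])
  cat(sprintf("n per class = %3d  d = %.2f  accuracy = %.2f  adequate = %s\n",
              n, d, acc, d >= 0.5 && acc >= 0.80))
}
```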


Subjects
Machine Learning; Research Design; Sample Size; Probability
16.
Behav Res Methods ; 55(2): 474-490, 2023 02.
Article in English | MEDLINE | ID: mdl-35292932

ABSTRACT

Researchers can generate bootstrap confidence intervals for some statistics in SPSS using the BOOTSTRAP command. However, this command can be applied only to selected procedures, and only to selected statistics within those procedures. We developed an extension command and prepared sample syntax files, based on existing approaches from the Internet, to illustrate how researchers can (a) generate a large number of nonparametric bootstrap samples, (b) run the desired analysis on all of these samples, and (c) form bootstrap confidence intervals for selected statistics using the OMS commands. We developed these tools to help researchers apply nonparametric bootstrapping to any statistic for which the method is appropriate, including statistics derived from other statistics, such as standardized effect size measures computed from t test results. We also discuss how researchers can extend the tools to other statistics and scenarios they encounter.
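The SPSS OMS workflow itself is not reproduced here; as a language-consistent parallel, the same nonparametric bootstrap of a standardized effect size from a two-sample t test can be sketched in R with the boot package (data and names are invented).

```r
# Nonparametric bootstrap CI for Cohen's d from a two-sample design
# (an R parallel to the SPSS/OMS workflow described in the abstract).
library(boot)

set.seed(9)
dat <- data.frame(score = c(rnorm(40, 10, 2), rnorm(40, 11, 2)),
                  group = rep(c("a", "b"), each = 40))

d_stat <- function(data, idx) {
  d  <- data[idx, ]
  a  <- d$score[d$group == "a"]; b <- d$score[d$group == "b"]
  sp <- sqrt(((length(a) - 1) * var(a) + (length(b) - 1) * var(b)) /
             (length(a) + length(b) - 2))
  (mean(b) - mean(a)) / sp
}

bt <- boot(dat, d_stat, R = 2000)
boot.ci(bt, type = "perc")    # percentile bootstrap CI for d
```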


Subjects
Confidence Intervals; Statistics as Topic
17.
Behav Res Methods ; 55(2): 646-656, 2023 02.
Article in English | MEDLINE | ID: mdl-35411476

ABSTRACT

The probability of superiority (PS) has been recommended as a simple-to-interpret effect size for comparing two independent samples, and several methods exist for computing the PS in that study design. However, educational and psychological interventions increasingly occur in clustered data contexts, and a review of the literature returned only one method for computing the PS in such contexts. In this paper, we propose a method for estimating the PS in clustered data contexts. Specifically, the proposal addresses study designs that compare two groups where group membership is determined at the cluster level. A cluster may be (i) a group of cases, each measured once, or (ii) a single case measured multiple times, resulting in longitudinal data. The proposal relies on nonparametric point estimates of the PS coupled with cluster-robust variance estimation, so the approach should remain adequate regardless of the distribution of the response data. Using Monte Carlo simulation, we show the approach to be unbiased for continuous and binary outcomes while maintaining adequate frequentist properties. Moreover, our proposal performs better than the single extant method we found in the literature. The proposal is simple to implement in commonplace statistical software, and we provide accompanying R code. Hence, it is our hope that the method we present helps applied researchers better estimate group differences when comparing two groups whose membership is determined at the cluster level.
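The point estimate at the heart of the proposal is the familiar nonparametric PS; in the sketch below, clusters are first reduced to their means as one simple way to respect cluster-level assignment. This is a stand-in illustration only; the authors' estimator and cluster-robust variance machinery are not reproduced.

```r
# Probability of superiority, PS = P(X > Y) + 0.5 * P(X = Y), with group
# membership assigned at the cluster level. Toy sketch: clusters reduced
# to their means (a simplification, not the authors' estimator).
set.seed(13)
k   <- 10                                      # clusters per group
trt <- replicate(k, mean(rnorm(8, 0.5)))       # cluster means, treatment
ctl <- replicate(k, mean(rnorm(8, 0.0)))       # cluster means, control

pairs <- outer(trt, ctl, "-")                  # all treatment-control pairs
ps <- (sum(pairs > 0) + 0.5 * sum(pairs == 0)) / length(pairs)
ps   # 0.5 = no separation; values near 1 favor treatment
```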


Subjects
Research Design; Software; Humans; Probability; Computer Simulation; Educational Status; Cluster Analysis; Monte Carlo Method
18.
Behav Res Methods ; 55(4): 1942-1964, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35798918

ABSTRACT

Multilevel models are used ubiquitously in the social and behavioral sciences, and effect sizes are critical for contextualizing results. A general framework of R-squared effect size measures for multilevel models has only recently been developed: Rights and Sterba (2019) distinguished each source of explained variance for each possible kind of outcome variance. Though researchers have long desired a comprehensive and coherent approach to computing R-squared measures for multilevel models, this framework has a steep learning curve. The purpose of this tutorial is to introduce and demonstrate a new R package, r2mlm, that automates the intensive computations involved in implementing the framework and provides accompanying graphics that visualize all multilevel R-squared measures together. We use accessible illustrations with open data and code to demonstrate how to use and interpret the package's output.
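A minimal usage sketch follows; the data are simulated, and the r2mlm() call pattern reflects the package documentation as I understand it, so it should be verified against the current release.

```r
# Sketch of computing the Rights & Sterba (2019) R-squared measures with
# r2mlm on a simulated random-intercept model (call pattern assumed).
library(lme4)
library(r2mlm)

set.seed(21)
cluster <- rep(1:30, each = 10)
x  <- rnorm(300)
u0 <- rnorm(30, 0, 0.5)[cluster]          # cluster-level random intercepts
y  <- 0.4 * x + u0 + rnorm(300)
d  <- data.frame(y, x, cluster)

m <- lmer(y ~ x + (1 | cluster), data = d)
r2mlm(m)   # decomposed R-squared measures plus the accompanying bar chart
```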


Subjects
Behavioral Sciences; Humans; Multilevel Analysis
19.
Perspect Psychol Sci ; 18(2): 508-512, 2023 03.
Article in English | MEDLINE | ID: mdl-36126652

ABSTRACT

In the January 2022 issue of Perspectives, Götz et al. argued that small effects are "the indispensable foundation for a cumulative psychological science." They supported this argument by claiming that (a) psychology, like genetics, consists of complex phenomena explained by additive small effects; (b) psychological research culture rewards large effects, which means small effects are being ignored; and (c) small effects become meaningful at scale and over time. We rebut these claims with three objections: First, the analogy between genetics and psychology is misleading; second, p values are the main currency for publication in psychology, meaning that any biases in the literature are (currently) caused by pressure to publish statistically significant results, not large effects; and third, claims that small effects are important and consequential must be supported by empirical evidence or, at least, a falsifiable line of reasoning. We believe that, if accepted uncritically, the arguments of Götz et al. could be used as a blanket justification for the importance of any and all "small" effects, thereby undermining best practices in effect size interpretation. We end with guidance on evaluating effect sizes in relative, not absolute, terms.


Subjects
Psychology; Humans
20.
Behav Res Methods ; 55(5): 2467-2484, 2023 08.
Article in English | MEDLINE | ID: mdl-36002625

ABSTRACT

The a priori calculation of statistical power has become common practice in the behavioral and social sciences for determining the sample size needed to detect an expected effect size with a certain probability (i.e., power). In multi-factorial repeated measures ANOVA, these calculations can be cumbersome, especially for higher-order interactions. For designs that involve only factors with two levels each, the paired t test can be used for power calculations, but some pitfalls need to be avoided. In this tutorial, we provide practical advice on how to express main and interaction effects in repeated measures ANOVA as single difference variables. In particular, we demonstrate how to calculate the effect size Cohen's d of this difference variable, either from the means, variances, and covariances of the conditions or by transforming partial η² or f from the ANOVA framework into d. With the effect size correctly specified, we then show how to use the t test for sample size considerations by means of an empirical example. The relevant R code for all example calculations covered in this article is provided in an online repository.
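A compact sketch of the core calculation with invented numbers: the difference variable's d follows from the condition means, variances, and covariance, and base R's power.t.test() then returns the required number of pairs.

```r
# Effect size of a paired difference and the sample size needed to detect it.
# Means, variances, and covariance of the two conditions are invented.
m1 <- 10.0; m2 <- 10.5        # condition means
v1 <- 4.0;  v2 <- 4.0         # condition variances
cv <- 2.0                     # covariance between conditions

sd_diff <- sqrt(v1 + v2 - 2 * cv)   # SD of the difference variable
d       <- (m2 - m1) / sd_diff      # Cohen's d of the difference (= 0.25)

power.t.test(delta = m2 - m1, sd = sd_diff, sig.level = 0.05,
             power = 0.80, type = "paired")   # solves for n (number of pairs)
```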


Subjects
Research Design; Humans; Sample Size; Probability; Analysis of Variance