Results 1 - 20 of 15,674
1.
JAMA Netw Open ; 7(9): e2432296, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39240561

ABSTRACT

Importance: Mega-trials can provide large-scale evidence on important questions. Objective: To explore how the results of mega-trials compare with the meta-analysis results of trials with smaller sample sizes. Data Sources: ClinicalTrials.gov was searched for mega-trials until January 2023. PubMed was searched until June 2023 for meta-analyses incorporating the results of the eligible mega-trials. Study Selection: Mega-trials were eligible if they were noncluster nonvaccine randomized clinical trials, had a sample size over 10 000, and had a peer-reviewed meta-analysis publication presenting results for the primary outcome of the mega-trials and/or all-cause mortality. Data Extraction and Synthesis: For each selected meta-analysis, we extracted results of smaller trials and mega-trials included in the summary effect estimate and combined them separately using random effects. These estimates were used to calculate the ratio of odds ratios (ROR) between mega-trials and smaller trials in each meta-analysis. Next, the RORs were combined using random effects. Risk of bias was extracted for each trial included in our analyses (or when not available, assessed only for mega-trials). Data analysis was conducted from January to June 2024. Main Outcomes and Measures: The main outcomes were the summary ROR for the primary outcome and all-cause mortality between mega-trials and smaller trials. Sensitivity analyses were performed with respect to the year of publication, masking, weight, type of intervention, and specialty. Results: Of 120 mega-trials identified, 41 showed a significant result for the primary outcome and 22 showed a significant result for all-cause mortality. 
In 35 comparisons of primary outcomes (including 85 point estimates from 69 unique mega-trials and 272 point estimates from smaller trials) and 26 comparisons of all-cause mortality (including 70 point estimates from 65 unique mega-trials and 267 point estimates from smaller trials), no difference existed between the outcomes of the mega-trials and the smaller trials for either the primary outcome (ROR, 1.00; 95% CI, 0.97-1.04) or all-cause mortality (ROR, 1.00; 95% CI, 0.97-1.04). For the primary outcomes, smaller trials published before the mega-trials had more favorable results than both the mega-trials (ROR, 1.05; 95% CI, 1.01-1.10) and the subsequent smaller trials published after the mega-trials (ROR, 1.10; 95% CI, 1.04-1.18). Conclusions and Relevance: In this meta-research analysis, meta-analyses of smaller studies showed overall comparable results with mega-trials, but smaller trials published before the mega-trials gave more favorable results than the mega-trials. These findings suggest that mega-trials need to be performed more often, given the relatively low number of mega-trials found, their low rates of significant results, and the fact that smaller trials published before mega-trials report more beneficial results than the mega-trials and the subsequent smaller trials.
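The ratio-of-odds-ratios computation described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code; the function names and the choice of DerSimonian-Laird random-effects pooling are assumptions:

```python
import math

def random_effects_pool(log_ors, ses):
    """DerSimonian-Laird random-effects pooling of log odds ratios."""
    w = [1.0 / se ** 2 for se in ses]
    fixed = sum(wi * b for wi, b in zip(w, log_ors)) / sum(w)
    q = sum(wi * (b - fixed) ** 2 for wi, b in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if df > 0 else 0.0   # between-trial variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * b for wi, b in zip(w_re, log_ors)) / sum(w_re)
    return pooled, math.sqrt(1.0 / sum(w_re))

def ratio_of_odds_ratios(mega, smaller):
    """ROR between mega-trials and smaller trials in one meta-analysis.

    `mega` and `smaller` are lists of (log_odds_ratio, standard_error)
    tuples; returns the ROR and its 95% CI.
    """
    b_mega, se_mega = random_effects_pool(*zip(*mega))
    b_small, se_small = random_effects_pool(*zip(*smaller))
    log_ror = b_mega - b_small
    se_ror = math.sqrt(se_mega ** 2 + se_small ** 2)
    return (math.exp(log_ror),
            (math.exp(log_ror - 1.96 * se_ror),
             math.exp(log_ror + 1.96 * se_ror)))
```

The study then pools the per-meta-analysis RORs with a second random-effects step, which reuses the same pooling function on the log scale.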


Subjects
Randomized Controlled Trials as Topic , Humans , Clinical Trials as Topic , Randomized Controlled Trials as Topic/statistics & numerical data , Sample Size
2.
AAPS J ; 26(6): 105, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39285085

ABSTRACT

A recent FDA draft guidance discusses statistical considerations for demonstrating comparability of cell and gene therapy products and processes. One experimental study described in the guidance is the split-apheresis design. The FDA draft guidance recommends a paired data analysis for such a design. This paper demonstrates that the paired analysis is underpowered for some quality attributes at practical sample sizes of three to five donors unless a significant portion of the variability is attributed to the donor. Adding historical lots from the pre-change process can increase the power for these attributes. This paper provides appropriate statistical methods for including this information.
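The claim that a paired analysis is underpowered at three to five donors unless the donor contributes much of the variability can be illustrated with a normal-approximation power calculation. This is a hedged sketch, not the paper's method; the parameterization (a `donor_var_frac` share of total variance) is an assumption, and exact small-sample inference would use the t distribution:

```python
from statistics import NormalDist

def paired_power(n_donors, effect, total_sd=1.0, donor_var_frac=0.5,
                 alpha=0.05):
    """Normal-approximation power of a paired comparison of pre- and
    post-change material from the same donors.

    donor_var_frac is the share of total variance attributed to the
    donor; pairing cancels it, so Var(difference) = 2*sd^2*(1 - frac).
    (The t distribution would be less favourable at n = 3-5 donors.)
    """
    z = NormalDist()
    sd_diff = total_sd * (2.0 * (1.0 - donor_var_frac)) ** 0.5
    ncp = effect * n_donors ** 0.5 / sd_diff
    crit = z.inv_cdf(1 - alpha / 2)
    return z.cdf(ncp - crit) + z.cdf(-ncp - crit)
```

With four donors and a one-SD effect, the power roughly doubles as the donor share of variance rises from 20% to 80%, matching the abstract's caveat.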


Subjects
Genetic Therapy , United States Food and Drug Administration , Humans , Genetic Therapy/methods , United States , Blood Component Removal/methods , Research Design , Cell- and Tissue-Based Therapy/methods , Sample Size
3.
Bioinformatics ; 40(9)2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39231036

ABSTRACT

MOTIVATION: Differential expression analysis in single-cell transcriptomics unveils cell type-specific responses to various treatments or biological conditions. To ensure the robustness and reliability of the analysis, it is essential to have a solid experimental design with ample statistical power and sample size. However, existing methods for power and sample size calculation often assume a specific distribution for single-cell transcriptomics data, potentially deviating from the true data distribution. Moreover, they commonly overlook cell-cell correlations within individual samples, posing challenges in accurately representing biological phenomena. Additionally, due to the complexity of deriving an analytic formula, most methods employ time-consuming simulation-based strategies. RESULTS: We propose an analytic-based method named scPS for calculating power and sample sizes based on generalized estimating equations. scPS stands out by making no assumptions about the data distribution and considering cell-cell correlations within individual samples. scPS is a rapid and powerful approach for designing experiments in single-cell differential expression analysis. AVAILABILITY AND IMPLEMENTATION: scPS is freely available at https://github.com/cyhsuTN/scPS and Zenodo https://zenodo.org/records/13375996.
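scPS itself is GEE-based and distribution-free; as a rough point of comparison, the classic design-effect correction below shows how cell-cell correlation within samples inflates the required number of samples. This is a generic sketch under normal-approximation assumptions, not the scPS algorithm:

```python
import math
from statistics import NormalDist

def effective_sample_size(n_samples, cells_per_sample, icc):
    """Effective number of independent cells once within-sample
    correlation (icc) is accounted for, via the classic design effect
    deff = 1 + (m - 1) * icc."""
    deff = 1.0 + (cells_per_sample - 1) * icc
    return n_samples * cells_per_sample / deff

def samples_needed(effect, sd, cells_per_sample, icc,
                   power=0.8, alpha=0.05):
    """Samples per group for a two-sample z-test on mean expression:
    inflate the independent-observation requirement by the design
    effect, then convert back to whole samples."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    n_indep = 2.0 * ((za + zb) * sd / effect) ** 2   # per group, independent obs
    deff = 1.0 + (cells_per_sample - 1) * icc
    return max(2, math.ceil(n_indep * deff / cells_per_sample))
```

Even a modest intra-sample correlation (e.g. 0.1 across 100 cells) multiplies the requirement roughly tenfold, which is why ignoring cell-cell correlation overstates power.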


Subjects
Gene Expression Profiling , Single-Cell Analysis , Single-Cell Analysis/methods , Sample Size , Gene Expression Profiling/methods , Software , Transcriptome , Humans , Algorithms
4.
PLoS One ; 19(9): e0307607, 2024.
Article in English | MEDLINE | ID: mdl-39288160

ABSTRACT

Advancements in sensor technology have revolutionized data generation. As a result, the study variable is often recorded together with several linearly related auxiliary variables because doing so is cost-effective and easy. These auxiliary variables are commonly observed as quantitative and qualitative (attribute) variables and are jointly used to estimate the study variable's population mean using a mixture estimator. For this purpose, this work proposes a family of generalized mixture estimators under stratified sampling to increase efficiency under symmetrical and asymmetrical distributions, and studies the estimator's behavior for different sample sizes with respect to its convergence to the Normal distribution. It is found that the proposed estimator estimates the population mean of the study variable with more precision than the competitor estimators under the Normal, Uniform, Weibull, and Gamma distributions. It is also revealed that the proposed estimator follows the Cauchy distribution when the sample size is less than 35; otherwise, it converges to normality. Furthermore, an application to two real-life datasets from the health and finance sectors is also presented to support the proposed estimator's significance.


Subjects
Models, Statistical , Sample Size , Humans , Algorithms , Random Allocation
5.
Trials ; 25(1): 608, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39261887

ABSTRACT

BACKGROUND: Multi-Arm, Multi-Stage (MAMS) clinical trial designs allow multiple therapies to be compared across a spectrum of clinical trial phases. MAMS designs fall under several overarching design groups, including adaptive designs (AD) and multi-arm (MA) designs. Combining factorial and MAMS trial designs can provide increased efficiency relative to fixed, traditional designs. We explore design choices associated with Factorial Adaptive Multi-Arm Multi-Stage (FAST) designs, which represent this combination of factorial and MAMS designs. METHODS: Simulation studies were conducted to assess the impact of the type of analyses, the timing of analyses, and the effect size observed across multiple outcomes on trial operating characteristics for a FAST design. Given the multiple outcome types assessed within the hypothetical trial, the primary analysis approach for each assessment varied depending on the data type. RESULTS: The simulation studies demonstrate that the proposed class of FAST trial designs can offer a framework that potentially improves on other trial designs, such as a MAMS or factorial trial. Further, we note that design implementation decisions, such as the timing and type of analyses conducted throughout the trial, can have a great impact on trial operating characteristics. CONCLUSIONS: Motivated by a trial currently under design, our work shows that the FAST category of trial can potentially offer benefits similar to both MAMS and factorial designs; however, the design aspects that can be included in a FAST trial need to be thoroughly explored during the planning phase.


Subjects
Clinical Trials as Topic , Computer Simulation , Research Design , Humans , Clinical Trials as Topic/methods , Data Interpretation, Statistical , Time Factors , Treatment Outcome , Endpoint Determination , Sample Size , Models, Statistical
6.
BMC Med Res Methodol ; 24(1): 197, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39251907

ABSTRACT

PURPOSE: In clinical research, there is an increasing need for new study designs that help to incorporate already available data. With the help of historical controls, existing information can be utilized to support a new study design, but its inclusion also carries a risk of bias in the study results. METHODS: To combine historical and randomized controls, we investigate the Fill-it-up design, which in a first step checks the comparability of the historical and randomized controls by performing an equivalence pre-test. If equivalence is confirmed, the historical control data are included in the new RCT. If equivalence cannot be confirmed, the historical controls are not considered at all and the randomization of the original study is extended. We investigate the performance of this study design in terms of type I error rate and power. RESULTS: We demonstrate how many patients need to be recruited in each of the two steps of the Fill-it-up design and show that the familywise error rate of the design is kept at 5%. The maximum sample size of the Fill-it-up design is larger than that of the single-stage design without historical controls and increases as the heterogeneity between the historical and concurrent controls increases. CONCLUSION: The two-stage Fill-it-up design represents a frequentist method for including historical control data in various study designs. As the maximum sample size of the design is larger, a robust prior belief is essential for its use. The design should therefore be seen as a way out in exceptional situations where a hybrid design is considered necessary.
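The two-stage decision rule can be sketched as below. This is a simplified illustration with a normal-approximation TOST equivalence pre-test; the design's actual error-rate calibration and sample size derivations are in the paper and are not reproduced here:

```python
import math
from statistics import NormalDist, mean, stdev

def tost_equivalent(hist, conc, margin, alpha=0.05):
    """Equivalence pre-test (two one-sided tests, normal approximation)
    on the difference between historical and concurrent control means."""
    se = math.sqrt(stdev(hist) ** 2 / len(hist) +
                   stdev(conc) ** 2 / len(conc))
    diff = mean(hist) - mean(conc)
    crit = NormalDist().inv_cdf(1 - alpha)
    # both one-sided tests must reject, i.e. diff lies inside (-margin, margin)
    return (diff + margin) / se > crit and (margin - diff) / se > crit

def fill_it_up_controls(hist, conc, extra_randomised, margin):
    """Step 2: pool historical with concurrent controls if equivalence
    was shown; otherwise discard the historical data and extend the
    randomization instead."""
    if tost_equivalent(hist, conc, margin):
        return list(conc) + list(hist)
    return list(conc) + list(extra_randomised)
```

The two-stage structure is why the maximum sample size exceeds the single-stage design: the extension arm must be recruited whenever the pre-test fails.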


Subjects
Randomized Controlled Trials as Topic , Research Design , Humans , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Sample Size , Historically Controlled Study , Control Groups
8.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248122

ABSTRACT

The geometric median, which is applicable to high-dimensional data, can be viewed as a generalization of the univariate median used in 1-dimensional data. It can be used as a robust estimator for identifying the location of multi-dimensional data and has a wide range of applications in real-world scenarios. This paper explores the problem of high-dimensional multivariate analysis of variance (MANOVA) using the geometric median. A maximum-type statistic that relies on the differences between the geometric medians among various groups is introduced. The distribution of the new test statistic is derived under the null hypothesis using Gaussian approximations, and its consistency under the alternative hypothesis is established. To approximate the distribution of the new statistic in high dimensions, a wild bootstrap algorithm is proposed and theoretically justified. Through simulation studies conducted across a variety of dimensions, sample sizes, and data-generating models, we demonstrate the finite-sample performance of our geometric median-based MANOVA method. Additionally, we implement the proposed approach to analyze a breast cancer gene expression dataset.
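The geometric median has no closed form but is standard to compute with Weiszfeld's algorithm, sketched here (the paper's maximum-type test statistic and wild bootstrap are not reproduced):

```python
import math

def geometric_median(points, tol=1e-9, max_iter=1000):
    """Weiszfeld's algorithm: an iteratively re-weighted mean converging
    to the point minimising the sum of Euclidean distances to the data."""
    dim = len(points[0])
    y = [sum(p[j] for p in points) / len(points) for j in range(dim)]  # centroid start
    for _ in range(max_iter):
        num, denom = [0.0] * dim, 0.0
        for p in points:
            d = math.dist(p, y)
            if d < tol:            # iterate hit a data point; stop there
                return list(p)
            w = 1.0 / d
            denom += w
            for j in range(dim):
                num[j] += w * p[j]
        y_new = [nj / denom for nj in num]
        if math.dist(y, y_new) < tol:
            return y_new
        y = y_new
    return y
```

Unlike the coordinate-wise median, this minimiser is rotation-equivariant, which is part of its appeal as a robust multivariate location estimate.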


Subjects
Algorithms , Breast Neoplasms , Computer Simulation , Humans , Multivariate Analysis , Breast Neoplasms/genetics , Models, Statistical , Female , Data Interpretation, Statistical , Gene Expression Profiling/statistics & numerical data , Sample Size , Biometry/methods
9.
Res Social Adm Pharm ; 20(11): 1070-1074, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39142906

ABSTRACT

The concept of saturation is commonly mentioned in pharmacy research, but recent debate among the applied qualitative research community challenges the appropriateness of this construct for many qualitative research efforts. This commentary begins by describing the origins of saturation as a grounded theory construct and discusses how saturation is currently being used. Three challenges related to the epistemological, methodological, and practical use of saturation by pharmacy researchers are discussed, along with how they relate to the goals and reporting quality of pharmacy practice research. The commentary describes how the concept of information power and established guidance on analysis quality can better justify sample sizes and inform decisions about when to cease further data collection, hopefully increasing the transparency of reporting and supporting rigorous and coherent analyses.


Subjects
Pharmacy Research , Research Design , Humans , Qualitative Research , Data Collection/methods , Grounded Theory , Sample Size
10.
Trials ; 25(1): 532, 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39128997

ABSTRACT

OBJECTIVE: To assess the cost-effectiveness of using cheaper-but-noisier outcome measures, such as a short questionnaire, for large simple clinical trials. BACKGROUND: To detect associations reliably, trials must avoid bias and random error. To reduce random error, we can increase the size of the trial and increase the accuracy of the outcome measurement process. However, with fixed resources, there is a trade-off between the number of participants a trial can enrol and the amount of information that can be collected on each participant during data collection. METHODS: To consider the effect on measurement error of using outcome scales with varying numbers of categories, we define and calculate the variance from categorisation that would be expected from using a category midpoint; define the analytic conditions under which such a measure is cost-effective; use meta-regression to estimate the impact of participant burden, defined as questionnaire length, on response rates; and develop an interactive web-app to allow researchers to explore the cost-effectiveness of using such a measure under plausible assumptions. RESULTS: An outcome scale with only a few categories greatly reduced the variance of non-measurement. For example, a scale with five categories reduced the variance of non-measurement by 96% for a uniform distribution. We show that a simple measure will be more cost-effective than a gold-standard measure if the relative increase in variance due to using it is less than the relative increase in cost from the gold standard, assuming it does not introduce bias in the measurement. We found an inverse power law relationship between participant burden and response rates, such that doubling the burden on participants reduces the response rate by around one third.
Finally, we created an interactive web-app ( https://benjiwoolf.shinyapps.io/cheapbutnoisymeasures/ ) to allow exploration of when using a cheap-but-noisy measure will be more cost-effective using realistic parameters. CONCLUSION: Cheaper-but-noisier questionnaires containing just a few questions can be a cost-effective way of maximising power. However, their use requires a judgement on the trade-off between the potential increase in risk of information bias and the reduction in the potential of selection bias due to the expected higher response rates.
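The 96% figure for a five-category scale, and the cost-effectiveness condition, follow from simple algebra, sketched here for a Uniform(0, 1) outcome (the function names are mine):

```python
def categorisation_variance_reduction(k):
    """Share of a Uniform(0, 1) outcome's variance captured by recording
    only which of k equal-width categories the response falls in and
    scoring it at the category midpoint.

    Total variance is 1/12; the variance remaining within a width-1/k
    category is (1/k)**2 / 12, so the reduction is 1 - 1/k**2.
    """
    within = (1.0 / k) ** 2 / 12.0
    total = 1.0 / 12.0
    return 1.0 - within / total

def cheap_measure_worthwhile(relative_variance_increase, relative_cost_increase):
    """The abstract's condition: the cheap-but-noisy measure wins when
    its relative increase in variance is below the gold standard's
    relative increase in cost (assuming no added bias)."""
    return relative_variance_increase < relative_cost_increase
```

For k = 5 the reduction is 1 - 1/25 = 96%, matching the abstract; even a binary item already captures 75%.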


Subjects
Clinical Trials as Topic , Cost-Benefit Analysis , Research Design , Humans , Surveys and Questionnaires , Research Design/standards , Clinical Trials as Topic/economics , Clinical Trials as Topic/standards , Reproducibility of Results , Sample Size , Treatment Outcome , Models, Economic , Endpoint Determination
12.
Stud Health Technol Inform ; 316: 1896-1900, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176862

ABSTRACT

Health registries are an established methodology in health services research. Once regarded as a low-cost option for observational studies, many registries now benefit from appropriate funding. However, reference figures for the costs of registries in health services research are missing. Based on literature data as well as figures from a German funding initiative, the relationship between the cost per newly recruited case and the annual sample size was analyzed. One can assume that a standard multicenter registry in health services research recruiting 1,000 new patients each year will be appropriately financed with 1.1 to 2.2 million euros over a period of five years. The smaller the sample size, the higher the cost per newly recruited patient. The cost model of health registries should be further elaborated, taking into account the specifics of individual registries and differences in the level of case payments.


Subjects
Health Services Research , Registries , Germany , Humans , Health Care Costs , Sample Size
13.
PLoS One ; 19(8): e0296207, 2024.
Article in English | MEDLINE | ID: mdl-39088468

ABSTRACT

Polygenic risk scores (PRS) are instrumental in genetics, offering insights into an individual's genetic risk for a range of diseases based on accumulated genetic variation. These scores rely on Genome-Wide Association Studies (GWAS). However, precision in PRS is often challenged by the requirement for extensive sample sizes and by the potential for overlapping datasets to inflate PRS calculations. In this study, we present a novel methodology, the Meta-Reductive Approach (MRA), derived algebraically to adjust GWAS results with the aim of neutralizing the influence of select cohorts. Our approach recalibrates summary statistics using algebraic derivations. Validating our technique with datasets from Alzheimer disease studies, we showed that the summary statistics of the MRA and those derived from individual-level data yielded exactly the same values. This innovative method offers a promising avenue for enhancing the accuracy of PRS, especially when derived from meta-analyzed GWAS data.
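The abstract does not give the MRA formulas. For a fixed-effect inverse-variance meta-analysis, however, removing one cohort from a summary statistic is straightforward algebra, which the sketch below illustrates; this is an assumption about the flavor of adjustment, not the authors' derivation:

```python
import math

def remove_cohort(beta_meta, se_meta, beta_cohort, se_cohort):
    """Back one cohort out of a fixed-effect inverse-variance
    meta-analytic summary statistic.

    With weights w = 1/se**2, the meta estimate is a weighted mean of
    cohort estimates, so the remaining cohorts' estimate follows by
    simple algebra per variant.
    """
    w_meta = 1.0 / se_meta ** 2
    w_cohort = 1.0 / se_cohort ** 2
    w_rest = w_meta - w_cohort
    beta_rest = (w_meta * beta_meta - w_cohort * beta_cohort) / w_rest
    return beta_rest, math.sqrt(1.0 / w_rest)
```

Applied per variant, this kind of adjustment removes the overlap between a PRS training GWAS and a target cohort without needing individual-level data.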


Subjects
Alzheimer Disease , Genetic Predisposition to Disease , Genome-Wide Association Study , Multifactorial Inheritance , Genome-Wide Association Study/methods , Humans , Alzheimer Disease/genetics , Multifactorial Inheritance/genetics , Polymorphism, Single Nucleotide , Sample Size , Risk Factors
14.
Biom J ; 66(6): e202300271, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39132909

ABSTRACT

Many clinical trials assess time-to-event endpoints. To describe the difference between groups in terms of time to event, we often employ hazard ratios. However, the hazard ratio is only informative in the case of proportional hazards (PHs) over time. There exist many other effect measures that do not require PHs. One of them is the average hazard ratio (AHR). Its core idea is to utilize a time-dependent weighting function that accounts for time variation. Though propagated in methodological research papers, the AHR is rarely used in practice. To facilitate its application, we unfold approaches for sample size calculation of an AHR test. We assess the reliability of the sample size calculation by extensive simulation studies covering various survival and censoring distributions with proportional as well as nonproportional hazards (N-PHs). The findings suggest that a simulation-based sample size calculation approach can be useful for designing clinical trials with N-PHs. Using the AHR can result in increased statistical power to detect differences between groups with more efficient sample sizes.
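A simulation-based sample size calculation of the general kind described here can be sketched as follows. Note that the test below is a crude z-test on a log hazard-rate ratio under exponential survival, not the AHR test from the paper; the rates, follow-up, and function names are assumptions:

```python
import math
import random
from statistics import NormalDist

def simulate_power(n_per_arm, hazard_ratio, base_rate=0.1, follow_up=24.0,
                   n_sim=400, alpha=0.05, seed=1):
    """Simulation-based power: exponential event times with fixed
    administrative censoring, testing log(d1/T1) - log(d0/T0) against
    SE = sqrt(1/d1 + 1/d0)."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sim):
        d, t = [0, 0], [0.0, 0.0]
        for arm, rate in enumerate((base_rate, base_rate * hazard_ratio)):
            for _ in range(n_per_arm):
                x = rng.expovariate(rate)
                t[arm] += min(x, follow_up)     # observed time
                d[arm] += x <= follow_up        # event indicator
        if min(d) == 0:                         # no events: non-rejection
            continue
        est = math.log(d[1] / t[1]) - math.log(d[0] / t[0])
        se = math.sqrt(1.0 / d[1] + 1.0 / d[0])
        hits += abs(est / se) > crit
    return hits / n_sim

def sample_size_for_power(hazard_ratio, target=0.8, start=50, step=50,
                          max_n=2000):
    """Walk n per arm upward until the simulated power reaches target."""
    n = start
    while n < max_n and simulate_power(n, hazard_ratio) < target:
        n += step
    return n
```

Swapping the z-test for an AHR test (or any weighted test geared to nonproportional hazards) changes only the inner test, which is what makes the simulation strategy attractive.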


Subjects
Proportional Hazards Models , Sample Size , Humans , Clinical Trials as Topic , Biometry/methods
15.
Rev Neurol ; 79(5): 143-145, 2024 Sep 29.
Article in Spanish | MEDLINE | ID: mdl-39207129

ABSTRACT

The original idea of rejecting studies with low power and authorizing them if their power is sufficiently high is reasonable and even an obligation, although in practice this reasoning is heavily constrained by the fact that the power of a study depends on several factors rather than a single one. Furthermore, there is no threshold separating 'high' power values from 'low' power values. However, once the study has been carried out, if the result is very significant, it makes little sense to ask how much power the study had; one can only take advantage of the result. Situations in which the result is not statistically significant warrant further consideration, and consideration of the power may be useful in these circumstances. This article focuses on the position that should be adopted in these cases, and it shows that, in order to draw reasonable conclusions about the population effect size, calculating the confidence interval is more useful than calculating the power, and its interpretation is more easily understood by physicians who lack training in statistical analysis.


TITLE: Statistical power of medical research: what position should be taken when the research results are not significant?
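The article's recommendation, reporting the confidence interval rather than post-hoc power, can be illustrated with a small sketch; the function names are mine and the normal approximation is assumed:

```python
from statistics import NormalDist

def wald_ci(effect, se, level=0.95):
    """Confidence interval for the observed effect: the range of
    population effects compatible with the data."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return effect - z * se, effect + z * se

def post_hoc_power(effect, se, alpha=0.05):
    """'Observed power' computed from the estimate itself; it is a
    deterministic function of the p-value and adds no new information."""
    z = NormalDist()
    crit = z.inv_cdf(1 - alpha / 2)
    ncp = effect / se
    return z.cdf(ncp - crit) + z.cdf(-ncp - crit)
```

For a non-significant result, the interval shows which effect sizes remain plausible (including, perhaps, clinically relevant ones), something the single post-hoc power number cannot convey.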


Subjects
Biomedical Research , Data Interpretation, Statistical , Humans , Sample Size , Research Design , Statistics as Topic , Confidence Intervals
16.
Pharmacol Res Perspect ; 12(5): e70001, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39180172

ABSTRACT

When planning pediatric clinical trials, optimizing the sample size of neonates/infants is essential because these subjects are difficult to enroll. In this simulation study, we evaluated the sample size of neonates/infants using a model-based optimal approach for identifying their pharmacokinetics for cefiderocol. We assessed the usefulness of adult data for estimation performance (accuracy and variance of parameter estimation) and the impact of data from very young subjects, including preterm neonates. Stochastic simulation and estimation were utilized to assess the impact of sample size allocation across age categories on the estimation performance for population pharmacokinetic parameters in pediatrics. The inclusion of adult pharmacokinetic information improved the estimation performance of population pharmacokinetic parameters, as the coefficient of variation (CV) range of parameter estimation decreased from 4.9%-593.7% to 2.3%-17.3%. When sample size allocation was based on the age groups of gestational age and postnatal age, the data showed that 15 neonates/infants would be necessary to appropriately estimate pediatric pharmacokinetic parameters (<20%CV). By using the postmenstrual age (PMA), which is theoretically considered to be associated with the maturation of organs, the number of neonates/infants required for appropriate parameter estimation could be reduced to seven (one and six with <32 and >32 weeks PMA, respectively) to nine (three and six with <37 and >37 weeks PMA, respectively) subjects. The model-based optimal design approach allowed efficient evaluation of the sample size of neonates/infants for estimation of pediatric pharmacokinetic parameters. This approach should be useful when designing pediatric clinical trials, especially those including young children.


Subjects
Anti-Bacterial Agents , Cefiderocol , Cephalosporins , Humans , Infant, Newborn , Cephalosporins/pharmacokinetics , Sample Size , Infant , Anti-Bacterial Agents/pharmacokinetics , Anti-Bacterial Agents/administration & dosage , Models, Biological , Clinical Trials as Topic , Computer Simulation , Adult , Gestational Age , Age Factors
17.
Trials ; 25(1): 527, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39107853

ABSTRACT

BACKGROUND: Mediation analysis, often completed as a secondary analysis to estimating the main treatment effect, investigates situations where an exposure may affect an outcome both directly and indirectly through intervening mediator variables. Although there has been much research on power in mediation analyses, most of it has focused on the power to detect indirect effects. Little consideration has been given to the extent to which the strength of the mediation pathways, i.e., the intervention-mediator path and the mediator-outcome path, respectively, may affect the power to detect the total effect, which corresponds to the intention-to-treat effect in a randomized trial. METHODS: We conduct a simulation study to evaluate the relation between the mediation pathways and the power of testing the total treatment effect, i.e., the intention-to-treat effect. Consider a sample size computed with the usual formula for testing the total effect in a two-arm trial. We generate data for a continuous mediator and a normal outcome using the conventional mediation models. We estimate the total effect using simple linear regression and evaluate the power of a two-sided test. We explore multiple data-generating scenarios by varying the magnitude of the mediation paths whilst keeping the total effect constant. RESULTS: Simulations show the estimated total effect is unbiased across the considered scenarios, as expected, but the mean of its standard error increases with the magnitude of the mediator-outcome path and with the variability in the residual error of the mediator, respectively. Consequently, this affects the power of testing the total effect, which is always lower than planned when the mediator-outcome path is non-trivial and a naive sample size is employed. Analytical explanation confirms that it is the mediator-outcome path, not the intervention-mediator path, that affects the power of testing the total effect.
The usual effect size consideration can be adjusted to account for the magnitude of the mediator-outcome path and its residual error. CONCLUSIONS: The sample size calculation for studies with efficacy and mechanism evaluation should account for the mediator-outcome association or risk the power to detect the total effect/intention-to-treat effect being lower than planned.
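The analytical explanation in this abstract can be reproduced directly from the conventional linear mediation model; the sketch below (my notation, normal-approximation power) shows that routing the same total effect through the mediator lowers the power:

```python
from statistics import NormalDist

def total_effect_power(n_per_arm, a, b, c_prime, var_em, var_ey, alpha=0.05):
    """Normal-approximation power for the total effect c' + a*b in a
    two-arm trial under the linear mediation model
        M = a*X + e_M,   Y = c'*X + b*M + e_Y.
    Within arms, Var(Y) = b**2 * var_em + var_ey, so the
    mediator-outcome path b (and the mediator's residual variance)
    inflates the standard error of the total effect, while the
    intervention-mediator path a does not.
    """
    z = NormalDist()
    total = c_prime + a * b
    se = (2.0 * (b ** 2 * var_em + var_ey) / n_per_arm) ** 0.5
    crit = z.inv_cdf(1 - alpha / 2)
    return z.cdf(total / se - crit) + z.cdf(-total / se - crit)
```

Holding the total effect fixed while shifting it from the direct path onto the mediated path leaves the estimate unbiased but lowers the power, exactly the phenomenon the naive sample size formula misses.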


Subjects
Computer Simulation , Randomized Controlled Trials as Topic , Research Design , Sample Size , Humans , Randomized Controlled Trials as Topic/methods , Mediation Analysis , Intention to Treat Analysis , Treatment Outcome , Data Interpretation, Statistical , Linear Models , Models, Statistical
18.
PLoS One ; 19(8): e0301301, 2024.
Article in English | MEDLINE | ID: mdl-39110741

ABSTRACT

Interrupted time series (ITS) designs are increasingly used for estimating the effect of shocks in natural experiments. Currently, ITS designs are often used in scenarios with many time points and simple data structures. This research investigates the performance of ITS designs when the number of time points is limited and the data structures are complex. Using a Monte Carlo simulation study, we empirically derive the performance (in terms of power, bias, and precision) of the ITS design. Scenarios are considered with multiple interventions, a low number of time points, and different effect sizes, based on a motivating example of the learning loss due to COVID school closures. The results of the simulation study show that the power for the step change depends mostly on the sample size, while the power for the slope change depends on the number of time points. In the basic scenario, with both a step and a slope change and an effect size of 30% of the pre-intervention slope, the required sample size for detecting a step change is 1,100 with a minimum of twelve time points. For detecting a slope change, the required sample size decreases to 500 with eight time points. To decide whether there is enough power, researchers should inspect their data, hypothesize about effect sizes, and consider an appropriate model before applying an ITS design to their research. This paper contributes to the field of methodology in two ways. Firstly, the motivating example showcases the difficulty of employing ITS designs in settings that involve more than a single intervention. Secondly, models are proposed for more difficult ITS designs and their performance is tested.


Subjects
COVID-19 , Interrupted Time Series Analysis , Monte Carlo Method , Pandemics , Schools , COVID-19/epidemiology , COVID-19/prevention & control , Humans , SARS-CoV-2/isolation & purification , Learning , Computer Simulation , Sample Size
19.
PLoS One ; 19(8): e0306911, 2024.
Article in English | MEDLINE | ID: mdl-39178270

ABSTRACT

Large sample size (N) is seen as a key criterion in judging the replicability of psychological research, a phenomenon we refer to as the N-Heuristic. This heuristic has led to the incentivization of fast, online, non-behavioral studies, to the potential detriment of psychological science. While large N should in principle increase statistical power and thus the replicability of effects, in practice it may not. Large-N studies may have other attributes that undercut their power or validity. Consolidating data from all systematic, large-scale attempts at replication (N = 307 original-replication study pairs), we find that the original study's sample size did not predict its likelihood of being replicated (rs = -0.02, p = 0.741), even with study design and research area controlled. By contrast, effect size emerged as a substantial predictor (rs = 0.21, p < 0.001), which held regardless of the study's sample size. N may be a poor predictor of replicability because studies with larger N investigated smaller effects (rs = -0.49, p < 0.001). Contrary to these results, a survey of 215 professional psychologists, presented with a comprehensive list of methodological criteria, found sample size to be rated as the most important criterion in judging a study's replicability. Our findings strike a cautionary note with respect to the prioritization of large N in judging the replicability of psychological science.


Subjects
Psychology , Sample Size , Humans , Reproducibility of Results , Heuristics