Results 1 - 20 of 132
1.
Hum Genomics ; 18(1): 69, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38902839

ABSTRACT

BACKGROUND: Single-cell RNA sequencing (scRNA-seq) has proven useful for understanding cell-specific disease mechanisms. However, identifying genes of interest remains a key challenge. Pseudo-bulk methods, which pool scRNA-seq counts within the same biological replicate, have been commonly used to identify differentially expressed genes. However, such methods may lack power due to the limited sample size of scRNA-seq datasets, which can be prohibitively expensive to expand. RESULTS: Motivated by this, we proposed using the Bayesian-frequentist hybrid (BFH) framework to increase power, and we showed in simulated scenarios that the proposed BFH is optimal among popular single-cell differential expression methods when both FDR and power are considered. As an example, the method was applied to an idiopathic pulmonary fibrosis (IPF) case study. CONCLUSION: In our IPF example, we demonstrated that with a proper informative prior, the BFH approach identified more genes of interest. Furthermore, these genes were plausible given current knowledge of IPF. Thus, BFH offers a unique and flexible framework for future scRNA-seq analyses.


Subject(s)
Bayes Theorem , RNA-Seq , Sequence Analysis, RNA , Single-Cell Analysis , Single-Cell Analysis/methods , Humans , RNA-Seq/methods , Sequence Analysis, RNA/methods , Idiopathic Pulmonary Fibrosis/genetics , Idiopathic Pulmonary Fibrosis/pathology , Gene Expression Profiling/methods , Algorithms
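
The pseudo-bulk step that motivates entry 1 is easy to sketch: counts from cells belonging to the same biological replicate are summed before any differential-expression test is applied. A minimal illustration (the column and sample names are hypothetical):

```python
import pandas as pd

def pseudo_bulk(counts: pd.DataFrame, sample_labels: pd.Series) -> pd.DataFrame:
    """Sum cell-level counts (cells x genes) within each biological replicate."""
    return counts.groupby(sample_labels).sum()

# Toy usage: four cells from two replicates, two genes.
cells = pd.DataFrame({"geneA": [3, 1, 0, 2], "geneB": [0, 5, 2, 2]})
labels = pd.Series(["s1", "s1", "s2", "s2"])
print(pseudo_bulk(cells, labels))   # one row of summed counts per replicate
```
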
2.
Stat Med ; 43(1): 156-172, 2024 01 15.
Article in English | MEDLINE | ID: mdl-37919834

ABSTRACT

A basket trial aims to expedite the drug development process by evaluating a new therapy in multiple populations within the same clinical trial. Each population, referred to as a "basket", can be defined by disease type, biomarkers, or other patient characteristics. The objective of a basket trial is to identify the subset of baskets for which the new therapy shows promise. The conventional approach is to analyze each basket independently. Alternatively, several Bayesian dynamic borrowing methods have been proposed that share data across baskets when responses appear similar. These methods can achieve higher power than independent testing in exchange for some risk of inflation in the type 1 error rate. In this paper, we propose a frequentist approach to dynamic borrowing for basket trials using the adaptive lasso. Through simulation studies we demonstrate that the adaptive lasso can achieve power and type 1 error similar to the existing Bayesian methods, with the benefit of being easier to implement and faster to run. In addition, the adaptive lasso approach is very flexible: it can be extended to basket trials with any number of treatment arms and any type of endpoint.


Subject(s)
Research Design , Humans , Bayes Theorem , Computer Simulation
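
The adaptive lasso at the core of entry 2 can be reduced to an ordinary lasso by rescaling columns with initial coefficient estimates. A generic sketch of that reduction, not the paper's basket-trial parameterization:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    # Weights 1/|b_init|^gamma enter via column scaling: fitting an ordinary
    # lasso on X * |b_init|^gamma and rescaling back gives the adaptive lasso.
    b_init = LinearRegression().fit(X, y).coef_
    w = np.abs(b_init) ** gamma
    fit = Lasso(alpha=alpha).fit(X * w, y)
    return fit.coef_ * w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, 0.0, 0.0, 0.5, 0.0]) + rng.normal(size=200)
print(adaptive_lasso(X, y).round(2))   # near-zero effects are shrunk to exactly 0
```
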
3.
Stat Med ; 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39193805

ABSTRACT

This study presents a hybrid (Bayesian-frequentist) approach to sample size re-estimation (SSRE) for cluster randomised trials with continuous outcome data, allowing for uncertainty in the intra-cluster correlation (ICC). In the hybrid framework, pre-trial knowledge about the ICC is captured by placing a truncated normal prior on it, which is then updated at an interim analysis using the study data and used in expected power control. On average, both the hybrid and frequentist approaches mitigate the implications of misspecifying the ICC at the trial's design stage. In addition, both frameworks lead to SSRE designs with approximate control of the type I error rate at the desired level. It is clearly demonstrated how the hybrid approach can reduce the high variability in the re-estimated sample size observed within the frequentist framework, depending on the informativeness of the prior. However, misspecification of a highly informative prior can cause significant power loss. In conclusion, a hybrid approach could offer advantages to cluster randomised trials using SSRE. Specifically, when data or expert opinion are available to help guide the choice of prior for the ICC, the hybrid approach can reduce the variance of the re-estimated required sample size compared with a frequentist approach. As SSRE is unlikely to be employed when substantial amounts of such data are available (i.e., when a constructed prior is highly informative), the greatest utility of a hybrid approach to SSRE likely lies where only low-quality evidence is available to guide the choice of prior.
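
A minimal sketch of the hybrid idea in entry 3: draw the ICC from a truncated normal prior and pick the smallest number of clusters whose power, averaged over those draws, meets the target. All design numbers below are illustrative:

```python
import numpy as np
from scipy import stats

def power(k, m, rho, delta=0.3, alpha=0.05):
    # Two-arm comparison with k clusters of size m per arm and standardized
    # effect delta; the design effect 1 + (m - 1) * rho deflates the sample size.
    n_eff = k * m / (1 + (m - 1) * rho)
    return stats.norm.cdf(delta * np.sqrt(n_eff / 2) - stats.norm.ppf(1 - alpha / 2))

mu, sd = 0.05, 0.02                              # prior beliefs about the ICC
prior = stats.truncnorm((0 - mu) / sd, (1 - mu) / sd, loc=mu, scale=sd)
rhos = prior.rvs(size=10_000, random_state=1)

for k in range(5, 100):
    if power(k, m=20, rho=rhos).mean() >= 0.9:   # expected power control
        print("clusters per arm:", k)
        break
```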

4.
Stat Med ; 43(11): 2096-2121, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38488240

ABSTRACT

Excessive zeros in multivariate count data are often observed in biomedicine and public health. To better analyse such data, we first develop a marginalized multivariate zero-inflated Poisson (MZIP) regression model that directly interprets overall exposure effects on the marginal means. We then define a multiple Pearson residual for the newly developed MZIP regression model that simultaneously takes heterogeneity and correlation into consideration. Furthermore, a new model averaging prediction method is introduced based on the multiple Pearson residual, and the asymptotic optimality of this model averaging prediction is proved. Simulations and two empirical applications in medicine illustrate the effectiveness of the proposed method.


Subject(s)
Computer Simulation , Models, Statistical , Humans , Poisson Distribution , Multivariate Analysis , Regression Analysis , Data Interpretation, Statistical
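
The univariate building block of entry 4's multiple Pearson residual follows directly from the ZIP moments: with zero-inflation probability pi and Poisson rate lam, the mean is (1 - pi) * lam and the variance (1 - pi) * lam * (1 + pi * lam). The multivariate version additionally accounts for correlation across outcomes, which this sketch omits:

```python
import numpy as np

def zip_pearson_residuals(y, lam, pi):
    mean = (1 - pi) * lam                      # ZIP mean
    var = (1 - pi) * lam * (1 + pi * lam)      # ZIP variance
    return (y - mean) / np.sqrt(var)

y = np.array([0, 0, 1, 3, 0, 7])
print(zip_pearson_residuals(y, lam=2.0, pi=0.4).round(2))
```
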
5.
BMC Med Res Methodol ; 24(1): 99, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38678213

ABSTRACT

PURPOSE: In the literature, the propriety of meta-analytic treatment effects produced by combining randomized controlled trials (RCT) and non-randomized studies (NRS) is questioned, given the inherent confounding in NRS that may bias the meta-analysis. The current study compared an implicitly principled pooled Bayesian meta-analytic treatment effect with that of frequentist pooling of RCT and NRS to determine how well each approach handled the NRS bias. MATERIALS & METHODS: Critical-care meta-analyses with binary outcomes, reflecting the importance of such outcomes in critical-care practice, that combined RCT and NRS were identified electronically. Bayesian pooled treatment effects and 95% credible intervals (BCrI), posterior model probabilities indicating model plausibility, and Bayes factors (BF) were estimated using an informative heavy-tailed heterogeneity prior (half-Cauchy). Bayes factors > 3 indicated preference for pooling RCT and NRS, and < 0.333 the converse. All pooled frequentist treatment effects and 95% confidence intervals (FCI) were re-estimated using the popular DerSimonian-Laird (DSL) random effects model. RESULTS: Fifty meta-analyses were identified (2009-2021), reporting pooled estimates in 44; 29 were pharmaceutical-therapeutic and 21 non-pharmaceutical-therapeutic. Re-computed pooled DSL FCI excluded the null (OR or RR = 1) in 86% (43/50). In 18 meta-analyses there was agreement between FCI and BCrI in excluding the null. In 23 meta-analyses where the FCI excluded the null, the BCrI embraced it. BF supported a pooled model in 27 meta-analyses and separate models in 4. The highest density of the posterior model probabilities for 0.333 < BF < 1 was 0.8. CONCLUSIONS: In the current meta-analytic cohort, an integrated and multifaceted Bayesian approach supported including NRS in a pooled-estimate model. Conversely, caution should attend the reporting of naïve frequentist pooled RCT and NRS meta-analytic treatment effects.


Subject(s)
Bayes Theorem , Meta-Analysis as Topic , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Non-Randomized Controlled Trials as Topic/methods , Bias , Models, Statistical
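
The frequentist comparator re-estimated in entry 5, DerSimonian-Laird random-effects pooling, follows a closed-form recipe; a sketch on the log-odds-ratio scale with illustrative inputs:

```python
import numpy as np
from scipy import stats

def dersimonian_laird(effects, variances):
    w = 1 / variances                               # fixed-effect weights
    fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fe) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # method-of-moments tau^2
    w_re = 1 / (variances + tau2)
    est = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    ci = est + np.array([-1, 1]) * stats.norm.ppf(0.975) * se
    return est, ci, tau2

log_or = np.array([-0.3, -0.1, -0.5, 0.05])         # illustrative study effects
var = np.array([0.04, 0.09, 0.06, 0.12])
print(dersimonian_laird(log_or, var))
```
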
6.
BMC Med Res Methodol ; 24(1): 110, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714936

ABSTRACT

Bayesian statistics plays a pivotal role in advancing medical science by enabling healthcare companies, regulators, and stakeholders to assess the safety and efficacy of new treatments, interventions, and medical procedures. The Bayesian framework offers a unique advantage over the classical framework, especially when quality external data, such as historical data or other sources of co-data, are available to incorporate as prior information into a new trial. In recent years, there has been a significant increase in regulatory submissions using Bayesian statistics due to its flexibility and ability to provide valuable insights for decision-making, addressing the modern complexity of clinical trials for which purely frequentist designs are inadequate. For regulatory submissions, companies often need to consider the frequentist operating characteristics of the Bayesian analysis strategy, regardless of the design complexity. In particular, the focus is on the frequentist type I error rate and the power at all realistic alternatives. This tutorial review aims to provide a comprehensive overview of the use of Bayesian statistics in sample size determination, control of the type I error rate, multiplicity adjustments, external data borrowing, and related topics in the regulatory environment of clinical trials. Fundamental concepts of Bayesian sample size determination and illustrative examples are provided as a resource for researchers, clinicians, and statisticians seeking to develop more complex and innovative designs.


Subject(s)
Bayes Theorem , Clinical Trials as Topic , Humans , Clinical Trials as Topic/methods , Clinical Trials as Topic/statistics & numerical data , Research Design/standards , Sample Size , Data Interpretation, Statistical , Models, Statistical
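
A toy version of the frequentist-operating-characteristics check entry 6 recommends: simulate under the null and estimate the type I error rate of a Bayesian decision rule (here a Beta-binomial rule with an illustrative threshold):

```python
import numpy as np
from scipy import stats

def success_prob(x, n, p0=0.3, a=1, b=1):
    # Posterior P(p > p0 | x) under a conjugate Beta(a, b) prior.
    return 1 - stats.beta(a + x, b + n - x).cdf(p0)

rng = np.random.default_rng(2)
n, sims = 40, 100_000
x_null = rng.binomial(n, 0.3, size=sims)        # trials simulated at the null p0
type1 = np.mean(success_prob(x_null, n) > 0.975)
print(f"simulated frequentist type I error: {type1:.4f}")
```
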
7.
Infection ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39017997

ABSTRACT

BACKGROUND: The WHO postulates the application of adaptive design features in the global clinical trial ecosystem. However, adaptive platform trial (APT) methodology has not been widely adopted in clinical research on vaccines. METHODS: The VACCELERATE Consortium organized a two-day workshop to discuss the applicability of APT methodology to vaccine trials under non-pandemic as well as pandemic conditions. Core aspects of the discussions are summarized in this article. RESULTS: An "ever-warm" APT appears ideally suited to improve the efficiency and speed of vaccine research. Continuous learning based on accumulating APT data allows for pre-planned adaptations during the trial's course. Given the relative design complexity, alignment of all stakeholders at all stages of an APT is central. Vaccine trial modelling is crucial, both before and during a pandemic emergency. Various inferential paradigms are possible (frequentist, likelihood, or Bayesian). In the interpandemic interval, the focus may be on research gaps left by industry trials. For activation in an emergency, template "Disease X" protocols with syndromic designs for as-yet-unknown pathogens need to be stockpiled and updated regularly. Governance of a vaccine APT should be fully integrated into supranational pandemic response mechanisms. DISCUSSION: A broad range of adaptive features can be applied in platform trials on vaccines. Faster knowledge generation comes with increased complexity of trial design, but design complexity should not preclude simple execution at trial sites. Continuously generated evidence represents a return on investment that will garner societal support for sustainable funding. Adaptive design features will naturally find their way into platform trials on vaccines.

8.
Clin Trials ; : 17407745241244801, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38760932

ABSTRACT

BACKGROUND: The coronavirus disease 2019 pandemic highlighted the need to conduct efficient randomized clinical trials with interim monitoring guidelines for efficacy and futility. Several randomized coronavirus disease 2019 trials, including the Multiplatform Randomized Clinical Trial (mpRCT), used Bayesian guidelines with the belief that they would lead to quicker efficacy or futility decisions than traditional "frequentist" guidelines, such as spending functions and conditional power. We explore this belief using an intuitive interpretation of Bayesian methods as translating prior opinion about the treatment effect into imaginary prior data. These imaginary observations are then combined with actual observations from the trial to make conclusions. Using this approach, we show that the Bayesian efficacy boundary used in mpRCT is actually quite similar to the frequentist Pocock boundary. METHODS: The mpRCT's efficacy monitoring guideline considered stopping if, given the observed data, there was greater than 99% probability that the treatment was effective (odds ratio greater than 1). The mpRCT's futility monitoring guideline considered stopping if, given the observed data, there was greater than 95% probability that the treatment was less than 20% effective (odds ratio less than 1.2). The mpRCT used a normal prior distribution that can be thought of as supplementing the actual patients' data with imaginary patients' data. We explore the effects of varying probability thresholds and the prior-to-actual patient ratio in the mpRCT and compare the resulting Bayesian efficacy monitoring guidelines to the well-known frequentist Pocock and O'Brien-Fleming efficacy guidelines. We also contrast Bayesian futility guidelines with a more traditional 20% conditional power futility guideline. RESULTS: A Bayesian efficacy and futility monitoring boundary using a neutral, weakly informative prior distribution and a fixed probability threshold at all interim analyses is more aggressive than the commonly used O'Brien-Fleming efficacy boundary coupled with a 20% conditional power threshold for futility. The trade-off is that more aggressive boundaries tend to stop trials earlier, but incur a loss of power. Interestingly, the Bayesian efficacy boundary with 99% probability threshold is very similar to the classic Pocock efficacy boundary. CONCLUSIONS: In a pandemic where quickly weeding out ineffective treatments and identifying effective treatments is paramount, aggressive monitoring may be preferred to conservative approaches, such as the O'Brien-Fleming boundary. This can be accomplished with either Bayesian or frequentist methods.
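
Entry 8's "imaginary patients" reading of a normal prior can be made concrete: if the prior carries information equal to a fraction r of the data information, the rule P(theta > 0 | data) > 0.99 becomes a z-scale boundary of z_0.99 * sqrt(1 + r) on the usual test statistic. A sketch with illustrative r values:

```python
import numpy as np
from scipy import stats

def bayesian_z_boundary(prob=0.99, r=0.1):
    # A prior worth r imaginary patients per actual patient shrinks the
    # posterior z-statistic by sqrt(1 + r) relative to the data z-statistic.
    return stats.norm.ppf(prob) * np.sqrt(1 + r)

for r in (0.0, 0.1, 0.5):
    print(f"r = {r:.1f}: stop for efficacy if z > {bayesian_z_boundary(0.99, r):.3f}")
# For comparison, the two-sided Pocock boundary for 5 looks is roughly z = 2.41.
```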

9.
Proc Natl Acad Sci U S A ; 118(15)2021 04 13.
Article in English | MEDLINE | ID: mdl-33876748

ABSTRACT

Adaptive experimental designs can dramatically improve efficiency in randomized trials. But with adaptively collected data, common estimators based on sample means and inverse propensity-weighted means can be biased or heavy-tailed. This poses statistical challenges, in particular when the experimenter would like to test hypotheses about parameters that were not targeted by the data-collection mechanism. In this paper, we present a class of test statistics that can handle these challenges. Our approach is to adaptively reweight the terms of an augmented inverse propensity-weighting estimator to control the contribution of each term to the estimator's variance. This scheme reduces overall variance and yields an asymptotically normal test statistic. We validate the accuracy of the resulting estimates and their CIs in numerical experiments and show that our methods compare favorably to existing alternatives in terms of mean squared error, coverage, and CI size.


Subject(s)
Randomized Controlled Trials as Topic/methods , Algorithms , Data Interpretation, Statistical
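
For orientation, this is the plain augmented inverse propensity weighting (AIPW) estimator that entry 9 adaptively reweights; the paper's variance-stabilizing weights are omitted in this sketch:

```python
import numpy as np

def aipw_mean(y, a, e, mu_hat):
    """AIPW estimate of the mean outcome under treatment.
    y: outcomes; a: 0/1 assignments; e: assignment probabilities
    (known in an adaptive experiment); mu_hat: an outcome-model estimate."""
    return np.mean(mu_hat + a * (y - mu_hat) / e)

rng = np.random.default_rng(3)
n = 10_000
e = rng.uniform(0.2, 0.8, size=n)              # adaptively varying propensities
a = rng.binomial(1, e)
y = a * (1.0 + rng.normal(size=n)) + (1 - a) * rng.normal(size=n)
print(aipw_mean(y, a, e, mu_hat=0.8))          # close to the true arm mean 1.0
```
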
10.
Biochem Genet ; 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951354

ABSTRACT

The genomic evaluation process relies on the assumption of linkage disequilibrium between dense single-nucleotide polymorphism (SNP) markers at the genome level and quantitative trait loci (QTL). The present study evaluated four frequentist methods, Ridge Regression, Least Absolute Shrinkage and Selection Operator (LASSO), Elastic Net, and Genomic Best Linear Unbiased Prediction (GBLUP), and five Bayesian methods, Bayes Ridge Regression (BRR), Bayes A, Bayesian LASSO, Bayes C, and Bayes B, in genomic selection using simulated data. Differences in prediction accuracy were assessed pairwise based on statistical significance (p-values from t and Mann-Whitney U tests) and practical significance (Cohen's d effect size). For this purpose, data were simulated under two scenarios with different marker densities (4000 and 8000 across the whole genome). The simulated genome comprised four chromosomes of 1 Morgan each, carrying 100 randomly distributed QTL and one of two densities of evenly distributed SNPs (1000 or 2000 per chromosome), at a heritability level of 0.4. For the frequentist methods except GBLUP, the regularization parameter λ was tuned using five-fold cross-validation. In both scenarios, among the frequentist methods, the highest prediction accuracy was observed for Ridge Regression and GBLUP, while the lowest and highest bias were shown by Ridge Regression and GBLUP, respectively. Among the Bayesian methods, Bayes B and BRR showed the highest and lowest prediction accuracy, respectively. The lowest bias in both scenarios was shown by Bayesian LASSO, and the highest bias in the first and second scenarios by BRR and Bayes B, respectively. Across all methods in both scenarios, the highest accuracy was shown by Bayes B and the lowest by LASSO and Elastic Net. As expected, the greatest similarity in performance was observed between GBLUP and BRR (d = 0.007 in the first scenario and d = 0.003 in the second). The parametric t tests and non-parametric Mann-Whitney U tests gave similar results: of the 36 pairwise comparisons between methods in each scenario, 14 were significant (P < .001) in the first scenario and 2 (P < .05) in the second, indicating that as the number of predictors increases, differences in performance between methods decrease. The Cohen's d effect sizes corroborated this: as model complexity increased, the effect sizes were no longer very large. The regularization parameters of frequentist methods should be optimized by cross-validation before these methods are used in genomic evaluation.
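
Three of the frequentist methods compared in entry 10 can be benchmarked with a few lines of scikit-learn; the simulated SNP data below are illustrative and do not reproduce the paper's design:

```python
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, p, n_qtl = 400, 1000, 100
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[rng.choice(p, n_qtl, replace=False)] = rng.normal(size=n_qtl)
g = X @ beta                                          # true genetic values
y = g + rng.normal(scale=np.sqrt(g.var() * 0.6 / 0.4), size=n)  # h2 = 0.4

for name, model in [("ridge", RidgeCV()),
                    ("lasso", LassoCV(cv=5)),
                    ("elastic net", ElasticNetCV(cv=5))]:
    acc = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:12s} mean CV R^2: {acc:.3f}")
```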

11.
Pharm Stat ; 23(1): 4-19, 2024.
Article in English | MEDLINE | ID: mdl-37632266

ABSTRACT

Borrowing information from historical or external data to inform inference in a current trial is an expanding field in the era of precision medicine, where trials are often performed in small patient cohorts for practical or ethical reasons. Even though methods proposed for borrowing from external data are mainly based on Bayesian approaches that incorporate external information into the prior for the current analysis, frequentist operating characteristics of the analysis strategy are often of interest. In particular, the type I error rate and power at a prespecified point alternative are the focus. We propose a procedure to investigate and report the frequentist operating characteristics in this context. The approach evaluates the type I error rate of the test with borrowing from external data and calibrates the test without borrowing to this type I error rate. On this basis, a fair comparison of power between the tests with and without borrowing is achieved. We show that no power gains are possible in one-sided one-arm and two-arm hybrid control trials with a normal endpoint, a finding previously proven in general. We prove that in one-arm fixed-borrowing situations, unconditional power (i.e., when the external data are random) is reduced. The empirical Bayes power prior approach, which dynamically borrows information according to the similarity of current and external data, avoids the exorbitant type I error inflation that occurs with fixed borrowing. In the two-arm hybrid control trial, we observe power reductions compared with the test calibrated to borrowing, and these reductions increase when unconditional power is considered.


Subject(s)
Models, Statistical , Research Design , Humans , Bayes Theorem , Computer Simulation , Clinical Trials as Topic
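
Entry 11's calibration procedure in miniature: estimate the type I error of a test with fixed borrowing, then calibrate the no-borrowing test to that level before comparing power. A sketch with a normal endpoint and illustrative sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, n_ext, sims = 50, 50, 100_000
ext = rng.normal(0.1, 1.0, size=n_ext)       # fixed external data, slightly biased

def reject_pooled(shift, crit):
    y = rng.normal(shift, 1.0, size=(sims, n))
    pooled = (y.sum(axis=1) + ext.sum()) / (n + n_ext)
    return pooled * np.sqrt(n + n_ext) > crit

alpha_b = reject_pooled(0.0, stats.norm.ppf(0.975)).mean()
print(f"type I error with fixed borrowing: {alpha_b:.4f}")   # shifted by the bias

# Calibrate the no-borrowing test to alpha_b for a fair power comparison.
crit_cal = stats.norm.ppf(1 - alpha_b)
y = rng.normal(0.3, 1.0, size=(sims, n))
power = (y.mean(axis=1) * np.sqrt(n) > crit_cal).mean()
print(f"calibrated no-borrowing power: {power:.4f}")
```
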
12.
Entropy (Basel) ; 26(9)2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39330127

ABSTRACT

Variable selection methods have been extensively developed for and applied to cancer genomics data to identify important omics features associated with complex disease traits, including cancer outcomes. However, the reliability and reproducibility of the findings are in question if valid inferential procedures are not available to quantify the uncertainty of the findings. In this article, we provide a gentle but systematic review of high-dimensional frequentist and Bayesian inferential tools under sparse models which can yield uncertainty quantification measures, including confidence (or Bayesian credible) intervals, p-values, and false discovery rates (FDR). Connections in high-dimensional inference between the two realms are fully exploited under the "unpenalized loss function + penalty term" formulation for regularization methods and the "likelihood function × shrinkage prior" framework for regularized Bayesian analysis. In particular, we advocate for robust Bayesian variable selection in cancer genomics studies due to its ability to accommodate disease heterogeneity, in the form of heavy-tailed errors and structured sparsity, while providing valid statistical inference. The numerical results show that robust Bayesian analysis incorporating exact sparsity yields not only superior estimation and identification results but also valid Bayesian credible intervals at nominal coverage probabilities compared with alternative methods, especially in the presence of heavy-tailed model errors and outliers.
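
One of the classical tools entry 12 reviews, the Benjamini-Hochberg step-up procedure for FDR control, in a generic sketch (not the article's robust Bayesian method):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    # Step-up rule: find the largest k with p_(k) <= q * k / m.
    below = p[order] <= q * (np.arange(1, m + 1) / m)
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True                 # reject the k smallest p-values
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.09, 0.6]
print(benjamini_hochberg(pvals))
```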

13.
J Sleep Res ; 32(4): e13844, 2023 08.
Article in English | MEDLINE | ID: mdl-36814416

ABSTRACT

Video games are a popular form of entertainment. However, there is mixed evidence for the association between video game usage and poor sleep quality, short sleep duration, or delayed sleep timing. The current study examined associations between daily sleep behaviours and video game usage via Bayesian and frequentist statistical approaches. Caffeine and alcohol consumption were also assessed as moderators, as these behaviours may co-occur with video game usage and poor sleep. A total of 1032 undergraduate students (72% female) were recruited between 2006 and 2007. Participants completed questionnaires examining video game and substance use, as well as sleep diaries, for 1 week. Frequentist analyses revealed that video game usage was related to increased variability in total sleep time and a later average sleep midpoint, but not to sleep efficiency. Alcohol use moderated the relationships between video game usage and both the average and the variability of total sleep time. Caffeine use was related to shorter average total sleep time and more variability in sleep efficiency. Alcohol consumption was related to more variability in total sleep time and sleep midpoint, and a later average sleep midpoint. Bayesian models suggested strong evidence that video game playing was associated with a later average sleep midpoint. As in the frequentist approach, alcohol consumption moderated the relationship between video game usage and both the average and the variability of total sleep time, but the evidence was weak. The effect sizes for both approaches tended to be small. Using a rigorous statistical approach and a large sample, this study provides robust evidence that video game usage may not be strongly associated with poor sleep among undergraduate students.


Subject(s)
Sleep Initiation and Maintenance Disorders , Substance-Related Disorders , Video Games , Humans , Female , Male , Bayes Theorem , Caffeine , Substance-Related Disorders/epidemiology , Surveys and Questionnaires , Students
14.
Philos Trans A Math Phys Eng Sci ; 381(2247): 20220144, 2023 May 15.
Article in English | MEDLINE | ID: mdl-36970830

ABSTRACT

I discuss the benefits of looking through the 'Bayesian lens' (seeking a Bayesian interpretation of ostensibly non-Bayesian methods), and the dangers of wearing 'Bayesian blinkers' (eschewing non-Bayesian methods as a matter of philosophical principle). I hope that the ideas may be useful to scientists trying to understand widely used statistical methods (including confidence intervals and p-values), as well as teachers of statistics and practitioners who wish to avoid the mistake of overemphasizing philosophy at the expense of practical matters. This article is part of the theme issue 'Bayesian inference: challenges, perspectives, and prospects'.

15.
Philos Trans A Math Phys Eng Sci ; 381(2247): 20220146, 2023 May 15.
Article in English | MEDLINE | ID: mdl-36970821

ABSTRACT

We develop a representation of a decision maker's uncertainty based on e-variables. Like the Bayesian posterior, this e-posterior allows for making predictions against arbitrary loss functions that may not be specified ex ante. Unlike the Bayesian posterior, it provides risk bounds that have frequentist validity irrespective of prior adequacy: if the e-collection (which plays a role analogous to the Bayesian prior) is chosen badly, the bounds get loose rather than wrong, making e-posterior minimax decision rules safer than Bayesian ones. The resulting quasi-conditional paradigm is illustrated by re-interpreting a previous influential partial Bayes-frequentist unification, Kiefer-Berger-Brown-Wolpert conditional frequentist tests, in terms of e-posteriors. This article is part of the theme issue 'Bayesian inference: challenges, perspectives, and prospects'.
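
A tiny numeric illustration of the e-variable concept behind entry 15 (my example, not the paper's): a likelihood ratio has expectation 1 under the null, so by Markov's inequality rejecting when it exceeds 1/alpha keeps the type I error below alpha:

```python
import numpy as np

rng = np.random.default_rng(9)
mu, alpha = 0.5, 0.05
x = rng.normal(0.0, 1.0, size=1_000_000)       # data drawn under the null N(0, 1)
e = np.exp(mu * x - mu**2 / 2)                 # likelihood ratio N(mu,1) vs N(0,1)
print(e.mean())                                # ~1: the defining e-variable property
print((e >= 1 / alpha).mean())                 # <= alpha by Markov's inequality
```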

16.
Clin Trials ; 20(1): 59-70, 2023 02.
Article in English | MEDLINE | ID: mdl-36086822

ABSTRACT

BACKGROUND/AIMS: To evaluate how uncertainty in the intra-cluster correlation impacts whether a parallel-group or stepped-wedge cluster-randomized trial design is more efficient in terms of the required sample size, in the case of cross-sectional stepped-wedge cluster-randomized trials and continuous outcome data. METHODS: We motivate our work by reviewing how the intra-cluster correlation and standard deviation were justified in 54 health technology assessment reports on cluster-randomized trials. To enable uncertainty at the design stage to be incorporated into the design specification, we then describe how sample size calculation can be performed for cluster-randomized trials in the 'hybrid' framework, which places priors on design parameters and controls the expected power in place of the conventional frequentist power. Comparison of the parallel-group and stepped-wedge cluster-randomized trial designs is conducted by placing Beta and truncated Normal priors on the intra-cluster correlation, and a Gamma prior on the standard deviation. RESULTS: Many Health Technology Assessment reports did not adhere to the Consolidated Standards of Reporting Trials guideline of indicating the uncertainty around the assumed intra-cluster correlation, while others did not justify the assumed intra-cluster correlation or standard deviation. Even for a prior intra-cluster correlation distribution with a small mode, moderate prior densities on high intra-cluster correlation values can lead to a stepped-wedge cluster-randomized trial being more efficient because of the degree to which a stepped-wedge cluster-randomized trial is more efficient for high intra-cluster correlations. With careful specification of the priors, the designs in the hybrid framework can become more robust to, for example, an unexpectedly large value of the outcome variance. CONCLUSION: When there is difficulty obtaining a reliable value for the intra-cluster correlation to assume at the design stage, the proposed methodology offers an appealing approach to sample size calculation. Often, uncertainty in the intra-cluster correlation will mean a stepped-wedge cluster-randomized trial is more efficient than a parallel-group cluster-randomized trial design.


Subject(s)
Research Design , Humans , Cross-Sectional Studies , Uncertainty , Randomized Controlled Trials as Topic , Sample Size , Cluster Analysis
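
Entry 16's core observation can be seen in the design effect alone: under a prior on the ICC, the expected design effect can exceed the design effect at the prior mode, because high-ICC draws dominate. Prior parameters and cluster size below are illustrative:

```python
import numpy as np
from scipy import stats

m = 30                                          # cluster size
prior = stats.beta(2, 18)                       # ICC prior; mode = 1/18 ~ 0.056
rhos = prior.rvs(size=100_000, random_state=6)

design_effect = 1 + (m - 1) * rhos              # parallel-group CRT design effect
mode = 1 / 18
print(f"design effect at the prior mode: {1 + (m - 1) * mode:.2f}")
print(f"expected design effect:          {design_effect.mean():.2f}")
```
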
17.
Biom J ; 65(7): e2100406, 2023 10.
Article in English | MEDLINE | ID: mdl-37189217

ABSTRACT

There has been growing interest in leveraging external control data to augment randomized control group data in clinical trials and enable more informative decision making. In recent years, the quality and availability of real-world data as external controls have improved steadily. However, information borrowing by directly pooling such external controls with randomized controls may lead to biased estimates of the treatment effect. Dynamic borrowing methods under the Bayesian framework have been proposed to better control the false positive error. However, the numerical computation, and especially the parameter tuning, of these Bayesian dynamic borrowing methods remain a challenge in practice. In this paper, we present a frequentist interpretation of a Bayesian commensurate prior borrowing approach and describe intrinsic challenges associated with this method from the perspective of optimization. Motivated by this observation, we propose a new dynamic borrowing approach using the adaptive lasso. The treatment effect estimate derived from this method follows a known asymptotic distribution, which can be used to construct confidence intervals and conduct hypothesis tests. The finite sample performance of the method is evaluated through extensive Monte Carlo simulations under different settings. We observed highly competitive performance of the adaptive lasso compared with the Bayesian approaches. Methods for selecting tuning parameters are also thoroughly discussed based on results from numerical studies and an illustrative example.


Subject(s)
Models, Statistical , Research Design , Bayes Theorem , Monte Carlo Method
18.
Article in English | MEDLINE | ID: mdl-38059698

ABSTRACT

OBJECTIVE: Improving prediction abilities in the therapy process can increase therapeutic success for a variety of reasons, such as more personalised treatment or resource optimisation. The increasingly applied methods of dynamic prediction seem very promising for this purpose. Prediction models are usually based on static approaches from frequentist statistics. However, the application of this statistical approach has been widely criticised in this research area. Bayesian statistics has been proposed in the literature as an alternative, especially for the task of dynamic modelling. In this study, we compare the performance of both statistical approaches in predicting therapy outcome over the course of therapy. METHOD: Based on a sample of 341 patients, a logistic regression analysis was performed using both statistical approaches. Therapy success was conceptualised as reliable pre-post improvement in Brief Symptom Inventory (BSI) scores. As predictors, we used the subscales of the Outcome Questionnaire (OQ-30) and the Helping Alliance Questionnaire (HAQ), measured every fifth session, as well as baseline BSI scores. RESULTS: The influence of the predictors during therapy differs between the frequentist and the Bayesian approach. In contrast, predictive validity is comparable, with a mean area under the curve (AUC) of 0.76 in both model types. CONCLUSION: Bayesian statistics provides an innovative and useful alternative to the frequentist approach for predicting therapy outcome. Its theoretical foundation is particularly well suited to dynamic prediction. Nevertheless, no differences in predictive validity were found in this study. More complex methodology, as well as further research, seems necessary to exploit the potential of Bayesian statistics in this area.
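
The frequentist half of entry 18's comparison is straightforward to sketch: a logistic regression scored by cross-validated AUC. The data below are simulated, and the predictors merely stand in for the OQ-30/HAQ subscales and baseline BSI:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 341
X = rng.normal(size=(n, 3))                    # e.g., OQ-30, HAQ, baseline BSI
logit = -0.2 + X @ np.array([0.8, 0.5, -0.6])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = reliable pre-post improvement

auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.2f}")
```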

19.
Cancer Invest ; 40(1): 1-13, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34709109

ABSTRACT

An exploratory analysis of registry data from 2437 patients with advanced gastric cancer revealed a surprising association between astrological birth signs and overall survival (OS) with p = 0.01. After dichotomizing or changing the reference sign, p-values <0.05 were observed for several birth signs following adjustments for multiple comparisons. Bayesian models with moderately skeptical priors still pointed to these associations. A more plausible causal model, justified by contextual knowledge, revealed that these associations arose from the astrological sign association with seasonality. This case study illustrates how causal considerations can guide analyses through what would otherwise be a hopeless maze of statistical possibilities.


Subject(s)
Mediation Analysis , Bayes Theorem , Humans , Registries
20.
Stat Med ; 41(2): 340-355, 2022 01 30.
Article in English | MEDLINE | ID: mdl-34710951

ABSTRACT

Network meta-analysis (NMA) allows the combination of direct and indirect evidence from a set of randomized clinical trials. Performing NMA using individual patient data (IPD) is considered a "gold standard" approach, as it provides several advantages over NMA based on aggregate data. For example, it allows advanced modeling of covariates or covariate-treatment interactions. An important issue in IPD NMA is the selection of influential parameters among terms that account for inconsistency, covariates, covariate-by-treatment interactions, or nonproportionality of treatment effects for time-to-event data. This issue has not yet been studied in depth in the literature, particularly for time-to-event data. A major difficulty is jointly accounting for between-trial heterogeneity, which can have a major influence on the selection process. Penalized generalized mixed-effects models are one solution, but existing implementations have several shortcomings and a computational cost that precludes their use for complex IPD NMA. In this article, we propose a penalized Poisson regression model for IPD NMA of time-to-event data. It is based only on fixed-effect parameters, which improves its computational cost relative to the use of random effects, and it can easily be implemented using existing penalized regression packages. Computer code is shared for implementation. The methods were applied to simulated data to illustrate the importance of taking between-trial heterogeneity into account during the selection procedure. Finally, the approach was applied to an IPD NMA of overall survival under chemotherapy and radiotherapy in nasopharyngeal carcinoma.


Subject(s)
Network Meta-Analysis , Humans
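
A sketch of the building block named in entry 20: an L1-penalized Poisson regression with follow-up time as exposure, as used for piecewise-exponential survival models. The design matrix and penalty values are illustrative; statsmodels' elastic-net fit with L1_wt=1 gives the lasso:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 500
X = np.column_stack([
    np.ones(n),                        # intercept (left unpenalized below)
    rng.binomial(1, 0.5, n),           # treatment indicator
    rng.normal(size=n),                # e.g., a covariate-by-treatment term
])
time = rng.exponential(scale=2.0, size=n)            # follow-up time
events = rng.poisson(np.exp(-0.5 + 0.4 * X[:, 1]) * time)

model = sm.GLM(events, X, family=sm.families.Poisson(), exposure=time)
alpha = np.array([0.0, 0.05, 0.05])   # per-coefficient penalty weights
fit = model.fit_regularized(alpha=alpha, L1_wt=1.0)  # L1_wt=1 -> lasso
print(np.round(fit.params, 3))        # shrunken log-rate ratios
```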