1.
J Clin Epidemiol ; 175: 111511, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39233134

ABSTRACT

OBJECTIVES: The prior event rate ratio (PERR) is a recently developed approach for controlling confounding by measured and unmeasured covariates in real-world evidence research and observational studies. Despite its rising popularity in studies of the safety and effectiveness of biopharmaceutical products, there is no guidance on how to empirically evaluate its model assumptions. We propose two methods to evaluate two of the assumptions required by the PERR: that the occurrence of outcome events does not alter the likelihood of receiving treatment, and that the earlier event rate does not affect the later event rate. STUDY DESIGN AND SETTING: We propose using self-controlled case series (SCCS) and dynamic random intercept modeling (DRIM), respectively, to evaluate the two aforementioned assumptions. A nonmathematical introduction to the methods and their application to evaluating the assumptions is provided. We illustrate the evaluation with a secondary analysis of deidentified data on pneumococcal vaccination and clinical pneumonia in The Gambia, West Africa. RESULTS: SCCS analysis of data on 12,901 vaccinated Gambian infants did not reject the assumption that clinical pneumonia episodes had no influence on the likelihood of pneumococcal vaccination. DRIM analysis of 14,325 infants with a total of 1719 episodes of clinical pneumonia did not reject the assumption that earlier episodes of clinical pneumonia had no influence on the later incidence of the disease. CONCLUSION: The SCCS and DRIM methods can facilitate appropriate use of the PERR approach to control confounding. PLAIN LANGUAGE SUMMARY: The prior event rate ratio is a promising approach for the analysis of real-world data and observational studies. We propose two statistical methods to evaluate the validity of two assumptions it is based on. They can facilitate appropriate use of the prior event rate ratio.
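
A minimal numerical sketch of the PERR idea, using toy aggregate data and hypothetical person-time counts: the post-treatment rate ratio is divided by the prior (pre-treatment) rate ratio, so that baseline imbalance between groups cancels under the method's assumptions.

```python
# Minimal PERR sketch. Toy aggregate data; "_t" = treated, "_c" = comparator.
def rate_ratio(events_t, pt_t, events_c, pt_c):
    """Crude incidence rate ratio, treated vs. comparator."""
    return (events_t / pt_t) / (events_c / pt_c)

# Illustrative numbers only (events and person-years in each period).
prior = dict(events_t=30, pt_t=1000.0, events_c=20, pt_c=1000.0)
post = dict(events_t=45, pt_t=1000.0, events_c=40, pt_c=1000.0)

rr_prior = rate_ratio(**prior)  # captures baseline (confounded) differences
rr_post = rate_ratio(**post)    # post-treatment rate ratio

# PERR divides out the baseline imbalance, valid only under the assumptions
# evaluated in this paper (e.g., prior events do not affect treatment).
perr = rr_post / rr_prior
print(f"prior RR={rr_prior:.2f}, post RR={rr_post:.2f}, PERR={perr:.2f}")
```

In a real analysis the two rate ratios would come from regression models (e.g., Poisson or Cox) with confidence intervals; this toy version only illustrates the arithmetic of the estimator.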

2.
Genome Biol ; 25(1): 247, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39322959

ABSTRACT

BACKGROUND: In microbiome disease association studies, it is a fundamental task to test which microbes differ in their abundance between groups. Yet, consensus on suitable or optimal statistical methods for differential abundance testing is lacking, and it remains unexplored how these methods cope with confounding. Previous differential abundance benchmarks relying on simulated datasets did not quantitatively evaluate the similarity to real data, which undermines their recommendations. RESULTS: Our simulation framework implants calibrated signals into real taxonomic profiles, including signals mimicking confounders. Using several whole-metagenome and 16S rRNA gene amplicon datasets, we validate that our simulated data resemble real data from disease association studies much more closely than those in previous benchmarks. With extensively parametrized simulations, we benchmark the performance of nineteen differential abundance methods and further evaluate the best ones on confounded simulations. Only classic statistical methods (linear models, the Wilcoxon test, the t-test), limma, and fastANCOM properly control false discoveries at relatively high sensitivity. When confounders are additionally considered, these error-control issues are exacerbated, but we find that adjusted differential abundance testing can effectively mitigate them. In a large cardiometabolic disease dataset, we showcase that failure to account for covariates such as medication causes spurious associations in real-world applications. CONCLUSIONS: Tight error control is critical for microbiome association studies. The unsatisfactory performance of many differential abundance methods and the persistent danger of unchecked confounding suggest that these contribute to a lack of reproducibility among such studies. We have open-sourced our simulation and benchmarking software to foster a much-needed consolidation of statistical methodology for microbiome research.


Subjects
Benchmarking, Microbiota, Humans, RNA, Ribosomal, 16S/genetics, Computer Simulation
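
One of the approaches this benchmark finds reliable, confounder-adjusted linear models with false-discovery-rate control, can be sketched as follows. The column names ("disease", "medication"), the pseudo-count, and the log transform are illustrative assumptions, not the benchmarked implementations.

```python
# Sketch: per-taxon linear models adjusted for a measured confounder, with
# Benjamini-Hochberg correction across taxa.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def test_taxa(abund: pd.DataFrame, meta: pd.DataFrame) -> pd.DataFrame:
    pvals = {}
    for taxon in abund.columns:
        df = meta.assign(y=np.log10(abund[taxon] + 1e-5))  # pseudo-count
        # Adjusting for a measured confounder (here: medication status).
        fit = smf.ols("y ~ disease + medication", data=df).fit()
        pvals[taxon] = fit.pvalues["disease"]
    res = pd.DataFrame({"p": pd.Series(pvals)})
    res["q"] = multipletests(res["p"], method="fdr_bh")[1]
    return res.sort_values("q")

# Toy usage: 50 samples, 3 taxa, binary disease label, binary confounder.
rng = np.random.default_rng(0)
meta = pd.DataFrame({"disease": rng.integers(0, 2, 50),
                     "medication": rng.integers(0, 2, 50)})
abund = pd.DataFrame(rng.lognormal(0, 1, (50, 3)), columns=["t1", "t2", "t3"])
print(test_taxa(abund, meta))
```
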
3.
Am J Epidemiol ; 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39323264

ABSTRACT

Negative controls are increasingly used to evaluate the presence of potential unmeasured confounding in observational studies. Beyond the use of negative controls to detect the presence of residual confounding, proximal causal inference (PCI) was recently proposed to debias confounded causal effect estimates by leveraging a pair of treatment and outcome negative control or confounding proxy variables. While formal methods for statistical inference have been developed for PCI, these methods can be challenging to implement, as they involve solving complex integral equations that are typically ill-posed. We develop a regression-based PCI approach that employs two-stage generalized linear models (GLMs), obviating the need to solve difficult integral equations. The proposed approach has merit in that (i) it is applicable to continuous, count, and binary outcomes, making it relevant to a wide range of real-world applications, and (ii) it is easy to implement using off-the-shelf software for GLMs. We establish the statistical properties of regression-based PCI and illustrate its performance in both synthetic and real-world empirical applications.
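
A sketch of the two-stage idea in the linear, continuous-outcome special case, under assumed proxy relationships (not the paper's general GLM estimator): stage one models the outcome-inducing proxy, stage two plugs its fitted values into the outcome regression. Variable names are illustrative: A = treatment, Y = outcome, Z = treatment-inducing proxy, W = outcome-inducing proxy, X = measured covariate, u = unmeasured confounder.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def proximal_2sls(df: pd.DataFrame) -> float:
    # Stage 1: model the outcome-inducing proxy W given A, Z, and X.
    w_hat = smf.ols("W ~ A + Z + X", data=df).fit().fittedvalues
    # Stage 2: regress Y on A, X, and the stage-1 prediction of W.
    return smf.ols("Y ~ A + W_hat + X",
                   data=df.assign(W_hat=w_hat)).fit().params["A"]

rng = np.random.default_rng(0)
n = 20_000
u = rng.normal(size=n)                 # unmeasured confounder
x = rng.normal(size=n)
df = pd.DataFrame({
    "X": x,
    "Z": u + rng.normal(size=n),       # proxy related to treatment
    "W": u + rng.normal(size=n),       # proxy related to outcome
    "A": 0.8 * u + 0.5 * x + rng.normal(size=n),
})
df["Y"] = 1.0 * df["A"] + u + 0.5 * x + rng.normal(size=n)  # true effect 1.0
print(proximal_2sls(df))  # close to 1.0; naive OLS of Y on A, X is biased
```

Note that naive stage-two standard errors are invalid because the first-stage prediction is itself estimated; in practice one would bootstrap or use the paper's formal inference.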

4.
Proc Natl Acad Sci U S A ; 121(38): e2401882121, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39250663

ABSTRACT

Although it is well documented that exposure to fine particulate matter (PM2.5) increases the risk of several adverse health outcomes, less is known about its relationship with economic opportunity. Previous studies have relied on regression modeling, which imposes strict assumptions regarding confounding adjustment, and did not explore geographical heterogeneity. We obtained data for 63,165 US census tracts (86% of all census tracts in the United States) on absolute upward mobility (AUM), defined as the mean income rank in adulthood of children born to families in the 25th percentile of the national income distribution. We applied and compared several state-of-the-art confounding adjustment methods to estimate the overall and county-specific associations between childhood exposure to PM2.5 and AUM, controlling for many census tract-level confounders. We estimate that a 1 µg/m³ higher PM2.5 concentration in 1982 is associated with a statistically significant 1.146% (95% CI: 0.834, 1.458) lower AUM in 2015, on average. We also show evidence that this relationship varies spatially between counties, with a more pronounced negative relationship in the Midwest and the South.


Subjects
Environmental Exposure, Particulate Matter, Particulate Matter/analysis, United States, Humans, Environmental Exposure/adverse effects, Child, Air Pollutants/analysis, Income, Air Pollution/analysis, Air Pollution/adverse effects, Female
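
For orientation only, here is the kind of confounder-adjusted regression baseline this paper improves upon, with toy data and hypothetical column names; the study itself applies and compares several more sophisticated state-of-the-art adjustment methods.

```python
# A deliberately simple baseline, not the paper's method: OLS of absolute
# upward mobility (AUM) on tract-level PM2.5 plus a measured confounder.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500                                    # toy census tracts
tracts = pd.DataFrame({
    "pm25_1982": rng.normal(12, 2, n),
    "pct_poverty": rng.uniform(0, 40, n),  # tract-level confounder
})
tracts["aum_2015"] = (45 - 1.1 * tracts["pm25_1982"]
                      - 0.2 * tracts["pct_poverty"] + rng.normal(0, 3, n))

fit = smf.ols("aum_2015 ~ pm25_1982 + pct_poverty",
              data=tracts).fit(cov_type="HC1")
print(fit.params["pm25_1982"], fit.conf_int().loc["pm25_1982"].values)
```
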
5.
J Hum Lact ; : 8903344241279386, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39268893
6.
Neurol Sci ; 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39307881

ABSTRACT

BACKGROUND: Serum neurofilament light chain (sNfL), a promising biomarker of neuroaxonal damage in multiple sclerosis (MS), requires cautious interpretation because several comorbidities influence its levels. OBJECTIVES: To investigate the impact of renal function on sNfL levels in MS patients. METHODS: This retrospective study stratified patients by MS clinical phenotype; acute inflammatory activity (AIA) status, defined as relapse or gadolinium-enhancing lesions within 90 days of sample collection; renal function, assessed by estimated glomerular filtration rate (eGFR); and age (< 40 years, 40-60 years, > 60 years). A comparative analysis of sNfL levels across these groups was performed. The sNfL-eGFR relationship was examined using linear and non-linear regression models, with the best fit determined by R² and the F estimator. RESULTS: Data from 2933 determinations in 800 patients were analyzed. Patients with renal insufficiency (RI) (eGFR < 60 mL/min/1.73 m²) and mildly decreased renal function (MDRF) (eGFR 60-90 mL/min/1.73 m²) showed significantly higher sNfL levels than those with normal renal function, a pattern also observed in the age groups 40 years and older. No significant differences were found between MDRF patients and those with AIA. Among RI patients, no differences in sNfL levels were observed between relapsing-remitting and progressive MS phenotypes. A regression S-curve model provided the best fit, showing a marked increase in sNfL levels beginning at an eGFR of approximately 75 mL/min/1.73 m². DISCUSSION: Caution is advised when interpreting sNfL levels for monitoring MS in patients with impaired renal function.
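
The "S-curve" referred to here is the standard curve-estimation model ln(y) = b0 + b1/x, which can be fit by ordinary least squares after transformation. The sketch below uses toy sNfL/eGFR values, not the study data.

```python
# S-curve fit relating sNfL to eGFR: ln(sNfL) = b0 + b1 / eGFR.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
egfr = rng.uniform(30, 120, 300)                            # toy eGFR values
snfl = np.exp(1.5 + 60.0 / egfr + rng.normal(0, 0.2, 300))  # toy sNfL (pg/mL)

X = sm.add_constant(1.0 / egfr)       # regressor is the reciprocal of eGFR
fit = sm.OLS(np.log(snfl), X).fit()
b0, b1 = fit.params
print(f"ln(sNfL) = {b0:.2f} + {b1:.1f}/eGFR, R^2 = {fit.rsquared:.2f}")
```

In the study's setting, candidate linear and non-linear forms would be compared on R² and the F estimator before selecting this one.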

8.
Stat Med ; 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39237100

ABSTRACT

From early in the coronavirus disease 2019 (COVID-19) pandemic, there was interest in using machine learning methods to predict COVID-19 infection status based on vocal audio signals, for example, cough recordings. However, early studies had limitations in terms of data collection and in how the performance of the proposed predictive models was assessed. This article describes how these limitations have been overcome in a study carried out by the Turing-RSS Health Data Laboratory and the UK Health Security Agency. As part of the study, the UK Health Security Agency collected a dataset of acoustic recordings, SARS-CoV-2 infection status, and extensive study participant metadata. This allowed us to rigorously assess state-of-the-art machine learning techniques to predict SARS-CoV-2 infection status based on vocal audio signals. The lessons learned from this project should inform future studies on statistical evaluation methods to assess the performance of machine learning techniques for public health tasks.

9.
mSystems ; : e0130323, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39240096

ABSTRACT

A key challenge in the analysis of microbiome data is the integration of multi-omics datasets and the discovery of interactions between microbial taxa, their expressed genes, and the metabolites they consume and/or produce. In an effort to improve the state of the art in inferring biologically meaningful multi-omics interactions, we sought to address some of the most fundamental issues in causal inference from longitudinal multi-omics microbiome data sets. We developed METALICA, a suite of tools and techniques that can infer interactions between microbiome entities. METALICA introduces novel unrolling and de-confounding techniques used to uncover multi-omics entities that are believed to act as confounders for some of the relationships that may be inferred using standard causal inference tools. The results lend support to predictions about biological models and processes by which microbial taxa interact with each other in a microbiome. The unrolling process helps identify putative intermediaries (genes and/or metabolites) that explain the interactions between microbes; the de-confounding process identifies putative common causes that may lead to spurious inferred relationships. METALICA was applied to networks inferred by existing causal discovery and network inference algorithms from a multi-omics data set resulting from a longitudinal study of inflammatory bowel disease (IBD) microbiomes. The most significant unrollings and de-confoundings were manually validated using the existing literature and databases. IMPORTANCE: We have developed a suite of tools and techniques capable of inferring interactions between microbiome entities. METALICA introduces novel techniques called unrolling and de-confounding that are employed to uncover multi-omics entities considered to be confounders for some of the relationships that may be inferred using standard causal inference tools. To evaluate our method, we conducted tests on the IBD dataset from the iHMP longitudinal study, which we pre-processed in accordance with our previous work. From this dataset, we generated various subsets encompassing different combinations of metagenomics, metabolomics, and metatranscriptomics datasets. Using these multi-omics datasets, we demonstrate how the unrolling process aids in the identification of putative intermediaries (genes and/or metabolites) that explain the interactions between microbes. Additionally, the de-confounding process identifies potential common causes that may give rise to spurious inferred relationships. The most significant unrollings and de-confoundings were manually validated using the existing literature and databases.
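
The de-confounding step can be illustrated, in a drastically simplified form that is not METALICA's actual algorithm, with a partial-correlation check: an inferred taxon-taxon edge is suspect if conditioning on a candidate common cause, such as a shared metabolite, makes the association vanish.

```python
# Illustrative de-confounding check: for an inferred edge A -> B, ask whether
# a candidate common cause C renders A and B nearly independent.
import numpy as np

def partial_corr(a, b, c):
    """Correlation of a and b after regressing out c (1-D arrays)."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(2)
c = rng.normal(size=1000)            # e.g., a metabolite driving two taxa
a = 0.8 * c + rng.normal(size=1000)
b = 0.8 * c + rng.normal(size=1000)
print(np.corrcoef(a, b)[0, 1])       # marginal: spuriously strong
print(partial_corr(a, b, c))         # conditional: near zero -> confounder
```
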

10.
Trials ; 25(1): 593, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39243103

ABSTRACT

BACKGROUND: Cluster randomized trials (CRTs) are randomized trials in which randomization takes place at an administrative level (e.g., hospitals, clinics, or schools) rather than at the individual level. When the number of available clusters is small, researchers may not be able to rely on simple randomization to achieve balance on cluster-level covariates across treatment conditions. If these cluster-level covariates are predictive of the outcome, covariate imbalance may distort treatment effect estimates, threaten internal validity, lead to a loss of power, and increase the variability of treatment effects. Covariate-constrained randomization (CR) is a randomization strategy designed to reduce the risk of imbalance in cluster-level covariates when performing a CRT. Existing methods for CR have been developed and evaluated for two- and multi-arm CRTs, but not for factorial CRTs. METHODS: Motivated by the BEGIN study, a CRT for weight loss among patients with pre-diabetes, we develop methods for performing CR in 2 × 2 factorial cluster randomized trials with a continuous outcome and continuous cluster-level covariates. We apply our methods to the BEGIN study and use simulation to assess the performance of CR versus simple randomization for estimating treatment effects, varying the number of clusters, the degree to which clusters are associated with the outcome, the distribution of cluster-level covariates, the size of the constrained randomization space, and the analysis strategy. RESULTS: Compared to simple randomization of clusters, CR in the factorial setting is effective at achieving balance on cluster-level covariates between treatment conditions and provides more precise inferences. When cluster-level covariates are included in the analysis model, CR also results in greater power to detect treatment effects, but power is low compared to unadjusted analyses when the number of clusters is small. CONCLUSIONS: CR should be used instead of simple randomization when performing factorial CRTs to avoid highly imbalanced designs and to obtain more precise inferences. Except when there are a small number of clusters, cluster-level covariates should be included in the analysis model to increase power and maintain coverage and type I error rates at their nominal levels.


Subjects
Randomized Controlled Trials as Topic, Humans, Cluster Analysis, Research Design, Computer Simulation, Treatment Outcome, Diabetes Mellitus, Type 2/prevention & control, Diabetes Mellitus, Type 2/diagnosis, Weight Loss, Data Interpretation, Statistical
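
A minimal sketch of covariate-constrained randomization for the 2 × 2 factorial setting: sample candidate allocations, score cluster-level balance on both factorial margins, keep a best-balanced subset, and randomize within it. The balance score, the 10% cutoff, and the cluster counts are illustrative choices, not the BEGIN study's.

```python
import numpy as np

rng = np.random.default_rng(3)
n_clusters = 12
x = rng.normal(size=(n_clusters, 2))   # two cluster-level covariates

def balance_score(alloc, x):
    # Sum of squared covariate mean differences across both factorial margins.
    f1, f2 = alloc // 2, alloc % 2     # arms 0..3 -> two factor levels
    score = 0.0
    for f in (f1, f2):
        score += np.sum((x[f == 1].mean(axis=0) - x[f == 0].mean(axis=0)) ** 2)
    return score

candidates = []
for _ in range(5000):
    alloc = rng.permutation(np.repeat([0, 1, 2, 3], n_clusters // 4))
    candidates.append((balance_score(alloc, x), alloc))

candidates.sort(key=lambda t: t[0])
constrained = candidates[: len(candidates) // 10]        # best 10% of draws
chosen = constrained[rng.integers(len(constrained))][1]  # final allocation
print(chosen)
```
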
11.
BMC Med Res Methodol ; 24(1): 195, 2024 Sep 07.
Article in English | MEDLINE | ID: mdl-39244581

ABSTRACT

The inability to correctly account for unmeasured confounding can lead to biased parameter estimates, invalid uncertainty assessments, and erroneous conclusions. Sensitivity analysis is an approach to investigating the impact of unmeasured confounding in observational studies. However, the adoption of this approach has been slow, given the lack of accessible software. An extensive review of available R packages that account for unmeasured confounding lists deterministic sensitivity analysis methods, but no R packages were listed for probabilistic sensitivity analysis. The R package unmconf is the first available package to implement probabilistic sensitivity analysis, via a Bayesian unmeasured confounding model. The package allows for normal, binary, Poisson, or gamma responses, accounting for one or two unmeasured confounders from the normal or binomial distribution. The goal of unmconf is to provide a user-friendly package that performs Bayesian modeling in the presence of unmeasured confounders, with simple commands on the front end and more intensive computation on the back end. We investigate the applicability of this package through novel simulation studies. The results indicate that credible intervals will have near-nominal coverage probability and smaller bias when the unmeasured confounder(s) are modeled, for varying levels of internal/external validation data across various combinations of response-unmeasured confounder distributional families.


Subjects
Bayes Theorem, Confounding Factors, Epidemiologic, Software, Humans, Computer Simulation, Models, Statistical, Algorithms, Bias, Regression Analysis
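
For contrast with unmconf's fully Bayesian model, here is a generic Monte Carlo probabilistic bias analysis for one binary unmeasured confounder, using the classical bias factor for a risk ratio. The priors and the observed estimate are illustrative assumptions, and this is not the unmconf API.

```python
# Monte Carlo probabilistic bias analysis: a simpler, non-Bayesian cousin of
# what the unmconf R package implements.
import numpy as np

rng = np.random.default_rng(4)
n_draws = 100_000
rr_obs = 1.50                                  # observed exposure-outcome RR

# Priors over bias parameters: prevalence of the unmeasured confounder U in
# exposed/unexposed groups, and the U-outcome risk ratio.
p1 = rng.beta(6, 4, n_draws)                   # P(U=1 | exposed)
p0 = rng.beta(3, 7, n_draws)                   # P(U=1 | unexposed)
rr_ud = rng.lognormal(np.log(2.0), 0.2, n_draws)

bias = (rr_ud * p1 + (1 - p1)) / (rr_ud * p0 + (1 - p0))
rr_adj = rr_obs / bias
print(np.percentile(rr_adj, [2.5, 50, 97.5]))  # bias-adjusted interval
```
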
12.
Article in English | MEDLINE | ID: mdl-39243214
13.
Am J Epidemiol ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103282

ABSTRACT

Recently, a bespoke instrumental variable method was proposed which, under certain assumptions, can eliminate bias due to unmeasured confounding when estimating the causal exposure effect among the exposed. This method uses data from both the study population of interest and a reference population in which the exposure is completely absent. In this paper, we extend the bespoke instrumental variable method to allow for a non-ideal reference population that may include exposed subjects. Such an extension is particularly important in randomized trials with nonadherence, where even subjects in the control arm may have access to the treatment under investigation. We further scrutinize the assumptions underlying the bespoke instrumental variable method and caution the reader that the method may not be robust to violations of these assumptions.

15.
Article in English | MEDLINE | ID: mdl-39122629

ABSTRACT

Oncologists are faced with choosing the best treatment for each patient, based on the available evidence from randomized controlled trials (RCTs) and observational studies. RCTs provide estimates of the average effects of treatments on groups of patients, but they may not apply in many real-world scenarios where, for example, patients have different characteristics from the RCT participants or different treatment variants are considered. Causal inference defines what a treatment effect is and how it may be estimated with RCTs, or outside of RCTs with observational, "real-world" data. In this review, we introduce the field of causal inference, explain what a treatment effect is, and discuss the important challenges of treatment effect estimation with observational data. We then provide a framework for conducting causal inference studies and describe when causal inference from observational data may be particularly valuable in oncology. Recognizing the strengths and limitations of both RCTs and observational causal inference provides a way toward more informed and individualized treatment decision-making in oncology.

16.
Front Med (Lausanne) ; 11: 1419147, 2024.
Article in English | MEDLINE | ID: mdl-39156695

ABSTRACT

Purpose: To investigate the robustness and variability of a novel kinetic visual field (VF) screening method termed rapid campimetry (RC). Methods: RC enables kinetic VF screening on any computer (10°/4.7 s at a 40-cm viewing distance) with high contrast in a dark room (1 cd/cm²). In experiment 1, 30 participants [20 healthy controls (HC), 5 glaucoma patients (GLA), and 5 cataract patients (CAT)] were included to test intra-session variability (fatigue effect) and the effects on RC of room illumination (140 cd/m²), ±3 D refractive errors, and media opacity. In experiment 2, inter-session variability (sessions 1-3 weeks apart) was assessed in 10 HC and 10 GLA. Since RC detects absolute scotomas, the outcome measure was the size of physiological (blind spot) and pathological (glaucoma) scotomas in degrees. A repeated-measures ANOVA was employed in experiment 1 and intraclass correlation coefficients (ICCs) in experiment 2. Results: Neither the size of the blind spot nor that of the VF defects differed significantly across testing conditions. For intra-session variability, the average bias (± limits of agreement, LOA) of blind-spot size was -0.6 ± 2.5°, compared with 0.3 ± 1.5° for VF defects, with ICCs of 0.86 and 0.93, respectively. For inter-session repeatability, the average bias (± LOA) of blind-spot size was 0.2 ± 3.85°, compared with 1.6 ± 3.1° for VF defects, with ICCs of 0.87 and 0.91, respectively. Conclusion: RC was robust to suboptimal VF testing conditions and showed good-to-excellent reliability between testing visits, holding high potential for teleophthalmology.
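
The repeatability metrics used here, Bland-Altman bias with limits of agreement, can be sketched as follows on toy two-visit scotoma sizes; the ICCs would additionally require a dedicated routine.

```python
# Bland-Altman bias and limits of agreement (LOA) for scotoma size measured
# at two visits. Toy data, not the study measurements.
import numpy as np

rng = np.random.default_rng(5)
visit1 = rng.normal(6.0, 2.0, 40)           # scotoma size (deg), session 1
visit2 = visit1 + rng.normal(0.2, 1.5, 40)  # session 2 with noise and drift

diff = visit2 - visit1
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f} deg, LOA = +/-{loa:.2f} deg")
```
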

17.
Am J Epidemiol ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39160637

ABSTRACT

The test-negative design (TND) is a popular method for evaluating vaccine effectiveness (VE). A "classical" TND study includes symptomatic individuals tested for the disease targeted by the vaccine to estimate VE against symptomatic infection. However, recent applications of the TND have attempted to estimate VE against infection by including all tested individuals, regardless of their symptoms. In this article, we use directed acyclic graphs and simulations to investigate potential biases in TND studies of COVID-19 VE arising from the use of this "alternative" approach, particularly when applied during periods of widespread testing. We show that the inclusion of asymptomatic individuals can potentially lead to collider stratification bias, uncontrolled confounding by health and healthcare-seeking behaviors (HSBs), and differential outcome misclassification. While our focus is on the COVID-19 setting, the issues discussed here may also be relevant in the context of other infectious diseases. This may be particularly true in scenarios with a high baseline prevalence of infection, a strong correlation between HSBs and vaccination, or different testing practices for vaccinated and unvaccinated individuals, or in settings where the vaccine under study attenuates symptoms of infection and diagnostic accuracy is modified by the presence of symptoms.
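
A toy simulation (all parameters illustrative) of one mechanism described here: when testing of asymptomatic individuals depends on health-seeking behavior, which also drives vaccination, the "alternative" all-comers TND estimate drifts away from the true VE, while the classical symptomatic-only design stays close to it.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
h = rng.random(n) < 0.5                        # health-seeking behavior
v = rng.random(n) < np.where(h, 0.8, 0.4)      # vaccination depends on H
inf = rng.random(n) < np.where(v, 0.02, 0.05)  # true VE = 60%
# Symptoms: common if infected, occasionally from other causes.
sym = np.where(inf, rng.random(n) < 0.6, rng.random(n) < 0.05)
# Testing: symptomatic people usually test; asymptomatic mostly if high-H.
tested = rng.random(n) < np.where(sym, 0.9, np.where(h, 0.3, 0.02))

def ve(mask):
    a = (v & inf & mask).sum();  b = (v & ~inf & mask).sum()
    c = (~v & inf & mask).sum(); d = (~v & ~inf & mask).sum()
    return 1 - (a * d) / (b * c)               # VE = 1 - odds ratio

print(f"classical TND (symptomatic only): VE = {ve(tested & sym):.2f}")
print(f"alternative TND (all tested):     VE = {ve(tested):.2f}")
```

In this toy setup the symptomatic-only estimate stays near 0.60, while the all-comers estimate is inflated because vaccinated, high-HSB non-cases are over-represented among the asymptomatic tested.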

18.
J Clin Epidemiol ; 175: 111507, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39197688

ABSTRACT

OBJECTIVES: Quantitative bias analysis (QBA) methods evaluate the impact of biases arising from systematic errors on observational study results. This systematic review aimed to summarize the range and characteristics of QBA methods for summary-level data published in the peer-reviewed literature. STUDY DESIGN AND SETTING: We searched MEDLINE, Embase, Scopus, and Web of Science for English-language articles describing QBA methods. For each QBA method, we recorded key characteristics, including applicable study designs, bias(es) addressed, bias parameters, and publicly available software. The study protocol was preregistered on the Open Science Framework (https://osf.io/ue6vm/). RESULTS: Our search identified 10,249 records, of which 53 were articles describing 57 QBA methods for summary-level data. Of the 57 QBA methods, 53 (93%) were explicitly designed for observational studies and 4 (7%) for meta-analyses. There were 29 (51%) QBA methods that addressed unmeasured confounding, 19 (33%) that addressed misclassification bias, 6 (11%) that addressed selection bias, and 3 (5%) that addressed multiple biases. Thirty-eight (67%) QBA methods were designed to generate bias-adjusted effect estimates, and 18 (32%) were designed to describe how bias could explain away observed findings. Twenty-two (39%) articles provided code or online tools to implement the QBA methods. CONCLUSION: In this systematic review, we identified a total of 57 QBA methods for summary-level epidemiologic data published in the peer-reviewed literature. Future investigators can use this systematic review to identify different QBA methods for summary-level epidemiologic data. PLAIN LANGUAGE SUMMARY: Quantitative bias analysis (QBA) methods can be used to evaluate the impact of biases on observational study results. However, little is known about the full range and characteristics of available methods in the peer-reviewed literature that can be used to conduct QBA using information reported in manuscripts and other publicly available sources, without requiring the raw data from a study. In this systematic review, we identified 57 QBA methods for summary-level data from observational studies. Overall, there were 29 methods that addressed unmeasured confounding, 19 that addressed misclassification bias, six that addressed selection bias, and three that addressed multiple biases. This systematic review may help future investigators identify different QBA methods for summary-level data.
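
As a concrete example of the kind of summary-level QBA this review catalogs, the E-value (VanderWeele and Ding) can be computed directly from a reported risk ratio: it is the minimum strength of association an unmeasured confounder would need with both exposure and outcome to fully explain the estimate. The worked value below is illustrative.

```python
import math

def e_value(rr: float) -> float:
    """E-value for a risk ratio (point estimate)."""
    rr = max(rr, 1 / rr)  # orient so that RR > 1
    return rr + math.sqrt(rr * (rr - 1))

# A confounder associated with both exposure and outcome at RR ~2.37 could
# explain away an observed RR of 1.50.
print(e_value(1.50))  # ~2.37
```
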

19.
J Clin Epidemiol ; 174: 111504, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39159770

ABSTRACT

OBJECTIVES: To quantify the ability of two new comorbidity indices to adjust for confounding, by benchmarking a target trial emulation against the randomized controlled trial (RCT) result. STUDY DESIGN AND SETTING: Observational study including 18,316 men from Prostate Cancer data Base Sweden 5.0, diagnosed with prostate cancer between 2008 and 2019 and treated with primary radical prostatectomy (RP, n = 14,379) or radiotherapy (RT, n = 3,937). The adjusted risk of death from any cause after adjustment for comorbidity with two new comorbidity indices, the multidimensional diagnosis-based comorbidity index and the drug comorbidity index, was compared with adjustment for the Charlson comorbidity index (CCI). RESULTS: The risk of death was higher after RT than after RP (hazard ratio [HR] = 1.94; 95% confidence interval [CI]: 1.70-2.21). The difference decreased when adjusting for age, cancer characteristics, and CCI (HR = 1.32, 95% CI: 1.06-1.66). Adjustment for the two new comorbidity indices further attenuated the difference (HR 1.14, 95% CI 0.91-1.44). Emulation of a hypothetical pragmatic trial that also included older men with any type of baseline comorbidity largely confirmed these results (HR 1.10; 95% CI 0.95-1.26). CONCLUSION: Adjustment for comorbidity using the two new indices yielded a risk of death from any cause in line with the results of an RCT. Similar results were seen in a broader study population more representative of clinical practice.
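
A minimal sketch of the kind of comorbidity-adjusted Cox regression underlying these hazard ratios, assuming the lifelines library and toy data; the column names, including the stand-ins for the two new indices, are placeholders rather than the study's variables.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "rt": rng.integers(0, 2, n),    # 1 = radiotherapy, 0 = prostatectomy
    "age": rng.normal(68, 6, n),
    "cci": rng.poisson(1.0, n),     # Charlson comorbidity index
    "mdci": rng.normal(0, 1, n),    # stand-in: diagnosis-based index
    "dci": rng.normal(0, 1, n),     # stand-in: drug comorbidity index
})
hazard = 0.01 * np.exp(0.1 * df.rt + 0.05 * (df.age - 68) + 0.2 * df.cci)
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df["time"] < 10).astype(int)   # administrative censoring
df["time"] = df["time"].clip(upper=10)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```
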

20.
Stat Methods Med Res ; : 9622802241262527, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39053570

ABSTRACT

Observational studies are frequently used in clinical research to estimate the effects of treatments or exposures on outcomes. To reduce the effects of confounding when estimating treatment effects, covariate balancing methods are frequently implemented. Using extensive Monte Carlo simulation, this study evaluated several covariate balancing methods and two propensity score estimation methods for estimating the average treatment effect on the treated (ATT) using a hazard ratio from a Cox proportional hazards model. With respect to minimizing bias and maximizing accuracy (as measured by the mean square error) of the treatment effect, ATT weighting, fine stratification, and optimal full matching with a conventional logistic regression model for the propensity score performed best across all simulated conditions. Other methods performed well in specific circumstances, such as pair matching when sample sizes were large (n = 5000) and the proportion treated was < 0.25. Statistical power was generally higher for weighting methods than for matching methods, and type I error rates were at or below the nominal level for balancing methods with unbiased treatment effect estimates. Effective sample size also decreased as the number of strata increased; therefore, for stratification-based weighting methods, it may be important to consider fewer strata. Generally, we recommend methods that performed well in our simulations, although the identification of well-performing methods is necessarily limited by the specific features of our simulation. The methods are illustrated using a real-world example comparing beta-blockers and angiotensin-converting enzyme inhibitors among hypertensive patients at risk for incident stroke.
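
A sketch of the best-performing combination reported here, ATT weighting with a logistic-regression propensity score followed by a weighted Cox model, assuming the lifelines and scikit-learn libraries and toy data; in practice one would also check covariate balance before fitting the outcome model.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 4000
x = rng.normal(size=(n, 3))                   # measured confounders
p_treat = 1 / (1 + np.exp(-(x @ [0.5, -0.3, 0.4])))
treat = (rng.random(n) < p_treat).astype(int)
time = rng.exponential(np.exp(-(0.3 * treat + x @ [0.2, 0.2, -0.1])))
df = pd.DataFrame(x, columns=["x1", "x2", "x3"])
df["treat"] = treat
df["time"] = time
df["event"] = 1                               # no censoring in this toy set

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
# ATT weights: treated get 1, controls get the odds of treatment.
df["w"] = np.where(df["treat"] == 1, 1.0, ps / (1 - ps))

cph = CoxPHFitter().fit(df, "time", "event", weights_col="w", robust=True,
                        formula="treat")
print(cph.summary.loc["treat", "exp(coef)"])  # ATT hazard ratio estimate
```

The robust (sandwich) variance option is used because weighting induces within-subject correlation that the naive Cox variance ignores.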
