Results 1 - 20 of 1,463
1.
Proc Natl Acad Sci U S A ; 121(38): e2401882121, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39250663

ABSTRACT

Although it is well documented that exposure to fine particulate matter (PM2.5) increases the risk of several adverse health outcomes, less is known about its relationship with economic opportunity. Previous studies have relied on regression modeling, which imposed strict assumptions regarding confounding adjustment and did not explore geographical heterogeneity. We obtained data for 63,165 US census tracts (86% of all census tracts in the United States) on absolute upward mobility (AUM), defined as the mean income rank in adulthood of children born to families in the 25th percentile of the national income distribution. We applied and compared several state-of-the-art confounding adjustment methods to estimate the overall and county-specific associations between childhood exposure to PM2.5 and AUM, controlling for many census tract-level confounders. We estimate that census tracts with a 1 µg/m3 higher PM2.5 concentration in 1982 are associated with a statistically significant 1.146% (95% CI: 0.834, 1.458) lower AUM in 2015, on average. We also show evidence that this relationship varies spatially between counties, exhibiting a more pronounced negative relationship in the Midwest and the South.


Subject(s)
Environmental Exposure; Particulate Matter; Particulate Matter/analysis; United States; Humans; Environmental Exposure/adverse effects; Child; Air Pollutants/analysis; Income; Air Pollution/analysis; Air Pollution/adverse effects; Female
2.
J Clin Epidemiol ; 175: 111511, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39233134

ABSTRACT

OBJECTIVES: The prior event rate ratio (PERR) is a recently developed approach for controlling confounding by measured and unmeasured covariates in real-world evidence research and observational studies. Despite its rising popularity in studies of the safety and effectiveness of biopharmaceutical products, there is no guidance on how to empirically evaluate its model assumptions. We propose two methods to evaluate two of the assumptions required by the PERR: that occurrence of outcome events does not alter the likelihood of receiving treatment, and that the earlier event rate does not affect the later event rate. STUDY DESIGN AND SETTING: We propose using self-controlled case series (SCCS) and dynamic random intercept modeling (DRIM), respectively, to evaluate these two assumptions. A nonmathematical introduction to the methods and their application to evaluating the assumptions is provided. We illustrate the evaluation with a secondary analysis of deidentified data on pneumococcal vaccination and clinical pneumonia in The Gambia, West Africa. RESULTS: SCCS analysis of data on 12,901 vaccinated Gambian infants did not reject the assumption that clinical pneumonia episodes had no influence on the likelihood of pneumococcal vaccination. DRIM analysis of 14,325 infants with a total of 1719 episodes of clinical pneumonia did not reject the assumption that earlier episodes of clinical pneumonia had no influence on later incidence of the disease. CONCLUSION: The SCCS and DRIM methods can facilitate appropriate use of the PERR approach to control confounding. PLAIN LANGUAGE SUMMARY: The prior event rate ratio is a promising approach for the analysis of real-world data and observational studies. We propose two statistical methods to evaluate the validity of two assumptions it is based on. They can facilitate appropriate use of the prior event rate ratio.
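
To make the PERR arithmetic concrete, here is a minimal sketch of the core calculation; it is not the authors' implementation, and all event counts and person-years are hypothetical.

```python
# Minimal sketch of the prior event rate ratio (PERR) idea described above.
# Hypothetical data only: counts and person-years are invented.

def rate(events: int, person_years: float) -> float:
    """Crude incidence rate."""
    return events / person_years

# (events, person-years) for treated and comparison cohorts, in the periods
# before ("prior") and after ("post") treatment initiation.
treated = {"prior": (30, 1000.0), "post": (45, 1000.0)}
control = {"prior": (40, 1000.0), "post": (80, 1000.0)}

# Conventional post-period rate ratio (confounded if the groups differ).
rr_post = rate(*treated["post"]) / rate(*control["post"])

# Prior-period rate ratio, when neither group was treated; under the PERR
# assumptions this captures the combined effect of the confounders.
rr_prior = rate(*treated["prior"]) / rate(*control["prior"])

# PERR estimate: divide out the prior-period imbalance.
perr = rr_post / rr_prior
print(f"post RR={rr_post:.2f}, prior RR={rr_prior:.2f}, PERR={perr:.2f}")
```

Dividing the post-period rate ratio by the prior-period rate ratio removes confounding that is stable over time, which is exactly what the two assumptions probed by the SCCS and DRIM checks above are meant to guarantee.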

3.
Neurol Sci ; 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39307881

ABSTRACT

BACKGROUND: Serum neurofilament light chain (sNfL), a promising biomarker for neuroaxonal damage in Multiple Sclerosis (MS), requires cautious interpretation because several comorbidities influence its levels. OBJECTIVES: To investigate the impact of renal function on sNfL levels in MS patients. METHODS: This retrospective study stratified patients by MS clinical phenotype; acute inflammatory activity (AIA) status, defined as relapse or gadolinium-enhancing lesions within 90 days of sample collection; renal function, assessed by estimated glomerular filtration rate (eGFR); and age (< 40 years, 40-60 years, > 60 years). Comparative analysis of sNfL levels across these groups was performed. The sNfL-eGFR relationship was examined using linear and non-linear regression models, with the best fit determined by R2 and the F estimator. RESULTS: Data from 2933 determinations across 800 patients were analyzed. Patients with renal insufficiency (RI) (eGFR < 60 mL/min/1.73 m2) and mild renal impairment (MDRF) (eGFR 60-90 mL/min/1.73 m2) showed significantly higher sNfL levels than those with normal renal function, a pattern also observed in age groups 40 years and older. No significant differences were found between MDRF patients and those with AIA. Among RI patients, no differences in sNfL levels were observed between relapsing-remitting and progressive MS phenotypes. An S-curve regression model was identified as the best fit, illustrating a marked increase in sNfL levels beginning at an eGFR of approximately 75 mL/min/1.73 m2. DISCUSSION: Caution is advised when interpreting sNfL levels for monitoring MS in patients with impaired renal function.
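
The model-selection step described above (comparing linear and non-linear sNfL-eGFR fits by R2) can be sketched generically. The data below are synthetic, and the four-parameter logistic S-curve is only one common parameterization, not necessarily the authors' exact model.

```python
# Hedged sketch: compare a linear fit and an S-curve fit by R^2.
# Synthetic data; not the study's data or its exact model.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
egfr = rng.uniform(30, 120, 300)
snfl = 10 + 40 / (1 + np.exp((egfr - 75) / 8)) + rng.normal(0, 3, 300)

def s_curve(x, base, height, midpoint, scale):
    """Four-parameter logistic: rises as x falls below the midpoint."""
    return base + height / (1 + np.exp((x - midpoint) / scale))

def r_squared(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

linear_pred = np.polyval(np.polyfit(egfr, snfl, 1), egfr)
popt, _ = curve_fit(s_curve, egfr, snfl, p0=[10, 40, 75, 8])
print(f"linear R^2 = {r_squared(snfl, linear_pred):.3f}, "
      f"S-curve R^2 = {r_squared(snfl, s_curve(egfr, *popt)):.3f}")
```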

5.
mSystems ; : e0130323, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39240096

ABSTRACT

A key challenge in the analysis of microbiome data is the integration of multi-omic datasets and the discovery of interactions between microbial taxa, their expressed genes, and the metabolites they consume and/or produce. In an effort to improve the state of the art in inferring biologically meaningful multi-omic interactions, we sought to address some of the most fundamental issues in causal inference from longitudinal multi-omics microbiome datasets. We developed METALICA, a suite of tools and techniques that can infer interactions between microbiome entities. METALICA introduces novel unrolling and de-confounding techniques used to uncover multi-omic entities that are believed to act as confounders for some of the relationships that may be inferred using standard causal inferencing tools. The results lend support to predictions about biological models and processes by which microbial taxa interact with each other in a microbiome. The unrolling process helps identify putative intermediaries (genes and/or metabolites) that explain the interactions between microbes; the de-confounding process identifies putative common causes that may lead to spurious relationships being inferred. METALICA was applied to networks inferred by existing causal discovery and network inference algorithms from a multi-omics dataset resulting from a longitudinal study of inflammatory bowel disease (IBD) microbiomes. The most significant unrollings and de-confoundings were manually validated using the existing literature and databases. IMPORTANCE: We have developed a suite of tools and techniques capable of inferring interactions between microbiome entities. METALICA introduces novel techniques, called unrolling and de-confounding, that are employed to uncover multi-omic entities considered to be confounders for some of the relationships that may be inferred using standard causal inferencing tools. To evaluate our method, we conducted tests on the IBD dataset from the iHMP longitudinal study, which we pre-processed in accordance with our previous work. From this dataset, we generated various subsets encompassing different combinations of metagenomics, metabolomics, and metatranscriptomics datasets. Using these multi-omics datasets, we demonstrate how the unrolling process aids in the identification of putative intermediaries (genes and/or metabolites) that explain the interactions between microbes. Additionally, the de-confounding process identifies potential common causes that may give rise to spurious relationships being inferred. The most significant unrollings and de-confoundings were manually validated using the existing literature and databases.
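
The unrolling and de-confounding procedures are specific to METALICA, but the core statistical intuition can be illustrated generically: if an inferred taxon-taxon association vanishes after conditioning on a third entity (a gene or metabolite), that entity is a candidate intermediary or common cause. The sketch below uses synthetic data and a simple partial correlation; it is not the METALICA algorithm.

```python
# Generic illustration of the de-confounding intuition (not METALICA itself):
# a metabolite acting as a common cause induces a taxon-taxon correlation
# that disappears once the metabolite is conditioned on. Synthetic data.
import numpy as np

rng = np.random.default_rng(11)
n = 2_000
metabolite = rng.normal(size=n)                  # hypothetical common cause
taxon_a = metabolite + 0.5 * rng.normal(size=n)
taxon_b = metabolite + 0.5 * rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z from both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print(f"corr(A, B) = {np.corrcoef(taxon_a, taxon_b)[0, 1]:.2f}")
print(f"corr(A, B | metabolite) = {partial_corr(taxon_a, taxon_b, metabolite):.2f}")
```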

6.
Trials ; 25(1): 593, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39243103

ABSTRACT

BACKGROUND: Cluster randomized trials (CRTs) are randomized trials where randomization takes place at an administrative level (e.g., hospitals, clinics, or schools) rather than at the individual level. When the number of available clusters is small, researchers may not be able to rely on simple randomization to achieve balance on cluster-level covariates across treatment conditions. If these cluster-level covariates are predictive of the outcome, covariate imbalance may distort treatment effects, threaten internal validity, lead to a loss of power, and increase the variability of treatment effect estimates. Covariate-constrained randomization (CR) is a randomization strategy designed to reduce the risk of imbalance in cluster-level covariates when performing a CRT. Existing methods for CR have been developed and evaluated for two- and multi-arm CRTs but not for factorial CRTs. METHODS: Motivated by the BEGIN study, a CRT for weight loss among patients with pre-diabetes, we develop methods for performing CR in 2 × 2 factorial cluster randomized trials with a continuous outcome and continuous cluster-level covariates. We apply our methods to the BEGIN study and use simulation to assess the performance of CR versus simple randomization for estimating treatment effects, varying the number of clusters, the degree to which clusters are associated with the outcome, the distribution of cluster-level covariates, the size of the constrained randomization space, and the analysis strategy. RESULTS: Compared to simple randomization of clusters, CR in the factorial setting is effective at achieving balance across cluster-level covariates between treatment conditions and provides more precise inferences. When cluster-level covariates are included in the analysis model, CR also results in greater power to detect treatment effects, but power is low compared to unadjusted analyses when the number of clusters is small. CONCLUSIONS: CR should be used instead of simple randomization when performing factorial CRTs to avoid highly imbalanced designs and to obtain more precise inferences. Except when there are a small number of clusters, cluster-level covariates should be included in the analysis model to increase power and maintain coverage and type 1 error rates at their nominal levels.
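
A small sketch can make the CR mechanics concrete for the 2 × 2 factorial case: enumerate candidate allocations, score covariate balance across the two factorial margins, keep the best-balanced subset, and randomize within it. This is an illustrative re-implementation, not the authors' code; the covariate values, the balance metric, and the 10% cutoff are all choices made for the example.

```python
# Hedged sketch of covariate-constrained randomization for a 2 x 2 factorial
# CRT with one continuous cluster-level covariate. Illustrative only.
import itertools
import random
import statistics

random.seed(1)

n_clusters = 8  # two clusters per arm of the 2 x 2 factorial
covariate = [random.gauss(0, 1) for _ in range(n_clusters)]

def balance_score(assignment):
    """Squared covariate mean differences across the two factorial margins
    (factor A: arms 0+1 vs 2+3; factor B: arms 0+2 vs 1+3)."""
    arm_vals = {a: [covariate[i] for i, arm in enumerate(assignment) if arm == a]
                for a in range(4)}
    a_diff = (statistics.mean(arm_vals[0] + arm_vals[1])
              - statistics.mean(arm_vals[2] + arm_vals[3]))
    b_diff = (statistics.mean(arm_vals[0] + arm_vals[2])
              - statistics.mean(arm_vals[1] + arm_vals[3]))
    return a_diff ** 2 + b_diff ** 2

# All equal-sized allocations of clusters to the four arms.
allocations = [a for a in itertools.product(range(4), repeat=n_clusters)
               if all(a.count(arm) == n_clusters // 4 for arm in range(4))]

# Constrained space: keep the 10% best-balanced allocations, then randomize.
allocations.sort(key=balance_score)
constrained = allocations[: len(allocations) // 10]
chosen = random.choice(constrained)
print("chosen allocation:", chosen, "| balance score:", round(balance_score(chosen), 4))
```

Randomizing within the constrained space, rather than simply picking the single best-balanced design, preserves the randomness needed for valid inference.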


Subject(s)
Randomized Controlled Trials as Topic; Humans; Cluster Analysis; Research Design; Computer Simulation; Treatment Outcome; Diabetes Mellitus, Type 2/prevention & control; Diabetes Mellitus, Type 2/diagnosis; Weight Loss; Data Interpretation, Statistical
7.
BMC Med Res Methodol ; 24(1): 195, 2024 Sep 07.
Article in English | MEDLINE | ID: mdl-39244581

ABSTRACT

The inability to correctly account for unmeasured confounding can lead to bias in parameter estimates, invalid uncertainty assessments, and erroneous conclusions. Sensitivity analysis is an approach to investigating the impact of unmeasured confounding in observational studies. However, the adoption of this approach has been slow, given the lack of accessible software. An extensive review of available R packages that account for unmeasured confounding lists deterministic sensitivity analysis methods, but no R packages were listed for probabilistic sensitivity analysis. The R package unmconf provides the first available implementation of probabilistic sensitivity analysis through a Bayesian unmeasured confounding model. The package allows for normal, binary, Poisson, or gamma responses, accounting for one or two unmeasured confounders from the normal or binomial distribution. The goal of unmconf is to provide a user-friendly package that performs Bayesian modeling in the presence of unmeasured confounders, with simple commands on the front end and more intensive computation on the back end. We investigate the applicability of this package through novel simulation studies. The results indicate that credible intervals will have near-nominal coverage probability and smaller bias when modeling the unmeasured confounder(s) for varying levels of internal/external validation data across various combinations of response-unmeasured confounder distributional families.
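
unmconf itself is an R package built around a full Bayesian model; as a language-neutral illustration of what probabilistic (as opposed to deterministic) sensitivity analysis means, here is a Monte Carlo sketch using the classical Bross bias formula for a single unmeasured binary confounder. The observed risk ratio and the priors on the bias parameters are hypothetical, chosen only for the example.

```python
# Probabilistic sensitivity analysis sketch (not the unmconf package):
# sample bias parameters from priors, apply the Bross bias factor, and
# summarize the distribution of bias-adjusted risk ratios.
import random

random.seed(42)

rr_observed = 1.8  # hypothetical observed exposure-outcome risk ratio
draws = []
for _ in range(10_000):
    # Priors on the bias parameters (assumed purely for illustration):
    rr_ud = random.uniform(1.5, 3.0)  # confounder-outcome risk ratio
    p1 = random.uniform(0.4, 0.7)     # P(confounder present | exposed)
    p0 = random.uniform(0.1, 0.4)     # P(confounder present | unexposed)
    # Bross bias factor for an unmeasured binary confounder:
    bias = (p1 * (rr_ud - 1) + 1) / (p0 * (rr_ud - 1) + 1)
    draws.append(rr_observed / bias)

draws.sort()
print(f"bias-adjusted RR: median {draws[5000]:.2f}, "
      f"95% simulation interval {draws[250]:.2f}-{draws[9750]:.2f}")
```

The Bayesian model in unmconf goes further by propagating sampling uncertainty as well as bias-parameter uncertainty, but the sampling-over-priors structure is the same.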


Subject(s)
Bayes Theorem; Confounding Factors, Epidemiologic; Software; Humans; Computer Simulation; Models, Statistical; Algorithms; Bias; Regression Analysis
8.
Stat Med ; 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39237100

ABSTRACT

From early in the coronavirus disease 2019 (COVID-19) pandemic, there was interest in using machine learning methods to predict COVID-19 infection status based on vocal audio signals, for example, cough recordings. However, early studies had limitations in terms of data collection and of how the performance of the proposed predictive models was assessed. This article describes how these limitations have been overcome in a study carried out by the Turing-RSS Health Data Laboratory and the UK Health Security Agency. As part of the study, the UK Health Security Agency collected a dataset of acoustic recordings, SARS-CoV-2 infection status, and extensive study participant metadata. This allowed us to rigorously assess state-of-the-art machine learning techniques to predict SARS-CoV-2 infection status based on vocal audio signals. The lessons learned from this project should inform future studies on statistical evaluation methods for assessing the performance of machine learning techniques for public health tasks.

9.
Article in English | MEDLINE | ID: mdl-39243214
10.
J Hum Lact ; : 8903344241279386, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39268893
11.
J Clin Epidemiol ; 174: 111504, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39159770

ABSTRACT

OBJECTIVES: To quantify the ability of two new comorbidity indices to adjust for confounding, by benchmarking a target trial emulation against the randomized controlled trial (RCT) result. STUDY DESIGN AND SETTING: Observational study including 18,316 men from Prostate Cancer data Base Sweden 5.0, diagnosed with prostate cancer between 2008 and 2019 and treated with primary radical prostatectomy (RP, n = 14,379) or radiotherapy (RT, n = 3,937). The adjusted risk of death from any cause after adjustment for comorbidity with two new comorbidity indices, the multidimensional diagnosis-based comorbidity index and the drug comorbidity index, was compared with adjustment for the Charlson comorbidity index (CCI). RESULTS: Risk of death was higher after RT than RP (hazard ratio [HR] = 1.94; 95% confidence interval [CI]: 1.70-2.21). The difference decreased when adjusting for age, cancer characteristics, and CCI (HR = 1.32, 95% CI: 1.06-1.66). Adjustment for the two new comorbidity indices further attenuated the difference (HR 1.14, 95% CI 0.91-1.44). Emulation of a hypothetical pragmatic trial, in which older men with any type of baseline comorbidity were also included, largely confirmed these results (HR 1.10; 95% CI 0.95-1.26). CONCLUSION: Adjustment for comorbidity using the two new indices provided a risk of death from any cause in line with the results of an RCT. Similar results were seen in a broader study population, more representative of clinical practice.

12.
J Clin Epidemiol ; 175: 111507, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39197688

ABSTRACT

OBJECTIVES: Quantitative bias analysis (QBA) methods evaluate the impact of biases arising from systematic errors on observational study results. This systematic review aimed to summarize the range and characteristics of QBA methods for summary-level data published in the peer-reviewed literature. STUDY DESIGN AND SETTING: We searched MEDLINE, Embase, Scopus, and Web of Science for English-language articles describing QBA methods. For each QBA method, we recorded key characteristics, including applicable study designs, bias(es) addressed, bias parameters, and publicly available software. The study protocol was preregistered on the Open Science Framework (https://osf.io/ue6vm/). RESULTS: Our search identified 10,249 records, of which 53 were articles describing 57 QBA methods for summary-level data. Of the 57 QBA methods, 53 (93%) were explicitly designed for observational studies and 4 (7%) for meta-analyses. There were 29 (51%) QBA methods that addressed unmeasured confounding, 19 (33%) misclassification bias, 6 (11%) selection bias, and 3 (5%) multiple biases. Thirty-eight (67%) QBA methods were designed to generate bias-adjusted effect estimates, and 18 (32%) were designed to describe how bias could explain away observed findings. Twenty-two (39%) articles provided code or online tools to implement the QBA methods. CONCLUSION: In this systematic review, we identified a total of 57 QBA methods for summary-level epidemiologic data published in the peer-reviewed literature. Future investigators can use this systematic review to identify different QBA methods for summary-level epidemiologic data. PLAIN LANGUAGE SUMMARY: Quantitative bias analysis (QBA) methods can be used to evaluate the impact of biases on observational study results. However, little is known about the full range and characteristics of available methods in the peer-reviewed literature that can be used to conduct QBA using information reported in manuscripts and other publicly available sources, without requiring the raw data from a study. In this systematic review, we identified 57 QBA methods for summary-level data from observational studies. Overall, there were 29 methods that addressed unmeasured confounding, 19 that addressed misclassification bias, six that addressed selection bias, and three that addressed multiple biases. This systematic review may help future investigators identify different QBA methods for summary-level data.
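
As a concrete example of the "explain away" family of summary-level QBA methods counted above, the E-value (VanderWeele and Ding, 2017) needs nothing more than the observed risk ratio. A minimal sketch (the example RR is hypothetical):

```python
# E-value sketch: the minimum strength of association (risk-ratio scale)
# that an unmeasured confounder would need with both exposure and outcome
# to fully explain away an observed risk ratio.
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio."""
    if rr < 1:
        rr = 1 / rr  # symmetric handling of protective effects
    return rr + math.sqrt(rr * (rr - 1))

# A confounder associated with exposure and outcome by RR ~3.0 on both
# sides could fully explain an observed RR of 1.8.
print(f"E-value for RR = 1.8: {e_value(1.8):.1f}")
```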

13.
Am J Epidemiol ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103282

ABSTRACT

Recently, a bespoke instrumental variable method was proposed which, under certain assumptions, can eliminate bias due to unmeasured confounding when estimating the causal exposure effect among the exposed. This method uses data from both the study population of interest and a reference population in which the exposure is completely absent. In this paper, we extend the bespoke instrumental variable method to allow for a non-ideal reference population that may include exposed subjects. Such an extension is particularly important in randomized trials with nonadherence, where even subjects in the control arm may have access to the treatment under investigation. We further scrutinize the assumptions underlying the bespoke instrumental variable method and caution the reader about the potential non-robustness of the method to violations of these assumptions.

15.
Article in English | MEDLINE | ID: mdl-39122629

ABSTRACT

Oncologists are faced with choosing the best treatment for each patient, based on the available evidence from randomized controlled trials (RCTs) and observational studies. RCTs provide estimates of the average effects of treatments on groups of patients, but they may not apply in many real-world scenarios where, for example, patients have different characteristics than the RCT participants, or where different treatment variants are considered. Causal inference defines what a treatment effect is and how it may be estimated with RCTs, or outside of RCTs with observational (or "real-world") data. In this review, we introduce the field of causal inference, explain what a treatment effect is, and discuss the important challenges of treatment effect estimation with observational data. We then provide a framework for conducting causal inference studies and describe when causal inference from observational data may be particularly valuable in oncology. Recognizing the strengths and limitations of both RCTs and observational causal inference provides a way toward more informed and individualized treatment decision-making in oncology.

16.
Front Med (Lausanne) ; 11: 1419147, 2024.
Article in English | MEDLINE | ID: mdl-39156695

ABSTRACT

Purpose: To investigate the robustness and variability of a novel kinetic visual field (VF) screening method termed rapid campimetry (RC). Methods: In RC, VF screening is enabled via kinetic testing on any computer (10°/4.7 s at a 40-cm viewing distance) and high contrast in a dark room (1 cd/cm2). In experiment 1, 30 participants [20 healthy participants (HC), 5 glaucoma patients (GLA), and 5 patients with cataract (CAT)] were included to test intra-session variability (fatigue effect) and the following effects on RC: room illumination (140 cd/m2), ±3 D refractive errors, and media opacity. In experiment 2, inter-session variability (1-3 weeks apart) was assessed in 10 HC and 10 GLA. Since RC detects absolute scotomas, the outcome measure was the size of physiological (blindspot) and pathological (glaucoma) scotomas in degrees. A repeated-measures ANOVA was employed in experiment 1 and the intraclass correlation coefficient (ICC) in experiment 2. Results: Neither blindspot size nor VF defect size differed significantly between the testing conditions. For intra-session variability, the average bias (with limits of agreement, LOA) of blindspot size was -0.6 ± 2.5°, compared with 0.3 ± 1.5° for VF defects, with ICCs of 0.86 and 0.93, respectively. For inter-session repeatability, the average bias (with LOA) of blindspot size was 0.2 ± 3.85°, compared with 1.6 ± 3.1° for VF defects, with ICCs of 0.87 and 0.91, respectively. Conclusion: RC was robust to suboptimal VF testing conditions and showed good-to-excellent reliability between VF testing visits, holding high potential for teleophthalmology.

17.
Am J Epidemiol ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39160637

ABSTRACT

The test-negative design (TND) is a popular method for evaluating vaccine effectiveness (VE). A "classical" TND study includes symptomatic individuals tested for the disease targeted by the vaccine to estimate VE against symptomatic infection. However, recent applications of the TND have attempted to estimate VE against infection by including all tested individuals, regardless of their symptoms. In this article, we use directed acyclic graphs and simulations to investigate potential biases in TND studies of COVID-19 VE arising from the use of this "alternative" approach, particularly when applied during periods of widespread testing. We show that the inclusion of asymptomatic individuals can potentially lead to collider stratification bias, uncontrolled confounding by health and healthcare-seeking behaviors (HSBs), and differential outcome misclassification. While our focus is on the COVID-19 setting, the issues discussed here may also be relevant for other infectious diseases. This may be particularly true in scenarios with a high baseline prevalence of infection, a strong correlation between HSBs and vaccination, or different testing practices for vaccinated and unvaccinated individuals, as well as in settings where the vaccine under study attenuates symptoms of infection and diagnostic accuracy depends on the presence of symptoms.
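
A toy simulation can make the HSB mechanism concrete: when health-seeking behavior drives both vaccination and testing of asymptomatic people, conditioning on "being tested" distorts the alternative-TND estimate while the classical, symptomatic-only analysis is unaffected. All structure and parameter values below are invented for illustration; this is not the article's simulation.

```python
# Toy TND simulation (hypothetical parameters). HSB raises both vaccination
# and asymptomatic testing, so the "alternative" design that includes all
# tested individuals over-represents vaccinated people among controls.
import random

random.seed(0)
N = 200_000
true_ve = 0.5  # vaccine halves infection risk

# counts[design][status] = [vaccinated, unvaccinated]
counts = {"classical": {"case": [0, 0], "control": [0, 0]},
          "alternative": {"case": [0, 0], "control": [0, 0]}}

for _ in range(N):
    hsb = random.random() < 0.5                       # health-seeking behavior
    vax = random.random() < (0.8 if hsb else 0.3)     # HSB -> vaccination
    infected = random.random() < 0.05 * (1 - true_ve if vax else 1.0)
    other_illness = random.random() < 0.05            # test-negative condition
    symptomatic = infected or other_illness
    tested = symptomatic or (hsb and random.random() < 0.3)  # HSB -> testing
    if not tested:
        continue
    status = "case" if infected else "control"
    if symptomatic:                                   # classical TND inclusion
        counts["classical"][status][0 if vax else 1] += 1
    counts["alternative"][status][0 if vax else 1] += 1

for design, c in counts.items():
    odds_ratio = (c["case"][0] * c["control"][1]) / (c["case"][1] * c["control"][0])
    print(f"{design}: estimated VE = {1 - odds_ratio:.2f} (true VE = {true_ve})")
```

In this setup the classical analysis recovers roughly the true VE, while the alternative analysis overestimates it because vaccinated HSB individuals are over-represented among asymptomatic test-negative controls.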

18.
J Clin Epidemiol ; 173: 111457, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38977160

ABSTRACT

Randomized trials can take more explanatory or more pragmatic approaches. Pragmatic studies, conducted closer to real-world conditions, assess treatment effectiveness while considering factors like protocol adherence. In these studies, intention-to-treat (ITT) analysis is fundamental, comparing outcomes regardless of the treatment actually received. Explanatory trials, conducted closer to optimal conditions, evaluate treatment efficacy, commonly with a per-protocol (PP) analysis, which includes only outcomes from adherent participants. ITT and PP are strategies used in the conception, design, conduct (protocol execution), analysis, and interpretation of trials, and each serves distinct objectives. Both can be valid (when bias is controlled) and complementary, but each has its own limitations. By excluding nonadherent participants, PP analyses can lose the benefits of randomization, resulting in group differences in factors (influencing adherence and outcomes) that were present at baseline. Additionally, clinical and social factors affecting adherence can also operate during follow-up, that is, after randomization. Therefore, incomplete adherence may introduce postrandomization confounding. Conversely, ITT analysis, by including all participants regardless of adherence, may dilute treatment effects. Moreover, varying adherence levels could limit the applicability of ITT findings in settings with different adherence patterns. Both ITT and PP analyses can be affected by selection bias due to differential losses and nonresponse (i.e., missing data) during follow-up. Combining high-quality, comprehensive data with advanced statistical methods known as g-methods, such as inverse probability weighting, may help address postrandomization confounding in PP analysis as well as selection bias in both ITT and PP analyses; a sketch of the weighting idea follows.
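
The sketch below simulates a trial in which frail participants tend to abandon the active arm, then re-weights adherent participants by the inverse of their estimated adherence probability so that the adherent subset resembles the full randomized sample. It is a minimal illustration under invented parameters (one time point, no censoring), assuming numpy and statsmodels are available; it is not the article's analysis.

```python
# Inverse probability weighting (IPW) sketch for a per-protocol analysis
# with postrandomization confounding. Hypothetical simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 20_000
arm = rng.integers(0, 2, n)                   # randomized arm (0/1)
frailty = rng.normal(size=n)                  # prognostic factor
# Frail participants tend to stop adhering in the active arm:
p_adh = 1 / (1 + np.exp(-(1.0 - 2.0 * frailty * arm)))
adherent = rng.random(n) < p_adh
outcome = 1.0 * arm - 2.0 * frailty + rng.normal(size=n)  # true effect = 1

# Model P(adherence | arm, frailty) and weight adherent subjects by 1/p.
X = sm.add_constant(np.column_stack([arm, frailty, arm * frailty]))
p_hat = sm.Logit(adherent.astype(float), X).fit(disp=0).predict(X)
w = 1.0 / p_hat

m = adherent
naive = outcome[m & (arm == 1)].mean() - outcome[m & (arm == 0)].mean()
ipw = sm.WLS(outcome[m], sm.add_constant(arm[m]), weights=w[m]).fit().params[1]
print(f"naive per-protocol: {naive:.2f} | IPW per-protocol: {ipw:.2f} | truth: 1.00")
```

The naive per-protocol contrast is biased because adherent participants in the active arm are systematically less frail; the weighted analysis approximately restores the randomized comparison.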


Subject(s)
Intention to Treat Analysis; Randomized Controlled Trials as Topic; Humans; Randomized Controlled Trials as Topic/methods; Research Design; Pragmatic Clinical Trials as Topic/methods; Clinical Protocols/standards
19.
Multivariate Behav Res ; 59(5): 995-1018, 2024.
Article in English | MEDLINE | ID: mdl-38963381

ABSTRACT

Psychologists leverage longitudinal designs to examine the causal effects of a focal predictor (i.e., treatment or exposure) over time. But causal inference of naturally observed time-varying treatments is complicated by treatment-dependent confounding in which earlier treatments affect confounders of later treatments. In this tutorial article, we introduce psychologists to an established solution to this problem from the causal inference literature: the parametric g-computation formula. We explain why the g-formula is effective at handling treatment-dependent confounding. We demonstrate that the parametric g-formula is conceptually intuitive, easy to implement, and well-suited for psychological research. We first clarify that the parametric g-formula essentially utilizes a series of statistical models to estimate the joint distribution of all post-treatment variables. These statistical models can be readily specified as standard multiple linear regression functions. We leverage this insight to implement the parametric g-formula using lavaan, a widely adopted R package for structural equation modeling. Moreover, we describe how the parametric g-formula may be used to estimate a marginal structural model whose causal parameters parsimoniously encode time-varying treatment effects. We hope this accessible introduction to the parametric g-formula will equip psychologists with an analytic tool to address their causal inquiries using longitudinal data.
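
The article demonstrates the parametric g-formula via lavaan in R; as a language-neutral companion, here is a hedged from-scratch sketch for two time points using simulated data and ordinary least squares. It is an illustration of the general technique, not the authors' implementation.

```python
# Parametric g-computation sketch with treatment-dependent confounding:
# A1 affects L2, which confounds A2 -> Y. Fit the models the g-formula
# needs, then predict outcomes under static treatment regimes.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
expit = lambda x: 1 / (1 + np.exp(-x))

# Simulated data. True effect of "always treat" vs "never treat":
# 1 (A1 direct) + 1 (A2) + 0.8 * 1 (A1 via L2) = 2.8.
L1 = rng.normal(size=n)
A1 = (rng.random(n) < expit(L1)).astype(float)
L2 = 0.5 * L1 + 0.8 * A1 + rng.normal(size=n)
A2 = (rng.random(n) < expit(L2)).astype(float)
Y = A1 + A2 + L2 + rng.normal(size=n)

def ols(columns, y):
    """OLS coefficients for y regressed on [1, columns...]."""
    X = np.column_stack([np.ones(len(y)), *columns])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_L2 = ols([L1, A1], L2)      # model for L2 | L1, A1
b_Y = ols([A1, A2, L2], Y)    # model for Y | A1, A2, L2

def g_formula(a1, a2):
    """Plug-in g-computation under the static regime (a1, a2)."""
    L2_hat = b_L2[0] + b_L2[1] * L1 + b_L2[2] * a1
    return np.mean(b_Y[0] + b_Y[1] * a1 + b_Y[2] * a2 + b_Y[3] * L2_hat)

print(f"g-formula effect: {g_formula(1, 1) - g_formula(0, 0):.2f} (truth 2.8)")
```

Note that a single regression of Y on A1, A2, and L2 would miss the part of A1's effect transmitted through L2; the g-formula recovers it by modeling L2 as well.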


Subject(s)
Models, Statistical; Humans; Causality; Data Interpretation, Statistical; Time Factors; Software; Longitudinal Studies; Linear Models
20.
Diabetes Obes Metab ; 26(10): 4273-4280, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39014528

ABSTRACT

AIM: Non-randomized studies of bariatric surgery have reported large reductions in mortality within 6-12 months after surgery compared with non-surgical patients. It is unclear whether these findings are the result of bias. STUDY DESIGN AND SETTING: We searched PubMed to identify all non-randomized studies investigating the effect of bariatric surgery on all-cause mortality compared with non-surgical patients. We assessed these studies for potential confounding and time-related biases. We conducted bias analyses to quantify the effect of these biases. RESULTS: We identified 21 cohort studies that met our inclusion criteria. Among those, 11 were affected by immortal time bias resulting from the misclassification or exclusion of relevant follow-up time. Five studies were subject to potential confounding bias because of a lack of adjustment for body mass index (BMI). All studies used an inadequate comparator group that lacked indications for bariatric surgery. Bias analyses to correct for potential confounding from BMI shifted the effect estimates towards the null [reported hazard ratio (HR): 0.78 vs. bias-adjusted HR: 0.92]. Bias analyses to correct for the presence of immortal time also shifted the effect estimates towards the null (adjustment for 2-year wait time: reported HR: 0.57 vs. bias-adjusted HR: 0.81). CONCLUSION: Several important sources of bias were identified in non-randomized studies of the effectiveness of bariatric surgery versus non-surgical comparators on mortality. Future studies should ensure that confounding by BMI is accounted for, consider the choice of the comparator group carefully, and ensure that the design or analysis avoids immortal time bias from the misclassification or exclusion of follow-up time.
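
A toy simulation can make the immortal time mechanism concrete: patients must survive a waiting period to reach surgery, so crediting that waiting period to the surgery group makes a null treatment look protective. All parameters below are invented for illustration and are unrelated to the studies reviewed.

```python
# Immortal time bias sketch: surgery has NO true effect on mortality, but
# misclassifying the pre-surgery wait as "surgery" person-time biases the
# rate ratio downward. Hypothetical parameters throughout.
import random

random.seed(5)
n, rate, wait, horizon = 200_000, 0.05, 2.0, 20.0  # yearly death rate; years
ev = {"tv_exp": 0, "tv_unexp": 0, "naive_exp": 0, "naive_unexp": 0}
py = {"tv_exp": 0.0, "tv_unexp": 0.0, "naive_exp": 0.0, "naive_unexp": 0.0}

for _ in range(n):
    t = random.expovariate(rate)                    # time to death
    surgery = t > wait and random.random() < 0.3    # must survive the wait
    fu = min(t, horizon)
    died = t <= horizon
    if surgery:
        # Time-varying (correct): the wait period counts as unexposed.
        py["tv_unexp"] += wait
        py["tv_exp"] += fu - wait
        ev["tv_exp"] += died
        # Naive: the immortal wait is credited to the surgery group.
        py["naive_exp"] += fu
        ev["naive_exp"] += died
    else:
        py["tv_unexp"] += fu
        ev["tv_unexp"] += died
        py["naive_unexp"] += fu
        ev["naive_unexp"] += died

rr = lambda e, u: (ev[e] / py[e]) / (ev[u] / py[u])
print(f"true RR = 1.00 | time-varying RR = {rr('tv_exp', 'tv_unexp'):.2f} "
      f"| naive RR = {rr('naive_exp', 'naive_unexp'):.2f}")
```

The time-varying classification recovers a rate ratio near 1, while the naive classification produces a spuriously protective estimate, the same direction of bias the review's adjustment (HR 0.57 to 0.81) illustrates.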


Subject(s)
Bariatric Surgery; Bias; Humans; Bariatric Surgery/mortality; Body Mass Index; Obesity, Morbid/surgery; Obesity, Morbid/mortality; Obesity, Morbid/complications; Mortality; Cause of Death; Female; Obesity/surgery; Obesity/mortality; Obesity/complications; Confounding Factors, Epidemiologic; Male