Results 1 - 20 of 202
1.
Pharmacoepidemiol Drug Saf ; 33(6): e5820, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38783407

ABSTRACT

PURPOSE: Our objective is to describe how the U.S. Food and Drug Administration's (FDA) Sentinel System implements best practices to ensure trust in drug safety studies using real-world data from disparate sources. METHODS: We present a stepwise schematic for Sentinel's data harmonization, data quality check, query design and implementation, and reporting practices, and describe approaches to enhancing the transparency, reproducibility, and replicability of studies at each step. CONCLUSIONS: Each Sentinel data partner converts its source data into the Sentinel Common Data Model. The transformed data undergo rigorous quality checks before they can be used for Sentinel queries. The Sentinel Common Data Model framework, data transformation code for several data sources, and data quality assurance packages are publicly available. Designed to run against the Sentinel Common Data Model, Sentinel's querying system comprises a suite of pre-tested, parametrizable computer programs that allow users to perform sophisticated descriptive and inferential analyses without having to exchange individual-level data across sites. Detailed documentation of the programs' capabilities, as well as the code and information required to execute them, is publicly available on the Sentinel website. Sentinel also provides public trainings and online resources to facilitate use of its data model and querying system. Its study specifications conform to established reporting frameworks aimed at facilitating reproducibility and replicability of real-world data studies. Reports from Sentinel queries and associated design and analytic specifications are available for download on the Sentinel website. Sentinel is an example of how real-world data can be used to generate regulatory-grade evidence at scale using a transparent, reproducible, and replicable process.


Subject(s)
Pharmacoepidemiology , United States Food and Drug Administration , Pharmacoepidemiology/methods , Reproducibility of Results , United States Food and Drug Administration/standards , Humans , United States , Data Accuracy , Adverse Drug Reaction Reporting Systems/statistics & numerical data , Adverse Drug Reaction Reporting Systems/standards , Drug-Related Side Effects and Adverse Reactions/epidemiology , Databases, Factual/standards , Research Design/standards
2.
Clin Epidemiol ; 16: 329-343, 2024.
Article in English | MEDLINE | ID: mdl-38798915

ABSTRACT

Objective: Partially observed confounder data pose challenges to the statistical analysis of electronic health records (EHR), and systematic assessments of the potentially underlying missingness mechanisms are lacking. We aimed to provide a principled approach to empirically characterize missing data processes and investigate the performance of analytic methods. Methods: Three empirical sub-cohorts of diabetic patients initiating SGLT2 or DPP4 inhibitors, with complete information on HbA1c, BMI, and smoking as confounders of interest (COI), formed the basis of data simulation under a plasmode framework. We simulated a true null treatment effect, with the COI included in the outcome generation model, and four missingness mechanisms for the COI: completely at random (MCAR), at random (MAR), and two not at random (MNAR) mechanisms, in which missingness depended on an unmeasured confounder or on the value of the COI itself. We evaluated the ability of three groups of diagnostics to differentiate between mechanisms: (1) differences in characteristics between patients with or without the observed COI (using averaged standardized mean differences [ASMD]); (2) predictive ability of the missingness indicator based on observed covariates; and (3) association of the missingness indicator with the outcome. We then compared analytic methods, including complete-case analysis, inverse probability weighting, and single and multiple imputation, in their ability to recover true treatment effects. Results: The diagnostics successfully identified characteristic patterns of the simulated missingness mechanisms. For MAR, but not MCAR, patient characteristics showed substantial differences (median ASMD 0.20 vs 0.05) and, consequently, discrimination of the prediction models for missingness was also higher (0.59 vs 0.50). For MNAR, but not MAR or MCAR, missingness was significantly associated with the outcome even in models adjusting for other observed covariates.
Comparing analytic methods, multiple imputation using a random forest algorithm resulted in the lowest root-mean-squared-error. Conclusion: Principled diagnostics provided reliable insights into missingness mechanisms. When assumptions allow, multiple imputation with nonparametric models could help reduce bias.
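The first diagnostic group above, contrasting characteristics of patients with and without the observed confounder via standardized mean differences, can be sketched on simulated data. Everything below (variable names, sample size, missingness probabilities) is hypothetical for illustration and is not taken from the study's cohorts:

```python
import numpy as np

def asmd(x, observed):
    """Absolute standardized mean difference of covariate x between
    patients with observed (True) vs missing (False) confounder values."""
    a, b = x[observed], x[~observed]
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return abs(a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(0)
age = rng.normal(60, 10, 5000)  # hypothetical observed covariate

# MCAR: missingness unrelated to any patient characteristic
mcar = rng.random(5000) < 0.7
# MAR: older patients are more likely to have the confounder observed
mar = rng.random(5000) < 1 / (1 + np.exp(-(age - 60) / 10))

print(round(asmd(age, mcar), 3))  # small under MCAR
print(round(asmd(age, mar), 3))   # substantially larger under MAR
```

Under MCAR the ASMD reflects only sampling noise, while under MAR it is large, mirroring the 0.05 vs 0.20 median ASMD contrast reported above.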

3.
Pragmat Obs Res ; 15: 65-78, 2024.
Article in English | MEDLINE | ID: mdl-38559704

ABSTRACT

Background: Lack of body mass index (BMI) measurements limits the utility of claims data for bariatric surgery research, but pre-operative BMI may be imputed due to existence of weight-related diagnosis codes and BMI-related reimbursement requirements. We used a machine learning pipeline to create a claims-based scoring system to predict pre-operative BMI, as documented in the electronic health record (EHR), among patients undergoing a new bariatric surgery. Methods: Using the Optum Labs Data Warehouse, containing linked de-identified claims and EHR data for commercial or Medicare Advantage enrollees, we identified adults undergoing a new bariatric surgery between January 2011 and June 2018 with a BMI measurement in linked EHR data ≤30 days before the index surgery (n=3226). We constructed predictors from claims data and applied a machine learning pipeline to create a scoring system for pre-operative BMI, the B3S3. We evaluated the B3S3 and a simple linear regression model (benchmark) in test patients whose index surgery occurred concurrent (2011-2017) or prospective (2018) to the training data. Results: The machine learning pipeline yielded a final scoring system that included weight-related diagnosis codes, age, and number of days hospitalized and distinct drugs dispensed in the past 6 months. In concurrent test data, the B3S3 had excellent performance (R2 0.862, 95% confidence interval [CI] 0.815-0.898) and calibration. The benchmark algorithm had good performance (R2 0.750, 95% CI 0.686-0.799) and calibration but both aspects were inferior to the B3S3. Findings in prospective test data were similar. Conclusion: The B3S3 is an accessible tool that researchers can use with claims data to obtain granular and accurate predicted values of pre-operative BMI, which may enhance confounding control and investigation of effect modification by baseline obesity levels in bariatric surgery studies utilizing claims data.


Pre-operative BMI is an important potential confounder in comparative effectiveness studies of bariatric surgeries. Claims data lack clinical measurements, but insurance reimbursement requirements for bariatric surgery often result in pre-operative BMI being coded in claims data. We used a machine learning pipeline to create a model, the B3S3, to predict pre-operative BMI, as documented in the EHR, among bariatric surgery patients based on the presence of certain weight-related diagnosis codes and other patient characteristics derived from claims data. Researchers can easily use the B3S3 with claims data to obtain granular and accurate predicted values of pre-operative BMI among bariatric surgery patients.

4.
Am J Epidemiol ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38517025

ABSTRACT

Lasso regression is widely used for large-scale propensity score (PS) estimation in healthcare database studies. In these settings, previous work has shown that undersmoothing (overfitting) Lasso PS models can improve confounding control, but it can also cause problems of non-overlap in covariate distributions. It remains unclear how to select the degree of undersmoothing when fitting large-scale Lasso PS models to improve confounding control while avoiding issues that can result from reduced covariate overlap. Here, we used simulations to evaluate the performance of using collaborative-controlled targeted learning to data-adaptively select the degree of undersmoothing when fitting large-scale PS models within both singly and doubly robust frameworks to reduce bias in causal estimators. Simulations showed that collaborative learning can data-adaptively select the degree of undersmoothing to reduce bias in estimated treatment effects. Results further showed that when fitting undersmoothed Lasso PS models, the use of cross-fitting was important for avoiding non-overlap in covariate distributions and reducing bias in causal estimates.
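A minimal sketch of the two ingredients named above, an L1-penalized (Lasso) logistic PS model where a smaller penalty corresponds to undersmoothing, and cross-fitting so each patient's PS comes from a model fit on the other fold. The proximal-gradient fit, simulated data, and penalty value are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fit_lasso_logistic(X, a, lam, iters=2000, lr=0.1):
    """L1-penalized logistic regression via proximal gradient descent.
    A smaller lam means less penalization, i.e. an undersmoothed PS model."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        ps = 1 / (1 + np.exp(-X @ w))
        w -= lr * (X.T @ (ps - a) / n)                       # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0) # soft-threshold
    return w

def cross_fit_ps(X, a, lam, rng):
    """Cross-fitting: each half's PS is predicted by a model fit on the other half."""
    n = len(a)
    idx = rng.permutation(n)
    half = n // 2
    ps = np.empty(n)
    for fold, other in ((idx[:half], idx[half:]), (idx[half:], idx[:half])):
        w = fit_lasso_logistic(X[other], a[other], lam)
        ps[fold] = 1 / (1 + np.exp(-X[fold] @ w))
    return ps

rng = np.random.default_rng(2)
n, p = 2000, 10
X = rng.normal(size=(n, p))
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # treatment driven by X[:, 0]
ps = cross_fit_ps(X, a, lam=0.01, rng=rng)
print(ps.min() > 0, ps.max() < 1)  # PS estimates stay strictly inside (0, 1)
```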

6.
JAMIA Open ; 7(1): ooae008, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38304248

ABSTRACT

Objectives: Partially observed confounder data pose a major challenge in statistical analyses aimed at informing causal inference using electronic health records (EHRs). While analytic approaches such as imputation are available, assumptions on underlying missingness patterns and mechanisms must be verified. We aimed to develop a toolkit to streamline missing data diagnostics to guide the choice of analytic approach based on whether necessary assumptions are met. Materials and methods: We developed the smdi (structural missing data investigations) R package based on the results of a previous simulation study which considered structural assumptions of common missing data mechanisms in EHR. Results: smdi enables users to run principled missing data investigations on partially observed confounders, implementing functions to visualize, describe, and infer potential missingness patterns and mechanisms based on observed data. Conclusions: The smdi R package is freely available on CRAN and can provide valuable insights into underlying missingness patterns and mechanisms and thereby help improve the robustness of real-world evidence studies.

7.
Inflamm Bowel Dis ; 30(4): 554-562, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-37358904

ABSTRACT

BACKGROUND: Anti-tumor necrosis factor (anti-TNF) agents are first-line treatment among patients with ulcerative colitis (UC). With time, patients tend to lose response or become intolerant, necessitating a switch to agents such as tofacitinib (a small molecule) or vedolizumab (a biologic). In this real-world study of a large, geographically diverse US population of anti-TNF-experienced patients with UC, we evaluated the effectiveness and safety of newly initiating treatment with tofacitinib vs vedolizumab. METHODS: We conducted a cohort study using secondary data from a large US insurer (Anthem, Inc.). Our cohort included patients with UC newly initiating treatment with tofacitinib or vedolizumab. Patients were required to have evidence of treatment with anti-TNF agents in the 6 months prior to cohort entry. The primary outcome was treatment persistence >52 weeks. Additionally, we evaluated the following secondary outcomes as additional measures of effectiveness and safety: (1) all-cause hospitalization; (2) total abdominal colectomy; (3) hospitalization for infection; (4) hospitalization for malignancy; (5) hospitalization for cardiac events; and (6) hospitalization for thromboembolic events. We used fine stratification by propensity scores to control for confounding by demographics, clinical factors, and treatment history at baseline. RESULTS: Our primary cohort included 168 new users of tofacitinib and 568 new users of vedolizumab. Tofacitinib was associated with lower treatment persistence (adjusted risk ratio, 0.77; 95% CI, 0.60-0.99). Differences in secondary measures of effectiveness or safety between tofacitinib initiators vs vedolizumab initiators were not statistically significant (all-cause hospitalization: adjusted hazard ratio [HR], 1.23; 95% CI, 0.83-1.84; total abdominal colectomy: adjusted HR, 1.79; 95% CI, 0.93-3.44; and hospitalization for any infection: adjusted HR, 1.94; 95% CI, 0.83-4.52).
DISCUSSION: Ulcerative colitis patients with prior anti-TNF experience initiating tofacitinib demonstrated lower treatment persistence compared with those initiating vedolizumab. This finding is in contrast to other recent studies suggesting superior effectiveness of tofacitinib. Ultimately, head-to-head randomized, controlled trials that focus on directly measured end points may be needed to best inform clinical practice.
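Propensity score fine stratification, the confounding-control method used in the abstract above, can be sketched as follows. The simulated data, the choice of 50 strata, and the ATT-style weighting are illustrative assumptions; the study's actual implementation may differ:

```python
import numpy as np

def fine_strat_weights(ps, treated, n_strata=50):
    """PS fine stratification (ATT-style weights): strata are cut at PS
    percentiles of the treated; treated keep weight 1, and comparators are
    re-weighted to the treated PS distribution."""
    cuts = np.percentile(ps[treated == 1], np.linspace(0, 100, n_strata + 1))
    s = np.clip(np.searchsorted(cuts, ps, side="right") - 1, 0, n_strata - 1)
    w = np.ones(len(ps))
    n_t, n_c = (treated == 1).sum(), (treated == 0).sum()
    for k in range(n_strata):
        in_k = s == k
        t_k = (in_k & (treated == 1)).sum()
        c_k = (in_k & (treated == 0)).sum()
        if c_k > 0:
            w[in_k & (treated == 0)] = (t_k / n_t) / (c_k / n_c)
    return w

rng = np.random.default_rng(3)
x = rng.normal(size=4000)                       # a baseline confounder
treated = rng.binomial(1, 1 / (1 + np.exp(-x)))
ps = 1 / (1 + np.exp(-x))                       # true PS, for illustration
w = fine_strat_weights(ps, treated)

# the weighted comparator mean of x should move close to the treated mean
m_t = x[treated == 1].mean()
m_c = np.average(x[treated == 0], weights=w[treated == 0])
print(round(m_t, 2), round(m_c, 2))
```

Fine stratification retains most patients (unlike matching with a strict caliper) while still balancing measured confounders between the exposure groups.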


Anti-TNF-experienced patients with UC initiating vedolizumab demonstrated higher treatment persistence compared with those initiating tofacitinib in this real-world evaluation of comparative effectiveness. Ultimately, head-to-head randomized trials that focus on directly measured end points are needed to best inform clinical practice.


Subject(s)
Antibodies, Monoclonal, Humanized , Colitis, Ulcerative , Piperidines , Pyrimidines , Humans , Cohort Studies , Colitis, Ulcerative/pathology , Treatment Outcome , Tumor Necrosis Factor Inhibitors/therapeutic use
8.
Clin Pharmacol Ther ; 115(1): 147-157, 2024 01.
Article in English | MEDLINE | ID: mdl-37926942

ABSTRACT

Biological plausibility suggests that fluoroquinolones may lead to mitral valve regurgitation or aortic valve regurgitation (MR/AR) through a collagen degradation pathway. However, available real-world studies were limited and yielded inconsistent findings. We estimated the risk of MR/AR associated with fluoroquinolones compared with other antibiotics with similar indications in a population-based cohort study. We identified adult patients who initiated fluoroquinolones or comparison antibiotics from the nationwide Taiwanese claims database. Patients were followed for up to 60 days after cohort entry. Cox regression models were used to estimate hazard ratios (HRs) and 95% confidence intervals (CIs) of MR/AR comparing fluoroquinolones to comparison antibiotics after 1:1 propensity score (PS) matching. All analyses were conducted by type of fluoroquinolone (fluoroquinolones as a class, respiratory fluoroquinolones, and non-respiratory fluoroquinolones) and comparison antibiotic (amoxicillin/clavulanate or ampicillin/sulbactam, extended-spectrum cephalosporins). Among 6,649,284 eligible patients, the crude incidence rates of MR/AR ranged from 1.44 to 4.99 per 1,000 person-years across different types of fluoroquinolones and comparison antibiotics. However, fluoroquinolone use was not associated with an increased risk in each pairwise PS-matched comparison. HRs were 1.00 (95% CI, 0.89-1.11) for fluoroquinolones as a class, 0.96 (95% CI, 0.83-1.12) for respiratory fluoroquinolones, and 0.87 (95% CI, 0.75-1.01) for non-respiratory fluoroquinolones, compared with amoxicillin/clavulanate or ampicillin/sulbactam. Results were similar when fluoroquinolones were compared with extended-spectrum cephalosporins (HR, 0.96; 95% CI, 0.82-1.12; HR, 1.05; 95% CI, 0.86-1.28; and HR, 0.88; 95% CI, 0.75-1.03, respectively). This large-scale cohort study did not find a higher risk of MR/AR with different types of fluoroquinolones in the adult population.
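The 1:1 PS matching step above can be sketched with a greedy nearest-neighbor matcher without replacement. The caliper value, simulated data, and matching order are illustrative assumptions, not the study's actual matching algorithm:

```python
import numpy as np

def ps_match_1to1(ps, treated, caliper=0.05):
    """Greedy 1:1 nearest-neighbor PS matching without replacement,
    discarding treated patients with no comparator within the caliper."""
    t_idx = np.where(treated == 1)[0]
    c_pool = list(np.where(treated == 0)[0])
    pairs = []
    for i in t_idx:
        if not c_pool:
            break
        d = np.abs(ps[c_pool] - ps[i])
        j = int(np.argmin(d))
        if d[j] <= caliper:
            pairs.append((i, c_pool.pop(j)))  # remove the matched comparator
    return pairs

rng = np.random.default_rng(4)
x = rng.normal(size=1500)
treated = rng.binomial(1, 1 / (1 + np.exp(-x)))
ps = 1 / (1 + np.exp(-x))  # true PS, for illustration
pairs = ps_match_1to1(ps, treated)
gaps = [abs(ps[i] - ps[j]) for i, j in pairs]
print(len(pairs), round(max(gaps), 3))
```

After matching, the outcome model (here, Cox regression in the study) is fit on the matched pairs, in which the PS distributions of the two groups are balanced by construction.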


Subject(s)
Aortic Valve , Fluoroquinolones , Adult , Humans , Fluoroquinolones/adverse effects , Cohort Studies , Sulbactam , Anti-Bacterial Agents/adverse effects , Amoxicillin-Potassium Clavulanate Combination , Ampicillin , Cephalosporins
9.
J Am Med Inform Assoc ; 31(3): 574-582, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38109888

ABSTRACT

OBJECTIVES: Automated phenotyping algorithms can reduce development time and operator dependence compared to manually developed algorithms. One such approach, PheNorm, has performed well for identifying chronic health conditions, but its performance for acute conditions is largely unknown. Herein, we implement and evaluate PheNorm applied to symptomatic COVID-19 disease to investigate its potential feasibility for rapid phenotyping of acute health conditions. MATERIALS AND METHODS: PheNorm is a general-purpose automated approach to creating computable phenotype algorithms based on natural language processing, machine learning, and (low cost) silver-standard training labels. We applied PheNorm to cohorts of potential COVID-19 patients from 2 institutions and used gold-standard manual chart review data to investigate the impact on performance of alternative feature engineering options and implementing externally trained models without local retraining. RESULTS: Models at each institution achieved AUC, sensitivity, and positive predictive value of 0.853, 0.879, 0.851 and 0.804, 0.976, and 0.885, respectively, at quantiles of model-predicted risk that maximize F1. We report performance metrics for all combinations of silver labels, feature engineering options, and models trained internally versus externally. DISCUSSION: Phenotyping algorithms developed using PheNorm performed well at both institutions. Performance varied with different silver-standard labels and feature engineering options. Models developed locally at one site also worked well when implemented externally at the other site. CONCLUSION: PheNorm models successfully identified an acute health condition, symptomatic COVID-19. The simplicity of the PheNorm approach allows it to be applied at multiple study sites with substantially reduced overhead compared to traditional approaches.
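A drastically simplified stand-in for the silver-standard idea behind PheNorm, regressing a noisy, automatically derived label on cheap structured/NLP features and using the fitted values as a phenotype score, can be sketched as follows. All data and features are synthetic, and this is not the actual PheNorm algorithm, only an illustration of training without gold labels:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3000
true = rng.binomial(1, 0.3, n)  # unobserved gold-standard phenotype

# silver-standard label: log count of condition mentions in notes (noisy)
mentions = rng.poisson(np.where(true == 1, 6, 0.5))
silver = np.log1p(mentions)

# additional cheap features correlated with the phenotype (e.g., code counts)
X = np.column_stack([
    np.log1p(rng.poisson(np.where(true == 1, 4, 1))),
    np.log1p(rng.poisson(np.where(true == 1, 3, 1))),
])

# regress the silver label on the features; fitted values become the
# phenotype score -- no gold labels are used anywhere in training
design = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(design, silver, rcond=None)
score = design @ beta

# evaluate against the gold labels (used only for validation): AUC via ranks
r = score.argsort().argsort()
auc = (r[true == 1].mean() - (true.sum() - 1) / 2) / (n - true.sum())
print(round(auc, 2))
```

The appeal, as in the abstract above, is that the expensive gold-standard chart review is needed only for validation, not for model training.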


Subject(s)
Algorithms , COVID-19 , Humans , Electronic Health Records , Machine Learning , Natural Language Processing
10.
Pharmacoepidemiol Drug Saf ; 33(1): e5734, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38112287

ABSTRACT

PURPOSE: Observational studies assessing effects of medical products on suicidal behavior often rely on health record data to account for pre-existing risk. We assess whether high-dimensional models predicting suicide risk using data derived from insurance claims and electronic health records (EHRs) are superior to models using data from insurance claims alone. METHODS: Data from seven large health systems identified outpatient mental health visits by patients aged 11 or older between 1/1/2009 and 9/30/2017. Data for the 5 years prior to each visit identified potential predictors of suicidal behavior typically available from insurance claims (e.g., mental health diagnoses, procedure codes, medication dispensings) and additional potential predictors available from EHRs (self-reported race and ethnicity, responses to Patient Health Questionnaire [PHQ-9] depression questionnaires). Nonfatal self-harm events following each visit were identified from insurance claims data, and fatal self-harm events were identified by linkage to state mortality records. Random forest models predicting nonfatal or fatal self-harm over the 90 days following each visit were developed in a 70% random sample of visits and validated in a held-out sample of 30%. Performance of models using linked claims and EHR data was compared to models using claims data only. RESULTS: Among 15 845 047 encounters by 1 574 612 patients, 99 098 (0.6%) were followed by a self-harm event within 90 days. Overall classification performance did not differ between the best-fitting model using all data (area under the receiver operating curve [AUC] = 0.846, 95% CI 0.839-0.854) and the best-fitting model limited to data available from insurance claims (AUC = 0.846, 95% CI 0.838-0.853). Competing models showed similar classification performance across a range of cut-points and similar calibration performance across a range of risk strata.
Results were similar when the sample was limited to health systems and time periods where PHQ-9 depression questionnaires were recorded more frequently. CONCLUSION: Investigators using health record data to account for pre-existing risk in observational studies of suicidal behavior need not limit that research to databases including linked EHR data.


Subject(s)
Insurance , Self-Injurious Behavior , Humans , Suicidal Ideation , Electronic Health Records , Semantic Web
11.
Res Synth Methods ; 14(5): 742-763, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37527843

ABSTRACT

Missing data complicates statistical analyses in multi-site studies, especially when it is not feasible to centrally pool individual-level data across sites. We combined meta-analysis with within-site multiple imputation for one-step estimation of the average causal effect (ACE) of a target population comprised of all individuals from all data-contributing sites within a multi-site distributed data network, without the need for sharing individual-level data to handle missing data. We considered two orders of combination and three choices of weights for meta-analysis, resulting in six approaches. The first three approaches, denoted as RR + metaF, RR + metaR and RR + std, first combined results from imputed data sets within each site using Rubin's rules and then meta-analyzed the combined results across sites using fixed-effect, random-effects and sample-standardization weights, respectively. The last three approaches, denoted as metaF + RR, metaR + RR and std + RR, first meta-analyzed results across sites separately for each imputation and then combined the meta-analysis results using Rubin's rules. Simulation results confirmed very good performance of RR + std and std + RR under various missing completely at random and missing at random settings. A direct application of the inverse-variance weighted meta-analysis based on site-specific ACEs can lead to biased results for the targeted network-wide ACE in the presence of treatment effect heterogeneity by site, demonstrating the need to clearly specify the target population and estimand and properly account for potential site heterogeneity in meta-analyses seeking to draw causal interpretations. An illustration using a large administrative claims database is presented.
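The two orders of combination described above (Rubin's rules within site then meta-analysis, vs meta-analysis per imputation then Rubin's rules) can be sketched directly from their definitions. The estimates and variances below are made-up numbers for illustration only:

```python
import numpy as np

def rubin(ests, variances):
    """Rubin's rules: pool m imputation-specific estimates and variances."""
    ests, variances = np.asarray(ests), np.asarray(variances)
    m = len(ests)
    qbar = ests.mean()
    total = variances.mean() + (1 + 1 / m) * ests.var(ddof=1)
    return qbar, total

def meta_fixed(ests, variances):
    """Inverse-variance fixed-effect meta-analysis across sites."""
    w = 1 / np.asarray(variances)
    return np.sum(w * ests) / w.sum(), 1 / w.sum()

# ests[s][i]: estimate at site s from imputed data set i (hypothetical)
ests = np.array([[0.10, 0.12, 0.08], [0.20, 0.18, 0.22]])
vars_ = np.full((2, 3), 0.01)

# "RR + meta": Rubin's rules within each site, then meta-analyze across sites
site_pooled = [rubin(e, v) for e, v in zip(ests, vars_)]
rr_meta, _ = meta_fixed([q for q, _ in site_pooled], [t for _, t in site_pooled])

# "meta + RR": meta-analyze each imputation across sites, then Rubin's rules
per_imp = [meta_fixed(ests[:, i], vars_[:, i]) for i in range(3)]
meta_rr, _ = rubin([q for q, _ in per_imp], [t for _, t in per_imp])

print(round(rr_meta, 3), round(meta_rr, 3))
```

With equal within-site variances, as in this toy example, both orders return the same point estimate; the approaches differ in how between-imputation and between-site variability enter the pooled variance and in the choice of meta-analytic weights.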


Subject(s)
Multicenter Studies as Topic , Humans , Computer Simulation , Privacy , Research Design
12.
Inflamm Bowel Dis ; 2023 Aug 23.
Article in English | MEDLINE | ID: mdl-37611117

ABSTRACT

BACKGROUND AND AIMS: Immunosuppressed individuals are at higher risk for COVID-19 complications, yet data in patients with inflammatory bowel disease (IBD) are limited. We evaluated the risk of severe COVID-19 sequelae by medication utilization in a large cohort of patients with IBD. METHODS: We conducted a retrospective cohort study utilizing insurance claims data between August 31, 2019, and August 31, 2021. We included IBD patients identified by diagnosis and treatment codes. Use of IBD medications was defined in the 90 days prior to cohort entry. Study outcomes included COVID-19 hospitalization, mechanical ventilation, and inpatient death. Patients were followed until the outcome of interest, outpatient death, disenrollment, or end of study period. Due to the aggregate nature of available data, we were unable to perform multivariate analyses. RESULTS: We included 102 986 patients (48 728 CD, 47 592 UC) with a mean age of 53 years; 55% were female. Overall, 412 (0.4%) patients were hospitalized with COVID-19. The incidence of hospitalization was higher in those on corticosteroids (0.6% vs 0.3%; P < .0001; 13.6 per 1000 person-years; 95% confidence interval [CI], 10.8-16.9) and lower in those receiving anti-tumor necrosis factor α therapy (0.2% vs 0.5%; P < .0001; 3.9 per 1000 person-years; 95% CI, 2.7-5.4). Older age was associated with increased hospitalization with COVID-19. Overall, 71 (0.07%) patients required mechanical ventilation and 52 (0.05%) died at the hospital with COVID-19. The proportion requiring mechanical ventilation (1.9% vs 0.05%; P < .0001; 3.9 per 1000 person-years; 95% CI, 2.5-5.9) was higher among users of corticosteroids. CONCLUSIONS: Among patients with IBD, those on corticosteroids had more hospitalizations and mechanical ventilation with COVID-19. Anti-tumor necrosis factor α therapy was associated with a decreased risk of hospitalization. These findings reinforce previous guidance to taper and/or discontinue corticosteroids in IBD.
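Crude incidence rates per 1,000 person-years of the kind reported above can be computed with a small helper. The event count and person-time below are hypothetical, and the log-normal (Wald) interval is one common CI method, not necessarily the one the study used:

```python
import math

def rate_per_1000(events, person_years):
    """Crude incidence rate per 1,000 person-years with a
    log-normal (Wald) 95% confidence interval."""
    rate = events / person_years
    se_log = 1 / math.sqrt(events)  # SE of log(rate) under a Poisson model
    lo = rate * math.exp(-1.96 * se_log)
    hi = rate * math.exp(1.96 * se_log)
    return 1000 * rate, 1000 * lo, 1000 * hi

# hypothetical counts, illustrating the shape of the reported figures
r, lo, hi = rate_per_1000(events=71, person_years=18000)
print(round(r, 1), round(lo, 1), round(hi, 1))
```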

13.
J Manag Care Spec Pharm ; 29(7): 842-847, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37404073

ABSTRACT

BACKGROUND: The first follow-on drug (Basaglar) of the originator insulin glargine (Lantus), a long-acting insulin for treatment of type 1 and type 2 diabetes mellitus (T1DM, T2DM), was approved in 2015 in the United States. Information on the uptake, user characteristics, and outcomes of follow-on insulin remains sparse. OBJECTIVE: To describe the utilization, user characteristics, and health outcomes of the follow-on insulin glargine and insulin glargine originators in a large, distributed network of primarily commercially insured patients in the United States. METHODS: We used health care claims data in the US Food and Drug Administration's Sentinel common data model format across 5 research partners in the Biologics & Biosimilars Collective Intelligence Consortium distributed research network. Sentinel analytic tools were used to identify adult users of insulin glargine between January 1, 2011, and February 28, 2021, and describe patient demographics, baseline clinical characteristics, and adverse health events among users of the originators and the follow-on drug, stratified by diabetes type. RESULTS: We identified 508,438 users of originator drugs and 63,199 users of the follow-on drug. The proportions of the follow-on drug users among total insulin glargine users were 9.1% (n = 7,070) for T1DM and 11.4% (n = 56,129) for T2DM. Follow-on use rose from 8.2% in 2017 to 24.8% in 2020, accompanied by a steady decrease in the use of originator drugs. Demographics of the users of the originators and follow-on drug were similar among the T1DM and T2DM groups. Overall, follow-on users had a poorer baseline health profile and higher proportions of episodes with adverse events during follow-up. CONCLUSIONS: We found evidence of increased uptake of the follow-on drug relative to the originator products in the post-2016 period.
The differences in the baseline clinical characteristics between users of the originator products and the follow-on drug, and their relationship with health outcomes, merit further research. DISCLOSURES: Sengwee Toh consults for Pfizer, Inc., and TriNetX, LLC. This study was funded by the BBCIC.


Subject(s)
Biosimilar Pharmaceuticals , Diabetes Mellitus, Type 1 , Diabetes Mellitus, Type 2 , Adult , Humans , United States/epidemiology , Insulin Glargine/adverse effects , Diabetes Mellitus, Type 2/drug therapy , Diabetes Mellitus, Type 2/epidemiology , Hypoglycemic Agents/adverse effects , Diabetes Mellitus, Type 1/drug therapy , Pharmaceutical Preparations , Biosimilar Pharmaceuticals/adverse effects , Insulin/adverse effects
14.
Pharmacoepidemiol Drug Saf ; 32(12): 1360-1367, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37463756

ABSTRACT

PURPOSE: While much has been written about how distributed networks address internal validity, external validity is rarely discussed. We aimed to define key terms related to external validity, discuss how they relate to distributed networks, and identify how three networks (the US Food and Drug Administration's Sentinel System, the Canadian Network for Observational Drug Effect Studies [CNODES], and the National Patient Centered Clinical Research Network [PCORnet]) deal with external validity. METHODS: We define external validity, target populations, target validity, generalizability, and transportability and describe how each relates to distributed networks. We then describe Sentinel, CNODES, and PCORnet and how each approaches these concepts, including a sample case study. RESULTS: Each network approaches external validity differently. Because its target population is US citizens and it includes only US data, Sentinel's primary external validity concern is that some segments of the population are not captured in its data. Because CNODES includes Canadian, United States, and United Kingdom data, it must carefully consider whether the United States and United Kingdom estimates are transportable to Canadian citizens when meta-analyzing database-specific estimates. PCORnet, with its focus on study-specific cohorts and pragmatic trials, conducts more case-by-case explorations of external validity for each new analytic data set it generates. CONCLUSIONS: There is no one-size-fits-all approach to external validity within distributed networks. With these networks and comparisons between their findings becoming a key part of pharmacoepidemiology, there is a need to adapt tools for improving external validity to the distributed network setting.


Subject(s)
Computer Communication Networks , Pharmacovigilance , Canada , United Kingdom , United States , United States Food and Drug Administration
15.
Neurology ; 100(16): e1702-e1711, 2023 04 18.
Article in English | MEDLINE | ID: mdl-36813729

ABSTRACT

BACKGROUND AND OBJECTIVES: The use of over-the-counter laxatives is common in the general population. The microbiome-gut-brain axis hypothesis suggests that the use of laxatives could be associated with dementia. We aimed to examine the association between the regular use of laxatives and the incidence of dementia in UK Biobank participants. METHODS: This prospective cohort study was based on UK Biobank participants aged 40-69 years without a history of dementia. Regular use of laxatives was defined as self-reported use in most days of the week for the last 4 weeks at baseline (2006-2010). The outcomes were all-cause dementia, Alzheimer disease (AD), and vascular dementia (VD), identified from linked hospital admissions or death registers (up to 2019). Sociodemographic characteristics, lifestyle factors, medical conditions, family history, and regular medication use were adjusted for in the multivariable Cox regression analyses. RESULTS: Among the 502,229 participants with a mean age of 56.5 (SD 8.1) years at baseline, 273,251 (54.4%) were female, and 18,235 (3.6%) reported regular use of laxatives. Over a mean follow-up of 9.8 years, 218 (1.3%) participants with regular use of laxatives and 1,969 (0.4%) with no regular use developed all-cause dementia. Multivariable analyses showed that regular use of laxatives was associated with increased risk of all-cause dementia (hazard ratio [HR] 1.51; 95% CI 1.30-1.75) and VD (HR 1.65; 95% CI 1.21-2.27), with no significant association observed for AD (HR 1.05; 95% CI 0.79-1.40). The risk of both all-cause dementia and VD increased with the number of regularly used laxative types (p trend 0.001 and 0.04, respectively). Among the participants who clearly reported that they were using just 1 type of laxative (n = 5,800), only those using osmotic laxatives showed a statistically significantly higher risk of all-cause dementia (HR 1.64; 95% CI 1.20-2.24) and VD (HR 1.97; 95% CI 1.04-3.75). 
These results remained robust in various subgroup and sensitivity analyses. DISCUSSION: Regular use of laxatives was associated with a higher risk of all-cause dementia, particularly in those who used multiple laxative types or osmotic laxatives.


Subject(s)
Alzheimer Disease , Dementia, Vascular , Humans , Female , Middle Aged , Male , Laxatives/adverse effects , Constipation , Prospective Studies , Biological Specimen Banks , Alzheimer Disease/drug therapy , United Kingdom/epidemiology
17.
Am J Gastroenterol ; 118(4): 674-684, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36508681

ABSTRACT

INTRODUCTION: Many patients with Crohn's disease (CD) lose response or become intolerant to antitumor necrosis factor (TNF) therapy and subsequently switch out of class. We compared the effectiveness and safety of ustekinumab to vedolizumab in a large, geographically diverse US population of TNF-experienced patients with CD. METHODS: We conducted a retrospective cohort study using longitudinal claims data from a large US insurer (Anthem, Inc.). We identified patients with CD initiating vedolizumab or ustekinumab with anti-TNF treatment in the prior 6 months. Our primary outcome was treatment persistence for >52 weeks. Secondary outcomes included (i) all-cause hospitalization, (ii) hospitalization for CD with surgery, (iii) hospitalization for CD without surgery, and (iv) hospitalization for infection. Propensity score fine stratification was used to control for demographic and baseline clinical characteristics and prior treatments. RESULTS: Among 885 new users of ustekinumab and 490 new users of vedolizumab, we observed no difference in treatment persistence (adjusted risk ratio 1.09 [95% confidence interval 0.95-1.25]). Ustekinumab was associated with a lower rate of all-cause hospitalization (adjusted hazard ratio 0.73 [0.59-0.91]), nonsurgical CD hospitalization (adjusted hazard ratio 0.58 [0.40-0.83]), and hospitalization for infection (adjusted hazard ratio 0.56 [0.34-0.92]). DISCUSSION: This real-world comparative effectiveness study of anti-TNF-experienced patients with CD initiating vedolizumab or ustekinumab showed similar treatment persistence rates beyond 52 weeks, although secondary outcomes such as all-cause hospitalizations, nonsurgical CD hospitalizations, and hospitalizations for infection favored ustekinumab initiation. We, therefore, advocate for individualized decision making in this medically refractory population, considering patient preference and other factors such as cost and route of administration.


Subject(s)
Crohn Disease , Ustekinumab , Humans , Ustekinumab/therapeutic use , Crohn Disease/drug therapy , Crohn Disease/surgery , Tumor Necrosis Factor Inhibitors/therapeutic use , Retrospective Studies , Necrosis/drug therapy , Treatment Outcome
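The propensity score fine-stratification approach used in the study above can be sketched as follows. This is an illustrative implementation under assumed conventions (strata cut at quantiles of the score among the treated, comparators reweighted to the treated population), not the study's actual code:

```python
import numpy as np

def fine_strat_weights(ps, treated, n_strata=10):
    """Fine-stratification weights standardizing comparators to the
    treated population (ATT). Strata are cut at quantiles of the
    propensity score among the treated (an assumed convention)."""
    cuts = np.quantile(ps[treated == 1], np.linspace(0, 1, n_strata + 1))
    stratum = np.digitize(ps, cuts[1:-1])   # interior edges -> 0..n_strata-1
    w = np.ones_like(ps, dtype=float)       # treated subjects keep weight 1
    for s in range(n_strata):
        in_s = stratum == s
        n_t = np.sum(in_s & (treated == 1))
        n_c = np.sum(in_s & (treated == 0))
        if n_c > 0:
            # reweight comparators so each stratum mirrors its treated count
            w[in_s & (treated == 0)] = n_t / n_c
    return w
```

With good overlap, the comparator weights sum to roughly the number of treated subjects, so weighted effect estimates are standardized to the treated population.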
18.
Pharmacoepidemiol Drug Saf ; 32(3): 330-340, 2023 03.
Article in English | MEDLINE | ID: mdl-36380400

ABSTRACT

PURPOSE: In distributed research network (DRN) settings, multiple imputation cannot be directly implemented because pooling individual-level data is often not feasible. The performance of multiple imputation in combination with meta-analysis is not well understood within DRNs.
METHODS: To evaluate the performance of imputation for missing baseline covariate data, in combination with meta-analysis, for time-to-event analysis within DRNs, we compared two parametric algorithms, an approximated linear imputation model (Approx) and a nonlinear substantive-model-compatible imputation model (SMC), and two non-parametric machine learning algorithms, random forest (RF) and classification and regression trees (CART), through simulation studies motivated by a real-world data set.
RESULTS: Under the setting with small effect sizes (i.e., log hazard ratios [logHR]) and homogeneous missingness mechanisms across sites, all imputation methods produced unbiased and more efficient estimates, whereas the complete-case analysis could be biased and inefficient; under heterogeneous missingness mechanisms, estimates from the RF method could have higher efficiency. Estimates from the distributed imputation combined by meta-analysis were similar to those from imputation using pooled data. When logHRs were large, the SMC imputation algorithm generally performed better than the others.
CONCLUSIONS: These findings support the validity and feasibility of imputation within DRNs in the presence of missing covariate data in time-to-event analysis under various settings. The performance of the four imputation algorithms varies with the effect sizes and the level of missingness.


Subject(s)
Algorithms , Humans , Computer Simulation , Proportional Hazards Models , Linear Models
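The two pooling steps described in the study above can be sketched generically: combining imputation-specific estimates within each site by Rubin's rules, then combining site-level results by inverse-variance meta-analysis. The fixed-effect weighting below is an illustrative choice, not the paper's simulation code:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool M imputation-specific estimates (e.g., logHRs) within one site."""
    m = len(estimates)
    q_bar = np.mean(estimates)        # pooled point estimate
    w_bar = np.mean(variances)        # within-imputation variance
    b = np.var(estimates, ddof=1)     # between-imputation variance
    t = w_bar + (1 + 1 / m) * b       # total variance (Rubin's rules)
    return q_bar, t

def fixed_effect_meta(site_estimates, site_variances):
    """Inverse-variance fixed-effect meta-analysis across sites."""
    w = 1 / np.asarray(site_variances)
    est = np.sum(w * site_estimates) / np.sum(w)
    var = 1 / np.sum(w)
    return est, var
```

Each site would report only its pooled estimate and variance, so no individual-level data leave the site, which is the constraint that motivates the DRN setting.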
19.
Pharmacoepidemiol Drug Saf ; 32(2): 93-106, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36349471

ABSTRACT

Real-world evidence used for regulatory, payer, and clinical decision-making requires principled epidemiology in design and analysis, applying methods to minimize confounding given the lack of randomization. One technique to deal with potential confounding is propensity score (PS) analysis, which allows for the adjustment for measured preexposure covariates. Since its first publication in 2009, the high-dimensional propensity score (hdPS) method has emerged as an approach that extends traditional PS covariate selection to include large numbers of covariates that may reduce confounding bias in the analysis of healthcare databases. hdPS is an automated, data-driven analytic approach for covariate selection that empirically identifies preexposure variables and proxies to include in the PS model. This article provides an overview of the hdPS approach and recommendations on the planning, implementation, and reporting of hdPS used for causal treatment-effect estimations in longitudinal healthcare databases. We supply a checklist with key considerations as a supportive decision tool to aid investigators in the implementation and transparent reporting of hdPS techniques, and to aid decision-makers unfamiliar with hdPS in the understanding and interpretation of studies employing this approach. This article is endorsed by the International Society for Pharmacoepidemiology.


Subject(s)
Propensity Score , Humans , Bias , Pharmacoepidemiology , Electronic Health Records , Routinely Collected Health Data
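The covariate-prioritization step at the heart of hdPS can be illustrated with the Bross bias multiplier, which the original hdPS algorithm uses to rank empirically identified candidate covariates. The prevalences and relative risks below are hypothetical, and this sketch omits the code- and dimension-expansion steps of the full algorithm:

```python
import numpy as np

def bross_bias_multiplier(pc1, pc0, rr_cd):
    """Bross bias multiplier for one candidate covariate.
    pc1/pc0: covariate prevalence among exposed/unexposed;
    rr_cd: covariate-outcome relative risk."""
    rr = max(rr_cd, 1 / rr_cd)  # use the strength of association, direction-free
    return (pc1 * (rr - 1) + 1) / (pc0 * (rr - 1) + 1)

def rank_covariates(covs):
    """Rank candidate covariates by |log bias multiplier|, largest first.
    covs: dict mapping covariate name -> (pc1, pc0, rr_cd)."""
    scored = {name: abs(np.log(bross_bias_multiplier(*v)))
              for name, v in covs.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

Covariates that are both imbalanced between exposure groups and associated with the outcome score highest and are retained for the PS model; a covariate with equal prevalence in both groups scores zero.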
20.
Pharmacoepidemiol Drug Saf ; 32(2): 158-215, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36351880

ABSTRACT

PURPOSE: The US Food and Drug Administration established the Sentinel System to monitor the safety of medical products. A component of this system includes parameterizable analytic tools to identify mother-infant pairs and evaluate infant outcomes to enable the routine monitoring of the utilization and safety of drugs used in pregnancy. We assessed the feasibility of using the data and tools in the Sentinel System by assessing a known association between topiramate use during pregnancy and oral clefts in the infant.
METHODS: We identified mother-infant pairs using the mother-infant linkage table from six data partners contributing to the Sentinel Distributed Database from January 1, 2000, to September 30, 2015. We compared mother-infant pairs with first-trimester exposure to topiramate to mother-infant pairs that were topiramate-unexposed or lamotrigine-exposed and used a validated algorithm to identify oral clefts in the infant. We estimated adjusted risk ratios through propensity score stratification.
RESULTS: There were 2007 topiramate-exposed and 1 066 086 unexposed mother-infant pairs in the main comparison. In the active-comparator analysis, there were 1996 topiramate-exposed and 2859 lamotrigine-exposed mother-infant pairs. After propensity score stratification, the odds ratio for oral clefts was 2.92 (95% CI: 1.43, 5.93) comparing the topiramate-exposed to unexposed groups and 2.72 (95% CI: 0.75, 9.93) comparing the topiramate-exposed to lamotrigine-exposed groups.
CONCLUSIONS: We found an increased risk of oral clefts after topiramate exposure in the first trimester in the Sentinel database. These results are similar to prior published observational study results and demonstrate the ability of Sentinel's data and analytic tools to assess medical product safety in cohorts of mother-infant pairs in a timely manner.


Subject(s)
Anticonvulsants , Mothers , Infant , Pregnancy , Female , Humans , Topiramate , Lamotrigine , Anticonvulsants/therapeutic use , Pregnancy Trimester, First
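A stratified odds ratio like those reported above can be pooled across propensity-score strata with the Mantel-Haenszel estimator. The 2x2 cell counts in the test are made up for illustration and are unrelated to the study's actual data:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio across strata
    (e.g., propensity-score strata).
    tables: list of (a, b, c, d) per stratum, where a/b are exposed
    cases/non-cases and c/d are unexposed cases/non-cases."""
    num = den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n   # stratum contribution to the numerator
        den += b * c / n   # stratum contribution to the denominator
    return num / den
```

Pooling within strata of the propensity score, rather than over the crude 2x2 table, is what removes confounding by the covariates captured in the score.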