Results 1 - 20 of 140
1.
Stat Med ; 43(2): 395-418, 2024 01 30.
Article in English | MEDLINE | ID: mdl-38010062

ABSTRACT

Postmarket safety surveillance is an integral part of mass vaccination programs. Typically relying on sequential analysis of real-world health data as they accrue, safety surveillance is challenged by sequential multiple testing and by biases induced by residual confounding in observational data. The current standard approach based on the maximized sequential probability ratio test (MaxSPRT) fails to satisfactorily address these practical challenges and it remains a rigid framework that requires prespecification of the surveillance schedule. We develop an alternative Bayesian surveillance procedure that addresses both aforementioned challenges using a more flexible framework. To mitigate bias, we jointly analyze a large set of negative control outcomes that are adverse events with no known association with the vaccines in order to inform an empirical bias distribution, which we then incorporate into estimating the effect of vaccine exposure on the adverse event of interest through a Bayesian hierarchical model. To address multiple testing and improve on flexibility, at each analysis timepoint, we update a posterior probability in favor of the alternative hypothesis that vaccination induces higher risks of adverse events, and then use it for sequential detection of safety signals. Through an empirical evaluation using six US observational healthcare databases covering more than 360 million patients, we benchmark the proposed procedure against MaxSPRT on testing errors and estimation accuracy, under two epidemiological designs, the historical comparator and the self-controlled case series. We demonstrate that our procedure substantially reduces Type 1 error rates, maintains high statistical power and fast signal detection, and provides considerably more accurate estimation than MaxSPRT. Given the extensiveness of the empirical study which yields more than 7 million sets of results, we present all results in a public R ShinyApp. As an effort to promote open science, we provide full implementation of our method in the open-source R package EvidenceSynthesis.
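
A minimal sketch of the core idea described above, in R since the paper's implementation lives in the open-source EvidenceSynthesis R package: learn an empirical bias distribution from negative-control estimates, then track a bias-adjusted posterior probability of increased risk across sequential analysis timepoints. This uses a simplified normal approximation and hypothetical numbers, not the paper's hierarchical model or the package's actual functions; the 0.95 signalling threshold is likewise an assumption.

# Illustrative sketch only; not the EvidenceSynthesis implementation.
# Hypothetical negative-control log rate-ratio estimates and standard errors.
nc_beta <- c(0.05, -0.10, 0.20, 0.02, -0.04, 0.15, 0.08, -0.12)
nc_se   <- c(0.10,  0.12, 0.15, 0.09,  0.11, 0.14, 0.10,  0.13)

# Method-of-moments estimate of the bias distribution N(mu_b, tau^2),
# subtracting the average sampling variance of the negative controls.
mu_b <- mean(nc_beta)
tau2 <- max(0, var(nc_beta) - mean(nc_se^2))

# Posterior probability that the true log rate ratio exceeds 0, approximating
# the bias-adjusted posterior as N(beta_hat - mu_b, se^2 + tau^2).
posterior_prob_h1 <- function(beta_hat, se) {
  pnorm((beta_hat - mu_b) / sqrt(se^2 + tau2))
}

# Sequential looks: raise a signal once the posterior probability crosses 0.95.
looks <- data.frame(beta_hat = c(0.10, 0.22, 0.30), se = c(0.20, 0.14, 0.10))
looks$post_h1 <- posterior_prob_h1(looks$beta_hat, looks$se)
looks$signal  <- looks$post_h1 > 0.95
looks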


Subjects
Adverse Drug Reaction Reporting Systems, Product Surveillance, Postmarketing, Vaccines, Humans, Bayes Theorem, Bias, Probability, Vaccines/adverse effects
2.
BMC Med Res Methodol ; 23(1): 246, 2023 10 21.
Article in English | MEDLINE | ID: mdl-37865728

ABSTRACT

BACKGROUND: Administrative healthcare claims databases are used in drug safety research but are limited for investigating the impacts of prenatal exposures on neonatal and pediatric outcomes without mother-infant pair identification. Further, existing algorithms are not transportable across data sources. We developed a transportable mother-infant linkage algorithm and evaluated it in two, large US commercially insured populations. METHODS: We used two US commercial health insurance claims databases during the years 2000 to 2021. Mother-infant links were constructed where persons of female sex 12-55 years of age with a pregnancy episode ending in live birth were associated with a person who was 0 years of age at database entry, who shared a common insurance plan ID, had overlapping insurance coverage time, and whose date of birth was within ± 60-days of the mother's pregnancy episode live birth date. We compared the characteristics of linked vs. non-linked mothers and infants to assess similarity. RESULTS: The algorithm linked 3,477,960 mothers to 4,160,284 infants in the two databases. Linked mothers and linked infants comprised 73.6% of all mothers and 49.1% of all infants, respectively. 94.9% of linked infants' dates of birth were within ± 30-days of the associated mother's pregnancy episode end dates. Characteristics were largely similar in linked vs. non-linked mothers and infants. Differences included that linked mothers were older, had longer pregnancy episodes, and had greater post-pregnancy observation time than mothers with live births who were not linked. Linked infants had less observation time and greater healthcare utilization than non-linked infants. CONCLUSIONS: We developed a mother-infant linkage algorithm and applied it to two US commercial healthcare claims databases that achieved a high linkage proportion and demonstrated that linked and non-linked mother and infant cohorts were similar. Transparent, reusable algorithms applied to large databases enable large-scale research on exposures during pregnancy and pediatric outcomes with relevance to drug safety. These features suggest studies using this algorithm can produce valid and generalizable evidence to inform clinical, policy, and regulatory decisions.
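
A minimal R sketch of the core linkage join described above: shared insurance plan ID, overlapping coverage time, and infant date of birth within 60 days of the live-birth date. Table and column names are hypothetical, and the other eligibility criteria (maternal age 12-55 years, live-birth pregnancy episode, infant aged 0 at database entry) are assumed to have been applied already; the published algorithm operates on OMOP CDM tables.

library(dplyr)

# Hypothetical inputs (other eligibility filters already applied).
mothers <- data.frame(
  mother_id       = c(1, 2),
  plan_id         = c("A", "B"),
  live_birth_date = as.Date(c("2020-03-01", "2020-06-15")),
  cov_start       = as.Date(c("2019-01-01", "2019-05-01")),
  cov_end         = as.Date(c("2021-01-01", "2021-05-01"))
)
infants <- data.frame(
  infant_id = c(10, 20),
  plan_id   = c("A", "B"),
  dob       = as.Date(c("2020-03-02", "2020-09-30")),
  cov_start = as.Date(c("2020-03-02", "2020-09-30")),
  cov_end   = as.Date(c("2021-03-01", "2021-09-30"))
)

links <- mothers %>%
  inner_join(infants, by = "plan_id", suffix = c("_m", "_i")) %>%  # shared plan ID
  filter(cov_start_i <= cov_end_m, cov_start_m <= cov_end_i,       # overlapping coverage
         abs(as.numeric(dob - live_birth_date)) <= 60)             # DOB within +/- 60 days
links
# Only mother 1 / infant 10 link; infant 20 is born more than 60 days after the live-birth date.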


Assuntos
Mães , Farmacoepidemiologia , Gravidez , Recém-Nascido , Lactente , Feminino , Humanos , Criança , Gravidez Múltipla , Algoritmos , Atenção à Saúde
3.
BMC Med Res Methodol ; 22(1): 35, 2022 01 30.
Article in English | MEDLINE | ID: mdl-35094685

ABSTRACT

BACKGROUND: We investigated whether we could use influenza data to develop prediction models for COVID-19 to increase the speed at which prediction models can reliably be developed and validated early in a pandemic. We developed COVID-19 Estimated Risk (COVER) scores that quantify a patient's risk of hospital admission with pneumonia (COVER-H), hospitalization with pneumonia requiring intensive services or death (COVER-I), or fatality (COVER-F) in the 30 days following COVID-19 diagnosis using historical data from patients with influenza or flu-like symptoms and tested this in COVID-19 patients. METHODS: We analyzed a federated network of electronic medical records and administrative claims data from 14 data sources and 6 countries containing data collected on or before 4/27/2020. We used a 2-step process to develop 3 scores using historical data from patients with influenza or flu-like symptoms any time prior to 2020. The first step was to create a data-driven model using LASSO regularized logistic regression, the covariates of which were used to develop aggregate covariates for the second step where the COVER scores were developed using a smaller set of features. These 3 COVER scores were then externally validated on patients with 1) influenza or flu-like symptoms and 2) confirmed or suspected COVID-19 diagnosis across 5 databases from South Korea, Spain, and the United States. Outcomes included i) hospitalization with pneumonia, ii) hospitalization with pneumonia requiring intensive services or death, and iii) death in the 30 days after index date. RESULTS: Overall, 44,507 COVID-19 patients were included for model validation. We identified 7 predictors (history of cancer, chronic obstructive pulmonary disease, diabetes, heart disease, hypertension, hyperlipidemia, kidney disease) which combined with age and sex discriminated which patients would experience any of our three outcomes. The models achieved good performance in influenza and COVID-19 cohorts. For COVID-19, the AUC ranges were COVER-H: 0.69-0.81, COVER-I: 0.73-0.91, and COVER-F: 0.72-0.90. Calibration varied across the validations with some of the COVID-19 validations being less well calibrated than the influenza validations. CONCLUSIONS: This research demonstrated the utility of using a proxy disease to develop a prediction model. The 3 COVER models, each with 9 predictors, that were developed using influenza data perform well in COVID-19 patients for predicting hospitalization, intensive services, and fatality. The scores showed good discriminatory performance which transferred well to the COVID-19 population. There was some miscalibration in the COVID-19 validations, which is potentially due to the difference in symptom severity between the two diseases. A possible solution for this is to recalibrate the models in each location before use.


Subjects
COVID-19, Influenza, Human, Pneumonia, COVID-19 Testing, Humans, Influenza, Human/epidemiology, SARS-CoV-2, United States
4.
J Biomed Inform ; 135: 104177, 2022 11.
Article in English | MEDLINE | ID: mdl-35995107

ABSTRACT

PURPOSE: Phenotype algorithms are central to performing analyses using observational data. These algorithms translate the clinical idea of a health condition into an executable set of rules allowing for queries of data elements from a database. PheValuator, a software package in the Observational Health Data Sciences and Informatics (OHDSI) tool stack, provides a method to assess the performance characteristics of these algorithms, namely, sensitivity, specificity, and positive and negative predictive value. It uses machine learning to develop predictive models for determining a probabilistic gold standard of subjects for assessment of cases and non-cases of health conditions. PheValuator was developed to complement or even replace the traditional approach of algorithm validation, i.e., by expert assessment of subject records through chart review. Results in our first PheValuator paper suggest a systematic underestimation of the PPV compared to previous results using chart review. In this paper we evaluate modifications made to the method designed to improve its performance. METHODS: The major changes to PheValuator included allowing all diagnostic conditions, clinical observations, drug prescriptions, and laboratory measurements to be included as predictors within the modeling process, whereas in the prior version there were significant restrictions on the included predictors. We also have allowed for the inclusion of the temporal relationships of the predictors in the model. To evaluate the performance of the new method, we compared the results from the new and original methods against results found from the literature using traditional validation of algorithms for 19 phenotypes. We performed these tests using data from five commercial databases. RESULTS: In the assessment aggregating all phenotype algorithms, the median difference between the PheValuator estimate and the gold standard estimate for PPV was reduced from -21 (IQR -34, -3) in Version 1.0 to 4 (IQR -3, 15) using Version 2.0. We found a median difference in specificity of 3 (IQR 1, 4.25) for Version 1.0 and 3 (IQR 1, 4) for Version 2.0. The median difference between the two versions of PheValuator and the gold standard for estimates of sensitivity was reduced from -39 (-51, -20) to -16 (-34, -6). CONCLUSION: PheValuator 2.0 produces estimates for the performance characteristics for phenotype algorithms that are significantly closer to estimates from traditional validation through chart review compared to version 1.0. With this tool in researchers' toolkits, methods, such as quantitative bias analysis, may now be used to improve the reliability and reproducibility of research studies using observational data.
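
A minimal R sketch of the "probabilistic gold standard" idea: once a diagnostic model has assigned each evaluation subject a probability of truly having the condition, expected confusion-matrix counts, and hence sensitivity, specificity, PPV, and NPV, follow directly. The probabilities and algorithm flags below are hypothetical; the actual PheValuator package also builds the diagnostic model and constructs the evaluation cohort, which is not shown here.

# p: model-predicted probability of truly having the condition (hypothetical).
# a: phenotype-algorithm classification (1 = included as a case).
p <- c(0.95, 0.80, 0.10, 0.02, 0.60, 0.05)
a <- c(1,    1,    1,    0,    0,    0)

tp <- sum(p * a);       fp <- sum((1 - p) * a)        # expected true/false positives
fn <- sum(p * (1 - a)); tn <- sum((1 - p) * (1 - a))  # expected false/true negatives

round(c(sensitivity = tp / (tp + fn),
        specificity = tn / (tn + fp),
        ppv         = tp / (tp + fp),
        npv         = tn / (tn + fn)), 3)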


Subjects
Algorithms, Machine Learning, Reproducibility of Results, Databases, Factual, Phenotype
5.
BMC Med Inform Decis Mak ; 22(1): 142, 2022 05 25.
Article in English | MEDLINE | ID: mdl-35614485

ABSTRACT

BACKGROUND: Prognostic models that are accurate could aid medical decision making. Large observational databases often contain temporal medical data for large and diverse populations of patients. It may be possible to learn prognostic models using the large observational data. Often the performance of a prognostic model undesirably worsens when transported to a different database (or into a clinical setting). In this study we investigate different ensemble approaches that combine prognostic models independently developed using different databases (a simple federated learning approach) to determine whether ensembles that combine models developed across databases can improve model transportability, that is, whether they perform better in new data than single-database models. METHODS: For a given prediction question we independently trained five single database models each using a different observational healthcare database. We then developed and investigated numerous ensemble models (fusion, stacking and mixture of experts) that combined the different database models. Performance of each model was investigated via discrimination and calibration using a leave-one-dataset-out technique, i.e., hold out one database to use for validation and use the remaining four datasets for model development. The internal validation of a model developed using the held-out database was calculated and presented as the 'internal benchmark' for comparison. RESULTS: In this study the fusion ensembles generally outperformed the single database models when transported to a previously unseen database and the performances were more consistent across unseen databases. Stacking ensembles performed poorly in terms of discrimination when the labels in the unseen database were limited. Calibration was consistently poor when both ensembles and single database models were applied to previously unseen databases. CONCLUSION: A simple federated learning approach that implements ensemble techniques to combine models independently developed across different databases for the same prediction question may improve the discriminative performance in new data (new database or clinical setting) but will need to be recalibrated using the new data. This could help medical decision making by improving prognostic model performance.
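
A minimal R sketch of the simplest ensemble considered above: fusion by averaging the predicted risks of models trained separately per database. The three glm fits below merely stand in for database-specific models and the data are simulated; the study used OHDSI patient-level prediction models and also examined stacking and mixture-of-experts ensembles, which are not shown.

set.seed(1)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(0.5 * x1 - 0.3 * x2))
validation <- data.frame(x1 = x1, x2 = x2)   # stands in for an unseen database

# Stand-ins for models trained independently on three different databases.
m_db1 <- glm(y ~ x1,      family = binomial)
m_db2 <- glm(y ~ x2,      family = binomial)
m_db3 <- glm(y ~ x1 + x2, family = binomial)

# Fusion ensemble: uniform average of the database-specific predicted risks.
preds  <- sapply(list(m_db1, m_db2, m_db3),
                 function(m) predict(m, newdata = validation, type = "response"))
fusion <- rowMeans(preds)
head(fusion)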


Subjects
Delivery of Health Care, Calibration, Databases, Factual, Humans, Prognosis
6.
Knee Surg Sports Traumatol Arthrosc ; 30(9): 3068-3075, 2022 Sep.
Article in English | MEDLINE | ID: mdl-34870731

ABSTRACT

PURPOSE: The purpose of this study was to develop and validate a prediction model for 90-day mortality following a total knee replacement (TKR). TKR is a safe and cost-effective surgical procedure for treating severe knee osteoarthritis (OA). Although complications following surgery are rare, prediction tools could help identify high-risk patients who could be targeted with preventative interventions. The aim was to develop and validate a simple model to help inform treatment choices. METHODS: A mortality prediction model for knee OA patients following TKR was developed and externally validated using a US claims database and a UK general practice database. The target population consisted of patients undergoing a primary TKR for knee OA, aged ≥ 40 years and registered for ≥ 1 year before surgery. LASSO logistic regression models were developed for post-operative (90-day) mortality. A second mortality model was developed with a reduced feature set to increase interpretability and usability. RESULTS: A total of 193,615 patients were included, with 40,950 in The Health Improvement Network (THIN) database and 152,665 in Optum. The full model predicting 90-day mortality yielded an AUROC of 0.78 when trained in Optum and 0.70 when externally validated on THIN. The 12-variable model achieved an internal AUROC of 0.77 and an external AUROC of 0.71 in THIN. CONCLUSIONS: A simple prediction model based on sex, age, and 10 comorbidities was developed that can identify patients at high risk of short-term mortality following TKR and demonstrated good, robust performance. The 12-feature mortality model is easily implemented and the performance suggests it could be used to inform evidence-based shared decision-making prior to surgery and targeting prophylaxis for those at high risk. LEVEL OF EVIDENCE: III.
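
A minimal R sketch of fitting a LASSO logistic regression for a binary 90-day mortality outcome, as described above, using the glmnet package on simulated data. The covariate names are placeholders, and the study's full pipeline (candidate-feature construction, reduced 12-feature model, external validation) is not reproduced here.

library(glmnet)

set.seed(42)
n <- 1000
X <- cbind(age = rnorm(n, 68, 9),
           sex = rbinom(n, 1, 0.6),
           matrix(rbinom(n * 10, 1, 0.2), n, 10,
                  dimnames = list(NULL, paste0("comorbidity_", 1:10))))
y <- rbinom(n, 1, plogis(-6 + 0.04 * X[, "age"] + 0.8 * X[, "comorbidity_1"]))

# Cross-validated LASSO (alpha = 1) logistic regression.
cv_fit <- cv.glmnet(X, y, family = "binomial", alpha = 1)
coef(cv_fit, s = "lambda.min")                                    # retained predictors
risk <- predict(cv_fit, newx = X, s = "lambda.min", type = "response")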


Subjects
Arthroplasty, Replacement, Knee, Osteoarthritis, Knee, Child, Databases, Factual, Humans
7.
Rheumatology (Oxford) ; 60(SI): SI37-SI50, 2021 10 09.
Article in English | MEDLINE | ID: mdl-33725121

ABSTRACT

OBJECTIVE: Patients with autoimmune diseases were advised to shield to avoid coronavirus disease 2019 (COVID-19), but information on their prognosis is lacking. We characterized 30-day outcomes and mortality after hospitalization with COVID-19 among patients with prevalent autoimmune diseases, and compared outcomes after hospital admissions among similar patients with seasonal influenza. METHODS: A multinational network cohort study was conducted using electronic health records data from Columbia University Irving Medical Center (USA), Optum (USA), Department of Veterans Affairs (USA), Information System for Research in Primary Care-Hospitalization Linked Data (Spain) and claims data from IQVIA Open Claims (USA) and Health Insurance Review and Assessment (South Korea). All patients with prevalent autoimmune diseases, diagnosed and/or hospitalized between January and June 2020 with COVID-19, and similar patients hospitalized with influenza in 2017-18 were included. Outcomes were death and complications within 30 days of hospitalization. RESULTS: We studied 133 589 patients diagnosed and 48 418 hospitalized with COVID-19 with prevalent autoimmune diseases. Most patients were female, aged ≥50 years with previous comorbidities. The prevalence of hypertension (45.5-93.2%), chronic kidney disease (14.0-52.7%) and heart disease (29.0-83.8%) was higher in hospitalized vs diagnosed patients with COVID-19. Compared with 70 660 hospitalized with influenza, those admitted with COVID-19 had more respiratory complications including pneumonia and acute respiratory distress syndrome, and higher 30-day mortality (2.2-4.3% vs 6.32-24.6%). CONCLUSION: Compared with influenza, COVID-19 is a more severe disease, leading to more complications and higher mortality.


Subjects
Autoimmune Diseases/mortality, Autoimmune Diseases/virology, COVID-19/mortality, Hospitalization/statistics & numerical data, Influenza, Human/mortality, Adult, Aged, Aged, 80 and over, COVID-19/immunology, Cohort Studies, Female, Humans, Influenza, Human/immunology, Male, Middle Aged, Prevalence, Prognosis, Republic of Korea/epidemiology, SARS-CoV-2, Spain/epidemiology, United States/epidemiology, Young Adult
8.
J Biomed Inform ; 121: 103870, 2021 09.
Article in English | MEDLINE | ID: mdl-34302957

ABSTRACT

Evidence-Based Medicine (EBM) encourages clinicians to seek the most reputable evidence. The quality of evidence is organized in a hierarchy in which randomized controlled trials (RCTs) are regarded as least biased. However, RCTs are plagued by poor generalizability, impeding the translation of clinical research to practice. Though the presence of poor external validity is known, the factors that contribute to poor generalizability have not been summarized and placed in a framework. We propose a new population-oriented conceptual framework to facilitate consistent and comprehensive evaluation of generalizability, replicability, and assessment of RCT study quality.


Subjects
Evidence-Based Medicine, Randomized Controlled Trials as Topic, Research Design
9.
Regul Toxicol Pharmacol ; 120: 104866, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33454352

ABSTRACT

Many observational studies explore the association between acetaminophen and cancer, but known limitations such as vulnerability to channeling, protopathic bias, and uncontrolled confounding hamper the interpretability of results. To help understand the potential magnitude of bias, we identify key design choices in these observational studies and specify 10 study design variants that represent different combinations of these design choices. We evaluate these variants by applying them to 37 negative controls, outcomes presumed not to be caused by acetaminophen, as well as 4 cancer outcomes in the Clinical Practice Research Datalink (CPRD) database. The estimated odds and hazard ratios for the negative controls show substantial bias in the evaluated design variants, with far fewer of the 95% confidence intervals containing 1 than the nominal 95% expected for negative controls. The effect-size estimates for the cancer outcomes are comparable to those observed for the negative controls. A comparison of exposed and unexposed reveals many differences at baseline for which most studies do not correct. We observe that the design choices made in many of the published observational studies can lead to substantial bias. Thus, caution in the interpretation of published studies of acetaminophen and cancer is recommended.


Subjects
Acetaminophen/adverse effects, Analgesics, Non-Narcotic/adverse effects, Databases, Factual, Neoplasms/chemically induced, Neoplasms/epidemiology, Bias, Case-Control Studies, Cohort Studies, Epidemiologic Studies, Humans
10.
Regul Toxicol Pharmacol ; 127: 105043, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34517075

ABSTRACT

Introduced in the 1950s, acetaminophen is one of the most widely used antipyretics and analgesics worldwide. In 1999, the International Agency for Research on Cancer (IARC) reviewed the epidemiologic studies of acetaminophen and the data were judged to be "inadequate" to conclude that it is carcinogenic. In 2019 the California Office of Environmental Health Hazard Assessment initiated a review process on the carcinogenic hazard potential of acetaminophen. To inform this review process, the authors performed a comprehensive literature search and identified 136 epidemiologic studies, which for most cancer types suggest no alteration in risk associated with acetaminophen use. For 3 cancer types (renal cell, liver, and some forms of lymphohematopoietic cancer), some studies suggest an increased risk; however, multiple factors unique to acetaminophen need to be considered to determine if these results are real and clinically meaningful. The objective of this publication is to analyze the results of these epidemiologic studies using a framework that accounts for the inherent challenges of evaluating acetaminophen, including broad population-wide use in multiple disease states, challenges with exposure measurement, protopathic bias, channeling bias, and recall bias. When evaluated using this framework, the data do not support a causal association between acetaminophen use and cancer.


Subjects
Acetaminophen/adverse effects, Analgesics, Non-Narcotic/adverse effects, Neoplasms/chemically induced, Causality, Humans, Models, Biological
11.
Proc Natl Acad Sci U S A ; 115(11): 2571-2577, 2018 03 13.
Article in English | MEDLINE | ID: mdl-29531023

ABSTRACT

Observational healthcare data, such as electronic health records and administrative claims, offer potential to estimate effects of medical products at scale. Observational studies have often been found to be nonreproducible, however, generating conflicting results even when using the same database to answer the same question. One source of discrepancies is error, both random (caused by sampling variability) and systematic (for example, because of confounding, selection bias, and measurement error). Only random error is typically quantified, but it converges to zero as databases become larger, whereas systematic error persists independent of sample size and therefore increases in relative importance. Negative controls are exposure-outcome pairs, where one believes no causal effect exists; they can be used to detect multiple sources of systematic error, but interpreting their results is not always straightforward. Previously, we have shown that an empirical null distribution can be derived from a sample of negative controls and used to calibrate P values, accounting for both random and systematic error. Here, we extend this work to calibration of confidence intervals (CIs). CIs require positive controls, which we synthesize by modifying negative controls. We show that our CI calibration restores nominal characteristics, such as 95% coverage of the true effect size by the 95% CI. We furthermore show that CI calibration reduces disagreement in replications of two pairs of conflicting observational studies: one related to dabigatran, warfarin, and gastrointestinal bleeding and one related to selective serotonin reuptake inhibitors and upper gastrointestinal bleeding. We recommend CI calibration to improve reproducibility of observational studies.
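
A minimal R sketch of the earlier p-value calibration step summarized above: fit an empirical null to negative-control estimates, then compute a calibrated p-value for a new estimate against that null. This uses a simple method-of-moments fit on hypothetical numbers; the maximum-likelihood fit and the confidence-interval calibration extension described in the paper are implemented in the OHDSI EmpiricalCalibration R package and are not reproduced here.

# Hypothetical negative-control log relative-risk estimates and standard errors.
nc_logrr <- c(0.02, -0.08, 0.15, 0.30, -0.05, 0.12, 0.22, 0.01)
nc_se    <- c(0.10,  0.12, 0.15, 0.20,  0.11, 0.14, 0.18, 0.10)

# Empirical null N(mu, tau^2): systematic error remaining after random error.
mu  <- mean(nc_logrr)
tau <- sqrt(max(0, var(nc_logrr) - mean(nc_se^2)))

calibrated_p <- function(logrr, se) {
  z <- (logrr - mu) / sqrt(se^2 + tau^2)
  2 * pnorm(-abs(z))       # two-sided p-value under the empirical null
}

calibrated_p(logrr = 0.35, se = 0.12)   # compare with the uncalibrated 2 * pnorm(-0.35 / 0.12)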


Subjects
Bias, Calibration/standards, Health Services Research/statistics & numerical data, Health Services Research/standards, Observational Studies as Topic, Confidence Intervals, Humans, Research Design/standards, Research Design/statistics & numerical data
12.
BMC Med Inform Decis Mak ; 21(1): 43, 2021 02 06.
Article in English | MEDLINE | ID: mdl-33549087

ABSTRACT

BACKGROUND: Researchers developing prediction models are faced with numerous design choices that may impact model performance. One key decision is how to include patients who are lost to follow-up. In this paper we perform a large-scale empirical evaluation investigating the impact of this decision. In addition, we aim to provide guidelines for how to deal with loss to follow-up. METHODS: We generate a partially synthetic dataset with complete follow-up and simulate loss to follow-up based either on random selection or on selection based on comorbidity. In addition to our synthetic data study we investigate 21 real-world data prediction problems. We compare four simple strategies for developing models when using a cohort design that encounters loss to follow-up. Three strategies employ a binary classifier with data that: (1) include all patients (including those lost to follow-up), (2) exclude all patients lost to follow-up or (3) only exclude patients lost to follow-up who do not have the outcome before being lost to follow-up. The fourth strategy uses a survival model with data that include all patients. We empirically evaluate the discrimination and calibration performance. RESULTS: The partially synthetic data study results show that excluding patients who are lost to follow-up can introduce bias when loss to follow-up is common and does not occur at random. However, when loss to follow-up was completely at random, the choice of addressing it had negligible impact on model discrimination performance. Our empirical real-world data results showed that the four design choices investigated to deal with loss to follow-up resulted in comparable performance when the time-at-risk was 1 year but demonstrated differential bias when we looked at a 3-year time-at-risk. Removing patients who are lost to follow-up before experiencing the outcome but keeping patients who are lost to follow-up after the outcome can bias a model and should be avoided. CONCLUSION: Based on this study we therefore recommend (1) developing models using data that include patients who are lost to follow-up and (2) evaluating the discrimination and calibration of models twice: on a test set including patients lost to follow-up and on a test set excluding patients lost to follow-up.


Subjects
Lost to Follow-Up, Bias, Calibration, Cohort Studies, Humans, Prognosis
13.
Lancet ; 394(10211): 1816-1826, 2019 11 16.
Article in English | MEDLINE | ID: mdl-31668726

ABSTRACT

BACKGROUND: Uncertainty remains about the optimal monotherapy for hypertension, with current guidelines recommending any primary agent among the first-line drug classes thiazide or thiazide-like diuretics, angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, dihydropyridine calcium channel blockers, and non-dihydropyridine calcium channel blockers, in the absence of comorbid indications. Randomised trials have not further refined this choice. METHODS: We developed a comprehensive framework for real-world evidence that enables comparative effectiveness and safety evaluation across many drugs and outcomes from observational data encompassing millions of patients, while minimising inherent bias. Using this framework, we did a systematic, large-scale study under a new-user cohort design to estimate the relative risks of three primary (acute myocardial infarction, hospitalisation for heart failure, and stroke) and six secondary effectiveness and 46 safety outcomes comparing all first-line classes across a global network of six administrative claims and three electronic health record databases. The framework addressed residual confounding, publication bias, and p-hacking using large-scale propensity adjustment, a large set of control outcomes, and full disclosure of hypotheses tested. FINDINGS: Using 4·9 million patients, we generated 22 000 calibrated, propensity-score-adjusted hazard ratios (HRs) comparing all classes and outcomes across databases. Most estimates revealed no effectiveness differences between classes; however, thiazide or thiazide-like diuretics showed better primary effectiveness than angiotensin-converting enzyme inhibitors: acute myocardial infarction (HR 0·84, 95% CI 0·75-0·95), hospitalisation for heart failure (0·83, 0·74-0·95), and stroke (0·83, 0·74-0·95) risk while on initial treatment. Safety profiles also favoured thiazide or thiazide-like diuretics over angiotensin-converting enzyme inhibitors. The non-dihydropyridine calcium channel blockers were significantly inferior to the other four classes. INTERPRETATION: This comprehensive framework introduces a new way of doing observational health-care science at scale. The approach supports equivalence between drug classes for initiating monotherapy for hypertension, in keeping with current guidelines, with the exception of the superiority of thiazide or thiazide-like diuretics over angiotensin-converting enzyme inhibitors and the inferiority of non-dihydropyridine calcium channel blockers. FUNDING: US National Science Foundation, US National Institutes of Health, Janssen Research & Development, IQVIA, South Korean Ministry of Health & Welfare, Australian National Health and Medical Research Council.


Subjects
Antihypertensive Agents/therapeutic use, Hypertension/drug therapy, Adolescent, Adult, Aged, Angiotensin Receptor Antagonists/adverse effects, Angiotensin Receptor Antagonists/therapeutic use, Angiotensin-Converting Enzyme Inhibitors/adverse effects, Angiotensin-Converting Enzyme Inhibitors/therapeutic use, Antihypertensive Agents/adverse effects, Calcium Channel Blockers/adverse effects, Calcium Channel Blockers/therapeutic use, Child, Cohort Studies, Comparative Effectiveness Research/methods, Databases, Factual, Diuretics/adverse effects, Diuretics/therapeutic use, Evidence-Based Medicine/methods, Female, Heart Failure/etiology, Heart Failure/prevention & control, Humans, Hypertension/complications, Male, Middle Aged, Myocardial Infarction/etiology, Myocardial Infarction/prevention & control, Stroke/etiology, Stroke/prevention & control, Young Adult
14.
BMC Med Res Methodol ; 20(1): 102, 2020 05 06.
Article in English | MEDLINE | ID: mdl-32375693

ABSTRACT

BACKGROUND: To demonstrate how the Observational Health Data Sciences and Informatics (OHDSI) collaborative network and standardization can be utilized to scale up external validation of patient-level prediction models by enabling validation across a large number of heterogeneous observational healthcare datasets. METHODS: Five previously published prognostic models (ATRIA, CHADS2, CHADS2VASC, Q-Stroke and Framingham) that predict future risk of stroke in patients with atrial fibrillation were replicated using the OHDSI frameworks. A network study was run that enabled the five models to be externally validated across nine observational healthcare datasets spanning three countries and five independent sites. RESULTS: The five existing models were able to be integrated into the OHDSI framework for patient-level prediction and they obtained mean c-statistics ranging between 0.57 and 0.63 across the 6 databases with sufficient data to predict stroke within 1 year of initial atrial fibrillation diagnosis for females with atrial fibrillation. This was comparable with existing validation studies. The validation network study was run across nine datasets within 60 days once the models were replicated. An R package for the study was published at https://github.com/OHDSI/StudyProtocolSandbox/tree/master/ExistingStrokeRiskExternalValidation. CONCLUSION: This study demonstrates the ability to scale up external validation of patient-level prediction models using a collaboration of researchers and a data standardization that enables models to be readily shared across data sites. External validation is necessary to understand the transportability or reproducibility of a prediction model, but without collaborative approaches it can take three or more years for a model to be validated by one independent researcher. In this paper we show it is possible to both scale up and speed up external validation by showing how validation can be done across multiple databases in less than 2 months. We recommend that researchers developing new prediction models use the OHDSI network to externally validate their models.
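
A minimal R sketch of the discrimination metric reported above, the c-statistic, computed with the rank (Mann-Whitney) formulation on simulated predictions and outcomes. In the OHDSI framework this value is typically produced by the patient-level prediction tooling itself; the data below are illustrative only.

# c-statistic (AUC) via the Mann-Whitney / rank formulation.
c_statistic <- function(pred, outcome) {
  r  <- rank(pred)
  n1 <- sum(outcome == 1)
  n0 <- sum(outcome == 0)
  (sum(r[outcome == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

set.seed(3)
risk   <- runif(2000)                                  # hypothetical predicted 1-year stroke risk
stroke <- rbinom(2000, 1, plogis(-3 + 2 * risk))       # simulated outcomes
c_statistic(risk, stroke)                              # discrimination of the simulated predictor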


Subjects
Atrial Fibrillation, Stroke, Atrial Fibrillation/diagnosis, Atrial Fibrillation/epidemiology, Feasibility Studies, Female, Humans, Prognosis, Reproducibility of Results, Stroke/diagnosis, Stroke/epidemiology
15.
JAMA ; 324(16): 1640-1650, 2020 10 27.
Article in English | MEDLINE | ID: mdl-33107944

ABSTRACT

Importance: Current guidelines recommend ticagrelor as the preferred P2Y12 platelet inhibitor for patients with acute coronary syndrome (ACS), primarily based on a single large randomized clinical trial. The benefits and risks associated with ticagrelor vs clopidogrel in routine practice merit attention. Objective: To determine the association of ticagrelor vs clopidogrel with ischemic and hemorrhagic events in patients undergoing percutaneous coronary intervention (PCI) for ACS in clinical practice. Design, Setting, and Participants: A retrospective cohort study of patients with ACS who underwent PCI and received ticagrelor or clopidogrel was conducted using 2 United States electronic health record-based databases and 1 nationwide South Korean database from November 2011 to March 2019. Patients were matched using a large-scale propensity score algorithm, and the date of final follow-up was March 2019. Exposures: Ticagrelor vs clopidogrel. Main Outcomes and Measures: The primary end point was net adverse clinical events (NACE) at 12 months, composed of ischemic events (recurrent myocardial infarction, revascularization, or ischemic stroke) and hemorrhagic events (hemorrhagic stroke or gastrointestinal bleeding). Secondary outcomes included NACE or mortality, all-cause mortality, ischemic events, hemorrhagic events, individual components of the primary outcome, and dyspnea at 12 months. The database-level hazard ratios (HRs) were pooled to calculate summary HRs by random-effects meta-analysis. Results: After propensity score matching among 31 290 propensity-matched pairs (median age group, 60-64 years; 29.3% women), 95.5% of patients took aspirin together with ticagrelor or clopidogrel. The 1-year risk of NACE was not significantly different between ticagrelor and clopidogrel (15.1% [3484/23 116 person-years] vs 14.6% [3290/22 587 person-years]; summary HR, 1.05 [95% CI, 1.00-1.10]; P = .06). There was also no significant difference in the risk of all-cause mortality (2.0% for ticagrelor vs 2.1% for clopidogrel; summary HR, 0.97 [95% CI, 0.81-1.16]; P = .74) or ischemic events (13.5% for ticagrelor vs 13.4% for clopidogrel; summary HR, 1.03 [95% CI, 0.98-1.08]; P = .32). The risks of hemorrhagic events (2.1% for ticagrelor vs 1.6% for clopidogrel; summary HR, 1.35 [95% CI, 1.13-1.61]; P = .001) and dyspnea (27.3% for ticagrelor vs 22.6% for clopidogrel; summary HR, 1.21 [95% CI, 1.17-1.26]; P < .001) were significantly higher in the ticagrelor group. Conclusions and Relevance: Among patients with ACS who underwent PCI in routine clinical practice, ticagrelor, compared with clopidogrel, was not associated with a significant difference in the risk of NACE at 12 months. Because the possibility of unmeasured confounders cannot be excluded, further research is needed to determine whether ticagrelor is more effective than clopidogrel in this setting.
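
A minimal R sketch of the final pooling step described above: database-level hazard ratios combined by random-effects meta-analysis, here with the metafor package. The three hazard ratios and confidence intervals below are hypothetical, not the study's results.

library(metafor)

hr  <- c(1.08, 1.02, 1.06)   # database-level hazard ratios (hypothetical)
lcl <- c(0.99, 0.93, 0.97)   # lower 95% CI bounds
ucl <- c(1.18, 1.12, 1.16)   # upper 95% CI bounds

yi  <- log(hr)                               # analyze on the log scale
sei <- (log(ucl) - log(lcl)) / (2 * 1.96)    # SE of the log HR recovered from the CI width

fit <- rma(yi = yi, sei = sei, method = "REML")   # random-effects pooling
exp(c(summary_hr = fit$b, ci_lb = fit$ci.lb, ci_ub = fit$ci.ub))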


Subjects
Acute Coronary Syndrome/surgery, Clopidogrel/adverse effects, Percutaneous Coronary Intervention, Purinergic P2Y Receptor Antagonists/adverse effects, Ticagrelor/adverse effects, Acute Coronary Syndrome/mortality, Adult, Aged, Aged, 80 and over, Algorithms, Aspirin/administration & dosage, Case-Control Studies, Cause of Death, Clopidogrel/administration & dosage, Databases, Factual/statistics & numerical data, Dyspnea/chemically induced, Female, Hemorrhage/chemically induced, Humans, Ischemia/chemically induced, Male, Middle Aged, Myocardial Infarction/epidemiology, Network Meta-Analysis, Propensity Score, Purinergic P2Y Receptor Antagonists/administration & dosage, Recurrence, Republic of Korea, Retrospective Studies, Stroke/epidemiology, Ticagrelor/administration & dosage, United States
16.
Stat Med ; 38(22): 4199-4208, 2019 09 30.
Article in English | MEDLINE | ID: mdl-31436848

ABSTRACT

The case-control design is widely used in retrospective database studies, often leading to spectacular findings. However, results of these studies often cannot be replicated, and the advantage of this design over others is questionable. To demonstrate the shortcomings of applications of this design, we replicate two published case-control studies. The first investigates isotretinoin and ulcerative colitis using a simple case-control design. The second focuses on dipeptidyl peptidase-4 inhibitors and acute pancreatitis, using a nested case-control design. We include large sets of negative control exposures (where the true odds ratio is believed to be 1) in both studies. Both replication studies produce effect size estimates consistent with the original studies, but also generate estimates for the negative control exposures showing substantial residual bias. In contrast, applying a self-controlled design to answer the same questions using the same data reveals far less bias. Although the case-control design in general is not at fault, its application in retrospective database studies, where all exposure and covariate data for the entire cohort are available, is unnecessary, as other alternatives such as cohort and self-controlled designs are available. Moreover, by focusing on cases and controls it opens the door to inappropriate comparisons between exposure groups, leading to confounding for which the design has few options to adjust. We argue that this design should no longer be used in these types of data. At the very least, negative control exposures should be used to prove that the concerns raised here do not apply.


Subjects
Case-Control Studies, Databases, Factual, Reproducibility of Results, Bias, Computer Simulation, Data Interpretation, Statistical, Humans, Retrospective Studies
17.
J Biomed Inform ; 97: 103258, 2019 09.
Article in English | MEDLINE | ID: mdl-31369862

ABSTRACT

BACKGROUND: The primary approach for defining disease in observational healthcare databases is to construct phenotype algorithms (PAs), rule-based heuristics predicated on the presence, absence, and temporal logic of clinical observations. However, a complete evaluation of PAs, i.e., determining sensitivity, specificity, and positive predictive value (PPV), is rarely performed. In this study, we propose a tool (PheValuator) to efficiently estimate a complete PA evaluation. METHODS: We used 4 administrative claims datasets: OptumInsight's de-identified Clinformatics™ Datamart (Eden Prairie, MN); IBM MarketScan Multi-State Medicaid; IBM MarketScan Medicare Supplemental Beneficiaries; and IBM MarketScan Commercial Claims and Encounters from 2000 to 2017. Using PheValuator involves (1) creating a diagnostic predictive model for the phenotype, (2) applying the model to a large set of randomly selected subjects, and (3) comparing each subject's predicted probability for the phenotype to inclusion/exclusion in PAs. We used the predictions as a 'probabilistic gold standard' measure to classify positive/negative cases. We examined 4 phenotypes: myocardial infarction, cerebral infarction, chronic kidney disease, and atrial fibrillation. We examined several PAs for each phenotype including 1-time (1X) occurrence of the diagnosis code in the subject's record and 1-time occurrence of the diagnosis in an inpatient setting with the diagnosis code as the primary reason for admission (1X-IP-1stPos). RESULTS: Across phenotypes, the 1X PA showed the highest sensitivity/lowest PPV among all PAs. 1X-IP-1stPos yielded the highest PPV/lowest sensitivity. Specificity was very high across algorithms. We found similar results between algorithms across datasets. CONCLUSION: PheValuator appears to show promise as a tool to estimate PA performance characteristics.


Subjects
Algorithms, Diagnosis, Computer-Assisted, Phenotype, Atrial Fibrillation/diagnosis, Cerebral Infarction/diagnosis, Computational Biology, Current Procedural Terminology, Databases, Factual/statistics & numerical data, Diagnosis, Computer-Assisted/statistics & numerical data, Diagnostic Errors/statistics & numerical data, Humans, Models, Statistical, Myocardial Infarction/diagnosis, Predictive Value of Tests, Probability, Renal Insufficiency, Chronic/diagnosis, Sensitivity and Specificity
18.
J Biomed Inform ; 97: 103264, 2019 09.
Article in English | MEDLINE | ID: mdl-31386904

ABSTRACT

OBJECTIVES: Smoking status is poorly recorded in US claims data. IBM MarketScan Commercial is a claims database that can be linked to an additional health risk assessment with self-reported smoking status for a subset of 1,966,174 patients. We investigate whether this subset could be used to learn a smoking status phenotype model generalizable to all US claims data that calculates the probability of being a current smoker. METHODS: 251,643 (12.8%) had self-reported their smoking status as 'current smoker'. A regularized logistic regression model, the Current Risk of Smoking Status (CROSS), was trained using the subset of patients with self-reported smoking status. CROSS considered 53,027 candidate covariates including demographics and conditions/drugs/measurements/procedures/observations recorded in the prior 365 days. The CROSS phenotype model was validated across multiple other claims databases. RESULTS: The internal validation showed the CROSS model achieved an area under the receiver operating characteristic curve (AUC) of 0.76 and the calibration plots indicated it was well calibrated. The external validation across three US claims databases obtained AUCs ranging between 0.82 and 0.87, showing the model appears to be transportable across claims data. CONCLUSION: CROSS predicts current smoking status based on the claims records in the prior year. CROSS can be readily implemented for any US insurance claims data mapped to the OMOP common data model and will be a useful way to impute smoking status when conducting epidemiology studies where smoking is a known confounder but smoking status is not recorded. CROSS is available from https://github.com/OHDSI/StudyProtocolSandbox/tree/master/SmokingModel.


Subjects
Cigarette Smoking/epidemiology, Insurance Claim Review/statistics & numerical data, Models, Statistical, Adult, Computational Biology, Data Interpretation, Statistical, Databases, Factual/statistics & numerical data, Female, Humans, Male, Middle Aged, Phenotype, Risk Assessment, Self Report/statistics & numerical data, United States/epidemiology
19.
Pharmacoepidemiol Drug Saf ; 28(12): 1620-1628, 2019 12.
Article in English | MEDLINE | ID: mdl-31456304

ABSTRACT

PURPOSE: To compare the incidence of diabetic ketoacidosis (DKA) among patients with type 2 diabetes mellitus (T2DM) who were new users of sodium glucose co-transporter 2 inhibitors (SGLT2i) versus other classes of antihyperglycemic agents (AHAs). METHODS: Patients were identified from four large US claims databases using broad (all T2DM patients) and narrow (intended to exclude patients with type 1 diabetes or secondary diabetes misclassified as T2DM) definitions of T2DM. New users of SGLT2i and seven groups of comparator AHAs were matched (1:1) on exposure propensity scores to adjust for imbalances in baseline covariates. Cox proportional hazards regression models, conditioned on propensity score-matched pairs, were used to estimate hazard ratios (HRs) of DKA for new users of SGLT2i versus other AHAs. When I2 <40%, a combined HR across the four databases was estimated. RESULTS: Using the broad definition of T2DM, new users of SGLT2i had an increased risk of DKA versus sulfonylureas (HR [95% CI]: 1.53 [1.31-1.79]), DPP-4i (1.28 [1.11-1.47]), GLP-1 receptor agonists (1.34 [1.12-1.60]), metformin (1.31 [1.11-1.54]), and insulinotropic AHAs (1.38 [1.15-1.66]). Using the narrow definition of T2DM, new users of SGLT2i had an increased risk of DKA versus sulfonylureas (1.43 [1.01-2.01]). New users of SGLT2i had a lower risk of DKA versus insulin and a similar risk as thiazolidinediones, regardless of T2DM definition. CONCLUSIONS: Increased risk of DKA was observed for new users of SGLT2i versus several non-SGLT2i AHAs when T2DM was defined broadly. When T2DM was defined narrowly to exclude possible misclassified patients, an increased risk of DKA with SGLT2i was observed compared with sulfonylureas.
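
A minimal R sketch of the outcome model described above: a Cox proportional hazards regression conditioned on propensity-score-matched pairs via stratification, using the survival package. The data are simulated; the propensity model, the matching itself, and the across-database pooling rule (combining HRs when I2 < 40%) are not shown.

library(survival)

set.seed(7)
n_pairs <- 200
d <- data.frame(
  pair_id  = rep(seq_len(n_pairs), each = 2),
  exposure = rep(c(1, 0), times = n_pairs)     # 1 = new user of SGLT2i, 0 = matched comparator
)
d$time  <- rexp(nrow(d), rate = ifelse(d$exposure == 1, 0.0015, 0.0010))
d$event <- as.integer(d$time < 365)            # simulated DKA within the time-at-risk window
d$time  <- pmin(d$time, 365)

# Cox model conditioned on matched pairs by stratifying on pair_id.
fit <- coxph(Surv(time, event) ~ exposure + strata(pair_id), data = d)
summary(fit)$conf.int                          # HR with 95% CI (illustrative)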


Subjects
Diabetes Mellitus, Type 2/drug therapy, Diabetic Ketoacidosis/epidemiology, Sodium-Glucose Transporter 2 Inhibitors/adverse effects, Administrative Claims, Healthcare/statistics & numerical data, Aged, Blood Glucose, Databases, Factual/statistics & numerical data, Diabetic Ketoacidosis/chemically induced, Female, Glucagon-Like Peptide-1 Receptor/antagonists & inhibitors, Humans, Incidence, Insulin/adverse effects, Male, Metformin/adverse effects, Middle Aged, Risk Factors, Sulfonylurea Compounds/adverse effects, United States/epidemiology
20.
Proc Natl Acad Sci U S A ; 113(27): 7329-36, 2016 07 05.
Article in English | MEDLINE | ID: mdl-27274072

ABSTRACT

Observational research promises to complement experimental research by providing large, diverse populations that would be infeasible for an experiment. Observational research can test its own clinical hypotheses, and observational studies also can contribute to the design of experiments and inform the generalizability of experimental research. Understanding the diversity of populations and the variance in care is one component. In this study, the Observational Health Data Sciences and Informatics (OHDSI) collaboration created an international data network with 11 data sources from four countries, including electronic health records and administrative claims data on 250 million patients. All data were mapped to common data standards, patient privacy was maintained by using a distributed model, and results were aggregated centrally. Treatment pathways were elucidated for type 2 diabetes mellitus, hypertension, and depression. The pathways revealed that the world is moving toward more consistent therapy over time across diseases and across locations, but significant heterogeneity remains among sources, pointing to challenges in generalizing clinical trial results. Diabetes favored a single first-line medication, metformin, to a much greater extent than hypertension or depression. About 10% of diabetes and depression patients and almost 25% of hypertension patients followed a treatment pathway that was unique within the cohort. Aside from factors such as sample size and underlying population (academic medical center versus general population), electronic health records data and administrative claims data revealed similar results. Large-scale international observational research is feasible.


Subjects
Practice Patterns, Physicians'/statistics & numerical data, Antidepressive Agents/therapeutic use, Antihypertensive Agents/therapeutic use, Databases, Factual, Depression/drug therapy, Diabetes Mellitus, Type 2/drug therapy, Electronic Health Records, Humans, Hypertension/drug therapy, Hypoglycemic Agents/therapeutic use, Internationality, Medical Informatics