2.
medRxiv; 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38370787

ABSTRACT

Background: SGLT2 inhibitors (SGLT2is) and GLP-1 receptor agonists (GLP1-RAs) reduce major adverse cardiovascular events (MACE) in patients with type 2 diabetes mellitus (T2DM). However, their effectiveness relative to each other and to other second-line antihyperglycemic agents is unknown, with no major head-to-head trials ongoing. Methods: Across the LEGEND-T2DM network, we included ten federated international data sources spanning 1992-2021. We identified 1,492,855 patients with T2DM and established cardiovascular disease (CVD) on metformin monotherapy who initiated one of four second-line agents (SGLT2is, GLP1-RAs, dipeptidyl peptidase 4 inhibitors [DPP4is], sulfonylureas [SUs]). We used large-scale propensity score models to conduct an active-comparator target trial emulation for pairwise comparisons. After evaluating empirical equipoise and population generalizability, we fit on-treatment Cox proportional hazards models for 3-point MACE (myocardial infarction, stroke, death) and 4-point MACE (3-point MACE plus heart failure hospitalization) risk, and combined hazard ratio (HR) estimates in a random-effects meta-analysis. Findings: Across cohorts, 16·4%, 8·3%, 27·7%, and 47·6% of individuals with T2DM initiated SGLT2is, GLP1-RAs, DPP4is, and SUs, respectively. Over 5·2 million patient-years of follow-up and 489 million patient-days of time at risk, there were 25,982 3-point MACE and 41,447 4-point MACE events. SGLT2is and GLP1-RAs were associated with a lower risk of 3-point MACE compared with DPP4is (HR 0·89 [95% CI, 0·79-1·00] and 0·83 [0·70-0·98]) and SUs (HR 0·76 [0·65-0·89] and 0·71 [0·59-0·86]). DPP4is were associated with a lower 3-point MACE risk versus SUs (HR 0·87 [0·79-0·95]). The pattern was consistent for 4-point MACE for the comparisons above. There were no significant differences between SGLT2is and GLP1-RAs for 3-point or 4-point MACE (HR 1·06 [0·96-1·17] and 1·05 [0·97-1·13]). Interpretation: In patients with T2DM and established CVD, we found comparable cardiovascular risk reduction with SGLT2is and GLP1-RAs, with both agents more effective than DPP4is, which in turn were more effective than SUs. These findings suggest that GLP1-RAs and SGLT2is should be prioritized as second-line agents in patients with established CVD. Funding: National Institutes of Health, United States Department of Veterans Affairs.
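The per-database hazard ratio estimates described above were combined in a random-effects meta-analysis. Below is a minimal sketch of one standard approach (DerSimonian-Laird pooling of log hazard ratios), using illustrative placeholder values rather than the study's actual estimates:

```python
import numpy as np

# Hypothetical per-database hazard ratios and 95% CIs (placeholder values,
# not estimates from the study)
hr = np.array([0.85, 0.92, 0.78, 0.95])
ci_low = np.array([0.70, 0.80, 0.60, 0.82])
ci_high = np.array([1.03, 1.06, 1.01, 1.10])

# Work on the log scale; recover standard errors from the CI width
log_hr = np.log(hr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)

# Fixed-effect weights and Cochran's Q heterogeneity statistic
w = 1.0 / se**2
fixed = np.sum(w * log_hr) / np.sum(w)
q = np.sum(w * (log_hr - fixed) ** 2)

# DerSimonian-Laird estimate of the between-database variance tau^2
k = len(hr)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled log hazard ratio and 95% CI
w_re = 1.0 / (se**2 + tau2)
pooled = np.sum(w_re * log_hr) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
ci = np.exp(pooled + np.array([-1.96, 1.96]) * se_pooled)
print(f"pooled HR {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), tau^2 = {tau2:.3f}")
```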

3.
BMJ Med; 2(1): e000651, 2023.
Article in English | MEDLINE | ID: mdl-37829182

ABSTRACT

Objective: To assess the uptake of second line antihyperglycaemic drugs among patients with type 2 diabetes mellitus who are receiving metformin. Design: Federated pharmacoepidemiological evaluation in LEGEND-T2DM. Setting: 10 US and seven non-US electronic health record and administrative claims databases in the Observational Health Data Sciences and Informatics network, covering eight countries from 2011 to the end of 2021. Participants: 4.8 million patients (≥18 years) across US and non-US databases with type 2 diabetes mellitus who had received metformin monotherapy and had initiated second line treatments. Exposure: Calendar year trends, evaluated over the study years specific to each database. Main outcome measures: The outcome was the incidence of second line antihyperglycaemic drug use (ie, glucagon-like peptide-1 receptor agonists, sodium-glucose cotransporter-2 inhibitors, dipeptidyl peptidase-4 inhibitors, and sulfonylureas) among individuals already receiving treatment with metformin. The relative drug class level uptake across cardiovascular risk groups was also evaluated. Results: 4.6 million patients were identified in US databases, 61 382 from Spain, 32 442 from Germany, 25 173 from the UK, 13 270 from France, 5580 from Scotland, 4614 from Hong Kong, and 2322 from Australia. During 2011-21, the combined proportional initiation of the cardioprotective antihyperglycaemic drugs (glucagon-like peptide-1 receptor agonists and sodium-glucose cotransporter-2 inhibitors) increased across all data sources, with the combined initiation of these drugs as second line drugs in 2021 ranging from 35.2% to 68.2% in the US databases, 15.4% in France, 34.7% in Spain, 50.1% in Germany, and 54.8% in Scotland. From 2016 to 2021, in some US and non-US databases, uptake of glucagon-like peptide-1 receptor agonists and sodium-glucose cotransporter-2 inhibitors increased more markedly among populations with no cardiovascular disease than among patients with established cardiovascular disease. No data source provided evidence of a greater increase in the uptake of these two drug classes in populations with cardiovascular disease compared with no cardiovascular disease. Conclusions: Despite the increase in overall uptake of cardioprotective antihyperglycaemic drugs as second line treatments for type 2 diabetes mellitus, their uptake was lower in patients with cardiovascular disease than in people with no cardiovascular disease over the past decade. A strategy is needed to ensure that medication use is concordant with guideline recommendations to improve outcomes of patients with type 2 diabetes mellitus.
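The uptake figures above are essentially proportions of second-line initiations by drug class and calendar year, stratified by cardiovascular risk. A minimal sketch of that descriptive calculation, assuming a hypothetical per-patient initiation table (column names and values are illustrative, not the study's data model):

```python
import pandas as pd

# Hypothetical table: one row per patient initiating a second-line agent
initiations = pd.DataFrame({
    "year":       [2019, 2019, 2020, 2020, 2021, 2021, 2021],
    "drug_class": ["SGLT2i", "SU", "GLP1-RA", "DPP4i", "SGLT2i", "GLP1-RA", "SU"],
    "has_cvd":    [True, False, True, False, False, True, False],
})

# Proportional initiation of each class per calendar year (the abstract reports
# the combined share for GLP1-RAs and SGLT2is)
by_year = (initiations
           .groupby(["year", "drug_class"]).size()
           .groupby(level="year").transform(lambda s: s / s.sum())
           .rename("proportion").reset_index())

# Same breakdown stratified by established cardiovascular disease
by_cvd = (initiations
          .groupby(["year", "has_cvd", "drug_class"]).size()
          .groupby(level=["year", "has_cvd"]).transform(lambda s: s / s.sum())
          .rename("proportion").reset_index())

print(by_year)
print(by_cvd)
```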

4.
Drug Saf; 46(8): 797-807, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37328600

ABSTRACT

INTRODUCTION: Vaccine safety surveillance commonly includes a serial testing approach, with a sensitive method for 'signal generation' and a specific method for 'signal validation.' The extent to which serial testing in real-world studies improves or hinders overall performance in terms of sensitivity and specificity remains unknown. METHODS: We assessed the overall performance of serial testing using three administrative claims databases and one electronic health record database. We compared type I and II errors before and after empirical calibration for the historical comparator design, the self-controlled case series (SCCS), and the serial combination of those designs against six vaccine exposure groups with 93 negative control and 279 imputed positive control outcomes. RESULTS: The historical comparator design mostly had fewer type II errors than SCCS. SCCS had fewer type I errors than the historical comparator. Before empirical calibration, the serial combination increased specificity and decreased sensitivity. Type II errors mostly exceeded 50%. After empirical calibration, type I errors returned to nominal; sensitivity was lowest when the methods were combined. CONCLUSION: While the serial combination produced fewer false-positive signals than the most specific method alone, it generated more false-negative signals than the most sensitive method alone. Using a historical comparator design followed by an SCCS analysis yielded decreased sensitivity in evaluating safety signals relative to a one-stage SCCS approach. While the current use of serial testing in vaccine surveillance may provide a practical paradigm for signal identification and triage, single epidemiological designs should be explored as valuable approaches to detecting signals.
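A minimal sketch of how a serial testing strategy can be scored against control outcomes, as described above: a signal is declared only when both the sensitive first-stage method and the specific second-stage method flag an outcome, and type I/II errors are then computed from negative and imputed positive controls. The per-method decisions below are simulated placeholders, not the study's results:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neg, n_pos = 93, 279  # negative and imputed positive control outcomes

def decisions(n, p_flag):
    """Simulate per-outcome signal decisions for a method that flags with
    probability p_flag (a stand-in for its real operating characteristics)."""
    return rng.random(n) < p_flag

# Hypothetical flagging behaviour: the historical comparator is sensitive but
# less specific, SCCS is more specific but less sensitive
hist_neg, sccs_neg = decisions(n_neg, 0.25), decisions(n_neg, 0.10)
hist_pos, sccs_pos = decisions(n_pos, 0.80), decisions(n_pos, 0.60)

# Serial combination: a signal is declared only if both stages flag the outcome
serial_neg = hist_neg & sccs_neg
serial_pos = hist_pos & sccs_pos

def error_rates(neg_flags, pos_flags):
    type1 = neg_flags.mean()      # false-positive rate on negative controls
    type2 = 1 - pos_flags.mean()  # false-negative rate on positive controls
    return type1, type2

for name, neg, pos in [("historical comparator", hist_neg, hist_pos),
                       ("SCCS", sccs_neg, sccs_pos),
                       ("serial combination", serial_neg, serial_pos)]:
    t1, t2 = error_rates(neg, pos)
    print(f"{name:22s} type I {t1:.2f}   type II {t2:.2f}")
```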


Subject(s)
Vaccines, Humans, Vaccines/adverse effects, Sensitivity and Specificity, Research Design, Factual Databases, Electronic Health Records
6.
Front Pharmacol; 13: 893484, 2022.
Article in English | MEDLINE | ID: mdl-35873596

ABSTRACT

Background: Routinely collected healthcare data such as administrative claims and electronic health records (EHR) can complement clinical trials and spontaneous reports to detect previously unknown risks of vaccines, but uncertainty remains about the behavior of alternative epidemiologic designs to detect and declare a true risk early. Methods: Using three claims and one EHR database, we evaluate several variants of the case-control, comparative cohort, historical comparator, and self-controlled designs against historical vaccinations using real negative control outcomes (outcomes with no evidence to suggest that they could be caused by the vaccines) and simulated positive control outcomes. Results: Most methods show large type 1 error, often identifying false positive signals. The cohort method appears either positively or negatively biased, depending on the choice of comparator index date. Empirical calibration using effect-size estimates for negative control outcomes can bring type 1 error closer to nominal, often at the cost of increasing type 2 error. After calibration, the self-controlled case series (SCCS) design most rapidly detects small true effect sizes, while the historical comparator performs well for strong effects. Conclusion: When applying any method for vaccine safety surveillance we recommend considering the potential for systematic error, especially due to confounding, which for many designs appears to be substantial. Adjusting for age and sex alone is likely not sufficient to address differences between vaccinated and unvaccinated, and for the cohort method the choice of index date is important for the comparability of the groups. Analysis of negative control outcomes allows both quantification of the systematic error and, if desired, subsequent empirical calibration to restore type 1 error to its nominal value. In order to detect weaker signals, one may have to accept a higher type 1 error.
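A minimal sketch of the empirical calibration with negative control outcomes referred to above: estimate a systematic error distribution from the negative controls' effect estimates and test new estimates against that empirical null rather than the theoretical one. This is a simplified version of the approach used in the OHDSI literature, with illustrative inputs:

```python
import numpy as np
from scipy import stats

# Hypothetical log effect estimates for negative control outcomes (true effect
# is 1, so any systematic shift or spread reflects bias such as confounding)
neg_log_rr = np.array([0.05, 0.20, -0.10, 0.30, 0.15, 0.02, 0.25, -0.05])

# Empirical null distribution: mean and SD of the negative control estimates
# (ignoring their individual sampling error, for simplicity)
mu, sigma = neg_log_rr.mean(), neg_log_rr.std(ddof=1)

def calibrated_p(log_estimate, se):
    """Two-sided p-value against the empirical null, combining the systematic
    error distribution with the estimate's own sampling error."""
    total_sd = np.sqrt(sigma**2 + se**2)
    z = (log_estimate - mu) / total_sd
    return 2 * stats.norm.sf(abs(z))

# A hypothetical new estimate: rate ratio 1.6 with a log-scale SE of 0.15
log_rr, se = np.log(1.6), 0.15
print("uncalibrated p:", 2 * stats.norm.sf(abs(log_rr / se)))
print("calibrated p:  ", calibrated_p(log_rr, se))
```

When the negative controls show little systematic shift or spread, calibrated and uncalibrated p-values coincide; large systematic error widens the null and attenuates borderline signals, which is the type 1 vs. type 2 error trade-off noted in the abstract.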

7.
Front Pharmacol; 13: 837632, 2022.
Article in English | MEDLINE | ID: mdl-35392566

ABSTRACT

Post-marketing vaccine safety surveillance aims to detect adverse events following immunization in a population. Whether certain methods of surveillance are more precise and unbiased in generating safety signals is unclear. Here, we synthesized information from the existing literature to provide an overview of the strengths, weaknesses, and clinical applications of epidemiologic and analytical methods used in vaccine monitoring, focusing on cohort, case-control and self-controlled designs. These designs are proposed to be evaluated in the EUMAEUS (Evaluating Use of Methods for Adverse Event Under Surveillance-for vaccines) study because of their widespread use and potential utility. Over the past decades, an increasing number of epidemiological study designs have been used for vaccine safety surveillance. While traditional cohort and case-control study designs remain widely used, newer designs such as the self-controlled case series and self-controlled risk interval have been developed. Each study design comes with its strengths and limitations, and the most appropriate choice depends on the availability of resources, access to records, the number and distribution of cases, and the availability of population coverage data. Several assumptions have to be made when using the various study designs, and while the goal is to mitigate any biases, violations of these assumptions are often still present to varying degrees. In our review, we discussed some of the potential biases (i.e., selection bias, misclassification bias and confounding bias) and ways to mitigate them. While the types of epidemiological study designs are well established, a comprehensive comparison of their analytical aspects (including method evaluation and performance metrics) is relatively less well studied. We summarized the literature, reporting on two simulation studies that compared the detection time, empirical power, error rate and risk estimate bias across the above-mentioned study designs. While these simulation studies provided insights on the analytic performance of each of the study designs, their applicability to real-world data remains unclear. To bridge that gap, we provided the rationale for the EUMAEUS study, with a brief description of the study design, and outlined how the use of real-world multi-database networks can provide insights into better methods evaluation and vaccine safety surveillance.
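As a concrete illustration of the self-controlled designs discussed in the review, here is a minimal sketch of a self-controlled risk interval estimate: within vaccinated cases only, the event rate in a post-vaccination risk window is compared with the rate in a control window in the same people, so time-invariant confounders cancel out. Counts and window lengths are illustrative placeholders:

```python
import numpy as np
from scipy import stats

# Illustrative aggregate counts: 1000 vaccinated cases, each contributing a
# 21-day post-vaccination risk window and a 42-day control window
events_risk, days_risk = 30, 21 * 1000
events_ctrl, days_ctrl = 45, 42 * 1000

# Incidence rate ratio and a Wald 95% CI on the log scale
irr = (events_risk / days_risk) / (events_ctrl / days_ctrl)
se_log_irr = np.sqrt(1 / events_risk + 1 / events_ctrl)
ci = np.exp(np.log(irr) + np.array([-1.96, 1.96]) * se_log_irr)

# Exact conditional test: given the total number of events, the count falling
# in the risk window is binomial with p set by the relative window lengths
p_null = days_risk / (days_risk + days_ctrl)
p_value = stats.binomtest(events_risk, events_risk + events_ctrl, p_null).pvalue

print(f"IRR {irr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), p = {p_value:.3f}")
```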

8.
Front Pharmacol; 12: 773875, 2021.
Article in English | MEDLINE | ID: mdl-34899334

ABSTRACT

Using real-world data and past vaccination data, we conducted a large-scale experiment to quantify the bias, precision and timeliness of different study designs for comparing historical background (expected) rates with post-vaccination (observed) rates of safety events for several vaccines. We used negative control outcomes (not causally related to the vaccines) and positive control outcomes; the latter were synthetically generated true safety signals with incidence rate ratios ranging from 1.5 to 4. Observed vs. expected analysis using within-database historical background rates is a sensitive but unspecific method for the identification of potential vaccine safety signals. Despite good discrimination, most analyses showed a tendency to overestimate risks, with 20%-100% type 1 error but low (0%-20%) type 2 error in the large databases included in our study. Efforts to improve the comparability of background and post-vaccine rates, including age-sex adjustment and anchoring background rates around a visit, reduced type 1 error and improved precision, but residual systematic error persisted. Additionally, empirical calibration dramatically reduced type 1 error to nominal, but at the cost of increased type 2 error.
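A minimal sketch of the observed-vs-expected comparison described above: the expected count is the historical background rate applied to the post-vaccination time at risk, and the observed count is tested against it as a Poisson outcome. All numbers are illustrative assumptions:

```python
from scipy import stats

background_rate = 5.0 / 100_000   # assumed historical rate per person-year
person_years_at_risk = 250_000    # assumed post-vaccination time at risk
observed_events = 19              # assumed observed count in the risk window

# Expected count under the historical background rate
expected_events = background_rate * person_years_at_risk
oe_ratio = observed_events / expected_events

# One-sided Poisson test for an excess of observed over expected events
p_value = stats.poisson.sf(observed_events - 1, expected_events)

print(f"expected {expected_events:.1f}, observed {observed_events}, "
      f"O/E {oe_ratio:.2f}, one-sided p = {p_value:.4f}")
```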
