ABSTRACT
The test-negative design (TND) is a popular method for evaluating vaccine effectiveness (VE). A "classical" TND study includes symptomatic individuals tested for the disease targeted by the vaccine to estimate VE against symptomatic infection. However, recent applications of the TND have attempted to estimate VE against infection by including all tested individuals, regardless of their symptoms. In this article, we use directed acyclic graphs and simulations to investigate potential biases in TND studies of COVID-19 VE arising from the use of this "alternative" approach, particularly when applied during periods of widespread testing. We show that the inclusion of asymptomatic individuals can potentially lead to collider stratification bias, uncontrolled confounding by health and healthcare-seeking behaviors (HSBs), and differential outcome misclassification. While our focus is on the COVID-19 setting, the issues discussed here may also be relevant for other infectious diseases, particularly in settings with a high baseline prevalence of infection, a strong correlation between HSBs and vaccination, different testing practices for vaccinated and unvaccinated individuals, or a vaccine that attenuates symptoms of infection combined with diagnostic accuracy that depends on the presence of symptoms.
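The collider mechanism described above can be reproduced in a few lines of simulation. The sketch below uses entirely hypothetical parameter values (not taken from the article): the vaccine is truly null, healthcare-seeking behavior drives both vaccination and asymptomatic testing, and we compare the odds ratio among all tested individuals with the one among symptomatic tested individuals.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300_000

# Hypothetical data-generating process (illustrative values only).
h = rng.binomial(1, 0.5, n)                        # healthcare-seeking behavior (HSB)
v = rng.binomial(1, np.where(h == 1, 0.8, 0.3))    # vaccination depends on HSB
i = rng.binomial(1, 0.05, n)                       # infection independent of v: true VE = 0
s = np.where(i == 1,                               # symptoms arise mostly from infection
             rng.binomial(1, 0.7, n),
             rng.binomial(1, 0.05, n))
# Testing: symptomatic people are usually tested; asymptomatic testing depends on HSB.
p_test = np.where(s == 1, 0.9, np.where(h == 1, 0.4, 0.05))
t = rng.binomial(1, p_test)

def odds_ratio(v, i):
    a = np.sum((v == 1) & (i == 1)); b = np.sum((v == 1) & (i == 0))
    c = np.sum((v == 0) & (i == 1)); d = np.sum((v == 0) & (i == 0))
    return (a * d) / (b * c)

tested = t == 1
sympt = tested & (s == 1)
print("OR, all tested ('alternative' TND):      %.2f" % odds_ratio(v[tested], i[tested]))
print("OR, symptomatic tested (classical TND):  %.2f" % odds_ratio(v[sympt], i[sympt]))
```

Under these assumed parameters, conditioning on having been tested makes the null vaccine appear protective in the all-tested analysis, because uninfected tested controls are enriched for high-HSB (and hence vaccinated) individuals, while the symptomatic-only analysis stays near the null.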
ABSTRACT
In this study, we develop a new method for the meta-analysis of mixed aggregate data (AD) and individual participant data (IPD). The method is an adaptation of inverse probability weighted targeted maximum likelihood estimation (IPW-TMLE), which was initially proposed for two-stage sampled data. Our method is motivated by a systematic review investigating treatment effectiveness for multidrug-resistant tuberculosis (MDR-TB), where the available data include IPD from some studies but only AD from others. One complication in this application is that participants with MDR-TB are typically treated with multiple antimicrobial agents, many of which were not observed in all studies considered in the meta-analysis. We focus here on the estimation of the expected potential outcome while intervening on a specific medication but not intervening on any others. Our method involves the implementation of a TMLE that transports the estimation from studies where the treatment is observed to the full target population. A second weighting component adjusts for the studies with missing (inaccessible) IPD. We demonstrate the properties of the proposed method and contrast it with alternative approaches in a simulation study. Finally, we apply this method to estimate treatment effectiveness in the MDR-TB case study.
Subjects
Tuberculosis, Multidrug-Resistant , Humans , Likelihood Functions , Tuberculosis, Multidrug-Resistant/drug therapy , Tuberculosis, Multidrug-Resistant/epidemiology , Treatment Outcome , Computer Simulation
ABSTRACT
Cox models with time-dependent coefficients and covariates are widely used in survival analysis. In high-dimensional settings, sparse regularization techniques are employed for variable selection, but existing methods for time-dependent Cox models lack flexibility in enforcing specific sparsity patterns (ie, covariate structures). We propose a flexible framework for variable selection in time-dependent Cox models, accommodating complex selection rules. Our method can adapt to arbitrary grouping structures, including interaction selection, temporal, spatial, tree, and directed acyclic graph structures. It achieves accurate estimation with low false alarm rates. We develop the sox package, implementing a network flow algorithm for efficiently solving models with complex covariate structures. sox offers a user-friendly interface for specifying grouping structures and delivers fast computation. Through examples, including a case study on identifying predictors of time to all-cause death in atrial fibrillation patients, we demonstrate the practical application of our method with specific selection rules.
Subjects
Algorithms , Proportional Hazards Models , Humans , Survival Analysis , Atrial Fibrillation , Time Factors , Computer Simulation
ABSTRACT
Time-varying confounding is a common challenge for causal inference in observational studies with time-varying treatments, long follow-up periods, and participant dropout. Confounder adjustment using traditional approaches can be limited by data sparsity, weight instability and computational issues. The Nicotine Dependence in Teens (NDIT) study is a prospective cohort study involving 24 data collection cycles from 1999 to date, among 1,294 students recruited from 10 high schools in Montreal, Canada, including follow-up into adulthood. Our aim is to estimate the associations of the timing of alcohol initiation and the cumulative duration of alcohol use with depression symptoms in adulthood. Based on the target trials framework, we define intention-to-treat and as-treated parameters in a marginal structural model with sex as a potential effect-modifier. We then use the observational data to emulate the trials. For estimation, we use pooled longitudinal targeted maximum likelihood estimation (LTMLE), a plug-in estimator with double robustness and local efficiency properties. We describe strategies for dealing with high-dimensional potential drinking patterns and practical positivity violations due to a long follow-up time, including modifying the effect of interest by removing sparsely observed drinking patterns from the loss function and applying longitudinal modified treatment policies to represent the effect of discouraging drinking.
ABSTRACT
OBJECTIVE: To determine whether proton pump inhibitors (PPIs) are associated with an increased risk of colorectal cancer, compared with histamine-2 receptor antagonists (H2RAs). DESIGN: The United Kingdom Clinical Practice Research Datalink was used to identify initiators of PPIs and H2RAs from 1990 to 2018, with follow-up until 2019. Cox proportional hazards models were fit to estimate marginal HRs and 95% CIs of colorectal cancer. The models were weighted using standardised mortality ratio weights based on calendar time-specific propensity scores. Prespecified secondary analyses assessed associations with cumulative duration, cumulative dose and time since treatment initiation. The number needed to harm was calculated at five and 10 years of follow-up. RESULTS: The cohort included 1 293 749 and 292 387 initiators of PPIs and H2RAs, respectively, followed for a median duration of 4.9 years. While the use of PPIs was not associated with an overall increased risk of colorectal cancer (HR: 1.02, 95% CI 0.92 to 1.14), HRs increased with cumulative duration of PPI use (<2 years, HR: 0.93, 95% CI 0.83 to 1.04; 2-4 years, HR: 1.45, 95% CI 1.28 to 1.60; ≥4 years, HR: 1.60, 95% CI 1.42 to 1.80). Similar patterns were observed with cumulative dose and time since treatment initiation. The number needed to harm was 5343 and 792 for five and 10 years of follow-up, respectively. CONCLUSION: While any use of PPIs was not associated with an increased risk of colorectal cancer compared with H2RAs, prolonged use may be associated with a modest increased risk of this malignancy.
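The standardised mortality ratio (SMR) weighting used in this and the next study has a simple mechanical form: treated patients keep weight 1, and comparator patients are weighted by the propensity odds e(x)/(1 − e(x)), which standardizes the comparator group to the covariate distribution of the treated. A minimal numpy sketch on simulated data follows; the variable names, treatment labels and parameter values are hypothetical, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=n)                               # baseline confounder
ps_true = 1 / (1 + np.exp(-(-0.5 + 0.8 * x)))
a = rng.binomial(1, ps_true)                         # 1 = treated, 0 = comparator

# Fit the propensity score by logistic regression (IRLS / Newton's method).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (a - p))
ps = 1 / (1 + np.exp(-X @ beta))

# SMR weights: treated keep weight 1; comparators are weighted by the
# propensity odds so their covariate distribution matches the treated group.
w = np.where(a == 1, 1.0, ps / (1 - ps))

print("mean x, treated:               %.3f" % x[a == 1].mean())
print("mean x, comparator (raw):      %.3f" % x[a == 0].mean())
print("mean x, comparator (weighted): %.3f" % np.average(x[a == 0], weights=w[a == 0]))
```

The weighted comparator mean of the confounder should closely match the treated mean, which is the balance property that makes the weighted hazard ratio interpretable as an effect in the treated population.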
Subjects
Colorectal Neoplasms/chemically induced , Proton Pump Inhibitors/adverse effects , Cohort Studies , Colorectal Neoplasms/epidemiology , Databases, Factual , Female , Histamine H2 Antagonists/adverse effects , Humans , Male , Middle Aged , Proportional Hazards Models , United Kingdom/epidemiology
ABSTRACT
OBJECTIVE: To determine whether new users of proton pump inhibitors (PPIs) are at an increased risk of gastric cancer compared with new users of histamine-2 receptor antagonists (H2RAs). DESIGN: Using the UK Clinical Practice Research Datalink, we conducted a population-based cohort study using a new-user active comparator design. From 1 January 1990 to 30 April 2018, we identified 973 281 new users of PPIs and 193 306 new users of H2RAs. Cox proportional hazards models were fit to estimate HRs and 95% CIs of gastric cancer, and the number needed to harm was estimated using the Kaplan-Meier method. The models were weighted using standardised mortality ratio weights based on calendar time-specific propensity scores. Secondary analyses assessed duration and dose-response associations. RESULTS: After a median follow-up of 5.0 years, the use of PPIs was associated with a 45% increased risk of gastric cancer compared with the use of H2RAs (HR 1.45, 95% CI 1.06 to 1.98). The number needed to harm was 2121 and 1191 for five and 10 years after treatment initiation, respectively. The HRs increased with cumulative duration, cumulative omeprazole equivalents and time since treatment initiation. The results were consistent across several sensitivity analyses. CONCLUSION: The findings of this large population-based cohort study indicate that the use of PPIs is associated with an increased risk of gastric cancer compared with the use of H2RAs, although the absolute risk remains low.
Subjects
Proton Pump Inhibitors/adverse effects , Stomach Neoplasms/chemically induced , Cohort Studies , Female , Histamine H2 Antagonists/adverse effects , Humans , Male , Middle Aged , Proportional Hazards Models , Stomach Neoplasms/epidemiology , United Kingdom/epidemiology
ABSTRACT
The test-negative design is routinely used for the monitoring of seasonal flu vaccine effectiveness. More recently, it has become integral to the estimation of COVID-19 vaccine effectiveness, in particular for more severe disease outcomes. Because the design has many important advantages and is becoming a mainstay for monitoring postlicensure vaccine effectiveness, epidemiologists and biostatisticians may be interested in further understanding the effect measures being estimated in these studies and their connections to causal effects. Logistic regression is typically applied to estimate the conditional risk ratio, but it relies on correct outcome model specification and may be biased in the presence of effect modification by a confounder. We give and justify an inverse probability of treatment weighting (IPTW) estimator for the marginal risk ratio, which is valid under effect modification. We use causal directed acyclic graphs and counterfactual arguments, under assumptions of no interference or partial interference, to illustrate the connection between these statistical estimands and causal quantities. We conduct a simulation study to illustrate and confirm our derivations and to evaluate the performance of the estimators. We find that if the effectiveness of the vaccine varies across patient subgroups, logistic regression can lead to misleading estimates, but the IPTW estimator can produce unbiased estimates. We also find that in the presence of partial interference both estimators can produce misleading estimates.
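The gap between conditional and marginal estimands under effect modification can be made concrete with a small simulation in which the vaccine's risk ratio differs across strata of a confounder C: no single conditional effect describes both strata, while IPTW recovers the marginal risk ratio. This is a hedged sketch with hypothetical parameters; for brevity the true propensity score is used, where in practice it would be estimated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Hypothetical data-generating process with effect modification by confounder C.
c = rng.binomial(1, 0.5, n)
v = rng.binomial(1, np.where(c == 1, 0.7, 0.3))     # vaccination depends on C
# Baseline risk depends on C; the vaccine risk ratio is 0.2 when C = 1
# but 0.8 when C = 0, so the marginal RR (0.35 here) is a weighted blend.
risk = np.where(c == 1, 0.12, 0.04) * np.where(v == 1, np.where(c == 1, 0.2, 0.8), 1.0)
y = rng.binomial(1, risk)

# IPTW estimate of the marginal risk ratio (Hajek-style weighted means).
p_v = np.where(c == 1, 0.7, 0.3)                    # true propensity, known here
w = np.where(v == 1, 1 / p_v, 1 / (1 - p_v))
r1 = np.average(y[v == 1], weights=w[v == 1])
r0 = np.average(y[v == 0], weights=w[v == 0])
print("IPTW marginal RR: %.3f (truth 0.350)" % (r1 / r0))
print("crude RR:         %.3f (confounded)" % (y[v == 1].mean() / y[v == 0].mean()))
for k in (0, 1):
    rr_k = y[(v == 1) & (c == k)].mean() / y[(v == 0) & (c == k)].mean()
    print("stratum C=%d RR:   %.3f" % (k, rr_k))
```

The stratum-specific risk ratios sit near 0.2 and 0.8, so a logistic model that forces a single conditional effect has no correct single number to report, whereas the IPTW estimate targets the well-defined marginal contrast.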
Subjects
COVID-19 Vaccines , COVID-19 , COVID-19/epidemiology , COVID-19/prevention & control , Causality , Humans , Models, Statistical , Vaccine Efficacy
ABSTRACT
PURPOSE: Confounding adjustment is required to estimate the effect of an exposure on an outcome in observational studies. However, variable selection and unmeasured confounding are particularly challenging when analyzing large healthcare data. Machine learning methods may help address these challenges. The objective was to evaluate the capacity of such methods to select confounders and reduce unmeasured confounding bias. METHODS: A simulation study with known true effects was conducted. Completely synthetic and partially synthetic data incorporating real large healthcare data were generated. We compared Bayesian adjustment for confounding (BAC), generalized Bayesian causal effect estimation (GBCEE), group lasso and doubly robust estimation, high-dimensional propensity score (hdPS), and scalable collaborative targeted maximum likelihood algorithms. For the hdPS, two adjustment approaches targeting the effect in the whole population were considered: full matching and inverse probability weighting. RESULTS: In scenarios without hidden confounders, most methods were essentially unbiased. The bias and variance of the hdPS varied considerably according to the number of variables selected by the algorithm. In scenarios with hidden confounders, substantial bias reduction was achieved by using machine learning methods to identify proxies, as compared to adjusting only for observed confounders. hdPS and group lasso performed poorly in the partially synthetic simulation. BAC, GBCEE, and scalable collaborative targeted maximum likelihood algorithms performed particularly well. CONCLUSIONS: Machine learning methods can help to identify measured confounders in large healthcare databases. They can also capitalize on proxies of unmeasured confounders to substantially reduce residual confounding bias.
Subjects
Delivery of Health Care , Bayes Theorem , Bias , Causality , Computer Simulation , Confounding Factors, Epidemiologic , Humans , Propensity Score
ABSTRACT
Owing to the rapidly evolving coronavirus disease 2019 (COVID-19) pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), quick public health investigations of the relationships between behaviors and infection risk are essential. Recently, the test-negative design (TND) was proposed to recruit and survey participants who are symptomatic and being tested for SARS-CoV-2 infection, with the goal of evaluating associations between the survey responses (including behaviors and environment) and testing positive. It was also proposed to recruit additional controls from the general population as a baseline comparison group to evaluate risk factors specific to SARS-CoV-2 infection. In this study, we consider an alternative design in which we recruit among all individuals, symptomatic and asymptomatic, being tested for the virus, in addition to population controls. We define a regression parameter related to a prospective risk factor analysis and investigate its identifiability under the two study designs. We review the difference between the prospective risk factor parameter and the parameter targeted in the typical TND, where only symptomatic and tested people are recruited. Using missing data directed acyclic graphs, we provide the conditions and required data collection under which identifiability of the prospective risk factor parameter is possible, and we compare the benefits and limitations of the alternative study designs and target parameters. We propose a novel inverse probability weighting estimator and demonstrate its performance through a simulation study.
Subjects
COVID-19 , SARS-CoV-2 , Goals , Humans , Population Control , Prospective Studies
ABSTRACT
AIMS: There are conflicting signals in the literature about the comparative safety and effectiveness of direct oral anticoagulants (DOACs) for nonvalvular atrial fibrillation (NVAF). METHODS: We conducted multicentre matched cohort studies with secondary meta-analysis to assess the safety and effectiveness of dabigatran, rivaroxaban and apixaban across 9 administrative healthcare databases. We included adults with NVAF initiating anticoagulation therapy (dabigatran, rivaroxaban or apixaban), and constructed 3 cohorts to compare the DOACs pairwise. The primary outcome was the pooled hazard ratio (pHR) of ischaemic stroke or systemic thromboembolism. Secondary outcomes included the pHR of major bleeding, and of a composite of stroke, major bleeding, or all-cause mortality. We used Cox proportional hazards regression models, and pooled estimates were obtained with random-effects meta-analyses. RESULTS: The cohorts included 73 414 new users of dabigatran, 92 881 of rivaroxaban, and 61 284 of apixaban. After matching, the pHRs (95% confidence intervals) comparing rivaroxaban initiation with dabigatran were: 1.11 (0.93, 1.32) for ischaemic stroke or systemic thromboembolism, 1.26 (1.09, 1.46) for major bleeding, and 1.17 (1.05, 1.30) for the composite endpoint. For apixaban vs dabigatran, they were: 0.91 (0.74, 1.12) for ischaemic stroke or systemic thromboembolism, 0.89 (0.75, 1.05) for major bleeding, and 0.94 (0.78, 1.14) for the composite endpoint. For apixaban vs rivaroxaban, they were: 0.85 (0.74, 0.99) for ischaemic stroke or systemic thromboembolism, 0.61 (0.53, 0.70) for major bleeding, and 0.82 (0.76, 0.88) for the composite endpoint. CONCLUSION: We found that apixaban use is associated with lower risks of stroke and bleeding compared with rivaroxaban, and with similar risks compared with dabigatran.
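The random-effects pooling step can be sketched with the standard DerSimonian-Laird estimator. The study-level hazard ratios and standard errors below are hypothetical placeholders, not the study's data; the mechanics (Cochran's Q, the tau-squared moment estimator, and inverse-variance re-weighting) are the standard ones.

```python
import numpy as np

# Hypothetical study-level log hazard ratios and standard errors.
log_hr = np.log(np.array([1.26, 0.95, 1.60, 1.10, 1.45]))
se = np.array([0.10, 0.08, 0.15, 0.09, 0.20])

w = 1 / se**2                                            # fixed-effect weights
mean_fe = np.sum(w * log_hr) / np.sum(w)
q = np.sum(w * (log_hr - mean_fe)**2)                    # Cochran's Q
df = len(log_hr) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                            # DerSimonian-Laird tau^2
w_re = 1 / (se**2 + tau2)                                # random-effects weights
pooled = np.sum(w_re * log_hr) / np.sum(w_re)
se_pooled = np.sqrt(1 / np.sum(w_re))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print("tau^2 = %.3f" % tau2)
print("pooled HR %.2f (95%% CI %.2f, %.2f)" % tuple(np.exp([pooled, lo, hi])))
```

Between-study heterogeneity (tau-squared greater than zero) widens the pooled confidence interval relative to a fixed-effect analysis and pulls the weights toward equality across studies.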
Subjects
Atrial Fibrillation , Brain Ischemia , Stroke , Administration, Oral , Adult , Anticoagulants/adverse effects , Atrial Fibrillation/complications , Atrial Fibrillation/drug therapy , Brain Ischemia/drug therapy , Cohort Studies , Dabigatran/adverse effects , Humans , Pyridones/adverse effects , Retrospective Studies , Rivaroxaban/adverse effects , Stroke/drug therapy , Stroke/epidemiology , Stroke/prevention & control , Treatment Outcome , Warfarin
ABSTRACT
OBJECTIVES: We aimed to test the hypothesis that exposure to immunosuppression in early systemic sclerosis (SSc) could modify the risk of developing new-onset severe gastrointestinal (GIT) involvement. METHODS: A total of 762 subjects with <3 years of disease duration and without severe GIT disease at the baseline study visit were identified from combined longitudinal cohort data from the Canadian Scleroderma Research Group (CSRG) and Australian Scleroderma Interest Group (ASIG). The primary exposure was ever use of methotrexate, cyclophosphamide, mycophenolate mofetil and/or azathioprine during the study period. Severe GIT disease was defined as: (1) malabsorption, (2) hyperalimentation, (3) pseudo-obstruction, and/or (4) ≥10% weight loss in association with the use of antibiotics for bacterial overgrowth or oesophageal stricture. The change in the hazard of severe GIT disease due to exposure was estimated using a marginal structural Cox proportional hazards model fit with inverse probability of treatment weighting (IPTW) to address potential confounding. RESULTS: Study subjects were 81.5% female, had a mean age of 53.7±13.0 years and a mean disease duration at baseline of 1.4±0.8 years. During a mean follow-up of 4.0±2.6 years, severe GIT involvement developed in 11.6% of the 319 subjects exposed to immunosuppression and in 6.8% of the 443 unexposed subjects. In an IPTW-adjusted analysis, exposure to immunosuppression was not associated with severe GIT disease (weighted hazard ratio 0.91, 95% confidence interval 0.52-1.58). CONCLUSIONS: In this large inception SSc cohort, the risk of severe GIT involvement was not modified by exposure to immunosuppression.
Subjects
Gastrointestinal Diseases , Scleroderma, Systemic , Adult , Aged , Australia , Canada , Female , Gastrointestinal Diseases/prevention & control , Humans , Immunosuppression Therapy , Male , Middle Aged , Scleroderma, Systemic/diagnosis , Scleroderma, Systemic/drug therapy
ABSTRACT
Causal inference methods have been developed for longitudinal observational study designs where confounding is thought to occur over time. In particular, one may estimate and contrast the population mean counterfactual outcome under specific exposure patterns. In such contexts, confounders of the longitudinal treatment-outcome association are generally identified using domain-specific knowledge. However, this may leave an analyst with a large set of potential confounders that may hinder estimation. Previous approaches to data-adaptive model selection for this type of causal parameter were limited to the single time-point setting. We develop a longitudinal extension of a collaborative targeted minimum loss-based estimation (C-TMLE) algorithm that can be applied to perform variable selection in the models for the probability of treatment with the goal of improving the estimation of the population mean counterfactual outcome under a fixed exposure pattern. We investigate the properties of this method through a simulation study, comparing it to G-Computation and inverse probability of treatment weighting. We then apply the method in a real-data example to evaluate the safety of trimester-specific exposure to inhaled corticosteroids during pregnancy in women with mild asthma. The data for this study were obtained from the linkage of electronic health databases in the province of Quebec, Canada. The C-TMLE covariate selection approach allowed for a reduction of the set of potential confounders, which included baseline and longitudinal variables.
Subjects
Algorithms , Biometry/methods , Models, Statistical , Adrenal Cortex Hormones/administration & dosage , Asthma/complications , Asthma/drug therapy , Causality , Cohort Studies , Computer Simulation , Data Interpretation, Statistical , Databases, Factual/statistics & numerical data , Female , Humans , Longitudinal Studies , Pregnancy , Pregnancy Complications/drug therapy , Treatment Outcome
ABSTRACT
Persons with multidrug-resistant tuberculosis (MDR-TB) have a disease resulting from a strain of tuberculosis (TB) that does not respond to at least isoniazid and rifampicin, the two most effective anti-TB drugs. MDR-TB is always treated with multiple antimicrobial agents. Our data consist of individual patient data from 31 international observational studies with varying prescription practices, access to medications, and distributions of antibiotic resistance. In this study, we develop identifiability criteria for the estimation of a global treatment importance metric in the context where not all medications are observed in all studies. With stronger causal assumptions, this treatment importance metric can be interpreted as the effect of adding a medication to the existing treatments. We then use this metric to rank 15 observed antimicrobial agents in terms of their estimated add-on value. Using the concept of transportability, we propose an implementation of targeted maximum likelihood estimation, a doubly robust and locally efficient plug-in estimator, to estimate the treatment importance metric. A clustered sandwich estimator is adopted to compute variance estimates and produce confidence intervals. Simulation studies are conducted to assess the performance of our estimator, verify the double robustness property, and assess the appropriateness of the variance estimation approach.
Subjects
Tuberculosis, Multidrug-Resistant , Tuberculosis , Antitubercular Agents/therapeutic use , Causality , Humans , Network Meta-Analysis , Observational Studies as Topic , Tuberculosis/drug therapy , Tuberculosis, Multidrug-Resistant/drug therapy
ABSTRACT
In longitudinal settings, causal inference methods usually rely on a discretization of the patient timeline that may not reflect the underlying data generation process. This article investigates the estimation of causal parameters under discretized data. It presents the implicit assumptions practitioners make, but do not acknowledge, when discretizing data to assess longitudinal causal parameters. We illustrate that differences in point estimates under different discretizations are due to the data coarsening resulting in both a modified definition of the parameter of interest and loss of information about time-dependent confounders. We further investigate several tools to advise analysts in selecting a timeline discretization for use with pooled longitudinal targeted maximum likelihood estimation for the estimation of the parameters of a marginal structural model. We use a simulation study to empirically evaluate bias at different discretizations and assess the use of the cross-validated variance as a measure of data support to select a discretization under a chosen data coarsening mechanism. We then apply our approach to a study on the relative effect of alternative asthma treatments during pregnancy on pregnancy duration. The results of the simulation study illustrate how coarsening changes the target parameter of interest as well as how it may create bias due to a lack of appropriate control for time-dependent confounders. We also observe evidence that the cross-validated variance performs well as a measure of support in the data, being minimized at finer discretizations as the sample size increases.
Subjects
Causality , Bias , Computer Simulation , Humans
ABSTRACT
In this tutorial, we focus on how to define and estimate treatment effects when some patients develop a contraindication and are thus ineligible to receive a treatment of interest during follow-up. We first describe the concept of positivity, the requirement that all subjects in an analysis be eligible for all treatments of interest conditional on their baseline covariates, and the extension of this concept to the longitudinal treatment setting. We demonstrate, using simulated datasets and regression analysis, that under violations of longitudinal positivity, typical associational estimates between treatment over time and the outcome of interest may be misleading depending on the data-generating structure. We then explain how one may define "treatment strategies," such as "treat with medication unless contraindicated," to overcome the problems linked to time-varying eligibility. Finally, we show how contrasts between the expected potential outcomes under these strategies may be consistently estimated with inverse probability weighting methods. We provide R code for all the analyses described.
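The strategy "treat with medication unless contraindicated" and its inverse probability weighted estimator can be sketched on a simulated two-time-point example. The tutorial itself provides R code; the Python sketch below uses structural equations of our own choosing, so all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400_000

# Hypothetical observational data-generating process (two time points).
l1 = rng.binomial(1, 0.5, n)                         # baseline covariate
a1 = rng.binomial(1, 0.3 + 0.4 * l1)                 # treatment at time 1
c2 = rng.binomial(1, 0.1 + 0.3 * l1 + 0.2 * a1)      # contraindication at time 2
p_a2 = np.where(c2 == 1, 0.0, 0.15 + 0.3 * l1 + 0.3 * a1)
a2 = rng.binomial(1, p_a2)                           # contraindicated subjects never treated
y = rng.binomial(1, 0.3 - 0.1 * a1 - 0.1 * a2 + 0.2 * l1)

# Strategy g: "treat at both times unless contraindicated",
# i.e. a1 = 1 and a2 = 1 - c2. Weights use the true treatment probabilities
# (in practice these would be estimated); the time-2 factor is 1 when c2 = 1
# because non-treatment is then deterministic, so no positivity violation.
follows = (a1 == 1) & (a2 == (1 - c2))
p1 = 0.3 + 0.4 * l1
p2 = np.where(c2 == 1, 1.0, 0.15 + 0.3 * l1 + 0.3 * a1)
w = 1.0 / (p1 * p2)
est = np.average(y[follows], weights=w[follows])

# Monte Carlo truth: re-generate the world with the strategy enforced.
c2g = rng.binomial(1, 0.1 + 0.3 * l1 + 0.2)          # contraindication under a1 = 1
yg = rng.binomial(1, 0.3 - 0.1 - 0.1 * (1 - c2g) + 0.2 * l1)
print("IPW estimate under g: %.3f" % est)
print("counterfactual truth: %.3f" % yg.mean())
```

Because the strategy only demands treatment among the eligible, its IPW estimator stays well defined even though "always treat" would violate positivity for patients with a contraindication.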
Subjects
Atrial Fibrillation , Warfarin , Anticoagulants/adverse effects , Atrial Fibrillation/drug therapy , Factor Xa Inhibitors , Hemorrhage/chemically induced , Hemorrhage/epidemiology , Humans , Probability , Warfarin/adverse effects
Subjects
COVID-19 Vaccines , COVID-19 , Case-Control Studies , Hospitalization , Humans , SARS-CoV-2
ABSTRACT
BACKGROUND: Cardiovascular disease morbidity and mortality are largely influenced by poor control of hypertension, dyslipidemia, and diabetes. Process indicators are essential to monitor the effectiveness of quality improvement strategies. However, process indicators should be validated by demonstrating their ability to predict desirable outcomes. The objective of this study is to identify an effective method for building prediction models and to assess the predictive validity of the TRANSIT indicators. METHODS: On the basis of blood pressure readings and laboratory test results at baseline, the TRANSIT study population was divided into 3 overlapping subpopulations: uncontrolled hypertension, uncontrolled dyslipidemia, and uncontrolled diabetes. A classic statistical method, a sparse machine learning technique, and a hybrid method combining both were used to build prediction models for whether a patient reached therapeutic targets for hypertension, dyslipidemia, and diabetes. The final models' performance for predicting these intermediate outcomes was established using cross-validated areas under the curve (cvAUC). RESULTS: At baseline, 320, 247, and 303 patients were uncontrolled for hypertension, dyslipidemia, and diabetes, respectively. Among the 3 techniques used to predict reaching therapeutic targets, the hybrid method had a better discriminative capacity (cvAUCs = 0.73 for hypertension, 0.64 for dyslipidemia, and 0.79 for diabetes) and succeeded in identifying indicators with a better capacity for predicting intermediate outcomes related to cardiovascular disease prevention. CONCLUSIONS: Even though this study was conducted in a complex population of patients, a set of 5 process indicators was found to have good predictive validity based on the hybrid method.
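The cvAUC metric itself is mechanical to compute: fit a model on each training split, score the held-out fold, and average the fold-level areas under the ROC curve. The sketch below hand-rolls a 5-fold cvAUC for a plain logistic model on simulated data; nothing here reproduces the TRANSIT models, and all names and values are illustrative.

```python
import numpy as np

def auc(y_true, score):
    # Mann-Whitney formulation of the area under the ROC curve.
    pos, neg = score[y_true == 1], score[y_true == 0]
    gt = np.mean(pos[:, None] > neg[None, :])
    eq = np.mean(pos[:, None] == neg[None, :])
    return gt + 0.5 * eq

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=(n, 2))
y = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1]))))

# 5-fold cross-validated AUC for a logistic model fit by IRLS on each split.
folds = np.array_split(rng.permutation(n), 5)
aucs = []
for f in folds:
    train = np.setdiff1d(np.arange(n), f)
    X = np.column_stack([np.ones(len(train)), x[train]])
    beta = np.zeros(3)
    for _ in range(25):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y[train] - p))
    score = np.column_stack([np.ones(len(f)), x[f]]) @ beta
    aucs.append(auc(y[f], score))
cv_auc = float(np.mean(aucs))
print("cvAUC: %.2f" % cv_auc)
```

Averaging over held-out folds gives an honest estimate of discrimination, which is why the study reports cvAUC rather than the in-sample AUC of the final model.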
Subjects
Cardiovascular Diseases/prevention & control , Models, Statistical , Outcome Assessment, Health Care , Primary Health Care/methods , Quality Indicators, Health Care , Blood Pressure , Body Mass Index , Diabetes Mellitus/therapy , Dyslipidemias/therapy , Female , Humans , Hypertension/therapy , Machine Learning , Male , Middle Aged , Risk Factors
ABSTRACT
Thrombocytopenia (TP) is common in hospitalized patients. In the context of acute coronary syndromes (ACS), TP has been linked to adverse clinical outcomes. We present a systematic review and meta-analysis of the evidence on the clinical importance of preexisting and in-hospital acquired TP in the context of ACS. Specifically, we address (a) the prevalence and associated factors with TP in the context of ACS; and (b) the association between TP and all-cause mortality, major adverse cardiovascular events (MACEs), and major bleeding. We conducted systematic literature searches in MEDLINE and Web of Science. For the meta-analysis, we fit linear mixed models with a random study-specific intercept for the aggregate outcomes. A total of 16 studies and 190 915 patients were included in this study. Of these patients, 8.8% ± 1.2% presented with preexisting TP while 5.8% ± 1.0% developed TP after hospital admission. Preexisting TP was not statistically significantly associated with adverse outcomes. Acquired TP was associated with greater risk of all-cause mortality (risk difference [RD]: 4.3%; 95% confidence interval [CI]: 2-6%; p = 0.04), MACE (RD: 8.5%; 95% CI: 1-16.0%; p = 0.037), and major bleeding (RD: 11.9%; 95% CI: 5-19%; p = 0.005). In conclusion, TP is a prevalent condition in patients admitted for an ACS and identifies a high-risk patient population more likely to experience ischemic and bleeding complications, as well as higher mortality.
Subjects
Acute Coronary Syndrome/blood , Acute Coronary Syndrome/complications , Thrombocytopenia/etiology , Acute Coronary Syndrome/pathology , Female , Humans , Male , Thrombocytopenia/pathology
ABSTRACT
BACKGROUND: Despite the increasing use of medical records to measure quality of care, studies have shown that their validity is suboptimal. The objective of this study is to assess the concordance of cardiovascular care processes evaluated through medical record review and patient self-administered questionnaires (SAQs) using ten quality indicators (TRANSIT indicators). These indicators were developed as part of a participatory research program (TRANSIT study) dedicated to TRANSforming InTerprofessional clinical practices to improve cardiovascular disease (CVD) prevention in primary care. METHODS: For every patient participating in the TRANSIT study, the compliance to each indicator (individual scores) as well as the mean compliance to all indicators of a category (subscale scores) and to the complete set of ten indicators (overall scale score) were established. Concordance between results obtained using medical records and patient SAQs was assessed by prevalence-adjusted bias-adjusted kappa (PABAK) coefficients as well as intraclass correlation coefficients (ICCs) and 95% confidence intervals (95% CI). Generalized linear mixed models (GLMM) were used to identify patients' sociodemographic and clinical characteristics associated with agreement between the two data sources. RESULTS: The TRANSIT study was conducted in a primary care setting among patients (n = 759) with multimorbidity, at moderate (16%) and high risk (83%) of cardiovascular diseases. Quality of care, as measured by the TRANSIT indicators, varied substantially between medical records and patient SAQs. Concordance between the two data sources, as measured by ICCs (95% CI), was poor for the subscale (0.18 [0.08-0.27] to 0.46 [0.40-0.52]) and overall (0.46 [0.40-0.53]) compliance scale scores. GLMM showed that agreement was not affected by patients' characteristics.
CONCLUSIONS: In quality improvement strategies, researchers must acknowledge that care processes may not be consistently recorded in medical records. They must also be aware that the evaluation of the quality of care may vary depending on the source of information, the clinician responsible for documenting the interventions, and the domain of care.
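For a binary indicator rated by two sources, the PABAK statistic reduces to a simple function of the observed agreement, PABAK = 2p_o − 1, which is why it is robust to the prevalence and rater-bias problems that can distort Cohen's kappa. A minimal sketch with hypothetical ratings (not the study's data):

```python
import numpy as np

# Hypothetical binary compliance ratings for one quality indicator,
# from chart (medical record) review and patient self-administered questionnaire.
chart = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1])
saq   = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1])

p_o = np.mean(chart == saq)      # observed proportion of agreement
pabak = 2 * p_o - 1              # prevalence- and bias-adjusted kappa
print(f"observed agreement {p_o:.2f}, PABAK {pabak:.2f}")
```

Here 17 of 20 ratings agree, so p_o = 0.85 and PABAK = 0.70; Cohen's kappa on the same table could be much lower if nearly all patients were compliant, which is exactly the situation PABAK is designed to handle.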