Results 1 - 20 of 555
1.
J Am Coll Cardiol ; 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39230544

ABSTRACT

BACKGROUND: Atrial fibrillation (AF) often remains undiagnosed, and it independently raises the risk of ischemic stroke, which is largely reversible by oral anticoagulation. Although randomized trials using longer-term screening approaches increase identification of AF, no studies have established that AF screening lowers stroke rates. OBJECTIVES: To address this knowledge gap, the GUARD-AF (Reducing Stroke by Screening for Undiagnosed Atrial Fibrillation in Elderly Individuals) trial screened participants in primary care practices using a 14-day continuous electrocardiographic monitor to determine whether screening for AF coupled with physician/patient decision-making to use oral anticoagulation reduces stroke and provides a net clinical benefit compared with usual care. METHODS: GUARD-AF was a prospective, parallel-group, randomized controlled trial designed to test whether screening for AF in people aged ≥70 years using a 14-day single-lead continuous electrocardiographic patch monitor could identify patients with undiagnosed AF and reduce stroke. Participants were randomized 1:1 to screening or usual care. The primary efficacy and safety outcomes were hospitalization due to all-cause stroke and bleeding, respectively. Analyses used the intention-to-treat population. RESULTS: Enrollment began on December 17, 2019, and involved 149 primary care sites across the United States. The COVID-19 pandemic led to premature termination of enrollment, with 11,905 participants in the intention-to-treat population. Median follow-up was 15.3 months (Q1-Q3: 13.8-17.6 months). Median age was 75 years (Q1-Q3: 72-79 years), and 56.6% were female. The risk of stroke in the screening group was 0.7% vs 0.6% in the usual care group (HR: 1.10; 95% CI: 0.69-1.75). The risk of bleeding was 1.0% in the screening group vs 1.1% in the usual care group (HR: 0.87; 95% CI: 0.60-1.26). Diagnosis of AF was 5% in the screening group and 3.3% in the usual care group, and initiation of oral anticoagulation after randomization was 4.2% and 2.8%, respectively. CONCLUSIONS: In this trial, there was no evidence that screening for AF using a 14-day continuous electrocardiographic monitor in people ≥70 years of age seen in primary care practice reduces stroke hospitalizations. Event rates were low, however, and the trial did not enroll the planned sample size. (Reducing Stroke by Screening for Undiagnosed Atrial Fibrillation in Elderly Individuals [GUARD-AF]; NCT04126486).

2.
Am Heart J ; 278: 117-126, 2024 Sep 07.
Article in English | MEDLINE | ID: mdl-39251103

ABSTRACT

BACKGROUND: Prior studies characterizing worsening heart failure events (WHFE) have been limited by their use of structured healthcare data from hospitalizations, with little exploration of sociodemographic variation. The current study examined the impact of incorporating unstructured data to identify WHFE, describing age-, sex-, race and ethnicity-, and left ventricular ejection fraction (LVEF)-specific rates. METHODS: Adult members of Kaiser Permanente Southern California (KPSC) with a heart failure (HF) diagnosis between 2014 and 2018 were followed through 2019 to identify hospitalized WHFE. The main outcome was hospitalizations with a principal or secondary HF discharge diagnosis meeting rule-based Natural Language Processing (NLP) criteria for WHFE. In comparison, we examined hospitalizations with a principal discharge diagnosis of HF. Age-, sex-, and race and ethnicity-adjusted rates per 100 person-years (PY) were calculated among age, sex, race and ethnicity (non-Hispanic (NH) Asian/Pacific Islander [API], Hispanic, NH Black, NH White) and LVEF subgroups. RESULTS: Among 44,863 adults with HF, 10,560 (23.5%) had an NLP-defined, hospitalized WHFE. Adjusted rates (per 100 PY) of WHFE using NLP were higher compared to rates based only on HF principal discharge diagnosis codes (12.7 and 9.3, respectively), and this followed similar patterns among subgroups, with the highest rates among adults ≥75 years (16.3 and 11.2), men (13.2 and 9.7), and NH Black (16.9 and 14.3) and Hispanic adults (15.3 and 11.4), and adults with reduced LVEF (16.2 and 14.0). Using NLP disproportionately increased the perceived burden of WHFE among API adults and adults with mid-range and preserved LVEF. CONCLUSION: Rule-based NLP improved the capture of hospitalized WHFE above principal discharge diagnosis codes alone. Applying standardized consensus definitions to electronic health record (EHR) data may improve understanding of the burden of WHFE and promote optimal care overall and in specific sociodemographic groups.
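
The rates above are expressed per 100 person-years. A minimal sketch of that calculation is shown below; it uses the reported event count but a hypothetical person-time denominator, and the study additionally adjusted rates for age, sex, and race and ethnicity.

```python
# Sketch: crude event rate per 100 person-years (PY). The event count comes from
# the abstract; the person-time denominator is a hypothetical assumption, and the
# study reported adjusted rather than crude rates.
events = 10_560          # NLP-defined hospitalized WHFE
person_years = 83_000    # assumed total follow-up time (hypothetical)

rate_per_100py = events / person_years * 100
print(round(rate_per_100py, 1))  # ~12.7 per 100 PY under these assumed inputs
```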

3.
Article in English | MEDLINE | ID: mdl-39297839

ABSTRACT

BACKGROUND: The "burden" of atrial fibrillation (AF) detected by screening likely influences stroke risk, but the distribution of burden is not well described. OBJECTIVES: This study aims to determine the frequency of AF and the distribution of AF burden found when screening individuals ≥70 years of age with a 14-day electrocardiograph monitor. METHODS: This is a cohort study of the screening arm of a randomized AF screening trial among those ≥70 years of age without a prior AF diagnosis (between 2019 and 2021). Screening was performed with a 14-day continuous electrocardiogram patch monitor. RESULTS: Analyzable patches were returned by 5,684 (95%) of screening arm participants; the median age was 75 years (Q1-Q3: 72-78 years), 57% were female, and the median CHA2DS2-VASc score was 3 (Q1-Q3: 2-4). AF was detected in 252 participants (4.4%); 29 (0.5%) patients had continuous AF and 223 (3.9%) had paroxysmal AF. Among those with paroxysmal AF, the average indices of AF burden were of low magnitude with right-skewed distributions. The median percent time in AF was 0.46% (Q1-Q3: 0.02%-2.48%), or 75 (Q1-Q3: 3-454) minutes, and the median longest episode was 38 (Q1-Q3: 2-245) minutes. The upper quartile threshold of 2.48% time in AF corresponded to 7.6 hours. Age greater than 80 years was associated with screen-detected AF in our multivariable model (OR: 1.46; 95% CI: 1.06-2.02). CONCLUSIONS: Most AF detected in these older patients was very low burden. However, one-quarter of those with AF had multiple hours of AF, raising concern about stroke risk. These findings have implications for targeting populations for AF screening trials and for responding to heart rhythm alerts from mobile devices (GUARD-AF [A Study to Determine if Identification of Undiagnosed Atrial Fibrillation in People at least 70 Years of Age Reduces the Risk of Stroke]; NCT04126486).
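
AF burden on a patch monitor is the share of analyzable recording time spent in AF, so percent burden converts directly to minutes once the wear time is known. The sketch below illustrates the conversion under an assumed wear time; per-participant analyzable recording time may be shorter than the nominal 14 days, so the trial's reported minutes and percentages reflect each participant's own recording.

```python
# Sketch: converting between percent AF burden and time in AF on a patch monitor.
# The wear time here is a hypothetical assumption, not a value from the study.

def af_minutes(burden_pct: float, wear_days: float) -> float:
    """Minutes in AF given percent burden and analyzable wear time in days."""
    return burden_pct / 100.0 * wear_days * 24 * 60

def af_burden_pct(minutes_in_af: float, wear_days: float) -> float:
    """Percent AF burden given minutes in AF and analyzable wear time in days."""
    return 100.0 * minutes_in_af / (wear_days * 24 * 60)

# With a nominal full 14-day wear, an upper-quartile burden of 2.48% would be
# roughly 8.3 hours in AF; shorter analyzable wear times give smaller totals.
print(round(af_minutes(2.48, 14) / 60, 1))
```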

4.
J Am Soc Nephrol ; 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39325542

ABSTRACT

BACKGROUND: Cardiovascular risk models have been developed primarily for incident events. Well-performing models are lacking to predict secondary cardiovascular events among people with a history of coronary heart disease, stroke, or heart failure who also have chronic kidney disease (CKD). We sought to develop a proteomics-based risk score for cardiovascular events in individuals with CKD and a history of cardiovascular disease. METHODS: We measured 4638 plasma proteins among 1067 participants from the Chronic Renal Insufficiency Cohort (CRIC) and 536 individuals from the Atherosclerosis Risk in Communities (ARIC) study. All had non-dialysis-dependent CKD and coronary heart disease, heart failure, or stroke at study baseline. A proteomic risk model for secondary cardiovascular events was derived by elastic net regression in CRIC, validated in ARIC, and compared to clinical models. Biologic mechanisms of secondary events were characterized through proteomic pathway analysis. RESULTS: A 16-protein risk model was superior to the Framingham risk score for secondary events, including a modified score that included estimated glomerular filtration rate (eGFR). In CRIC, the annualized area under the receiver operating characteristic curve (AUC) within 1 to 5 years ranged from 0.77 to 0.80 for the protein model and from 0.57 to 0.72 for the clinical models. These findings were replicated in the ARIC validation cohort. Biologic pathway analysis identified pathways and proteins for cardiac remodeling and fibrosis, vascular disease, and thrombosis. CONCLUSIONS: The proteomic risk model for secondary cardiovascular events outperformed clinical models based on traditional risk factors and eGFR.
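
The elastic-net derivation step can be illustrated with a short sketch. The version below is a simplification that treats the outcome as binary at a fixed horizon (the study modeled time-to-event data) and uses simulated protein values and feature names, not CRIC or ARIC data.

```python
# Sketch: elastic-net selection of a sparse protein panel. Simplified to a binary
# outcome with simulated data; all names and values are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 1000, 500  # participants x proteins (a small subset of 4638, for speed)
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"protein_{i}" for i in range(p)])
signal = X["protein_0"] - 0.8 * X["protein_1"] + 0.5 * X["protein_2"]
y = (signal + rng.normal(size=n) > 0).astype(int)  # event within horizon (0/1)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5,
                       C=0.1, max_iter=5000),
)
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_.ravel()
selected = X.columns[np.abs(coefs) > 1e-6]  # proteins retained by the penalty
print(len(selected), "proteins selected")
print("in-sample AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
```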

5.
Contemp Clin Trials ; 143: 107601, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38851480

ABSTRACT

BACKGROUND: Food insecurity is associated with poor glycemic control and increased risk for diabetes-related complications. The clinical benefit of addressing these challenges through a medically supportive grocery prescription (GRx) program in patients with type 2 diabetes mellitus (T2D) remains unclear. We report the aims and design of a randomized clinical trial to evaluate the effectiveness of a 6-month GRx intervention on hemoglobin A1c (HbA1c) levels among low-income adults with T2D. METHODS: The Kaiser Permanente Evaluating Nutritional Interventions in Food-Insecure High-Risk Adults (KP ENRICH) Study is a pragmatic randomized trial enrolling 1100 participants within Kaiser Permanente Northern California and Southern California, two integrated health care delivery systems serving >9 million members. Medicaid-insured adults with T2D and baseline HbA1c ≥7.5% will be randomized at a 1:1 ratio to either GRx, delivered as $100 per month for select items from among a curated list of healthful food groups in an online grocery ordering and home-delivery platform along with biweekly digital nutrition educational materials, or control, consisting of free membership and deliveries from the online grocery platform but without curated food groups or purchasing dollars. The primary outcome is 6-month change in HbA1c. Secondary outcomes include 12-month change in HbA1c, and 6- and 12-month change in medical resource utilization, food security, nutrition security, dietary habits, diabetes-related quality of life, and dietary self-efficacy. CONCLUSIONS: The results of this large randomized clinical trial of GRx will help inform future policy and health system-based initiatives to improve food and nutrition security, disease management, and health equity among patients with T2D.


Subject(s)
Diabetes Mellitus, Type 2; Food Insecurity; Glycated Hemoglobin; Poverty; Humans; Diabetes Mellitus, Type 2/therapy; Glycated Hemoglobin/analysis; California; Adult; Female; Male; Middle Aged; Quality of Life; United States
6.
ESC Heart Fail ; 11(5): 2542-2545, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38741373

ABSTRACT

AIMS: Worsening heart failure (WHF) events occurring in non-inpatient settings are becoming increasingly recognized, with implications for prognostication. We evaluated the performance of a natural language processing (NLP)-based approach compared with traditional diagnostic coding for non-inpatient clinical encounters and left ventricular ejection fraction (LVEF). METHODS AND RESULTS: We compared characteristics for encounters that did vs. did not meet WHF criteria, stratified by care setting [i.e., emergency department (ED) and observation stay]. Overall, 8407 (22%) encounters met NLP-based criteria for WHF (3909 ED visits and 4498 observation stays). The use of an NLP-derived definition adjudicated 3983 (12%) of non-primary HF diagnoses as meeting consensus definitions for WHF. The most common diagnosis indicated in these encounters was dyspnoea. Results were primarily driven by observation stays, in which 2205 (23%) encounters with a secondary HF diagnosis met the WHF definition by NLP. CONCLUSIONS: The use of standard claims-based adjudication for primary diagnosis in the non-inpatient setting may lead to misclassification of WHF events in the ED and observation stay settings. Primary diagnoses alone may underestimate the burden of WHF in non-hospitalized settings.


Subject(s)
Disease Progression; Emergency Service, Hospital; Heart Failure; Natural Language Processing; Humans; Heart Failure/diagnosis; Heart Failure/physiopathology; Heart Failure/epidemiology; Emergency Service, Hospital/statistics & numerical data; Female; Male; Aged; Stroke Volume/physiology; Retrospective Studies; Prognosis; Ventricular Function, Left/physiology; Follow-Up Studies; Middle Aged
8.
J Card Fail ; 30(8): 981-990, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38697466

ABSTRACT

BACKGROUND: Differences in demographics, risk factors, and clinical characteristics may contribute to variation between men and women in the prevalence, clinical setting, and outcomes associated with worsening heart failure (WHF) events. We sought to describe sex-based differences in the epidemiology, clinical characteristics, and outcomes associated with WHF events across clinical settings. METHODS AND RESULTS: We examined adults diagnosed with heart failure (HF) from 2010 to 2019 within a large, integrated health care delivery system. Electronic health record data were accessed for hospitalizations, emergency department (ED) visits and observation stays, and outpatient encounters. WHF was identified using validated natural language processing algorithms and defined as ≥1 symptom, ≥2 objective findings (including ≥1 sign), and ≥1 change in HF-related therapy. Incidence rates and associated outcomes for WHF were compared across care settings by sex. We identified 1,122,368 unique clinical encounters with a diagnosis code for HF, with 124,479 meeting WHF criteria. These WHF encounters occurred among 102,116 patients, of whom 48,543 (47.5%) were women and 53,573 (52.5%) were men. Women experiencing WHF were older and more likely to have HF with preserved ejection fraction compared with men. The clinical settings of WHF were similar among women and men: hospitalizations (36.8% vs 37.7%), ED visits or observation stays (11.8% vs 13.4%), and outpatient encounters (4.4% vs 4.9%). Women had lower odds of 30-day mortality after an index hospitalization (adjusted odds ratio 0.88, 95% confidence interval 0.83-0.93) or ED visit or observation stay (adjusted odds ratio 0.86, 95% confidence interval 0.75-0.98) for WHF. CONCLUSIONS: Women and men contribute similarly to WHF events across diverse clinical settings despite marked differences in age and left ventricular ejection fraction.
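
The WHF definition quoted above is a simple counting rule once symptoms, signs, other objective findings, and therapy changes have been extracted. A minimal sketch of that rule follows, with illustrative inputs standing in for the upstream NLP output.

```python
# Sketch of the rule logic behind the WHF definition described above:
# an encounter qualifies if it has >=1 symptom, >=2 objective findings
# (at least one of which is a sign), and >=1 change in HF-related therapy.
# Counts would come from an NLP pipeline; this only encodes the rule.
from dataclasses import dataclass

@dataclass
class EncounterFindings:
    symptoms: int          # e.g., dyspnea, orthopnea (NLP-extracted)
    signs: int             # e.g., edema, rales (NLP-extracted)
    other_objective: int   # e.g., elevated natriuretic peptide, imaging congestion
    therapy_changes: int   # e.g., intensified diuretic therapy

def meets_whf_criteria(e: EncounterFindings) -> bool:
    objective_findings = e.signs + e.other_objective
    return (e.symptoms >= 1 and objective_findings >= 2
            and e.signs >= 1 and e.therapy_changes >= 1)

print(meets_whf_criteria(
    EncounterFindings(symptoms=1, signs=1, other_objective=1, therapy_changes=1)))  # True
```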


Subject(s)
Heart Failure; Learning Health System; Humans; Female; Heart Failure/epidemiology; Heart Failure/therapy; Heart Failure/diagnosis; Heart Failure/physiopathology; Male; Aged; Middle Aged; Sex Factors; Disease Progression; Retrospective Studies; Hospitalization/statistics & numerical data; Risk Factors; Aged, 80 and over; Incidence; Emergency Service, Hospital; Stroke Volume/physiology
9.
AJPM Focus ; 3(3): 100213, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38590395

ABSTRACT

Introduction: The American Heart Association Life's Simple 7 schema can be used to categorize patients' cardiovascular health status as poor, intermediate, or ideal on the basis of smoking, BMI, physical activity, dietary patterns, blood pressure, cholesterol, and fasting blood sugar. This study examined the association between cardiovascular health status and subsequent healthcare utilization. Methods: This was an observational cohort study of adults from an integrated healthcare delivery system (Kaiser Permanente Northern California) who had outpatient care between 2013 and 2014. Patients were categorized by American Heart Association cardiovascular health status: poor, intermediate, or ideal. Individual-level healthcare utilization and costs in 2015 were accumulated for each patient and compared across the 3 cardiovascular health categories and stratified by age groups. Results: A total of 991,698 patients were included in the study. A total of 194,003 (19.6%) were aged 18-39 years; 554,129 (55.9%) were aged 40-64 years; and 243,566 (24.6%) were aged ≥65 years. A total of 259,931 (26.2%) had ideal cardiovascular health; 521,580 (52.6%) had intermediate cardiovascular health; and 210,187 (21.2%) had poor cardiovascular health. Healthcare utilization, measured by average relative cost per patient, increased monotonically across age categories (p<0.001). In addition, cardiovascular health category was inversely associated with cost in each age group (p<0.001). Conclusions: Adults with more ideal cardiovascular health had relatively lower healthcare costs across age groups. Interventions to promote better cardiovascular health may improve patient outcomes and reduce overall healthcare expenditures.

10.
J Hosp Med ; 19(7): 565-571, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38594918

ABSTRACT

BACKGROUND: New-onset atrial fibrillation (AF) during sepsis is common, but models designed to stratify stroke risk excluded patients with secondary AF. We assessed the predictive validity of CHA2DS2-VASc scores among patients with new-onset AF during sepsis and developed a novel stroke prediction model incorporating presepsis and intrasepsis characteristics. METHODS: We included patients ≥40 years old who survived hospitalizations with sepsis and new-onset AF across 21 Kaiser Permanente Northern California hospitals from January 1, 2011 to September 30, 2017. We calculated the area under the receiver operating characteristic curve (AUC) for CHA2DS2-VASc scores to predict stroke or transient ischemic attack (TIA) within 1 year after a hospitalization with new-onset AF during sepsis using Fine-Gray models with death as a competing risk. We similarly derived and validated a novel model using presepsis and intrasepsis characteristics associated with 1-year stroke/TIA risk. RESULTS: Among 82,748 adults hospitalized with sepsis, 3992 with new-onset AF (median age: 80 years, median CHA2DS2-VASc of 4) survived to discharge, among whom 70 (2.1%) experienced a stroke or TIA outcome and 1393 (41.0%) died within 1 year of sepsis. The CHA2DS2-VASc score was not predictive of stroke risk after sepsis (AUC: 0.50, 95% confidence interval [CI]: 0.48-0.52). A newly derived model among 2555 (64%) patients in the derivation set and 1437 (36%) in the validation set included 13 variables and produced an AUC of 0.61 (0.49-0.73) in derivation and 0.54 (0.43-0.65) in validation. CONCLUSION: Current models do not accurately stratify risk of stroke following new-onset AF secondary to sepsis. New tools are required to guide anticoagulation decisions following new-onset AF in sepsis.
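
The CHA2DS2-VASc score referenced above uses standard point assignments; a small sketch of the scoring is shown below with illustrative inputs.

```python
# Sketch: CHA2DS2-VASc scoring (standard point assignments).
def cha2ds2_vasc(age: int, female: bool, chf: bool, hypertension: bool,
                 diabetes: bool, stroke_or_tia: bool, vascular_disease: bool) -> int:
    score = 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # age 65-74: 1, >=75: 2
    score += 1 if female else 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if stroke_or_tia else 0
    score += 1 if vascular_disease else 0
    return score

# Example: an 80-year-old man with hypertension and diabetes scores 2 + 1 + 1 = 4,
# the same as the median score reported in the cohort above.
print(cha2ds2_vasc(80, False, False, True, True, False, False))
```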


Subject(s)
Atrial Fibrillation; Hospitalization; Sepsis; Stroke; Humans; Atrial Fibrillation/complications; Atrial Fibrillation/diagnosis; Male; Female; Sepsis/complications; Aged; Stroke/etiology; Stroke/epidemiology; Risk Assessment; Aged, 80 and over; Risk Factors; California/epidemiology; Middle Aged; Ischemic Attack, Transient/diagnosis
11.
Struct Heart ; 8(2): 100237, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38481714

ABSTRACT

Background: The eligibility and potential benefit of transcatheter edge-to-edge repair (TEER) in addition to guideline-directed medical therapy (GDMT) to treat moderate-severe or severe secondary mitral regurgitation (MR) have not been reported in a contemporary heart failure (HF) population. Methods: Eligibility for TEER was based on Food and Drug Administration (FDA) labeling: (1) HF symptoms, (2) moderate-severe or severe MR, (3) left ventricular ejection fraction (LVEF) 20% to 50%, (4) left ventricular end-systolic dimension ≤7.0 cm, and (5) receiving GDMT (β-blocker + angiotensin-converting enzyme inhibitor/angiotensin receptor blocker). The main outcome was the proportion (%) of patients eligible for TEER. The hypothetical number needed to treat to prevent or postpone adverse outcomes was estimated using relative risk reductions from published hazard ratios in the registration trial and the observed event rates. Results: We identified 50,841 adults with HF and known LVEF. After applying FDA criteria, 2461 patients (4.8%) were considered eligible for TEER (FDA+), with the vast majority of patients excluded (FDA-) based on a lack of clinically significant MR (N = 47,279). FDA+ patients had higher natriuretic peptide levels and were more likely to have a prior HF hospitalization compared to FDA- patients. Although FDA+ patients had a more dilated left ventricle and lower LVEF, median (25th-75th) left ventricular end-systolic dimension (cm) was low at 4.4 (3.7-5.1) and only 30.8% had severely reduced LVEF. FDA+ patients were at higher risk of HF-related morbidity and mortality. The estimated number needed to treat to potentially prevent or postpone adverse outcomes at 24 months in FDA+ patients was 4.4 for all-cause hospitalization, 8.8 for HF hospitalization, and 5.3 for all-cause death. Conclusions: There is a low prevalence of TEER eligibility based on FDA criteria, primarily due to the absence of moderate-severe or severe MR. FDA+ patients are a high-acuity population and may potentially derive a robust clinical benefit from TEER based on pivotal studies. Additional research is necessary to validate the scope of eligibility and comparative effectiveness of TEER in real-world populations.
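
The hypothetical number-needed-to-treat estimates described above combine an observed event rate with the relative risk reduction implied by a published hazard ratio. A minimal sketch of that arithmetic, using placeholder inputs rather than the study's observed rates or the registration trial's hazard ratios, follows.

```python
# Sketch of the NNT estimation approach described above. Inputs are hypothetical
# placeholders; the approximation treats the hazard ratio as a relative risk.
def number_needed_to_treat(observed_event_rate: float, hazard_ratio: float) -> float:
    """NNT ~ 1 / absolute risk reduction, with the HR standing in for the RR."""
    absolute_risk_reduction = observed_event_rate * (1.0 - hazard_ratio)
    return 1.0 / absolute_risk_reduction

# e.g., a 24-month event rate of 40% and a trial hazard ratio of 0.55:
print(round(number_needed_to_treat(0.40, 0.55), 1))   # ~5.6
```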

12.
AIDS ; 38(4): 547-556, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-37967231

ABSTRACT

OBJECTIVES: Heart failure risk is elevated in people with HIV (PWH). We investigated whether initial antiretroviral therapy (ART) regimens influenced heart failure risk. DESIGN: Cohort study. METHODS: PWH who initiated an ART regimen between 2000 and 2016 were identified from three integrated healthcare systems. We evaluated heart failure risk by protease inhibitor-, nonnucleoside reverse transcriptase inhibitor (NNRTI)-, and integrase strand transfer inhibitor (INSTI)-based ART, and compared two common nucleos(t)ide reverse transcriptase inhibitors: tenofovir disoproxil fumarate (tenofovir) and abacavir. Follow-up for each pairwise comparison varied (i.e., 7 years for protease inhibitor vs. NNRTI; 5 years for tenofovir vs. abacavir; 2 years for INSTIs vs. PIs or NNRTIs). Hazard ratios were from working logistic marginal structural models, fitted with inverse probability weighting to adjust for demographics and traditional cardiovascular risk factors. RESULTS: Thirteen thousand six hundred and thirty-four PWH were included (88% men, median 40 years of age; 34% non-Hispanic white, 24% non-Hispanic black, and 24% Hispanic). The hazard ratios (95% CIs) were: 2.5 (1.5-4.3) for protease inhibitor vs. NNRTI-based ART (reference); 0.5 (0.2-1.8) for protease inhibitor vs. INSTI-based ART (reference); 0.1 (0.1-0.8) for NNRTI vs. INSTI-based ART (reference); and 1.7 (0.5-5.7) for tenofovir vs. abacavir (reference). In more complex models of cumulative incidence that accounted for possible nonproportional hazards over time, the only remaining finding was evidence of a higher risk of heart failure for protease inhibitor compared with NNRTI-based regimens (1.8% vs. 0.8%; P = 0.002). CONCLUSION: PWH initiating protease inhibitors may be at higher risk of heart failure compared with those initiating NNRTIs. Future studies with longer follow-up of INSTI-based and other specific ART regimens are warranted.
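
A compact sketch of the inverse-probability-weighting idea behind a marginal structural model is shown below. It collapses the analysis to a single time point with simulated data and hypothetical variable names; the study used pooled (working) logistic models over follow-up intervals with richer confounder adjustment.

```python
# Sketch: inverse probability of treatment weighting for a marginal structural
# model. All data and variable names are simulated placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),
    "smoker": rng.integers(0, 2, n),
    "pi_based": rng.integers(0, 2, n),       # initiated PI-based (1) vs NNRTI-based (0) ART
    "heart_failure": rng.integers(0, 2, n),  # outcome indicator (hypothetical)
})

# 1) Propensity model: probability of initiating PI-based ART given confounders.
ps_model = LogisticRegression(C=1e6).fit(df[["age", "smoker"]], df["pi_based"])
ps = ps_model.predict_proba(df[["age", "smoker"]])[:, 1]

# 2) Stabilized inverse probability of treatment weights.
p_treat = df["pi_based"].mean()
weights = np.where(df["pi_based"] == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

# 3) Weighted ("marginal structural") outcome model; the exponentiated
#    coefficient is the treatment effect on the odds-ratio scale.
outcome = LogisticRegression(C=1e6).fit(df[["pi_based"]], df["heart_failure"],
                                        sample_weight=weights)
print("weighted OR, PI vs NNRTI:", round(float(np.exp(outcome.coef_[0, 0])), 2))
```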


Subject(s)
Anti-HIV Agents; Cyclopropanes; Dideoxyadenosine/analogs & derivatives; HIV Infections; HIV Protease Inhibitors; Heart Failure; Male; Humans; Female; HIV Infections/complications; HIV Infections/drug therapy; Reverse Transcriptase Inhibitors/adverse effects; Anti-HIV Agents/adverse effects; Cohort Studies; HIV Protease Inhibitors/adverse effects; Dideoxynucleosides/adverse effects; Tenofovir/adverse effects; Heart Failure/chemically induced; Heart Failure/epidemiology; Heart Failure/drug therapy
13.
Circulation ; 149(6): 430-449, 2024 02 06.
Article in English | MEDLINE | ID: mdl-37947085

ABSTRACT

BACKGROUND: Multivariable equations are recommended by primary prevention guidelines to assess absolute risk of cardiovascular disease (CVD). However, current equations have several limitations. Therefore, we developed and validated the American Heart Association Predicting Risk of CVD EVENTs (PREVENT) equations among US adults 30 to 79 years of age without known CVD. METHODS: The derivation sample included individual-level participant data from 25 data sets (N=3 281 919) between 1992 and 2017. The primary outcome was CVD (atherosclerotic CVD and heart failure). Predictors included traditional risk factors (smoking status, systolic blood pressure, cholesterol, antihypertensive or statin use, and diabetes) and estimated glomerular filtration rate. Models were sex-specific, race-free, developed on the age scale, and adjusted for competing risk of non-CVD death. Analyses were conducted in each data set and meta-analyzed. Discrimination was assessed using the Harrell C-statistic. Calibration was calculated as the slope of the observed versus predicted risk by decile. Additional equations were also developed to predict each CVD subtype (atherosclerotic CVD and heart failure) and to include optional predictors (urine albumin-to-creatinine ratio, hemoglobin A1c, and social deprivation index). External validation was performed in 3 330 085 participants from 21 additional data sets. RESULTS: Among 6 612 004 adults included, mean±SD age was 53±12 years, and 56% were women. Over a mean±SD follow-up of 4.8±3.1 years, there were 211 515 incident total CVD events. The median C-statistics in external validation for CVD were 0.794 (interquartile interval, 0.763-0.809) in female and 0.757 (0.727-0.778) in male participants. The calibration slopes were 1.03 (interquartile interval, 0.81-1.16) and 0.94 (0.81-1.13) among female and male participants, respectively. Similar estimates for discrimination and calibration were observed for atherosclerotic CVD- and heart failure-specific models. The improvement in discrimination was small but statistically significant when urine albumin-to-creatinine ratio, hemoglobin A1c, and social deprivation index were added together to the base model for total CVD (ΔC-statistic [interquartile interval] 0.004 [0.004-0.005] and 0.005 [0.004-0.007] among female and male participants, respectively). Calibration improved significantly when the urine albumin-to-creatinine ratio was added to the base model among those with marked albuminuria (>300 mg/g; 1.05 [0.84-1.20] versus 1.39 [1.14-1.65]; P=0.01). CONCLUSIONS: PREVENT equations accurately and precisely predicted risk for incident CVD and CVD subtypes in a large, diverse, and contemporary sample of US adults by using routinely available clinical variables.
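
The two evaluation metrics named above, the Harrell C-statistic and the slope of observed versus predicted risk by decile, can be computed as sketched below on simulated data. The sketch omits the competing-risk adjustment and meta-analysis steps used in the study, and all inputs are placeholders.

```python
# Sketch: discrimination (Harrell C) and decile calibration slope for a predicted
# event risk, on simulated data. lifelines' concordance_index expects predictions
# that rise with survival time, so the negative of the risk score is passed.
import numpy as np
import pandas as pd
from lifelines.utils import concordance_index

rng = np.random.default_rng(2)
n = 20_000
pred_risk = rng.uniform(0.01, 0.40, n)                    # model-predicted 10-year risk
event_time = rng.exponential(30 / (1 + 5 * pred_risk))    # higher risk -> earlier events
observed = (event_time < 10).astype(int)                  # event within 10 years
time = np.minimum(event_time, 10)                         # administrative censoring

c_stat = concordance_index(time, -pred_risk, observed)

deciles = pd.qcut(pred_risk, 10, labels=False)
cal = (pd.DataFrame({"pred": pred_risk, "obs": observed, "dec": deciles})
         .groupby("dec").mean())
slope = np.polyfit(cal["pred"], cal["obs"], 1)[0]          # observed vs predicted slope

print(f"Harrell C: {c_stat:.3f}, calibration slope: {slope:.2f}")
```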


Subject(s)
Atherosclerosis; Cardiovascular Diseases; Heart Failure; Adult; Humans; Male; Female; Middle Aged; Aged; Creatinine; Glycated Hemoglobin; American Heart Association; Risk Factors; Cardiovascular Diseases/diagnosis; Cardiovascular Diseases/epidemiology; Heart Failure/diagnosis; Heart Failure/epidemiology; Albumins; Risk Assessment
14.
Eur Heart J Qual Care Clin Outcomes ; 10(1): 77-88, 2024 Jan 12.
Article in English | MEDLINE | ID: mdl-36997334

ABSTRACT

AIMS: This study aimed to develop and apply natural language processing (NLP) algorithms to identify recurrent atrial fibrillation (AF) episodes following rhythm control therapy initiation using electronic health records (EHRs). METHODS AND RESULTS: We included adults with new-onset AF who initiated rhythm control therapies (ablation, cardioversion, or antiarrhythmic medication) within two US integrated healthcare delivery systems. A code-based algorithm identified potential AF recurrence using diagnosis and procedure codes. An automated NLP algorithm was developed and validated to capture AF recurrence from electrocardiograms, cardiac monitor reports, and clinical notes. Compared with the reference standard cases confirmed by physicians' adjudication, the F-scores, sensitivity, and specificity were all above 0.90 for the NLP algorithms at both sites. We applied the NLP and code-based algorithms to patients with incident AF (n = 22 970) during the 12 months after initiating rhythm control therapy. Applying the NLP algorithms, the percentages of patients with AF recurrence for sites 1 and 2 were 60.7% and 69.9% (ablation), 64.5% and 73.7% (cardioversion), and 49.6% and 55.5% (antiarrhythmic medication), respectively. In comparison, the percentages of patients with code-identified AF recurrence for sites 1 and 2 were 20.2% and 23.7% for ablation, 25.6% and 28.4% for cardioversion, and 20.0% and 27.5% for antiarrhythmic medication, respectively. CONCLUSION: When compared with a code-based approach alone, this study's high-performing automated NLP method identified significantly more patients with recurrent AF. The NLP algorithms could enable efficient evaluation of treatment effectiveness of AF therapies in large populations and help develop tailored interventions.
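
The validation metrics reported above (F-score, sensitivity, and specificity) compare NLP output against physician adjudication; a minimal sketch with illustrative label vectors follows.

```python
# Sketch: validation metrics for an NLP flag against a physician-adjudicated
# reference standard. The 0/1 vectors below are illustrative only.
from sklearn.metrics import confusion_matrix, f1_score

adjudicated = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]   # reference standard (adjudication)
nlp_flag    = [1, 1, 0, 0, 1, 0, 0, 0, 1, 1]   # NLP-detected AF recurrence

tn, fp, fn, tp = confusion_matrix(adjudicated, nlp_flag).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"F-score={f1_score(adjudicated, nlp_flag):.2f}")
```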


Subject(s)
Atrial Fibrillation; Electronic Health Records; Adult; Humans; Atrial Fibrillation/epidemiology; Atrial Fibrillation/therapy; Natural Language Processing; Treatment Outcome; Algorithms
15.
Kidney Med ; 5(11): 100723, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37915961

ABSTRACT

Rationale & Objective: Heart failure (HF) is an important cause of morbidity and mortality among individuals with chronic kidney disease (CKD). A large body of evidence from preclinical and clinical studies implicates excess levels of fibroblast growth factor 23 (FGF23) in HF pathogenesis in CKD. It remains unclear whether the relationship between elevated FGF23 levels and HF risk among individuals with CKD varies by HF subtype. Study Design: Prospective cohort study. Settings & Participants: A total of 3,502 participants were selected from the Chronic Renal Insufficiency Cohort study. Exposure: Baseline plasma FGF23. Outcomes: Incident HF by subtype and total rate of HF hospitalization. HF was categorized as HF with preserved ejection fraction (HFpEF, ejection fraction [EF] ≥ 50%), HF with reduced EF (HFrEF, EF < 50%), and HF with unknown EF (HFuEF). Analytical Approach: Multivariable-adjusted cause-specific Cox proportional hazards models were used to investigate associations between FGF23 and incident hospitalizations for HF by subtype. The Lunn-McNeil method was used to compare hazard ratios across HF subtypes. Poisson regression models were used to evaluate the total rate of HF. Results: During a median follow-up time of 10.8 years, 295 HFpEF, 242 HFrEF, and 156 HFuEF hospitalizations occurred. In multivariable-adjusted cause-specific Cox proportional hazards models, FGF23 was significantly associated with the incidence of HFpEF (HR, 1.41; 95% CI, 1.21-1.64), HFrEF (HR, 1.27; 95% CI, 1.05-1.53), and HFuEF (HR, 1.40; 95% CI, 1.13-1.73) per 1 standard deviation (SD) increase in the natural log of FGF23. The Lunn-McNeil method determined that the risk association was consistent across all subtypes. The rate ratio of total HF events increased with FGF23 quartile. In multivariable-adjusted models, compared with quartile 1, FGF23 quartile 4 had a rate ratio of 1.81 (95% CI, 1.28-2.57) for total HF events. Limitations: Self-report of HF hospitalizations and possible lack of an echocardiogram at time of hospitalization. Conclusions: In this large multicenter prospective cohort study, elevated FGF23 levels were associated with increased risks for all HF subtypes. Plain-Language Summary: Heart failure (HF) is a prominent cause of morbidity and mortality in individuals with chronic kidney disease (CKD). Identifying potential pathways in the development of HF is essential in developing therapies to prevent and treat HF. In a large cohort of individuals with CKD, the Chronic Renal Insufficiency Cohort (N = 3,502), baseline fibroblast growth factor-23 (FGF23), a hormone that regulates phosphorus, was evaluated in relation to the development of incident and recurrent HF with reduced, preserved, and unknown ejection fraction. In this large multicenter prospective cohort study, elevated FGF23 levels were associated with increased risk of all HF subtypes. These findings demonstrate the need for further research into FGF23 as a target in preventing the development of HF in individuals with CKD.
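
The exposure above is modeled per 1-SD increase in natural-log FGF23. The sketch below shows that transformation and a single-covariate Cox fit on simulated data; it omits the covariate adjustment, cause-specific censoring, and Lunn-McNeil comparison used in the study.

```python
# Sketch: hazard ratio per 1-SD increase in ln(FGF23) from a Cox model.
# Data are simulated placeholders, not CRIC measurements.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 3502
fgf23 = rng.lognormal(mean=5.0, sigma=0.6, size=n)           # hypothetical plasma FGF23
ln_fgf23 = np.log(fgf23)
exposure = (ln_fgf23 - ln_fgf23.mean()) / ln_fgf23.std()     # per 1-SD of ln(FGF23)

raw_time = rng.exponential(scale=12 / np.exp(0.3 * exposure))  # years to HF hospitalization
event = (raw_time < 10.8).astype(int)                          # observed within follow-up
df = pd.DataFrame({"time": np.minimum(raw_time, 10.8),
                   "event": event,
                   "ln_fgf23_sd": exposure})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_["ln_fgf23_sd"])   # HR per 1-SD increase in ln(FGF23)
```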

16.
PLoS One ; 18(11): e0293293, 2023.
Article in English | MEDLINE | ID: mdl-37910454

ABSTRACT

BACKGROUND: The Kidney Failure Risk Equation (KFRE) and Kaiser Permanente Northwest (KPNW) models have been proposed to predict progression to end-stage kidney disease (ESKD) among adults with chronic kidney disease (CKD) within 2 and 5 years. We evaluated the utility of these equations to predict the 1-year risk of ESKD in a contemporary, ethnically diverse CKD population. METHODS: We conducted a retrospective cohort study of adult members of Kaiser Permanente Northern California (KPNC) with CKD Stages 3-5 from January 2008-September 2015. We ascertained the onset of ESKD through September 2016, and calculated stage-specific estimates of model discrimination and calibration for the KFRE and KPNW equations. RESULTS: We identified 108,091 eligible adults with CKD (98,757 CKD Stage 3; 8,384 CKD Stage 4; and 950 CKD Stage 5 not yet receiving kidney replacement therapy), with a mean age of 75 years, 55% women, and 37% non-white. The overall 1-year risk of ESKD was 0.8% (95% CI: 0.8-0.9%). The KFRE displayed only moderate discrimination for CKD 3 and 5 (c = 0.76) but excellent discrimination for CKD 4 (c = 0.86), with good calibration for CKD 3-4 patients but suboptimal calibration for CKD 5. The KPNW equation performed similarly to the KFRE by CKD stage but displayed worse calibration across CKD stages for 1-year ESKD prediction. CONCLUSIONS: In a large, ethnically diverse, community-based CKD 3-5 population, both the KFRE and KPNW equations were suboptimal in accurately predicting the 1-year risk of ESKD within CKD stages 3 and 5, but more accurate for stage 4. Our findings suggest these equations can be used in 1-year prediction for CKD 4 patients, but also highlight the need for more personalized, stage-specific equations that predict various short- and long-term adverse outcomes to better inform overall decision-making.


Subject(s)
Kidney Failure, Chronic; Renal Insufficiency, Chronic; Adult; Humans; Female; Aged; Male; Disease Progression; Retrospective Studies; Kidney Failure, Chronic/epidemiology; Kidney Failure, Chronic/etiology; Renal Insufficiency, Chronic/epidemiology; Renal Replacement Therapy
17.
BMC Cardiovasc Disord ; 23(1): 578, 2023 11 21.
Article in English | MEDLINE | ID: mdl-37990153

ABSTRACT

BACKGROUND: Atrial fibrillation (AF) is a leading cause of stroke, a risk that can be reduced by 70% with appropriate oral anticoagulation (OAC) therapy. Nationally, appropriate anticoagulation rates for patients with AF with elevated thromboembolic risk are as low as 50% even across the highest stroke risk cohorts. This study aims to evaluate the variability of appropriate anticoagulation rates among patients by sex, ethnicity, and socioeconomic status within the Kaiser Permanente Mid-Atlantic States (KPMAS). METHODS: This retrospective study investigated 9513 patients in KPMAS's AF registry with CHADS2 score ≥ 2 over a 6-month period in 2021. RESULTS: Appropriately anticoagulated patients had higher rates of diabetes, prior stroke, and congestive heart failure than patients who were not appropriately anticoagulated. There were no significant differences in anticoagulation rates between males and females (71.8% vs. 71.6%, [OR] 1.01; 95% CI, 0.93-1.11; P = .76) or by SES-SVI quartiles. There was a statistically significant difference between Black and White patients (70.8% vs. 73.1%, P = .03) and Asian and White patients (68.3% vs. 71.6%, P = .005). After adjusting for CHADS2, this difference persisted for Black and White participants with CHADS2 scores of ≤3 (62.6% vs. 70.6%, P < .001) and for Asian and White participants with CHADS2 scores > 5 (68.0% vs. 79.3%, P < .001). CONCLUSIONS: Black and Asian patients may have differing rates of appropriate anticoagulation when compared with White patients. Characterizing such disparities is the first step towards addressing treatment gaps in AF.
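
Comparisons of anticoagulation proportions between two groups, as above, can be tested with a chi-square test on a 2x2 table; the counts in the sketch below are hypothetical back-calculations from assumed group sizes, not the registry's data.

```python
# Sketch: chi-square comparison of appropriate anticoagulation between two groups,
# mirroring the Black vs. White comparison above (70.8% vs. 73.1%).
# Counts assume 1,000 patients per group (hypothetical).
from scipy.stats import chi2_contingency

# rows: group A, group B; columns: anticoagulated yes, no
table = [[708, 292],
         [731, 269]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```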


Subject(s)
Atrial Fibrillation; Delivery of Health Care, Integrated; Stroke; Thromboembolism; Male; Female; Humans; Atrial Fibrillation/complications; Atrial Fibrillation/diagnosis; Atrial Fibrillation/drug therapy; Anticoagulants/adverse effects; Retrospective Studies; Stroke/diagnosis; Stroke/epidemiology; Stroke/etiology; Thromboembolism/diagnosis; Thromboembolism/etiology; Thromboembolism/prevention & control; Risk Factors; Risk Assessment
18.
Nat Commun ; 14(1): 6340, 2023 10 10.
Article in English | MEDLINE | ID: mdl-37816758

ABSTRACT

Progression of chronic kidney disease (CKD) portends myriad complications, including kidney failure. In this study, we analyze associations of 4638 plasma proteins among 3235 participants of the Chronic Renal Insufficiency Cohort Study with the primary outcome of 50% decline in estimated glomerular filtration rate or kidney failure over 10 years. We validate key findings in the Atherosclerosis Risk in Communities study. We identify 100 circulating proteins that are associated with the primary outcome after multivariable adjustment, using a Bonferroni statistical threshold of significance. Individual protein associations and biological pathway analyses highlight the roles of bone morphogenetic proteins, ephrin signaling, and prothrombin activation. A 65-protein risk model for the primary outcome has excellent discrimination (C-statistic [95% CI] 0.862 [0.835, 0.889]), and 14/65 proteins are druggable targets. Potentially causal associations for five proteins, to our knowledge not previously reported, are supported by Mendelian randomization: EGFL9, LRP-11, MXRA7, IL-1 sRII, and ILT-2. Modifiable protein risk markers can guide therapeutic drug development aimed at slowing CKD progression.
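
The Bonferroni threshold used above to declare protein associations significant is simply alpha divided by the number of proteins tested; a minimal sketch with simulated p-values follows.

```python
# Sketch: Bonferroni threshold for 4638 simultaneous protein association tests.
# The p-values below are simulated placeholders, not study results.
import numpy as np

n_proteins = 4638
alpha = 0.05
bonferroni_threshold = alpha / n_proteins
print(f"Bonferroni threshold: {bonferroni_threshold:.2e}")   # ~1.1e-05

rng = np.random.default_rng(4)
p_values = rng.uniform(0, 1, n_proteins)                      # null p-values
significant = np.where(p_values < bonferroni_threshold)[0]
print(f"{significant.size} proteins pass the threshold in this simulated (null) example")
```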


Subject(s)
Renal Insufficiency, Chronic; Renal Insufficiency; Humans; Cohort Studies; Proteomics; Prospective Studies; Renal Insufficiency, Chronic/metabolism; Renal Insufficiency/complications; Disease Progression
19.
J Am Heart Assoc ; 12(19): e029736, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37776209

ABSTRACT

Background There is a need to develop electronic health record-based predictive models for worsening heart failure (WHF) events across clinical settings and across the spectrum of left ventricular ejection fraction (LVEF). Methods and Results We studied adults with heart failure (HF) from 2011 to 2019 within an integrated health care delivery system. WHF encounters were ascertained using natural language processing and structured data. We developed boosted decision tree ensemble models to predict 1-year hospitalizations, emergency department visits/observation stays, and outpatient encounters for WHF and all-cause death within each LVEF category: HF with reduced ejection fraction (EF) (LVEF <40%), HF with mildly reduced EF (LVEF 40%-49%), and HF with preserved EF (LVEF ≥50%). Model discrimination was evaluated using area under the curve and calibration using mean squared error. We identified 338 426 adults with HF: 61 045 (18.0%) had HF with reduced EF, 49 618 (14.7%) had HF with mildly reduced EF, and 227 763 (67.3%) had HF with preserved EF. The 1-year risks of any WHF event and death were, respectively, 22.3% and 13.0% for HF with reduced EF, 17.0% and 10.1% for HF with mildly reduced EF, and 16.3% and 10.3% for HF with preserved EF. The WHF model displayed an area under the curve of 0.76 and a mean squared error of 0.13, whereas the model for death displayed an area under the curve of 0.83 and a mean squared error of 0.076. Performance and predictors were similar across WHF encounter types and LVEF categories. Conclusions We developed risk prediction models for 1-year WHF events and death across the LVEF spectrum using structured and unstructured electronic health record data and observed no substantial differences in model performance or predictors except for death, despite differences in underlying HF cause.
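
A boosted tree classifier evaluated with area under the curve and the mean squared error of predicted probabilities, as described above, can be sketched as follows on simulated data; the features, sample size, and hyperparameters are placeholders rather than the study's specification.

```python
# Sketch: gradient-boosted trees for a 1-year event, evaluated with AUC and the
# mean squared error of predicted probabilities (Brier score). Data are simulated.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 20_000
X = rng.normal(size=(n, 10))                      # e.g., labs, vitals, prior utilization
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # 1-year WHF event indicator (simulated)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = model.predict_proba(X_te)[:, 1]

print("AUC:", round(roc_auc_score(y_te, pred), 3))
print("MSE of predicted probabilities:", round(brier_score_loss(y_te, pred), 3))
```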


Subject(s)
Heart Failure; Ventricular Function, Left; Adult; Humans; Stroke Volume; Heart Failure/diagnosis; Hospitalization
20.
J Clin Transl Sci ; 7(1): e179, 2023.
Article in English | MEDLINE | ID: mdl-37745930

ABSTRACT

Introduction: Clinical trials provide the "gold standard" evidence for advancing the practice of medicine, even as they evolve to integrate real-world data sources. Modern clinical trials are increasingly incorporating real-world data sources, that is, data not intended for research and often collected in free-living contexts. We refer to trials that incorporate real-world data sources as real-world trials. Such trials may have the potential to enhance the generalizability of findings, facilitate pragmatic study designs, and evaluate real-world effectiveness. However, key differences in the design, conduct, and implementation of real-world vs traditional trials have ramifications for data management that can threaten their desired rigor. Methods: Three examples of real-world trials that leverage different types of data sources (wearables, medical devices, and electronic health records) are described. Key insights applicable to all three trials in their relationship to Data and Safety Monitoring Boards (DSMBs) are derived. Results: Insights and recommendations are given on four topic areas: A. Charge of the DSMB; B. Composition of the DSMB; C. Pre-launch Activities; and D. Post-launch Activities. We recommend stronger and additional focus on data integrity. Conclusions: Clinical trials can benefit from incorporating real-world data sources, potentially increasing the generalizability of findings and overall trial scale and efficiency. The data, however, present a level of informatic complexity that relies heavily on a robust data science infrastructure. The nature of monitoring the data and safety must evolve to adapt to new trial scenarios to protect the rigor of clinical trials.
