1.
J Gen Intern Med ; 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39375318

ABSTRACT

IMPORTANCE: Traditional risk prediction and risk adjustment models have focused on clinical characteristics, but accounting for social determinants of health (SDOH) and complex health conditions could improve understanding of sepsis outcomes and our ability to predict outcomes, treat patients, and assess quality of care. OBJECTIVE: To evaluate the impact of SDOH and health scales in sepsis mortality risk prediction and hospital performance assessment. DESIGN: Observational cohort study. SETTING: One hundred twenty-nine hospitals in the nationwide Veterans Affairs (VA) Healthcare System between 2017 and 2021. PARTICIPANTS: Veterans admitted through emergency departments with community-acquired sepsis. EXPOSURES: Individual- and community-level SDOH (race, housing instability, marital status, Area Deprivation Index [ADI], and rural residence) and two health scales (the Care Assessment Need [CAN] score and Claims-Based Frailty Index [CFI]). MAIN OUTCOMES AND MEASURES: The primary outcome was 90-day mortality from emergency department arrival; secondary outcomes included 30-day mortality and in-hospital mortality. RESULTS: Among 144,889 patients admitted to the hospital with community-acquired sepsis, 139,080 were men (96.0%), median (IQR) age was 71 (64-77) years, and median (IQR) ADI was 60 (38-81). Multivariable regression models had good calibration and discrimination across models that adjusted for different sets of variables (e.g., AUROC, 0.782; Brier score, 1.33; and standardized mortality rate, 1.00). Risk-adjusted hospital performance was similar across all models. Among 129 VA hospitals, three hospitals moved into or out of the lowest or highest quintile of performance when comparing models that excluded SDOH with models that adjusted for all variables. Models that adjusted for ADI yielded an odds ratio (CI) of 1.00 (1.00-1.00), indicating that ADI did not meaningfully predict sepsis mortality in this cohort.
CONCLUSION AND RELEVANCE: In patients with community-acquired sepsis, adjusting for community SDOH variables such as ADI did not improve 90-day sepsis mortality predictions in mortality models and did not substantively alter hospital performance within the VA Healthcare System. Understanding the role of SDOH in risk prediction and risk adjustment models is vital because it could prevent hospitals from being negatively evaluated for treating less advantaged patients. However, we found that in VA hospitals, the potential impact of SDOH on 90-day sepsis mortality was minimal.
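
The calibration and discrimination metrics cited above (AUROC, Brier score) are computed directly from predicted probabilities and observed outcomes. A minimal sketch on synthetic data (the outcome rate and risk distributions are invented for illustration; this is not the VA cohort or the study's models):

```python
import random

def brier_score(y_true, y_prob):
    """Mean squared error between predicted probabilities and 0/1 outcomes (0 = perfect)."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def auroc(y_true, y_prob):
    """Probability a random positive is ranked above a random negative (ties count half)."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0 for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

# Illustrative synthetic cohort: deaths tend to receive higher predicted risk.
random.seed(0)
y = [1 if random.random() < 0.25 else 0 for _ in range(2000)]
prob = [min(0.99, max(0.01, random.gauss(0.35 if yi else 0.20, 0.10))) for yi in y]
print(round(auroc(y, prob), 3), round(brier_score(y, prob), 3))
```

For binary outcomes the Brier score lies between 0 and 1, with lower values indicating better calibration.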

2.
J Am Heart Assoc ; 13(18): e035859, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39248259

ABSTRACT

BACKGROUND: Direct oral anticoagulants (DOACs) have complex dosing regimens and are often incorrectly prescribed. We evaluated a nationwide DOAC population management dashboard rollout whose purpose includes pharmacist review and correction of off-label dosing prescriptions. METHODS AND RESULTS: Using data from the Veterans Health Administration, we identified all patients prescribed DOACs for atrial fibrillation or venous thromboembolism between August 2015 and December 2019. Sites were grouped by the timing of moderate-to-high usage of the DOAC population management tool dashboard. Effectiveness was defined as the monthly rate of off-label DOAC prescribing and the rate of clinical adverse events (bleeding, composite of stroke or venous thromboembolism). Implementation was evaluated as the percentage of off-label DOAC prescriptions changed within 7 days. Among the 128 652 patients receiving DOAC therapy at 123 centers, between 6.9% and 8.6% had off-label DOAC prescriptions. Adoption of the DOAC population management tool dashboard before July 2018 was associated with a decline in off-label dosing prescriptions (8.7%-7.6%). Only 1 group demonstrated a significant reduction in monthly rates of bleeding following implementation. All sites experienced a reduction in the composite of venous thromboembolism or stroke following dashboard adoption. There was no difference in the implementation outcome of DOAC prescription change within 7 days in any of the adoption groups. CONCLUSIONS: Early adoption of the DOAC population management tool dashboard was associated with decreased rates of off-label DOAC dosing prescription and reduced bleeding. Following adoption of the DOAC population management tool dashboard, all sites experienced reductions in venous thromboembolism and stroke events.


Subject(s)
Atrial Fibrillation , Off-Label Use , Pharmacists , Venous Thromboembolism , Humans , Atrial Fibrillation/drug therapy , Atrial Fibrillation/complications , United States , Venous Thromboembolism/drug therapy , Venous Thromboembolism/prevention & control , Venous Thromboembolism/epidemiology , Female , Male , Aged , Hemorrhage/chemically induced , Hemorrhage/epidemiology , Stroke/prevention & control , Stroke/epidemiology , Administration, Oral , Anticoagulants/adverse effects , Anticoagulants/administration & dosage , Anticoagulants/therapeutic use , Factor Xa Inhibitors/adverse effects , Factor Xa Inhibitors/therapeutic use , Factor Xa Inhibitors/administration & dosage , Practice Patterns, Physicians'/standards , Drug Prescriptions/statistics & numerical data , United States Department of Veterans Affairs
3.
medRxiv ; 2024 May 01.
Article in English | MEDLINE | ID: mdl-38903102

ABSTRACT

Background: It is unclear how post-stroke cognitive trajectories differ by stroke type and ischemic stroke subtype. We studied associations between stroke types (ischemic, hemorrhagic), ischemic stroke subtypes (cardioembolic, large artery atherosclerotic, lacunar/small vessel, cryptogenic/other determined etiology), and post-stroke cognitive decline. Methods: This pooled cohort analysis from four US cohort studies (1971-2019) identified 1,143 dementia-free individuals with acute stroke during follow-up: 1,061 (92.8%) ischemic, 82 (7.2%) hemorrhagic, 49.9% female, 30.8% Black. Median age at stroke was 74.1 (IQR, 68.6, 79.3) years. Outcomes were change in global cognition (primary) and changes in executive function and memory (secondary). Outcomes were standardized as T-scores (mean [SD], 50 [10]); a 1-point difference represents a 0.1-SD difference in cognition. Median follow-up for the primary outcome was 6.0 (IQR, 3.2, 9.2) years. Linear mixed-effects models estimated changes in cognition after stroke. Results: On average, the initial post-stroke global cognition score was 50.78 points (95% CI, 49.52, 52.03) in ischemic stroke survivors and did not differ in hemorrhagic stroke survivors (difference, -0.17 points [95% CI, -1.64, 1.30]; P=0.82) after adjusting for demographics and pre-stroke cognition. On average, ischemic stroke survivors showed declines in global cognition, executive function, and memory. Post-stroke declines in global cognition, executive function, and memory did not differ between hemorrhagic and ischemic stroke survivors. 955 ischemic strokes had subtypes: 200 (20.9%) cardioembolic, 77 (8.1%) large artery atherosclerotic, 207 (21.7%) lacunar/small vessel, 471 (49.3%) cryptogenic/other determined etiology. On average, small vessel stroke survivors showed declines in global cognition and memory, but not executive function. 
Initial post-stroke cognitive scores and cognitive declines did not differ between small vessel survivors and survivors of other ischemic stroke subtypes. Post-stroke vascular risk factor levels did not attenuate associations. Conclusion: Stroke survivors had cognitive decline in multiple domains. Declines did not differ by stroke type or ischemic stroke subtype.
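
The T-score metric described in the Methods (mean 50, SD 10, so a 1-point difference equals 0.1 SD) is a simple linear transform of raw cognitive scores. A sketch with hypothetical raw test scores:

```python
from statistics import mean, stdev

def to_t_scores(raw):
    """Standardize raw scores to a T-score metric (mean 50, SD 10),
    so a 1-point difference equals 0.1 SD of the reference sample."""
    m, s = mean(raw), stdev(raw)
    return [50 + 10 * (x - m) / s for x in raw]

raw = [22, 25, 27, 30, 18, 24, 29, 26]  # hypothetical raw cognitive test scores
t = to_t_scores(raw)
print(round(mean(t), 1), round(stdev(t), 1))  # 50.0 10.0
```

In the study the reference distribution would be the pooled cohort at baseline, not the scored sample itself.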

4.
JAMA Intern Med ; 184(8): 963-970, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38856978

ABSTRACT

Importance: In 2023, the American Heart Association (AHA) developed the Predicting Risk of Cardiovascular Disease Events (PREVENT) equations to estimate 10-year risk of atherosclerotic cardiovascular disease (ASCVD), as an update to the 2013 pooled cohort equations (PCEs). The PREVENT equations were derived from contemporary cohorts and removed race and added variables for kidney function and statin use. Objective: To compare national estimates of 10-year ASCVD risk using the PCEs and PREVENT equations and how these equations affect recommendations for primary prevention statin therapy. Design, Setting, and Participants: This cross-sectional study included adults aged 40 to 75 years who participated in the National Health and Nutrition Examination Survey from 2017 to March 2020. Adults were defined as eligible for primary prevention statin use based on the 2019 AHA/American College of Cardiology guideline on the primary prevention of cardiovascular disease. Data were weighted to be nationally representative and were analyzed from December 27, 2023, to January 31, 2024. Main Outcomes and Measures: The 10-year ASCVD risk and eligibility for primary prevention statin therapy based on PREVENT and PCE calculations. Results: In the weighted sample of 3785 US adults (mean [SD] age, 55.7 [9.7] years; 52.5% women) without known ASCVD, 20.7% reported current statin use. The mean estimated 10-year ASCVD risk was 8.0% (95% CI, 7.6%-8.4%) using the PCEs and 4.3% (95% CI, 4.1%-4.5%) using the PREVENT equations. Across all age, sex, and racial subgroups, compared with the PCEs, the mean estimated 10-year ASCVD risk was lower using the PREVENT equations, with the largest difference for Black adults (10.9% [95% CI, 10.1%-11.7%] vs 5.1% [95% CI 4.7%-5.4%]) and individuals aged 70 to 75 years (22.8% [95% CI, 21.6%-24.1%] vs 10.2% [95% CI, 9.6%-10.8%]). 
The use of the PREVENT equations instead of the PCEs could reduce the number of adults meeting criteria for primary prevention statin therapy from 45.4 million (95% CI, 40.3 million-50.4 million) to 28.3 million (95% CI, 25.2 million-31.4 million). In other words, 17.3 million (95% CI, 14.8 million-19.7 million) adults recommended statins based on the PCEs would no longer be recommended statins based on PREVENT equations, including 4.1 million (95% CI, 2.8 million-5.5 million) adults currently taking statins. Based on the PREVENT equations, 44.1% (95% CI, 38.6%-49.5%) of adults eligible for primary prevention statin therapy reported currently taking statins, equating to 15.8 million (95% CI, 13.4 million-18.2 million) individuals eligible for primary prevention statins who reported not taking statins. Conclusions and Relevance: This cross-sectional study found that use of the PREVENT equations was associated with fewer US adults being eligible for primary prevention statin therapy; however, the majority of adults eligible for receiving such therapy based on PREVENT equations did not report statin use.
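
The eligibility counts above come from applying a risk threshold to survey-weighted individual risk estimates. A toy sketch of that logic, assuming the 7.5% 10-year-risk cut point at which statins enter consideration under the 2019 guideline; the risks and weights are invented, not NHANES values:

```python
THRESHOLD = 0.075  # assumed 7.5% 10-year ASCVD risk cut point

# Hypothetical adults: (PCE risk, PREVENT risk, survey weight in millions)
people = [
    (0.09, 0.04, 20.0),  # above threshold under the PCEs only
    (0.12, 0.10, 15.0),  # above threshold under both equations
    (0.05, 0.03, 60.0),  # below threshold under both
]

def eligible_millions(people, use_prevent):
    """Survey-weighted count (millions) of adults at or above the treatment threshold."""
    return sum(w for pce, prevent, w in people
               if (prevent if use_prevent else pce) >= THRESHOLD)

pce_total = eligible_millions(people, use_prevent=False)
prevent_total = eligible_millions(people, use_prevent=True)
print(pce_total, prevent_total)  # 35.0 15.0
```

The difference between the two weighted totals is the number of adults who would no longer meet criteria under the newer equations.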


Subject(s)
Atherosclerosis , Hydroxymethylglutaryl-CoA Reductase Inhibitors , Primary Prevention , Humans , Middle Aged , Male , Female , Cross-Sectional Studies , Aged , Risk Assessment/methods , Atherosclerosis/epidemiology , Atherosclerosis/prevention & control , Adult , United States/epidemiology , Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , Primary Prevention/methods , Nutrition Surveys , Cardiovascular Diseases/prevention & control , Cardiovascular Diseases/epidemiology , Risk Factors , Heart Disease Risk Factors
5.
PLoS One ; 19(5): e0300005, 2024.
Article in English | MEDLINE | ID: mdl-38753617

ABSTRACT

Strategies to prevent or delay Alzheimer's disease and related dementias (AD/ADRD) are urgently needed, and blood pressure (BP) management is a promising strategy. Yet the effects of different BP control strategies across the life course on AD/ADRD are unknown. Randomized trials may be infeasible due to prolonged follow-up and large sample sizes. Simulation analysis is a practical approach to estimating these effects using the best available existing data. However, existing simulation frameworks cannot estimate the effects of BP control on both dementia and cardiovascular disease. This manuscript describes the design principles, implementation details, and population-level validation of a novel population-health microsimulation framework, the MIchigan ChROnic Disease SIMulation (MICROSIM), for The Effect of Lower Blood Pressure over the Life Course on Late-life Cognition in Blacks, Hispanics, and Whites (BP-COG) study of the effect of BP levels over the life course on dementia and cardiovascular disease. MICROSIM is an agent-based Monte Carlo simulation designed using computer programming best practices. MICROSIM estimates annual vascular risk factor levels and transition probabilities for all-cause dementia, stroke, myocardial infarction, and mortality in a nationally representative sample of US adults 18+ using the National Health and Nutrition Examination Survey (NHANES). MICROSIM models changes over time in risk factors, cognition, and dementia using a pooled dataset of individual participant data from 6 US prospective cardiovascular cohort studies. Cardiovascular risks were estimated using a widely used risk model, and BP treatment effects were derived from meta-analyses of randomized trials.
MICROSIM is an extensible, open-source framework designed to estimate the population-level impact of different BP management strategies and reproduces US population-level estimates of BP and other vascular risk factors levels, their change over time, and incident all-cause dementia, stroke, myocardial infarction, and mortality.
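
An agent-based Monte Carlo microsimulation of the kind MICROSIM implements advances each simulated person one year at a time, drawing events from annual transition probabilities. A minimal sketch with made-up, constant probabilities (MICROSIM's actual transition models are risk-factor-dependent and estimated from NHANES and pooled cohort data):

```python
import random

# Toy annual transition probabilities (illustrative only, not MICROSIM's estimates)
P_STROKE, P_DEMENTIA, P_DEATH = 0.01, 0.02, 0.03

def simulate_person(years, rng):
    """Advance one agent year by year; strokes accumulate, dementia is absorbing,
    death ends follow-up."""
    events = {"stroke": 0, "dementia": 0, "died": False}
    for _ in range(years):
        if rng.random() < P_STROKE:
            events["stroke"] += 1
        if events["dementia"] == 0 and rng.random() < P_DEMENTIA:
            events["dementia"] = 1
        if rng.random() < P_DEATH:
            events["died"] = True
            break
    return events

rng = random.Random(42)
cohort = [simulate_person(20, rng) for _ in range(10_000)]
dementia_rate = sum(p["dementia"] for p in cohort) / len(cohort)
print(round(dementia_rate, 2))
```

Population-level estimates fall out of tabulating events across many such simulated agents, which is how competing risks (e.g., dying before developing dementia) are handled implicitly.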


Subject(s)
Computer Simulation , Humans , Michigan/epidemiology , Chronic Disease , Male , Dementia/epidemiology , Aged , Female , Risk Factors , Monte Carlo Method , Blood Pressure , Middle Aged , Cardiovascular Diseases/epidemiology , Adult , Alzheimer Disease , Aged, 80 and over
6.
Circ Cardiovasc Qual Outcomes ; 17(6): e010288, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38813695

ABSTRACT

BACKGROUND: The large and increasing number of adults living with dementia is a pressing societal priority, which may be partially mitigated through improved population-level blood pressure (BP) control. We explored how tighter population-level BP control affects the incidence of atherosclerotic cardiovascular disease (ASCVD) events and dementia. METHODS: Using an open-source ASCVD and dementia simulation analysis platform, the Michigan Chronic Disease Simulation Model, we evaluated how optimal implementation of 2 BP treatments based on the Eighth Joint National Committee recommendations and SPRINT (Systolic Blood Pressure Intervention Trial) protocol would influence population-level ASCVD events, global cognitive performance, and all-cause dementia. We simulated 3 populations (usual care, Eighth Joint National Committee based, SPRINT based) using nationally representative data to annually update risk factors and assign ASCVD events, global cognitive performance scores, and dementia, applying different BP treatments in each population. We tabulated total ASCVD events, global cognitive performance, all-cause dementia, optimal brain health, and years lived in each state per population. RESULTS: Optimal implementation of SPRINT-based BP treatment strategy, compared with usual care, reduced ASCVD events in the United States by ≈77 000 per year and produced 0.4 more years of stroke- or myocardial infarction-free survival when averaged across all Americans. Population-level gains in years lived free of ASCVD events were greater for SPRINT-based than Eighth Joint National Committee-based treatment. Survival and years spent with optimal brain health improved with optimal SPRINT-based BP treatment implementation versus usual care: the average patient with hypertension lived 0.19 additional years and 0.3 additional years in optimal brain health. 
SPRINT-based BP treatment increased the number of years lived without dementia (by an average of 0.13 years/person with hypertension), but increased the total number of individuals with dementia, mainly through more adults surviving to advanced ages. CONCLUSIONS: Tighter BP control likely benefits most individuals but is unlikely to reduce dementia prevalence and might even increase the number of older adults living with dementia.


Subject(s)
Antihypertensive Agents , Blood Pressure , Cognition , Dementia , Hypertension , Humans , Cognition/drug effects , Antihypertensive Agents/therapeutic use , Hypertension/drug therapy , Hypertension/diagnosis , Hypertension/epidemiology , Hypertension/physiopathology , Hypertension/mortality , Blood Pressure/drug effects , Aged , Male , Dementia/epidemiology , Dementia/diagnosis , Dementia/mortality , Female , Treatment Outcome , Middle Aged , Risk Factors , Risk Assessment , Incidence , Time Factors , Aged, 80 and over , Michigan/epidemiology , Computer Simulation , Atherosclerosis/epidemiology , Atherosclerosis/diagnosis , Atherosclerosis/drug therapy , United States/epidemiology
9.
medRxiv ; 2024 Feb 11.
Article in English | MEDLINE | ID: mdl-38370803

ABSTRACT

Background: The magnitude of cognitive changes after incident heart failure (HF) is unclear. We assessed whether incident HF is associated with changes in cognitive function after accounting for pre-HF cognitive trajectories and known determinants of cognition. Methods: This pooled cohort study included adults without HF, stroke, or dementia from six US population-based cohort studies from 1971-2019: Atherosclerosis Risk in Communities Study, Coronary Artery Risk Development in Young Adults Study, Cardiovascular Health Study, Framingham Offspring Study, Multi-Ethnic Study of Atherosclerosis, and Northern Manhattan Study. Linear mixed-effects models estimated changes in cognition at the time of HF (change in the intercept) and the rate of cognitive change over the years after HF (change in the slope), controlling for pre-HF cognitive trajectories and participant factors. Change in global cognition was the primary outcome. Changes in executive function and memory were secondary outcomes. Cognitive outcomes were standardized to a t-score metric (mean [SD], 50 [10]); a 1-point difference represented a 0.1-SD difference in cognition. Results: The study included 29,614 adults (mean [SD] age, 61.1 [10.5] years; 55% female, 70.3% White, 22.2% Black, 7.5% Hispanic). During a median follow-up of 6.6 (Q1-Q3: 5-19.8) years, 1,407 (4.7%) adults developed incident HF. Incident HF was associated with an acute decrease in global cognition (-1.08 points; 95% CI -1.36, -0.80) and executive function (-0.65 points; 95% CI -0.96, -0.34) but not memory (-0.51 points; 95% CI -1.37, 0.35) at the time of the event. Greater acute decreases in global cognition after HF were seen in those with older age, female sex, and White race.
Individuals with incident HF, compared with HF-free individuals, demonstrated faster declines in global cognition (-0.15 points per year; 95% CI, -0.21, -0.09) and executive function (-0.16 points per year; 95% CI -0.23, -0.09), but not memory (-0.11 points per year; 95% CI -0.26, 0.04), relative to their pre-HF slopes. Conclusions: In this pooled cohort study, incident HF was associated with an acute decrease in global cognition and executive function at the time of the event and faster declines in global cognition and executive function over the following years.

11.
JAMA Netw Open ; 6(12): e2349103, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38127344

ABSTRACT

Importance: Buprenorphine is an underused treatment for opioid use disorder (OUD) that can be prescribed in general medical settings. Founded in 2017, the Michigan Opioid Collaborative (MOC) is an outreach and educational program that aims to address clinician and community barriers to buprenorphine access; however, the association between the MOC and buprenorphine treatment is unknown. Objective: To evaluate the association between MOC service use and county-level temporal trends in the density of buprenorphine prescribers and patients receiving buprenorphine. Design, Setting, and Participants: This cohort study exploited staggered implementation of MOC services across all Michigan counties. Difference-in-difference analyses were conducted by applying linear fixed-effects regression across all counties to estimate the overall association of MOC engagement with outcomes, and linear regression for each MOC-engaged county separately to infer county-specific results, using data from May 2015 to August 2020. Analyses were conducted from September 2021 to November 2023. Exposures: MOC engagement. Main Outcomes and Measures: County-level monthly numbers of buprenorphine prescribers and patients receiving buprenorphine (per 100 000 population). Results: Of Michigan's 83 counties, 57 (68.7%) were engaged by MOC by 2020, with 3 (3.6%) initiating engagement in 2017, 19 (22.9%) in 2018, 27 (32.5%) in 2019, and 8 (9.6%) in 2020. The total state population was 9 990 000, of whom 5 070 000 (50.8%) were female, 1 410 000 (14.1%) were African American or Black, 530 000 (5.3%) were Hispanic or Latino, and 7 470 000 (74.7%) were non-Hispanic White. The mean (SD) of the median age across counties was 44.8 (6.4) years.
The monthly increase in the number of buprenorphine prescribers was 0.07 per 100 000 population in the preengagement period (including all time points for nonengaged counties) and 0.39 per 100 000 population in the postengagement period, an absolute difference of 0.33 (95% CI, 0.12-0.53) prescribers per 100 000 population (P = .002). The number of patients receiving buprenorphine increased by an average of 0.6 per 100 000 population per month before engagement and 7.15 after, an estimated additional monthly increase of 6.56 (95% CI, 2.09-11.02) patients receiving buprenorphine per 100 000 population (P = .004). Conclusions and Relevance: In this cohort study measuring buprenorphine prescriptions in Michigan over time, counties' engagement in OUD-focused outreach and clinician education services delivered by a multidisciplinary team was associated with a temporal increase in buprenorphine prescribers and patients receiving buprenorphine.
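
The pre/post trend comparison can be illustrated by fitting a slope to each period and differencing them. A sketch on fabricated linear data built from the abstract's slopes (0.07 and 0.39 per 100 000 per month); the study itself used linear fixed-effects difference-in-difference regression across counties, not this two-slope simplification:

```python
def slope(points):
    """Ordinary least-squares slope for (month, rate) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Hypothetical monthly prescriber rates per 100,000 before and after engagement
pre = [(m, 10 + 0.07 * m) for m in range(24)]
post = [(m, 12 + 0.39 * m) for m in range(24)]
extra_growth = slope(post) - slope(pre)
print(round(extra_growth, 2))  # additional prescribers per 100,000 per month
```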


Subject(s)
Buprenorphine , Opioid-Related Disorders , Humans , Female , Male , Buprenorphine/therapeutic use , Cohort Studies , Michigan , Analgesics, Opioid/therapeutic use , Opioid-Related Disorders/drug therapy , Prescriptions
12.
JMIR Hum Factors ; 10: e49025, 2023 10 24.
Article in English | MEDLINE | ID: mdl-37874636

ABSTRACT

BACKGROUND: Direct oral anticoagulant (DOAC) medications are frequently associated with inappropriate prescribing and adverse events. To improve the safe use of DOACs, health systems are implementing population health tools within their electronic health record (EHR). While EHR informatics tools can help increase awareness of inappropriate prescribing, insufficient empowerment of nonphysicians to implement change is a key barrier. OBJECTIVE: This study examined how the individual authority of clinical pharmacists and anticoagulation nurses is impacted by, and changes, the implementation success of an EHR DOAC Dashboard for safe DOAC medication prescribing. METHODS: We conducted semistructured interviews with pharmacists and nurses following the implementation of the EHR DOAC Dashboard at 3 clinical sites. Interview transcripts were coded according to the key determinants of implementation success. The intersections between individual clinician authority and other determinants were examined to identify themes. RESULTS: A high level of individual clinician authority was associated with high levels of key facilitators for effective use of the DOAC Dashboard (communication, staffing and work schedule, job satisfaction, and EHR integration). Conversely, a lack of individual authority was often associated with key barriers to effective DOAC Dashboard use. Positive individual authority was sometimes present with a negative example of another determinant, but no evidence was found of individual authority co-occurring with a positive instance of another determinant. CONCLUSIONS: Increased individual clinician authority is a necessary antecedent to the effective implementation of an EHR DOAC Population Management Dashboard and positively affects other aspects of implementation. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.1186/s13012-020-01044-5.


Subject(s)
Communication , Electronic Health Records , Humans , Group Processes , Informatics , Qualitative Research
13.
medRxiv ; 2023 Aug 02.
Article in English | MEDLINE | ID: mdl-37577693

ABSTRACT

Introduction: Most current clinical risk prediction scores for cardiovascular disease prevention use a composite outcome. Risk prediction scores for specific cardiovascular events could identify people who are at higher risk for some events than others, informing personalized care and trial recruitment. We sought to predict risk for multiple different events, describe how those risks differ, and examine whether these differences could improve treatment priorities. Methods: We used participant-level data from five cohort studies. We included participants between 40 and 79 years old who had no history of myocardial infarction (MI), stroke, or heart failure (HF). We made separate models to predict 10-year rates of first atherosclerotic cardiovascular disease (ASCVD), first fatal or nonfatal MI, first fatal or nonfatal stroke, new-onset HF, fatal ASCVD, fatal MI, fatal stroke, and all-cause mortality using established ASCVD risk factors. To limit overfitting, we used elastic net regularization with alpha = 0.75. We assessed the models for calibration, discrimination, and correlations between predicted risks for different events. We also estimated the potential impact of varying treatment based on patients who are high risk for some ASCVD events, but not others. Results: Our study included 24,505 people; 55.6% were women, and 20.7% were non-Hispanic Black. Our models had C-statistics between 0.75 for MI and 0.85 for HF, good calibration, and minimal overfitting. Predicted risks were least correlated between fatal stroke and all MI (0.58). In 1,840 participants whose risk of MI, but not stroke or all-cause mortality, was in the top quartile, we estimate one blood pressure-lowering medication would have a 2.4% chance of preventing any ASCVD event per 10 years. A moderate-strength statin would have a 2.1% chance.
In 1,039 participants who had top quartile risk of stroke but not MI or mortality, a blood pressure-lowering medication would have a 2.5% chance of preventing an event, but a moderate-strength statin, 1.6%. Conclusion: We developed risk scores for eight key clinical events and found that cardiovascular risk varies somewhat for different clinical events. Future work could determine if tailoring decisions by risk of separate events can improve care.
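
The per-medication chances quoted above follow from standard absolute-risk-reduction arithmetic: the chance a treatment prevents an event ≈ baseline 10-year risk × relative risk reduction. A sketch in which the baseline risk and relative risk reductions are assumed values chosen to land near the ~2.4% figure, not numbers reported by the study:

```python
def chance_prevented(ten_year_risk, relative_risk_reduction):
    """Absolute risk reduction: probability treatment prevents an event over 10 years."""
    return ten_year_risk * relative_risk_reduction

# Assumed illustrative inputs: ~9.6% 10-year ASCVD risk; a ~25% relative risk
# reduction for one BP-lowering medication, ~21% for a moderate-strength statin.
bp_med = chance_prevented(0.096, 0.25)
statin = chance_prevented(0.096, 0.21)
print(round(bp_med * 100, 1), round(statin * 100, 1))  # 2.4 2.0
```

Because the absolute benefit scales with baseline risk, the same medication offers different expected benefits to people at high risk of different events, which is the paper's motivation for event-specific scores.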

14.
JAMA ; 330(8): 715-724, 2023 08 22.
Article in English | MEDLINE | ID: mdl-37606674

ABSTRACT

Importance: Aspirin is an effective and low-cost option for reducing atherosclerotic cardiovascular disease (CVD) events and improving mortality rates among individuals with established CVD. To guide efforts to mitigate the global CVD burden, there is a need to understand current levels of aspirin use for secondary prevention of CVD. Objective: To report and evaluate aspirin use for secondary prevention of CVD across low-, middle-, and high-income countries. Design, Setting, and Participants: Cross-sectional analysis using pooled, individual participant data from nationally representative health surveys conducted between 2013 and 2020 in 51 low-, middle-, and high-income countries. Included surveys contained data on self-reported history of CVD and aspirin use. The sample of participants included nonpregnant adults aged 40 to 69 years. Exposures: Countries' per capita income levels and world region; individuals' socioeconomic demographics. Main Outcomes and Measures: Self-reported use of aspirin for secondary prevention of CVD. Results: The overall pooled sample included 124 505 individuals. The median age was 52 (IQR, 45-59) years, and 50.5% (95% CI, 49.9%-51.1%) were women. A total of 10 589 individuals had a self-reported history of CVD (8.1% [95% CI, 7.6%-8.6%]). Among individuals with a history of CVD, aspirin use for secondary prevention in the overall pooled sample was 40.3% (95% CI, 37.6%-43.0%). By income group, estimates were 16.6% (95% CI, 12.4%-21.9%) in low-income countries, 24.5% (95% CI, 20.8%-28.6%) in lower-middle-income countries, 51.1% (95% CI, 48.2%-54.0%) in upper-middle-income countries, and 65.0% (95% CI, 59.1%-70.4%) in high-income countries. Conclusion and Relevance: Worldwide, aspirin is underused in secondary prevention, particularly in low-income countries. National health policies and health systems must develop, implement, and evaluate strategies to promote aspirin therapy.


Subject(s)
Aspirin , Cardiovascular Diseases , Secondary Prevention , Adult , Aged , Female , Humans , Male , Middle Aged , Aspirin/therapeutic use , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/mortality , Cardiovascular Diseases/prevention & control , Cross-Sectional Studies , Developed Countries/economics , Developed Countries/statistics & numerical data , Developing Countries/economics , Developing Countries/statistics & numerical data , Secondary Prevention/economics , Secondary Prevention/methods , Secondary Prevention/statistics & numerical data , Self Report/economics , Self Report/statistics & numerical data , Cardiovascular Agents/therapeutic use
15.
Crit Care Explor ; 5(6): e0926, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37637354

ABSTRACT

Sepsis survivors are at increased risk for morbidity and functional impairment. There are recommended practices to support recovery after sepsis, but it is unclear how often they are implemented. We sought to assess the current use of recovery-based practices across hospitals. DESIGN: Electronic survey assessing the use of best practices for recovery from COVID-related and non-COVID-related sepsis. Questions included four-point Likert responses of "never" to "always/nearly always." SETTING: Twenty-six Veterans Affairs hospitals with the highest (n = 13) and lowest (n = 13) risk-adjusted 90-day sepsis survival. SUBJECTS: Inpatient and outpatient clinician leaders. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: For each domain, we calculated the proportion of "always/nearly always" responses and mean Likert scores. We assessed for differences by hospital survival, COVID versus non-COVID sepsis, and sepsis case volume. Across eight domains of care, the proportion of "always/nearly always" responses ranged from 80.7% (social support) and 69.8% (medication management) to 22.5% (physical recovery and adaptation) and 0.0% (emotional support). Higher-survival hospitals more often performed screening for new symptoms/limitations (49.2% vs 35.1% "always/nearly always," p = 0.02) compared with lower-survival hospitals. There was no difference in "always/nearly always" responses for COVID-related versus non-COVID-related sepsis, but small differences in mean Likert score in four domains: care coordination (3.34 vs 3.48, p = 0.01), medication management (3.59 vs 3.65, p = 0.04), screening for new symptoms/limitations (3.13 vs 3.20, p = 0.02), and anticipatory guidance and education (2.97 vs 2.84, p < 0.001). Lower case volume hospitals more often performed care coordination (72.7% vs 43.8% "always/nearly always," p = 0.02), screening for new symptoms/limitations (60.6% vs 35.8%, p < 0.001), and social support (100% vs 74.2%, p = 0.01).
CONCLUSIONS: Our findings show variable adoption of practices for sepsis recovery. Future work is needed to understand why some practice domains are employed more frequently than others, and how to facilitate practice implementation, particularly within rarely adopted domains such as emotional support.
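
Both summary measures used above (proportion of top-box responses and mean Likert score) are straightforward to compute from four-point responses. A sketch with hypothetical responses; the two middle anchor labels are assumed, since the abstract names only the endpoints "never" and "always/nearly always":

```python
# Four-point Likert coding; middle anchors are assumed labels for illustration.
LIKERT = {"never": 1, "rarely": 2, "sometimes": 3, "always/nearly always": 4}

def summarize(responses):
    """Top-box proportion and mean Likert score for one practice domain."""
    scores = [LIKERT[r] for r in responses]
    top = sum(r == "always/nearly always" for r in responses) / len(responses)
    return top, sum(scores) / len(scores)

resp = ["always/nearly always"] * 8 + ["sometimes"] * 2  # hypothetical domain responses
top, mean_score = summarize(resp)
print(top, mean_score)  # 0.8 3.8
```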

16.
Implement Sci Commun ; 4(1): 74, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37386501

ABSTRACT

BACKGROUND: Available resources within an organization can determine the implementation success of an intervention. However, few studies have investigated how the required resources change over the phases of implementation. Using stakeholder interviews, we examined the changes in and interactions between available resources and implementation climate in the implementation and sustainment phases of a national implementation effort for a population health tool. METHODS: We conducted a secondary analysis of the interviews with 20 anticoagulation professionals at 17 clinical sites in the Veterans Health Administration health system about their experiences with a population health dashboard for anticoagulant management. Interview transcripts were coded using constructs from the Consolidated Framework for Implementation Research (CFIR) and according to the phase of implementation (pre-implementation, implementation, and sustainment) as defined by the VA Quality Enhancement Research Initiative (QUERI) Roadmap. We analyzed the factors that may determine successful implementation by examining the co-occurrence patterns between available resources and implementation climate across different implementation phases. To illustrate the variations in these determinants across phases, we aggregated and scored coded statements using a previously published CFIR scoring system (- 2 to + 2). Key relationships between available resources and implementation climate were identified and summarized using thematic analysis. RESULTS: The resources necessary to support the successful implementation of an intervention are not static; both the quantity and types of resources shift based on the phases of the intervention. Furthermore, increased resource availability does not guarantee the sustainment of intervention success. Users need different types of support beyond the technical aspects of an intervention, and this support varies over time. 
Specifically, available resources in the form of technological support and social/emotional support help users establish trust in a new technology-based intervention during the implementation phase. Resources that foster and maintain collaboration between users and other stakeholders help them stay motivated during the sustainment phase. CONCLUSIONS: Our findings highlight the dynamic nature of available resources and their impacts on the implementation climate across different phases of implementation. A better understanding of the dynamics of available resources over time from the users' perspectives will allow the adaptation of resources to better meet the needs of the intervention stakeholders.
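The scoring step described in the methods, in which coded interview statements are aggregated into a construct score per implementation phase on the published -2 to +2 CFIR scale, can be sketched as below. This is an assumed illustration (mean of statement scores per phase-construct pair); the study's actual codebook and aggregation rules are not given in the abstract, and the phase, construct, and score values here are invented.

```python
# Hedged sketch: aggregate coded statements into per-phase CFIR construct
# scores on the -2..+2 scale referenced in the abstract. Aggregation by
# mean is an assumption for illustration.

from collections import defaultdict

def score_by_phase(coded_statements):
    """coded_statements: iterable of (phase, construct, score) tuples,
    score in -2..+2. Returns {(phase, construct): mean score}."""
    groups = defaultdict(list)
    for phase, construct, score in coded_statements:
        groups[(phase, construct)].append(score)
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

# Invented example statements
statements = [
    ("implementation", "available_resources", 1),
    ("implementation", "available_resources", 2),
    ("sustainment", "available_resources", -1),
    ("sustainment", "implementation_climate", 0),
]
print(score_by_phase(statements))
```

Keeping the phase as part of the key is the point of the study's design: the same construct (e.g., available resources) can score positively during implementation and negatively during sustainment, which a single pooled score would hide.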

17.
Ann Am Thorac Soc ; 20(9): 1309-1315, 2023 09.
Article in English | MEDLINE | ID: mdl-37163757

ABSTRACT

Rationale: Despite the importance of sepsis surveillance, no optimal approach for identifying sepsis hospitalizations exists. The Centers for Disease Control and Prevention Adult Sepsis Event Definition (CDC-ASE) is an electronic medical record-based algorithm that yields more stable estimates over time than diagnostic coding-based approaches but may still result in misclassification. Objectives: We sought to assess three approaches to identifying sepsis hospitalizations, including a modified CDC-ASE. Methods: This cross-sectional study included patients in the Veterans Affairs Ann Arbor Healthcare System admitted via the emergency department (February 2021 to February 2022) with at least one episode of acute organ dysfunction within 48 hours of emergency department presentation. Patients were assessed for community-onset sepsis using three methods: 1) explicit diagnosis codes, 2) the CDC-ASE, and 3) a modified CDC-ASE. The modified CDC-ASE required at least two systemic inflammatory response syndrome criteria instead of blood culture collection and had a more sensitive definition of respiratory dysfunction. Each method was compared with a reference standard of physician adjudication via medical record review. Patients were considered to have sepsis if they had at least one episode of acute organ dysfunction graded as "definitely" or "probably" infection related on physician review. Results: Of 821 eligible hospitalizations, 449 were selected for physician review. Of these, 98 (21.8%) were classified as sepsis by medical record review, 103 (22.9%) by the CDC-ASE, 132 (29.4%) by the modified CDC-ASE, and 37 (8.2%) by diagnostic codes. Accuracy was similar across the three methods of interest (80.6% for the CDC-ASE, 79.6% for the modified CDC-ASE, and 84.2% for diagnostic codes), but sensitivity and specificity varied. The CDC-ASE algorithm had sensitivity of 58.2% (95% confidence interval [CI], 47.2-68.1%) and specificity of 86.9% (95% CI, 82.9-90.2%).
The modified CDC-ASE algorithm had greater sensitivity (69.4% [95% CI, 59.3-78.3%]) but lower specificity (81.8% [95% CI, 77.3-85.7%]). Diagnostic codes had lower sensitivity (32.7% [95% CI, 23.5-42.9%]) but greater specificity (98.6% [95% CI, 96.7-99.55%]). Conclusions: There are several approaches to identifying sepsis hospitalizations for surveillance that have acceptable accuracy. These approaches yield varying sensitivity and specificity, so investigators should carefully consider the test characteristics of each method before determining an appropriate method for their intended use.


Subject(s)
Electronic Health Records , Sepsis , Adult , Humans , Multiple Organ Failure/diagnosis , Cross-Sectional Studies , Sepsis/diagnosis , Sepsis/epidemiology , Hospitalization
18.
JAMA Netw Open ; 6(5): e2313879, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37195662

ABSTRACT

Importance: Incident stroke is associated with accelerated cognitive decline. Whether poststroke vascular risk factor levels are associated with faster cognitive decline is uncertain. Objective: To evaluate associations of poststroke systolic blood pressure (SBP), glucose, and low-density lipoprotein (LDL) cholesterol levels with cognitive decline. Design, Setting, and Participants: Individual participant data meta-analysis of 4 US cohort studies (conducted 1971-2019). Linear mixed-effects models estimated changes in cognition after incident stroke. Median (IQR) follow-up was 4.7 (2.6-7.9) years. Analysis began August 2021 and was completed March 2023. Exposures: Time-dependent cumulative mean poststroke SBP, glucose, and LDL cholesterol levels. Main Outcomes and Measures: The primary outcome was change in global cognition. Secondary outcomes were change in executive function and memory. Outcomes were standardized as t scores (mean [SD], 50 [10]); a 1-point difference represents a 0.1-SD difference in cognition. Results: A total of 1120 eligible dementia-free individuals with incident stroke were identified; 982 (87.7%) had available covariate data and 138 (12.3%) were excluded for missing covariate data. Of the 982, 480 (48.9%) were female individuals, and 289 (29.4%) were Black individuals. The median age at incident stroke was 74.6 (IQR, 69.1-79.8; range, 44.1-96.4) years. Cumulative mean poststroke SBP and LDL cholesterol levels were not associated with any cognitive outcome. However, after accounting for cumulative mean poststroke SBP and LDL cholesterol levels, higher cumulative mean poststroke glucose level was associated with faster decline in global cognition (-0.04 points/y faster per each 10-mg/dL increase [95% CI, -0.08 to -0.001 points/y]; P = .046) but not executive function or memory. 
After restricting to 798 participants with apolipoprotein E4 (APOE4) data and controlling for APOE4 and APOE4 × time, higher cumulative mean poststroke glucose level was associated with a faster decline in global cognition in models without and with adjustment for cumulative mean poststroke SBP and LDL cholesterol levels (-0.05 points/y faster per 10-mg/dL increase [95% CI, -0.09 to -0.01 points/y]; P = .01; -0.07 points/y faster per 10-mg/dL increase [95% CI, -0.11 to -0.03 points/y]; P = .002) but not executive function or memory declines. Conclusions and Relevance: In this cohort study, higher poststroke glucose levels were associated with faster global cognitive decline. We found no evidence that poststroke LDL cholesterol and SBP levels were associated with cognitive decline.
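The outcome standardization described above (t scores with mean 50 and SD 10, so a 1-point difference equals a 0.1-SD difference) can be sketched as below. The raw scores are invented; the study standardized against its pooled cohort, whose reference mean and SD are not given in the abstract.

```python
# Minimal sketch of t-score standardization (mean 50, SD 10), as
# described for the cognitive outcomes. Raw values are illustrative.

def to_t_scores(raw):
    """Rescale raw scores so the sample has mean 50 and SD 10."""
    n = len(raw)
    mean = sum(raw) / n
    sd = (sum((x - mean) ** 2 for x in raw) / (n - 1)) ** 0.5
    return [50 + 10 * (x - mean) / sd for x in raw]

t = to_t_scores([22, 25, 27, 30, 31])
print([round(v, 1) for v in t])
```

On this scale, the reported -0.04 points/y per 10-mg/dL glucose increase corresponds to a 0.004-SD/y faster decline per 10 mg/dL, which is why the authors frame it as modest at the individual level even though it is statistically detectable in the pooled cohorts.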


Subject(s)
Cognitive Dysfunction , Stroke , Humans , Female , Male , Cohort Studies , Cholesterol, LDL , Apolipoprotein E4 , Cognitive Dysfunction/epidemiology , Cognitive Dysfunction/etiology , Stroke/complications , Stroke/epidemiology , Stroke/psychology , Risk Factors , Glucose , Survivors