1.
Qual Health Res ; 34(4): 287-297, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37939257

ABSTRACT

Reducing the prevalence of acute kidney injury (AKI) is an important patient safety objective set forth by the National Quality Forum. Despite international guidelines to prevent AKI, uptake of these interventions by cardiac teams remains inconsistent across practice settings. The IMPROVE-AKI study was designed to test the effectiveness and implementation of AKI preventive strategies delivered through team-based coaching activities. Qualitative methods were used to identify factors that shaped sites' implementation of AKI prevention strategies. Semi-structured interviews were conducted with staff in a range of roles within the cardiac catheterization laboratories, including nurses, laboratory managers, and interventional cardiologists (N = 50), at multiple time points over the course of the study. Interview transcripts were qualitatively coded, and aggregated code reports were reviewed to construct main themes through memoing. In this paper, we report insights from these interviews regarding the workflow, organizational culture, and leadership factors that shaped implementation of AKI prevention strategies.


Subject(s)
Acute Kidney Injury , Humans , Acute Kidney Injury/prevention & control , Acute Kidney Injury/epidemiology , Qualitative Research , Leadership , Health Facilities , Patient Safety
2.
J Vasc Surg ; 78(5): 1212-1220.e5, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37442215

ABSTRACT

OBJECTIVE: Although the differences in short-term outcomes between male and female patients in abdominal aortic aneurysm (AAA) repair have been well studied, it remains unclear whether these sex disparities extend to other long-term adverse outcomes after AAA repair, such as reintervention and late rupture. METHODS: We performed a retrospective cohort study of 13,007 patients who underwent either endovascular (EVAR) or open AAA repair (OAR) between 2003 and 2015 using data from the Vascular Quality Initiative registries. Eligible patients were linked to fee-for-service Medicare claims to identify late outcomes of rupture and aneurysm-specific reintervention. RESULTS: The mean age of our cohort was 76 ± 6.7 years, 22% were female, 94% were White, and 77% underwent EVAR. The 10-year rupture incidence was slightly higher for women at 4.8 per 1000 person-years, vs 3.9 for men, but this difference was not statistically significant after risk adjustment (hazard ratio [HR] = 1.13, 95% confidence interval [CI]: 0.74-1.73). Likewise, we found no sex difference in reintervention rates (5.1 per 1000 person-years in women vs 4.8 in men), even after risk adjustment (HR = 0.95, 95% CI: 0.83-1.09). Regression models suggest effect modification by repair type for reintervention, where women who underwent index EVAR had a higher risk of reintervention than men (HR = 1.08, 95% CI: 0.93-1.26), whereas women who underwent OAR were at a lower risk of reintervention than men (HR = 0.79, 95% CI: 0.58-1.08); however, neither effect reached statistical significance within its subgroup. In addition, we found that the risk of reintervention for women vs men varied by clinical presentation, where women were less likely to undergo reintervention after an elective or symptomatic AAA repair but were more likely to undergo reintervention after a repair for AAA rupture (HR = 1.70, 95% CI: 1.05-2.75). CONCLUSIONS: Male and female patients who underwent AAA repair had similar rates of reintervention and late aneurysm rupture in the 10 years after their procedure. However, our findings suggest that repair type and clinical presentation may affect the role of sex in clinical outcomes and warrant further exploration in these subgroups.
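A minimal sketch of the kind of risk-adjusted survival model described above, assuming a lifelines-style workflow; the file name and columns (time_to_rupture, rupture, female, age, evar) are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical linked VQI-Medicare extract; columns are illustrative.
df = pd.read_csv("aaa_cohort.csv")

cph = CoxPHFitter()
cph.fit(
    df[["time_to_rupture", "rupture", "female", "age", "evar"]],
    duration_col="time_to_rupture",
    event_col="rupture",
)
# Adjusted hazard ratio for female sex with 95% CI, analogous to the
# reported HR = 1.13 (95% CI: 0.74-1.73) for late rupture.
print(cph.summary.loc["female",
      ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```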

3.
Article in English | MEDLINE | ID: mdl-37476591

ABSTRACT

Background: Super-utilizers consume the greatest share of resource intensive healthcare (RIHC), and reducing their utilization remains a crucial challenge to healthcare systems in the United States (U.S.). The objective of this study was to predict RIHC among U.S. counties, using routinely collected data from the U.S. government, including information on consumer spending, offering an alternative method for identifying super-utilization among population units rather than individuals. Methods: Cross-sectional data from 5 governmental sources in 2017 were used in a machine learning pipeline, where target-prediction features were selected and used in 4 distinct algorithms. Outcome metrics of RIHC utilization came from the American Hospital Association and included yearly: (1) emergency room visits, (2) inpatient days, and (3) hospital expenditures. Target-prediction features included: 149 demographic characteristics from the U.S. Census Bureau, 151 adult and child health characteristics from the Centers for Disease Control and Prevention, 151 community characteristics from the American Community Survey, and 571 consumer expenditures from the Bureau of Labor Statistics. SHAP analysis identified important target-prediction features for the 3 RIHC outcome metrics. Results: 2475 counties with emergency rooms and 2491 counties with hospitals were included. The median yearly emergency room visits per capita was 0.450 [IQR: 0.318, 0.618], the median inpatient days per capita was 0.368 [IQR: 0.176, 0.826], and the median hospital expenditures per capita was $2104 [IQR: $1299.93, $3362.97]. The coefficient of determination (R2), calculated on the test set, ranged from 0.267 to 0.447. Demographic and community characteristics were among the important predictors for all 3 RIHC outcome metrics. Conclusions: Integrating diverse population characteristics from numerous governmental sources, we predicted 3 outcome metrics of RIHC among U.S. counties with good performance, offering a novel and actionable tool for identifying super-utilizer segments in the population. Wider integration of routinely collected data can be used to develop alternative methods for predicting RIHC among population units.
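A minimal sketch of the SHAP step described above: fit one of the algorithms (here, gradient boosting) on county-level features and rank target-prediction features by SHAP importance. Assumes the shap and scikit-learn packages; the file and column names are hypothetical placeholders.

```python
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical merged county-level extract of the 5 governmental sources.
counties = pd.read_csv("county_features.csv")
X = counties.drop(columns=["er_visits_per_capita"])
y = counties["er_visits_per_capita"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))  # cf. 0.267-0.447 above

# SHAP values rank the features driving each RIHC prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```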

4.
Catheter Cardiovasc Interv ; 101(5): 877-887, 2023 04.
Article in English | MEDLINE | ID: mdl-36924009

ABSTRACT

BACKGROUND: Endovascular peripheral vascular intervention (PVI) has become the primary revascularization technique used for peripheral artery disease (PAD). Yet, there is limited understanding of the long-term outcomes of PVI among women versus men. In this study, our objective was to investigate sex differences in the long-term outcomes of patients undergoing PVI. METHODS: We performed a cohort study of patients undergoing PVI for PAD from January 1, 2010 to September 30, 2015 using data from the Vascular Quality Initiative (VQI) registry. Patients were linked to fee-for-service Medicare claims to identify late outcomes including major amputation, reintervention, major adverse limb event (major amputation or reintervention [MALE]), and mortality. Sex differences in outcomes were evaluated using cumulative incidence curves, Gray's test, and mixed effects Cox proportional hazards regression accounting for patient and lesion characteristics using inverse probability weighted estimates. RESULTS: In this cohort of 15,437 patients, 44% (n = 6731) were women. Women were less likely than men to present with claudication (45% vs. 49%, p < 0.001, absolute standardized difference, d = 0.08) or to be able to ambulate independently (ambulatory: 70% vs. 76%, p < 0.001, d = 0.14). There were no major sex differences in lesion characteristics, except for an increased frequency of tibial artery treatment in men (23% vs. 18% in women, p < 0.001, d = 0.12). Among patients with claudication, women had a higher risk-adjusted rate of major amputation (hazard ratio [HR] = 1.72, 95% confidence interval [CI]: 1.18-2.49) but a lower risk of mortality (HR = 0.86, 95% CI: 0.75-0.99). There were no sex differences in reintervention or MALE for patients with claudication. However, among patients with chronic limb-threatening ischemia, women had a lower risk-adjusted hazard of major amputation (HR = 0.79, 95% CI: 0.67-0.93), MALE (HR = 0.86, 95% CI: 0.78-0.96), and mortality (HR = 0.86, 95% CI: 0.79-0.94). CONCLUSION: There is significant heterogeneity in PVI outcomes among men and women, especially after stratifying by symptom severity. A lower overall mortality in women with claudication was accompanied by a higher risk of major amputation. Men with chronic limb-threatening ischemia had a higher risk of major amputation, MALE, and mortality. Developing sex-specific approaches to PVI that prioritize limb outcomes in women can improve the quality of vascular care for men and women.
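A minimal sketch of an inverse probability weighted Cox model like the one described above, assuming lifelines and scikit-learn; the file and column names (time, major_amputation, female, plus confounders) are hypothetical placeholders.

```python
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

# Hypothetical linked VQI-Medicare extract.
df = pd.read_csv("pvi_cohort.csv")
confounders = ["age", "claudication", "tibial_treatment"]

# Stabilized inverse probability weights for female sex.
ps = LogisticRegression(max_iter=1000).fit(df[confounders], df["female"])
p = ps.predict_proba(df[confounders])[:, 1]
p_f = df["female"].mean()
df["ipw"] = df["female"] * p_f / p + (1 - df["female"]) * (1 - p_f) / (1 - p)

cph = CoxPHFitter()
cph.fit(df[["time", "major_amputation", "female", "ipw"]],
        duration_col="time", event_col="major_amputation",
        weights_col="ipw", robust=True)
cph.print_summary()  # HR for female sex, analogous to those reported above
```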


Subject(s)
Endovascular Procedures , Peripheral Arterial Disease , Male , Humans , Female , Aged , United States/epidemiology , Chronic Limb-Threatening Ischemia , Cohort Studies , Risk Factors , Endovascular Procedures/adverse effects , Treatment Outcome , Limb Salvage , Medicare , Peripheral Arterial Disease/diagnostic imaging , Peripheral Arterial Disease/therapy , Intermittent Claudication/diagnostic imaging , Intermittent Claudication/therapy , Ischemia/diagnostic imaging , Ischemia/therapy , Retrospective Studies
5.
Clin J Am Soc Nephrol ; 18(3): 315-326, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36787125

ABSTRACT

BACKGROUND: Up to 14% of patients in the United States undergoing cardiac catheterization each year experience AKI. Consistent use of risk minimization preventive strategies may improve outcomes. We hypothesized that team-based coaching in a Virtual Learning Collaborative (Collaborative) would reduce postprocedural AKI compared with Technical Assistance (Assistance), both with and without Automated Surveillance Reporting (Surveillance). METHODS: The IMPROVE AKI trial was a 2×2 factorial cluster-randomized trial across 20 Veterans Affairs medical centers (VAMCs). Participating VAMCs received Assistance, Assistance with Surveillance, Collaborative, or Collaborative with Surveillance for 18 months to implement AKI prevention strategies. The Assistance and Collaborative approaches promoted hydration and limited both NPO duration and contrast dye dosing. We fit logistic regression models for AKI with site-level random effects accounting for the clustering of patients within medical centers, with a prespecified interest in exploring differences across the four intervention arms. RESULTS: Among the 4517 patients across participating VAMCs, 510 experienced AKI (235 AKI events among 1314 patients with preexisting CKD). AKI events in each intervention cluster were 110 (13%) in Assistance, 122 (11%) in Assistance with Surveillance, 190 (13%) in Collaborative, and 88 (8%) in Collaborative with Surveillance. Compared with sites receiving Assistance alone, case-mix-adjusted differences in AKI event proportions were -3% (95% confidence interval [CI], -4 to -3) for Assistance with Surveillance, -3% (95% CI, -3 to -2) for Collaborative, and -5% (95% CI, -6 to -5) for Collaborative with Surveillance. The Collaborative with Surveillance intervention cluster had a substantial 46% reduction in AKI compared with Assistance alone (adjusted odds ratio = 0.54; 95% CI, 0.40-0.74). CONCLUSIONS: This implementation trial estimates that the combination of Collaborative with Surveillance reduced the odds of AKI by 46% at VAMCs and suggests a reduction among patients with CKD. CLINICAL TRIAL REGISTRY NAME AND REGISTRATION NUMBER: IMPROVE AKI Cluster-Randomized Trial (IMPROVE-AKI), NCT03556293.
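A minimal sketch of a site-level random-effects logistic model like the one described above, assuming a statsmodels variational-Bayes mixed GLM; the file and column names (aki, arm, site) are hypothetical placeholders.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical patient-level extract; columns are illustrative.
df = pd.read_csv("improve_aki.csv")

# Fixed effects for the four intervention arms (Assistance as reference)
# and a random intercept per VAMC to account for clustering.
model = BinomialBayesMixedGLM.from_formula(
    "aki ~ C(arm, Treatment('assistance'))",
    {"site": "0 + C(site)"},
    df,
)
result = model.fit_vb()  # variational Bayes estimation
print(result.summary())
```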


Subject(s)
Acute Kidney Injury , Mentoring , Renal Insufficiency, Chronic , Humans , United States , Contrast Media/adverse effects , United States Department of Veterans Affairs , Renal Insufficiency, Chronic/chemically induced , Acute Kidney Injury/chemically induced , Acute Kidney Injury/prevention & control
8.
J Clin Anesth ; 85: 111043, 2023 05.
Article in English | MEDLINE | ID: mdl-36566648

ABSTRACT

BACKGROUND: An earlier randomized trial showed the efficacy of a multifaceted intervention approach for reducing surgical site infections: hand hygiene, vascular care, environmental cleaning, and patient decolonization (nasal povidone iodine, chlorhexidine wipes), with feedback on pathogen transmission. A follow-up prospective observational study showed effectiveness when the approach was applied to all operating rooms of an inpatient surgical suite. In practice, however, many organizations' baseline conditions are not equivalent to those of the trial's control groups; instead, they functionally have a single intervention ongoing for infection control (e.g., encouraging better hand hygiene). Organizations also differ in how thoroughly and for how long they monitor each surgical patient for postoperative surgical site infection. Estimating the expected net cost savings from implementing the multifaceted intervention therefore depends on the relative efficacy of the multifaceted approach versus single-intervention approaches and on the incidence of surgical site infection, which itself depends on the monitoring period for infection development. METHODS: The retrospective cohort analysis included 4865 patients from two single-intervention and two multifaceted studies, each of the four studies with matched control groups. We used Poisson regression with robust variance to estimate the relative risk reduction in surgical site infections for the multifaceted approach versus single interventions, and for 30-day follow-up versus ≥60-day follow-up for infection. RESULTS: The multifaceted approach was associated with an estimated 68% reduction in postoperative surgical site infections relative to single interventions (risk ratio 0.32, 97.5% confidence interval 0.15-0.70, P = 0.001). Approximately 2.61-fold more surgical site infections were detected with follow-up of at least 60 days of medical records relative to 30 days of records reviewed (97.5% CI 1.62 to 4.21, P < 0.001). CONCLUSIONS: An evidence-based, multifaceted approach to anesthesia work area infection control can generate substantial reductions in surgical site infections. A follow-up period of at least 60 days is indicated for infection detection.
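A minimal sketch of the modified Poisson (robust-variance) risk-ratio model described above; the file and column names (ssi, multifaceted, long_followup) are hypothetical placeholders, not the studies' actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical pooled extract of the four matched studies.
df = pd.read_csv("ssi_cohort.csv")

# Binary outcome with a Poisson link and robust (sandwich) errors
# yields risk ratios rather than odds ratios.
fit = smf.glm(
    "ssi ~ multifaceted + long_followup",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")

print(np.exp(fit.params))                 # risk ratios
print(np.exp(fit.conf_int(alpha=0.025)))  # 97.5% CIs, as reported above
```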


Subject(s)
Anesthesia , Anti-Infective Agents, Local , Humans , Surgical Wound Infection/epidemiology , Surgical Wound Infection/prevention & control , Retrospective Studies , Follow-Up Studies , Chlorhexidine , Infection Control , Anti-Infective Agents, Local/therapeutic use
9.
AMIA Annu Symp Proc ; 2023: 1209-1217, 2023.
Article in English | MEDLINE | ID: mdl-38222356

ABSTRACT

Several studies have found associations between air pollution and respiratory disease outcomes. However, there is minimal prognostic research exploring whether integrating air quality into clinical prediction models can improve their accuracy and utility. In this study, we built models using both logistic regression and random forests to determine the benefits of including air quality data alongside meteorological and clinical data in predicting COPD exacerbations requiring medical care. Logistic models were not improved by the inclusion of air quality. However, the net benefit curves of random forest models showed greater clinical utility with the addition of air quality data. These models demonstrate a practical and relatively low-cost way to incorporate environmental information into clinical prediction tools to improve the clinical utility of COPD prediction. Findings could be used to provide population-level health warnings as well as individual-patient risk assessments.
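A minimal sketch of the net-benefit (decision-curve) comparison described above, using the standard formula net benefit = TP/n - (FP/n) * pt/(1 - pt) across threshold probabilities pt; inputs are illustrative.

```python
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    """Net benefit of acting on predictions at each threshold probability."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    n = len(y_true)
    out = []
    for pt in thresholds:
        pred = y_prob >= pt
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        out.append(tp / n - fp / n * pt / (1 - pt))
    return np.array(out)

thresholds = np.linspace(0.05, 0.50, 10)
# nb_with_aq = net_benefit(y, p_rf_with_air_quality, thresholds)
# nb_without = net_benefit(y, p_rf_without_air_quality, thresholds)
# A uniformly higher curve for the air-quality model indicates greater
# clinical utility, as the abstract reports for the random forest models.
```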


Subject(s)
Air Pollution , Pulmonary Disease, Chronic Obstructive , Humans , Disease Progression , Pulmonary Disease, Chronic Obstructive/diagnosis , Air Pollution/adverse effects , Risk Assessment , Data Accuracy
10.
BMC Public Health ; 22(1): 2101, 2022 11 17.
Article in English | MEDLINE | ID: mdl-36397061

ABSTRACT

BACKGROUND: Diet is important for chronic disease management, yet little research has examined dietary choices among those with multi-morbidity, the state of having 2 or more chronic conditions. The objective of this study was to identify associations between packaged food and drink purchases and diet-related cardiometabolic multi-morbidity (DRCMM). METHODS: Cross-sectional associations between packaged food and drink purchases and household DRCMM were investigated using a national sample of U.S. households participating in a market research study. DRCMM households were defined as household head(s) self-reporting 2 or more diet-related chronic conditions. Separate multivariable logistic regression models were used to model the associations between household DRCMM status and total servings of, and total calories and nutrients from, packaged food and drinks purchased per month, as well as the nutrient density (protein, carbohydrates, and fat per serving) of packaged food and drinks purchased per month, adjusted for household size. RESULTS: Among eligible households, 3795 (16.8%) had DRCMM. On average, households with DRCMM versus without purchased 14.8 more servings per capita, per month, from packaged foods and drinks (p < 0.001). DRCMM households were 1.01 times more likely to purchase fat and carbohydrates in lieu of protein across all packaged food and drinks (p = 0.002 and p < 0.001, respectively). DRCMM households averaged fewer grams per serving of protein, carbohydrates, and fat per month across all food and drink purchases (all p < 0.001). When carbonated soft drinks and juices were excluded, the same associations for grams of protein and carbohydrates per serving per month were seen (both p < 0.001), but the association for grams of fat per serving per month attenuated. CONCLUSIONS: DRCMM households purchased greater quantities of packaged food and drinks per capita than non-DRCMM households, which contributed to more fat, carbohydrates, and sodium in the home. However, food and drinks in DRCMM homes on average were lower in nutrient density. Future studies are needed to understand the motivations for packaged food and drink choices among households with DRCMM to inform interventions targeting the home food environment.
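A minimal sketch of one of the household-level logistic models described above, assuming statsmodels; the file and column names (drcmm, servings_per_capita, household_size) are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical household-level purchase summary.
hh = pd.read_csv("household_purchases.csv")

# DRCMM status modeled on monthly per-capita packaged-food servings,
# adjusted for household size as in the abstract above.
fit = smf.logit("drcmm ~ servings_per_capita + household_size", data=hh).fit()
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```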


Subject(s)
Cardiovascular Diseases , Multimorbidity , Humans , Cross-Sectional Studies , Nutritive Value , Beverages , Diet , Family Characteristics , Food Packaging , Carbohydrates
11.
Circ Cardiovasc Qual Outcomes ; 15(8): e008635, 2022 08.
Article in English | MEDLINE | ID: mdl-35959674

ABSTRACT

BACKGROUND: The utility of quality dashboards to inform decision-making and improve clinical outcomes is tightly linked to the accuracy of the information they provide and, in turn, the accuracy of the underlying prediction models. Despite recognition of the need to update prediction models to maintain accuracy over time, there is limited guidance on updating strategies. We compare predefined and surveillance-based updating strategies applied to a model supporting quality evaluations among US veterans. METHODS: We evaluated the performance of a US Department of Veterans Affairs-specific model for postcardiac catheterization acute kidney injury using routinely collected observational data over the 6 years following model development (n=90 295 procedures in 2013-2019). Predicted probabilities were generated from the original model, an annually retrained model, and a surveillance-based approach that monitored performance to inform the timing and method of updates. We evaluated how updating the national model impacted regional quality profiles. We compared observed-to-expected outcome ratios, where values above and below 1 indicated more and fewer adverse outcomes than expected, respectively. RESULTS: The original model overpredicted risk at the national level (observed-to-expected outcome ratio, 0.75 [0.74-0.77]). Annual retraining updated the model 5 times; surveillance-based updating retrained once and recalibrated twice. While both strategies improved performance, the surveillance-based approach provided superior calibration (observed-to-expected outcome ratio, 1.01 [0.99-1.03] versus 0.94 [0.92-0.96]). Overprediction by the original model led to optimistic quality assessments, incorrectly indicating that most of the US Department of Veterans Affairs' 18 regions observed fewer acute kidney injury events than predicted. Both updating strategies revealed that 16 regions performed as expected and 2 regions increasingly underperformed, having more acute kidney injury events than predicted. CONCLUSIONS: Miscalibrated clinical prediction models provide inaccurate pictures of performance across clinical units, and degrading calibration further complicates our understanding of quality. Updating strategies tailored to health system needs and capacity should be incorporated into model implementation plans to promote the utility and longevity of quality reporting tools.
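A minimal sketch of the surveillance-style monitoring described above: track the observed-to-expected (O/E) ratio and, when calibration drifts, apply an intercept-only recalibration that keeps the original linear predictor as an offset. Assumes statsmodels; all names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def oe_ratio(y, p):
    """O/E < 1 means the model overpredicts risk (fewer events than expected)."""
    return y.sum() / p.sum()

def recalibrate_intercept(y, p):
    """Refit only the intercept, holding the original predictions as an offset."""
    logit_p = np.log(p / (1 - p))
    fit = sm.GLM(y, np.ones_like(p), family=sm.families.Binomial(),
                 offset=logit_p).fit()
    delta = fit.params[0]
    return 1 / (1 + np.exp(-(logit_p + delta)))  # recalibrated probabilities

# Surveillance loop (illustrative): recalibrate when O/E drifts from 1.
# if abs(1 - oe_ratio(y_recent, p_recent)) > tolerance:
#     p_updated = recalibrate_intercept(y_recent, p_recent)
```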


Subject(s)
Acute Kidney Injury , Benchmarking , Acute Kidney Injury/diagnosis , Acute Kidney Injury/epidemiology , Acute Kidney Injury/therapy , Data Collection , Humans
12.
BMC Health Serv Res ; 22(1): 847, 2022 Jun 30.
Article in English | MEDLINE | ID: mdl-35773679

ABSTRACT

BACKGROUND: Super-utilizers represent approximately 5% of the population in the United States (U.S.) and yet are responsible for over 50% of healthcare expenditures. Using characteristics of hospital service areas (HSAs) to predict utilization of resource intensive healthcare (RIHC) may offer a novel and actionable tool for identifying super-utilizer segments in the population. Consumer expenditures may offer additional value in predicting RIHC beyond typical population characteristics alone. METHODS: Cross-sectional data from 2017 were extracted from 5 unique sources. The outcome was RIHC and included emergency room (ER) visits, inpatient days, and hospital expenditures, all expressed as log per capita. Candidate predictors from 4 broad groups were used: demographics, adult and child health characteristics, community characteristics, and consumer expenditures. Candidate predictors were expressed as per capita or per capita percent and were aggregated from zip codes to HSAs using weighted means. Machine learning approaches (Random Forest, LASSO) selected important features from nearly 1,000 available candidate predictors, which were used to generate 4 distinct models: non-regularized and LASSO regression, random forest, and gradient boosting. Candidate predictors from the best performing models, for each outcome, were used as independent variables in multiple linear regression models. The relative contribution of variables from each candidate predictor group to regression model fit was calculated. RESULTS: The median ER visits per capita was 0.482 [IQR: 0.351-0.646], the median inpatient days per capita was 0.395 [IQR: 0.214-0.806], and the median hospital expenditures per capita was $2,302 [IQR: $1,544.70-$3,469.80]. Using 1,106 variables, the test-set coefficient of determination (R2) from the best performing models ranged from 0.184 to 0.782. The adjusted R2 values from multiple linear regression models ranged from 0.311 to 0.8293. The relative contribution of consumer expenditures to model fit ranged from 23.4% to 33.6%. DISCUSSION: Machine learning models predicted RIHC among HSAs using diverse population data, including novel consumer expenditures, and provide an innovative tool to predict population-based healthcare utilization and expenditures. Geographic variation in utilization and spending was identified.
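A minimal sketch of the selection-then-model pipeline described above: LASSO screens the candidate predictors and gradient boosting is fit on the selected features. Assumes scikit-learn; the file and column names are hypothetical placeholders.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical HSA-level extract with ~1,000 candidate predictors.
hsa = pd.read_csv("hsa_features.csv")
X = hsa.drop(columns=["log_er_visits_per_capita"])
y = hsa["log_er_visits_per_capita"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(LassoCV(cv=5, random_state=0))),  # LASSO screen
    ("model", GradientBoostingRegressor(random_state=0)),
]).fit(X_train, y_train)

print("test R^2:", pipe.score(X_test, y_test))  # cf. 0.184-0.782 above
```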


Subject(s)
Delivery of Health Care , Health Expenditures , Adult , Child , Cross-Sectional Studies , Hospitals , Humans , Machine Learning , Patient Acceptance of Health Care , United States
13.
J Am Heart Assoc ; 11(7): e024198, 2022 04 05.
Article in English | MEDLINE | ID: mdl-35322668

ABSTRACT

Background Social risk factors influence rehospitalization rates yet are challenging to incorporate into prediction models. Integration of social risk factors using natural language processing (NLP) and machine learning could improve risk prediction of 30-day readmission following an acute myocardial infarction. Methods and Results Patients were enrolled into derivation and validation cohorts. The derivation cohort included inpatient discharges from Vanderbilt University Medical Center between January 1, 2007, and December 31, 2016, with a primary diagnosis of acute myocardial infarction, who were discharged alive and not transferred from another facility. The validation cohort included patients from Dartmouth-Hitchcock Health Center between April 2, 2011, and December 31, 2016, meeting the same eligibility criteria. Data from both sites were linked to Centers for Medicare & Medicaid Services administrative data to supplement capture of 30-day hospital readmissions. Clinical notes from each cohort were extracted, and an NLP model was deployed, counting mentions of 7 social risk factors. Five machine learning models were run using clinical and NLP-derived variables. Model discrimination and calibration were assessed, and receiver operating characteristic comparison analyses were performed. The 30-day rehospitalization rates among the derivation (n=6165) and validation (n=4024) cohorts were 15.1% (n=934) and 10.2% (n=412), respectively. The derivation models demonstrated no statistical improvement in model performance with the addition of the selected NLP-derived social risk factors. Conclusions Social risk factors extracted using NLP did not significantly improve 30-day readmission prediction among hospitalized patients with acute myocardial infarction. Alternative methods are needed to capture social risk factors.
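A minimal sketch of the mention-counting step described above: a keyword/regex pass over clinical notes tallying mentions of social risk factors. The lexicon is a toy illustration, not the study's actual NLP model.

```python
import re
from collections import Counter

# Toy lexicon; the study's tool covered 7 social risk factors.
LEXICON = {
    "housing_insecurity": r"\b(homeless|housing insecur\w*|shelter)\b",
    "social_isolation": r"\b(lives alone|socially isolated|no family support)\b",
    "transportation": r"\b(no transportation|unable to travel)\b",
}

def count_social_risk_mentions(note: str) -> Counter:
    counts = Counter()
    for factor, pattern in LEXICON.items():
        counts[factor] = len(re.findall(pattern, note.lower()))
    return counts

print(count_social_risk_mentions(
    "Patient lives alone and reports no transportation to appointments."
))
# Counter({'social_isolation': 1, 'transportation': 1, 'housing_insecurity': 0})
```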


Subject(s)
Myocardial Infarction , Natural Language Processing , Aged , Electronic Health Records , Humans , Information Storage and Retrieval , Medicare , Myocardial Infarction/diagnosis , Myocardial Infarction/therapy , Patient Readmission , Retrospective Studies , United States/epidemiology
15.
AMIA Annu Symp Proc ; 2022: 512-521, 2022.
Article in English | MEDLINE | ID: mdl-37128461

ABSTRACT

A hospital readmission risk prediction tool for patients with diabetes based on electronic health record (EHR) data is needed. The optimal modeling approach, however, is unclear. In 2,836,569 encounters of 36,641 diabetes patients, deep learning (DL) long short-term memory (LSTM) models predicting unplanned, all-cause, 30-day readmission were developed and compared to several traditional models. Models used EHR data defined by a Common Data Model. The LSTM model Area Under the Receiver Operating Characteristic Curve (AUROC) was significantly greater than that of the next best traditional model [LSTM 0.79 vs Random Forest (RF) 0.72, p<0.0001]. Experiments showed that the performance of the LSTM models increased as the number of prior encounters increased, up to 30 encounters. An LSTM model with 16 selected laboratory tests yielded performance equivalent to a model with all 981 laboratory tests. This new DL model may provide the basis for a more useful readmission risk prediction tool for diabetes patients.
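A minimal sketch of an LSTM readmission model in the spirit of the abstract above: padded sequences of per-encounter feature vectors feed a single LSTM layer and a sigmoid output. Assumes TensorFlow/Keras; all shapes and data are illustrative toys, not the study's architecture.

```python
import numpy as np
import tensorflow as tf

# Toy dimensions: up to 30 prior encounters, 64 features per encounter.
n_patients, max_encounters, n_features = 1000, 30, 64
X = np.random.rand(n_patients, max_encounters, n_features).astype("float32")
y = np.random.randint(0, 2, size=n_patients)  # 30-day readmission label

model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0,
                            input_shape=(max_encounters, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2)
```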


Subject(s)
Deep Learning , Diabetes Mellitus , Humans , Patient Readmission , Memory, Short-Term , ROC Curve
16.
J Card Surg ; 36(11): 4213-4223, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34472654

ABSTRACT

OBJECTIVE: Several short-term readmission and mortality prediction models have been developed using clinical risk factors or biomarkers among patients undergoing coronary artery bypass graft (CABG) surgery. The use of biomarkers for long-term prediction of readmission and mortality is less well understood. Given the established association of cardiac biomarkers with short-term adverse outcomes, we hypothesized that 5-year prediction of readmission or mortality may be significantly improved using cardiac biomarkers. MATERIALS AND METHODS: Plasma biomarkers from 1149 patients discharged alive after isolated CABG surgery from eight medical centers were measured in a cohort from the Northern New England Cardiovascular Disease Study Group between 2004 and 2007. We assessed the added predictive value of a biomarker panel combined with a clinical model against the clinical model alone and compared model discrimination using the area under the receiver operating characteristic (AUROC) curve. RESULTS: In our cohort, 461 (40%) patients were readmitted or died within 5 years. Long-term outcomes were predicted by applying the STS ASCERT clinical model with an AUROC of 0.69. The biomarker panel with the clinical model resulted in a significantly improved AUROC of 0.74 (p value <.0001). Across 5 years, the hazard ratio for patients in the second through fifth quintiles of predicted probabilities from the biomarker-augmented STS ASCERT model ranged from 2.2 to 7.9 (p values <.001). CONCLUSIONS: We report that a panel of biomarkers significantly improved prediction of long-term readmission or mortality risk following CABG surgery. Our findings suggest that biomarkers help clinical care teams better assess the long-term risk of readmission or mortality.
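A minimal sketch of one way to compare the two AUROCs reported above: a paired bootstrap confidence interval for the AUROC difference between the clinical and biomarker-augmented models. (The study's own comparison test may differ.) Inputs are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y, p_clinical, p_augmented, n_boot=2000, seed=0):
    """95% bootstrap CI for AUROC(augmented) - AUROC(clinical)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    diffs = []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)          # paired resample of patients
        if len(np.unique(y[i])) < 2:       # need both classes for an AUROC
            continue
        diffs.append(roc_auc_score(y[i], p_augmented[i]) -
                     roc_auc_score(y[i], p_clinical[i]))
    return np.percentile(diffs, [2.5, 97.5])

# e.g., a CI for the 0.74 vs 0.69 AUROC improvement reported above.
```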


Subject(s)
Coronary Artery Bypass , Patient Readmission , Biomarkers , Hospital Mortality , Humans , ROC Curve , Risk Factors
17.
BMC Cardiovasc Disord ; 21(1): 410, 2021 08 27.
Article in English | MEDLINE | ID: mdl-34452596

ABSTRACT

BACKGROUND: Rates of recommending percutaneous coronary intervention (PCI) and coronary artery bypass grafting (CABG) vary across clinicians. Whether clinicians agree on preferred treatment options for patients with multivessel coronary artery disease has not been well studied. METHODS AND RESULTS: We distributed a survey to 104 clinicians from the Northern New England Cardiovascular Study Group through email and at a regional meeting, with 88 (84.6%) responding. The survey described three clinical vignettes of multivessel coronary artery disease patients. For each patient vignette, participants selected appropriate treatment options and indicated whether they would use a patient decision aid. The likelihood of choosing PCI only or PCI/CABG over CABG only was modeled using multinomial regression. Across all vignettes, participants selected CABG only as an appropriate treatment option 24.2% of the time, PCI only 25.4% of the time, and both CABG or PCI as appropriate treatment options 50.4% of the time. Surgeons were less likely than cardiologists to choose PCI over CABG (RR 0.14, 95% CI 0.03, 0.59) or both treatments over CABG only (RR 0.10, 95% CI 0.03, 0.34). Overall, 65% of participants responded that they would use a patient decision aid with each vignette. CONCLUSIONS: There is a lack of consensus on the appropriate treatment options across cardiologists and surgeons for patients with multivessel coronary artery disease. Treatment choice is influenced by both patient characteristics and clinician specialty.
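A minimal sketch of the multinomial model described above, with CABG only as the baseline outcome; assumes statsmodels, and the file and column names (choice, specialty) are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey responses; CABG only coded 0 as the baseline.
df = pd.read_csv("vignette_responses.csv")
df["choice_code"] = df["choice"].map(
    {"cabg_only": 0, "pci_only": 1, "both": 2}
)

fit = smf.mnlogit(
    "choice_code ~ C(specialty, Treatment('cardiologist'))", df
).fit()
# Relative risk ratios vs. the CABG-only baseline, one column per outcome
# (cf. RR 0.14 and RR 0.10 for surgeons in the abstract above).
print(np.exp(fit.params))
```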


Subject(s)
Cardiologists/trends , Coronary Artery Bypass/trends , Coronary Artery Disease/therapy , Decision Support Techniques , Nurses/trends , Percutaneous Coronary Intervention/trends , Practice Patterns, Physicians'/trends , Surgeons/trends , Adolescent , Adult , Aged , Aged, 80 and over , Choice Behavior , Clinical Decision-Making , Consensus , Coronary Artery Disease/diagnosis , Cross-Sectional Studies , Female , Health Care Surveys , Health Status , Humans , Male , Middle Aged , New England , Patient Selection , Young Adult
19.
J Biomed Inform ; 120: 103851, 2021 08.
Article in English | MEDLINE | ID: mdl-34174396

ABSTRACT

Social determinants of health (SDoH) are increasingly important factors for population health, healthcare outcomes, and care delivery. However, many of these factors are not reliably captured within structured electronic health record (EHR) data. In this work, we evaluated and adapted a previously published NLP tool to include additional social risk factors for deployment at Vanderbilt University Medical Center in an acute myocardial infarction cohort. We developed a transformation of the tool's SDoH outputs into the OMOP common data model (CDM) for re-use across many potential use cases. Across 8 SDoH classes, the tool achieved a precision of 0.83, recall of 0.74, and F-measure of 0.78.
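A minimal sketch of the evaluation step described above: macro-averaged precision, recall, and F-measure over SDoH classes against a manually annotated reference. Assumes scikit-learn; labels are illustrative toys.

```python
from sklearn.metrics import precision_recall_fscore_support

# Toy reference annotations vs. tool output; the study used 8 SDoH classes.
y_true = ["housing", "none", "isolation", "housing", "transport", "none"]
y_pred = ["housing", "none", "isolation", "none", "transport", "isolation"]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```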


Subject(s)
Electronic Health Records , Social Determinants of Health , Academic Medical Centers , Cohort Studies , Delivery of Health Care , Humans
20.
JAMA Netw Open ; 4(5): e215821, 2021 05 03.
Article in English | MEDLINE | ID: mdl-34042996

ABSTRACT

Importance: Increasingly, individuals with atrial fibrillation (AF) use wearable devices (hereafter wearables) that measure pulse rate and detect arrhythmia. The associations of wearables with health outcomes and health care use are unknown. Objective: To characterize patients with AF who use wearables and compare pulse rate and health care use between individuals who use wearables and those who do not. Design, Setting, and Participants: This retrospective, propensity-matched cohort study included 90 days of follow-up of patients in a tertiary care, academic health system. Included patients were adults with at least 1 AF-specific International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) code from 2017 through 2019. Electronic medical records were reviewed to identify 125 individuals who used wearables and had adequate pulse-rate follow-up, who were then matched 4 to 1 on propensity scores with 500 individuals who did not use wearables. Data were analyzed from June 2020 through February 2021. Exposure: Use of commercially available wearables with pulse rate or rhythm evaluation capabilities. Main Outcomes and Measures: Mean pulse rates from measures taken in the clinic or hospital and a composite health care use score were recorded. The composite outcome included evaluation and management, ablation, cardioversion, telephone encounters, and number of rate or rhythm control medication orders. Results: Among 16 320 patients with AF included in the analysis, 348 patients used wearables and 15 972 individuals did not. Prior to matching, patients using wearables were younger (mean [SD] age, 64.0 [13.0] years vs 70.0 [13.8] years; P < .001) and healthier (mean [SD] CHA2DS2-VASc [congestive heart failure, hypertension, age ≥ 65 years or 65-74 years, diabetes, prior stroke/transient ischemic attack, vascular disease, sex] score, 3.6 [2.0] vs 4.4 [2.0]; P < .001) compared with individuals not using wearables, with similar gender distribution (148 [42.5%] women vs 6722 women [42.1%]; P = .91). After matching, mean pulse rate was similar between 125 patients using wearables and 500 patients not using wearables (75.01 [95% CI, 72.74-77.27] vs 75.79 [95% CI, 74.68-76.90] beats per minute [bpm]; P = .54), whereas mean composite use score was higher among individuals using wearables (3.55 [95% CI, 3.31-3.80] vs 3.27 [95% CI, 3.14-3.40]; P = .04). Among measures in the composite outcome, there was a significant difference in use of ablation, occurring in 22 individuals who used wearables (17.6%) vs 37 individuals who did not use wearables (7.4%) (P = .001). In the regression analyses, mean composite use score was 0.28 points (95% CI, 0.01 to 0.56 points) higher among individuals using wearables compared with those not using wearables, and mean pulse was similar, with a -0.79 bpm (95% CI, -3.28 to 1.71 bpm) difference between the groups. Conclusions and Relevance: This study found that follow-up health care use among individuals with AF was increased among those who used wearables compared with those with similar pulse rates who did not use wearables. Given the increasing use of wearables by patients with AF, prospective, randomized, long-term evaluation of the associations of wearable technology with health outcomes and health care use is needed.
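A minimal sketch of 4-to-1 propensity-score matching as described above: a logistic model estimates each patient's probability of wearable use, and each user is matched to the 4 nearest non-users on that score. Assumes scikit-learn; the file and column names are hypothetical placeholders.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical AF cohort extract.
df = pd.read_csv("af_cohort.csv")
covariates = ["age", "chads_vasc", "female"]

# Propensity of wearable use given measured characteristics.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["wearable"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

users = df[df["wearable"] == 1]
nonusers = df[df["wearable"] == 0]

# 4 nearest non-user neighbors per user on the propensity score
# (with replacement, for simplicity of the sketch).
nn = NearestNeighbors(n_neighbors=4).fit(nonusers[["ps"]])
_, idx = nn.kneighbors(users[["ps"]])
matched_controls = nonusers.iloc[idx.ravel()]
print(len(users), "users matched to", len(matched_controls), "controls")
```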


Subject(s)
Atrial Fibrillation/physiopathology , Facilities and Services Utilization , Health Services/statistics & numerical data , Heart Rate , Monitoring, Physiologic , Wearable Electronic Devices , Adult , Aged , Female , Follow-Up Studies , Humans , Male , Middle Aged , Propensity Score , Retrospective Studies , Self-Management , Tertiary Healthcare , Utah