Results 1 - 20 of 41
1.
Kidney360 ; 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38664867

ABSTRACT

BACKGROUND: CKD is often underdiagnosed during early stages when GFR is preserved due to underutilization of testing for quantitative urine albumin-to-creatinine ratio (UACR) or protein-to-creatinine ratio (UPCR). Semi-quantitative dipstick proteinuria (DSP) on urinalysis is widely obtained but not accurate for identifying clinically significant proteinuria. METHODS: We identified all patients with a urinalysis and UACR or UPCR obtained on the same day at a tertiary referral center. The accuracy of DSP alone or in combination with specific gravity against a gold-standard of UACR ≥30 mg/g or UPCR ≥0.15 g/g, characterizing clinically significant proteinuria, was evaluated using logistic regression. Models were internally validated using 10-fold cross validation. The specific gravity for each DSP above which significant proteinuria is unlikely was determined. RESULTS: Of 11,229 patients, clinically significant proteinuria was present in 4,073 (36%). The area under the receiver operating characteristic curve (95% confidence interval) was 0.77 (0.76, 0.77) using DSP alone and 0.82 (0.82, 0.83) in combination with specific gravity (P<0.001), yielding a specificity of 0.93 (standard error, SE=0.02) and positive likelihood ratio of 9.52 (SE=0.85). The optimal specific gravity cut-offs to identify significant proteinuria were ≤1.0012, 1.0238, and 1.0442, for DSP of trace, 30, and 100 mg/dL. At any specific gravity, a DSP ≥300 mg/dL was extremely likely to represent significant proteinuria. CONCLUSION: Adding specific gravity to DSP improves recognition of clinically significant proteinuria and can be easily used to identify patients with early-stage CKD who may not have otherwise received a quantified proteinuria measurement for both clinical and research purposes.
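A minimal sketch of the decision rule this abstract suggests, using its reported specific-gravity cut-offs; the function shape, category encoding, and dictionary name are illustrative assumptions, not the published model:

```python
# Specific-gravity cut-offs per dipstick proteinuria (DSP) category, taken from
# the abstract; readings above the cut-off (more concentrated urine) make
# significant proteinuria unlikely for that DSP level.
SG_CUTOFF = {"trace": 1.0012, "30": 1.0238, "100": 1.0442}

def likely_significant_proteinuria(dsp: str, specific_gravity: float) -> bool:
    """Flag likely clinically significant proteinuria (UACR >=30 mg/g or
    UPCR >=0.15 g/g) from dipstick category plus specific gravity.

    A dipstick of >=300 mg/dL is treated as significant at any specific
    gravity; lower positive readings are significant only when the urine
    is at least as dilute as the category's cut-off.
    """
    if dsp == "negative":
        return False
    if dsp == "300":  # >=300 mg/dL: significant at any specific gravity
        return True
    return specific_gravity <= SG_CUTOFF[dsp]
```

This is a sketch of the screening logic only; the paper's logistic-regression model outputs a probability rather than a binary flag.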

2.
JACC Heart Fail ; 12(3): 508-520, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38099890

ABSTRACT

BACKGROUND: Individuals with acute decompensated heart failure (ADHF) have a varying response to diuretic therapy. Strategies for the early identification of low diuretic efficiency to inform decongestion therapies are lacking. OBJECTIVES: The authors sought to develop and externally validate a machine learning-based phenomapping approach and integer-based diuresis score to identify patients with low diuretic efficiency. METHODS: Participants with ADHF from ROSE-AHF, CARRESS-HF, and ATHENA-HF were pooled in the derivation cohort (n = 794). Multivariable finite-mixture model-based phenomapping was performed to identify phenogroups based on diuretic efficiency (urine output over the first 72 hours per total intravenous furosemide equivalent loop diuretic dose). Phenogroups were externally validated in other pooled ADHF trials (DOSE/ESCAPE). An integer-based diuresis score (BAN-ADHF score: blood urea nitrogen, creatinine, natriuretic peptide levels, atrial fibrillation, diastolic blood pressure, hypertension and home diuretic, and heart failure hospitalization) was developed and validated based on predictors of the diuretic efficiency phenogroups to estimate the probability of low diuretic efficiency using the pooled ADHF trials described earlier. The associations of the BAN-ADHF score with markers and symptoms of congestion, length of stay, in-hospital mortality, and global well-being were assessed using adjusted regression models. RESULTS: Clustering identified 3 phenogroups based on diuretic efficiency: phenogroup 1 (n = 370; 47%) had lower diuretic efficiency (median: 13.1 mL/mg; Q1-Q3: 7.7-19.4 mL/mg) than phenogroups 2 (n = 290; 37%) and 3 (n = 134; 17%) (median: 17.8 mL/mg; Q1-Q3: 10.8-26.1 mL/mg and median: 35.3 mL/mg; Q1-Q3: 17.5-49.0 mL/mg, respectively) (P < 0.001). The median urine output difference in response to 80 mg intravenous twice-daily furosemide between the lowest and highest diuretic efficiency group (phenogroup 1 vs 3) was 3,520 mL/d. 
The BAN-ADHF score demonstrated good model performance for predicting the lowest diuretic efficiency phenogroup membership (C-index: 0.92 in DOSE/ESCAPE validation cohort) that was superior to measures of kidney function (creatinine or blood urea nitrogen), natriuretic peptide levels, or home diuretic dose (DeLong P < 0.001 for all). Net urine output in response to 80 mg intravenous twice-daily furosemide among patients with a low vs high (5 vs 20) BAN-ADHF score was 2,650 vs 660 mL per 24 hours, respectively. Participants with higher BAN-ADHF scores had significantly lower global well-being, higher natriuretic peptide levels on discharge, a longer in-hospital stay, and a higher risk of in-hospital mortality in both derivation and validation cohorts. CONCLUSIONS: The authors developed and validated a phenomapping strategy and diuresis score for individuals with ADHF and differential response to diuretic therapy, which was associated with length of stay and mortality.
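The phenogrouping variable defined above reduces to a simple ratio; a sketch of that computation (the example values are hypothetical, chosen to land in the low-efficiency phenogroup's reported range):

```python
def diuretic_efficiency(urine_output_ml_72h: float, furosemide_equiv_mg: float) -> float:
    """Diuretic efficiency as defined in the abstract: urine output over the
    first 72 hours per total intravenous furosemide-equivalent loop diuretic dose."""
    return urine_output_ml_72h / furosemide_equiv_mg

# Example: 6 L of urine against 480 mg furosemide-equivalent -> 12.5 mL/mg,
# near the low-efficiency phenogroup's reported median of 13.1 mL/mg.
```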


Subject(s)
Diuretics , Heart Failure , Humans , Diuretics/therapeutic use , Furosemide/therapeutic use , Creatinine , Natriuretic Peptides , Acute Disease
3.
J Diabetes Sci Technol ; : 19322968231212219, 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38063209

ABSTRACT

INTRODUCTION: Diabetic cardiomyopathy (DbCM) is characterized by subclinical abnormalities in cardiac structure/function and is associated with a higher risk of overt heart failure (HF). However, there are limited data on optimal strategies to identify individuals with DbCM in contemporary health systems. The aim of this study was to evaluate the prevalence of DbCM in a health system using existing data from the electronic health record (EHR). METHODS: Adult patients with type 2 diabetes mellitus free of cardiovascular disease (CVD) with available data on HF risk in a single-center EHR were included. The presence of DbCM was defined using different definitions: (1) least restrictive: ≥1 echocardiographic abnormality (left atrial enlargement, left ventricle hypertrophy, diastolic dysfunction); (2) intermediate restrictive: ≥2 echocardiographic abnormalities; (3) most restrictive: 3 echocardiographic abnormalities. DbCM prevalence was compared across age, sex, race, and ethnicity-based subgroups, with differences assessed using the chi-squared test. Adjusted logistic regression models were constructed to evaluate significant predictors of DbCM. RESULTS: Among 1921 individuals with type 2 diabetes mellitus, the prevalence of DbCM in the overall cohort was 8.7% and 64.4% in the most and least restrictive definitions, respectively. Across all definitions, older age and Hispanic ethnicity were associated with a higher proportion of DbCM. Females had a higher prevalence than males only in the most restrictive definition. In multivariable-adjusted logistic regression, higher systolic blood pressure, higher creatinine, and longer QRS duration were associated with a higher risk of DbCM across all definitions. CONCLUSIONS: In this single-center, EHR cohort, the prevalence of DbCM varies from 9% to 64%, with a higher prevalence with older age and Hispanic ethnicity.
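The three nested definitions reduce to a count of echocardiographic abnormalities; a sketch (the dictionary field names are invented for illustration):

```python
def has_dbcm(abnormalities: dict, definition: str = "intermediate") -> bool:
    """Apply the abstract's three definitions of diabetic cardiomyopathy:
    least restrictive requires >=1 echo abnormality, intermediate >=2, and
    most restrictive all 3 (left atrial enlargement, LV hypertrophy,
    diastolic dysfunction)."""
    required = {"least": 1, "intermediate": 2, "most": 3}[definition]
    return sum(bool(v) for v in abnormalities.values()) >= required
```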

4.
Am J Med Qual ; 38(5S Suppl 2): S12-S34, 2023.
Article in English | MEDLINE | ID: mdl-37668271

ABSTRACT

The goal of this article is to describe an integrated parallel process for the co-development of written and computable clinical practice guidelines (CPGs) to accelerate adoption and increase the impact of guideline recommendations in clinical practice. From February 2018 through December 2021, interdisciplinary work groups were formed after an initial Kaizen event and using expert consensus and available literature, produced a 12-phase integrated process (IP). The IP includes activities, resources, and iterative feedback loops for developing, implementing, disseminating, communicating, and evaluating CPGs. The IP incorporates guideline standards and informatics practices and clarifies how informaticians, implementers, health communicators, evaluators, and clinicians can help guideline developers throughout the development and implementation cycle to effectively co-develop written and computable guidelines. More efficient processes are essential to create actionable CPGs, disseminate and communicate recommendations to clinical end users, and evaluate CPG performance. Pilot testing is underway to determine how this IP expedites the implementation of CPGs into clinical practice and improves guideline uptake and health outcomes.

5.
J Investig Med ; 71(5): 459-464, 2023 06.
Article in English | MEDLINE | ID: mdl-36786195

ABSTRACT

We previously developed and validated a model to predict acute kidney injury (AKI) in hospitalized coronavirus disease 2019 (COVID-19) patients and found that the variables with the highest importance included a history of chronic kidney disease and markers of inflammation. Here, we assessed model performance during periods when COVID-19 cases were attributable almost exclusively to individual variants. Electronic Health Record data were obtained from patients admitted to 19 hospitals. The outcome was hospital-acquired AKI. The model, previously built in an Inception Cohort, was evaluated in Delta and Omicron cohorts using model discrimination and calibration methods. A total of 9104 patients were included, with 5676 in the Inception Cohort, 2461 in the Delta cohort, and 967 in the Omicron cohort. The Delta Cohort was younger with fewer comorbidities, while Omicron patients had lower rates of intensive care compared with the other cohorts. AKI occurred in 13.7% of the Inception Cohort, compared with 13.8% of Delta and 14.4% of Omicron (Omnibus p = 0.84). Compared with the Inception Cohort (area under the curve (AUC): 0.78, 95% confidence interval (CI): 0.76-0.80), the model showed stable discrimination in the Delta (AUC: 0.78, 95% CI: 0.75-0.80, p = 0.89) and Omicron (AUC: 0.74, 95% CI: 0.70-0.79, p = 0.37) cohorts. Estimated calibration index values were 0.02 (95% CI: 0.01-0.07) for Inception, 0.08 (95% CI: 0.05-0.17) for Delta, and 0.12 (95% CI: 0.04-0.47) for Omicron cohorts, p = 0.10 for both Delta and Omicron vs Inception. Our model for predicting hospital-acquired AKI remained accurate in different COVID-19 variants, suggesting that risk factors for AKI have not substantially evolved across variants.
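Discrimination here is reported as area under the ROC curve. As a reminder of what that statistic measures, a minimal rank-based implementation (not the authors' code):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive case scores above a
    randomly chosen negative case, with ties counting one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.78, as in the Inception cohort above, means a patient who developed AKI outranks a patient who did not about 78% of the time.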


Subject(s)
Acute Kidney Injury , COVID-19 , Humans , SARS-CoV-2 , Acute Kidney Injury/epidemiology , Hospitals
6.
Appl Clin Inform ; 13(5): 1123-1130, 2022 10.
Article in English | MEDLINE | ID: mdl-36167337

ABSTRACT

OBJECTIVES: We characterized real-time patient portal test result viewing among emergency department (ED) patients and described patient characteristics overall and among those not enrolled in the portal at ED arrival. METHODS: Our observational study at an academic ED used portal log data to trend the proportion of adult patients who viewed results during their visit from May 04, 2021 to April 04, 2022. Correlation was assessed visually and with Kendall's τ. Covariate analysis using binary logistic regression assessed result(s) viewed as a function of time accounting for age, sex, ethnicity, race, language, insurance status, disposition, and social vulnerability index (SVI). A second model only included patients not enrolled in the portal at arrival. We used random forest imputation to account for missingness and Huber-White heteroskedasticity-robust standard errors for patients with multiple encounters (α = 0.05). RESULTS: There were 60,314 ED encounters (31,164 unique patients). In 7,377 (12.2%) encounters, patients viewed results while still in the ED. Patients were not enrolled for portal use at arrival in 21,158 (35.2%) encounters, and 927 (4.4% of not enrolled, 1.5% overall) subsequently enrolled and viewed results in the ED. Visual inspection suggests an increasing proportion of patients who viewed results, from roughly 5 to 15% over the study period (Kendall's τ = 0.61, p < 0.0001). Overall and not-enrolled models yielded concordance indices (C) of 0.68 and 0.72, respectively, with significant overall likelihood ratio χ² (p < 0.0001). Time was independently associated with viewing results in both models after adjustment. Models revealed disparate use across age, race, ethnicity, SVI, sex, insurance status, and disposition groups. CONCLUSION: We observed increased portal-based test result viewing among ED patients over the year since the 21st Century Cures Act went into effect, even among those not enrolled at arrival. We observed disparities in those who viewed results.
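The trend statistic used above, Kendall's τ, can be computed from pairwise concordance. A minimal tau-a sketch (ignoring the tie corrections of tau-b, which the study may have used):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a over paired observations: (concordant pairs minus
    discordant pairs) divided by the total number of pairs. Used here as a
    monotonic-trend measure for a proportion tracked over time."""
    def sign(v):
        return (v > 0) - (v < 0)
    pairs = list(combinations(range(len(x)), 2))
    s = sum(sign(x[j] - x[i]) * sign(y[j] - y[i]) for i, j in pairs)
    return s / len(pairs)
```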


Subject(s)
Patient Portals , Adult , Humans , Emergency Service, Hospital , Logistic Models , Retrospective Studies
7.
J Am Heart Assoc ; 11(11): e024094, 2022 06 07.
Article in English | MEDLINE | ID: mdl-35656988

ABSTRACT

Background The WATCH-DM (weight [body mass index], age, hypertension, creatinine, high-density lipoprotein cholesterol, diabetes control [fasting plasma glucose], ECG QRS duration, myocardial infarction, and coronary artery bypass grafting) and TRS-HFDM (Thrombolysis in Myocardial Infarction [TIMI] risk score for heart failure in diabetes) risk scores were developed to predict risk of heart failure (HF) among individuals with type 2 diabetes. WATCH-DM was developed to predict incident HF, whereas TRS-HFDM predicts HF hospitalization among patients with and without a prior HF history. We evaluated the model performance of both scores to predict incident HF events among patients with type 2 diabetes and no history of HF hospitalization across different cohorts and clinical settings with varying baseline risk. Methods and Results Incident HF risk was estimated by the integer-based WATCH-DM and TRS-HFDM scores in participants with type 2 diabetes free of baseline HF from 2 randomized clinical trials (TECOS [Trial Evaluating Cardiovascular Outcomes With Sitagliptin], N=12 028; and Look AHEAD [Look Action for Health in Diabetes] trial, N=4867). The integer-based WATCH-DM score was also validated in electronic health record data from a single large health care system (N=7475). Model discrimination was assessed by the Harrell concordance index and calibration by the Greenwood-Nam-D'Agostino statistic. HF incidence rate was 7.5, 3.9, and 4.1 per 1000 person-years in the TECOS, Look AHEAD trial, and electronic health record cohorts, respectively. Integer-based WATCH-DM and TRS-HFDM scores had similar discrimination and calibration for predicting 5-year HF risk in the Look AHEAD trial cohort (concordance indexes=0.70; Greenwood-Nam-D'Agostino P>0.30 for both). Both scores had lower discrimination and underpredicted HF risk in the TECOS cohort (concordance indexes=0.65 and 0.66, respectively; Greenwood-Nam-D'Agostino P<0.001 for both). 
In the electronic health record cohort, the integer-based WATCH-DM score demonstrated a concordance index of 0.73 with adequate calibration (Greenwood-Nam-D'Agostino P=0.96). The TRS-HFDM score could not be validated in the electronic health record cohort because urine albumin/creatinine ratio data were unavailable for most patients in contemporary clinical practice. Conclusions The WATCH-DM and TRS-HFDM risk scores can discriminate risk of HF among intermediate-risk populations with type 2 diabetes.
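Integer-based scores of this kind sum fixed point values over satisfied criteria. A generic sketch of the pattern; the criteria and weights below are placeholders, not the published WATCH-DM or TRS-HFDM weights:

```python
def integer_risk_score(patient: dict, weights: dict) -> int:
    """Sum the point weights of every criterion the patient satisfies --
    the general form of integer risk scores such as WATCH-DM and TRS-HFDM
    (the actual published point assignments are not reproduced here)."""
    return sum(pts for criterion, pts in weights.items() if patient.get(criterion))

# Placeholder criteria and weights, for illustration only.
DEMO_WEIGHTS = {"hypertension": 1, "prior_mi": 2, "prolonged_qrs": 1}
```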


Subject(s)
Diabetes Mellitus, Type 2 , Heart Failure , Myocardial Infarction , Adult , Creatinine , Diabetes Mellitus, Type 2/complications , Diabetes Mellitus, Type 2/diagnosis , Diabetes Mellitus, Type 2/epidemiology , Heart Failure/complications , Heart Failure/diagnosis , Heart Failure/epidemiology , Hospitalization , Humans , Myocardial Infarction/epidemiology , Risk Assessment/methods , Risk Factors
8.
Kidney Med ; 4(6): 100463, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35434597

ABSTRACT

Rationale & Objective: Acute kidney injury (AKI) is common in patients hospitalized with COVID-19, but validated, predictive models for AKI are lacking. We aimed to develop the best predictive model for AKI in hospitalized patients with coronavirus disease 2019 and assess its performance over time with the emergence of vaccines and the Delta variant. Study Design: Longitudinal cohort study. Setting & Participants: Hospitalized patients with a positive severe acute respiratory syndrome coronavirus 2 polymerase chain reaction result between March 1, 2020, and August 20, 2021 at 19 hospitals in Texas. Exposures: Comorbid conditions, baseline laboratory data, inflammatory biomarkers. Outcomes: AKI defined by KDIGO (Kidney Disease: Improving Global Outcomes) creatinine criteria. Analytical Approach: Three nested models for AKI were built in a development cohort and validated in 2 out-of-time cohorts. Model discrimination and calibration measures were compared among cohorts to assess performance over time. Results: Of 10,034 patients, 5,676, 2,917, and 1,441 were in the development, validation 1, and validation 2 cohorts, respectively, of whom 776 (13.7%), 368 (12.6%), and 179 (12.4%) developed AKI, respectively (P = 0.26). Patients in the validation cohort 2 had fewer comorbid conditions and were younger than those in the development cohort or validation cohort 1 (mean age, 54 ± 16.8 years vs 61.4 ± 17.5 and 61.7 ± 17.3 years, respectively, P < 0.001). The validation cohort 2 had higher median high-sensitivity C-reactive protein level (81.7 mg/L) versus the development cohort (74.5 mg/L; P < 0.01) and higher median ferritin level (696 ng/mL) versus both the development cohort (444 ng/mL) and validation cohort 1 (496 ng/mL; P < 0.001). The final model, which added high-sensitivity C-reactive protein, ferritin, and D-dimer levels, had an area under the curve of 0.781 (95% CI, 0.763-0.799). 
Compared with the development cohort, discrimination by area under the curve (validation 1: 0.785 [0.760-0.810], P = 0.79, and validation 2: 0.754 [0.716-0.795], P = 0.53) and calibration by estimated calibration index (validation 1: 0.116 [0.041-0.281], P = 0.11, and validation 2: 0.081 [0.045-0.295], P = 0.11) showed stable performance over time. Limitations: Potential billing and coding bias. Conclusions: We developed and externally validated a model to accurately predict AKI in patients with coronavirus disease 2019. The performance of the model withstood changes in practice patterns and virus variants.
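The outcome above is AKI by KDIGO creatinine criteria. A simplified staging sketch (urine-output criteria and renal replacement therapy triggers, which full KDIGO staging includes, are omitted):

```python
def kdigo_creatinine_stage(baseline_cr: float, current_cr: float,
                           rise_48h_mg_dl: float = 0.0) -> int:
    """Simplified KDIGO creatinine-based AKI staging.
    Stage 1: rise >=0.3 mg/dL within 48 h, or 1.5-1.9x baseline;
    Stage 2: 2.0-2.9x baseline;
    Stage 3: >=3.0x baseline or creatinine >=4.0 mg/dL.
    Returns 0 when no creatinine criterion is met."""
    ratio = current_cr / baseline_cr
    if ratio >= 3.0 or current_cr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or rise_48h_mg_dl >= 0.3:
        return 1
    return 0
```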

9.
BMC Nephrol ; 23(1): 50, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35105331

ABSTRACT

BACKGROUND: Acute kidney injury (AKI) is a common complication in patients hospitalized with COVID-19 and may require renal replacement therapy (RRT). Dipstick urinalysis is frequently obtained, but data regarding the prognostic value of hematuria and proteinuria for kidney outcomes is scarce. METHODS: Patients with positive severe acute respiratory syndrome-coronavirus 2 (SARS-CoV2) PCR, who had a urinalysis obtained on admission to one of 20 hospitals, were included. Nested models with degree of hematuria and proteinuria were used to predict AKI and RRT during admission. Presence of Chronic Kidney Disease (CKD) and baseline serum creatinine were added to test improvement in model fit. RESULTS: Of 5,980 individuals, 829 (13.9%) developed an AKI during admission, and 149 (18.0%) of those with AKI received RRT. Proteinuria and hematuria degrees significantly increased with AKI severity (P < 0.001 for both). Any degree of proteinuria and hematuria was associated with an increased risk of AKI and RRT. In predictive models for AKI, presence of CKD improved the area under the curve (AUC) (95% confidence interval) to 0.73 (0.71, 0.75), P < 0.001, and adding baseline creatinine improved the AUC to 0.85 (0.83, 0.86), P < 0.001, when compared to the base model AUC using only proteinuria and hematuria, AUC = 0.64 (0.62, 0.67). In RRT models, CKD status improved the AUC to 0.78 (0.75, 0.82), P < 0.001, and baseline creatinine improved the AUC to 0.84 (0.80, 0.88), P < 0.001, compared to the base model, AUC = 0.72 (0.68, 0.76). There was no significant improvement in model discrimination when both CKD and baseline serum creatinine were included. CONCLUSIONS: Proteinuria and hematuria values on dipstick urinalysis can be utilized to predict AKI and RRT in hospitalized patients with COVID-19. We derived formulas using these two readily available values to help prognosticate kidney outcomes in these patients. 
Furthermore, the incorporation of CKD or baseline creatinine increases the accuracy of these formulas.


Subject(s)
Acute Kidney Injury/etiology , COVID-19/complications , Hematuria/diagnosis , Proteinuria/diagnosis , Urinalysis/methods , Acute Kidney Injury/ethnology , Acute Kidney Injury/therapy , Aged , Area Under Curve , COVID-19/ethnology , Confidence Intervals , Creatinine/blood , Female , Hospitalization , Humans , Longitudinal Studies , Male , Middle Aged , Predictive Value of Tests , Renal Insufficiency, Chronic/diagnosis , Renal Replacement Therapy/statistics & numerical data
10.
Eur J Heart Fail ; 24(1): 169-180, 2022 01.
Article in English | MEDLINE | ID: mdl-34730265

ABSTRACT

AIMS: To evaluate the performance of the WATCH-DM risk score, a clinical risk score for heart failure (HF), in patients with dysglycaemia and in combination with natriuretic peptides (NPs). METHODS AND RESULTS: Adults with diabetes/pre-diabetes free of HF at baseline from four cohort studies (ARIC, CHS, FHS, and MESA) were included. The machine learning- [WATCH-DM(ml)] and integer-based [WATCH-DM(i)] scores were used to estimate the 5-year risk of incident HF. Discrimination was assessed by Harrell's concordance index (C-index) and calibration by the Greenwood-Nam-D'Agostino (GND) statistic. Improvement in model performance with the addition of NP levels was assessed by C-index and continuous net reclassification improvement (NRI). Of the 8938 participants included, 3554 (39.8%) had diabetes and 432 (4.8%) developed HF within 5 years. The WATCH-DM(ml) and WATCH-DM(i) scores demonstrated high discrimination for predicting HF risk among individuals with dysglycaemia (C-indices = 0.80 and 0.71, respectively), with no evidence of miscalibration (GND P ≥0.10). The C-index of elevated NP levels alone for predicting incident HF among individuals with dysglycaemia was significantly higher among participants with low/intermediate (<13) vs. high (≥13) WATCH-DM(i) scores [0.71 (95% confidence interval 0.68-0.74) vs. 0.64 (95% confidence interval 0.61-0.66)]. When NP levels were combined with the WATCH-DM(i) score, HF risk discrimination improvement and NRI varied across the spectrum of risk with greater improvement observed at low/intermediate risk [WATCH-DM(i) <13] vs. high risk [WATCH-DM(i) ≥13] (C-index = 0.73 vs. 0.71; NRI = 0.45 vs. 0.17). CONCLUSION: The WATCH-DM risk score can accurately predict incident HF risk in community-based individuals with dysglycaemia. The addition of NP levels is associated with greater improvement in the HF risk prediction performance among individuals with low/intermediate risk than those with high risk.
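The continuous net reclassification improvement used above credits predicted risks that move in the right direction when natriuretic peptide levels are added. A minimal sketch of that computation:

```python
def continuous_nri(p_old, p_new, labels):
    """Continuous net reclassification improvement: among events, the net
    proportion whose predicted risk moved up; among non-events, the net
    proportion whose predicted risk moved down. Ranges from -2 to +2."""
    def net(idx, up_is_good):
        moves = [(p_new[i] > p_old[i]) - (p_new[i] < p_old[i]) for i in idx]
        s = sum(moves) / len(idx)
        return s if up_is_good else -s
    events = [i for i, y in enumerate(labels) if y == 1]
    nonevents = [i for i, y in enumerate(labels) if y == 0]
    return net(events, True) + net(nonevents, False)
```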


Subject(s)
Glucose Metabolism Disorders/epidemiology , Heart Failure , Natriuretic Peptides , Adult , Cohort Studies , Heart Failure/diagnosis , Heart Failure/epidemiology , Humans , Risk Assessment/methods , Risk Factors
11.
Appl Clin Inform ; 12(5): 1074-1081, 2021 10.
Article in English | MEDLINE | ID: mdl-34788889

ABSTRACT

BACKGROUND: Novel coronavirus disease 2019 (COVID-19) vaccine administration has faced distribution barriers across the United States. We sought to delineate our vaccine delivery experience in the first week of vaccine availability, and our effort to prioritize employees based on risk with a goal of providing an efficient infrastructure to optimize speed and efficiency of vaccine delivery while minimizing risk of infection during the immunization process. OBJECTIVE: This article aims to evaluate an employee prioritization/invitation/scheduling system, leveraging an integrated electronic health record patient portal framework for employee COVID-19 immunizations at an academic medical center. METHODS: We conducted an observational cross-sectional study during January 2021 at a single urban academic center. All employees who met COVID-19 allocation vaccine criteria for phase 1a.1 to 1a.4 were included. We implemented a prioritization/invitation/scheduling framework and evaluated time from invitation to scheduling as a proxy for vaccine interest and arrival to vaccine administration to measure operational throughput. RESULTS: We allotted vaccines for 13,753 employees but only 10,662 employees with an active patient portal account received an invitation. Of those with an active account, 6,483 (61%) scheduled an appointment and 6,251 (59%) were immunized in the first 7 days. About 66% of invited providers were vaccinated in the first 7 days. In contrast, only 41% of invited facility/food service employees received the first dose of the vaccine in the first 7 days (p < 0.001). At the vaccination site, employees waited 5.6 minutes (interquartile range [IQR]: 3.9-8.3) from arrival to vaccination. CONCLUSION: We developed a system of early COVID-19 vaccine prioritization and administration in our health care system. 
We saw strong early acceptance among those with proximal exposure to COVID-19 but noticed a significant difference in the willingness of different employee groups to receive the vaccine.


Subject(s)
COVID-19 , Mass Vaccination , Academic Medical Centers , COVID-19 Vaccines , Cross-Sectional Studies , Humans , SARS-CoV-2 , United States
12.
JMIR Cardio ; 5(1): e22296, 2021 May 12.
Article in English | MEDLINE | ID: mdl-33797396

ABSTRACT

BACKGROUND: Professional society guidelines are emerging for cardiovascular care in cancer patients. However, it is not yet clear how effectively the cancer survivor population is screened and treated for cardiomyopathy in contemporary clinical practice. As electronic health records (EHRs) are now widely used in clinical practice, we tested the hypothesis that an EHR-based cardio-oncology registry can address these questions. OBJECTIVE: The aim of this study was to develop an EHR-based pragmatic cardio-oncology registry and, as proof of principle, to investigate care gaps in the cardiovascular care of cancer patients. METHODS: We generated a programmatically deidentified, real-time EHR-based cardio-oncology registry from all patients in our institutional Cancer Population Registry (N=8275, 2011-2017). We investigated: (1) left ventricular ejection fraction (LVEF) assessment before and after treatment with potentially cardiotoxic agents; and (2) guideline-directed medical therapy (GDMT) for left ventricular dysfunction (LVD), defined as LVEF<50%, and symptomatic heart failure with reduced LVEF (HFrEF), defined as LVEF<50% and Problem List documentation of systolic congestive heart failure or dilated cardiomyopathy. RESULTS: Rapid development of an EHR-based cardio-oncology registry was feasible. Identification of tests and outcomes was similar using the EHR-based cardio-oncology registry and manual chart abstraction (100% sensitivity and 83% specificity for LVD). LVEF was documented prior to initiation of cancer therapy in 19.8% of patients. Prevalence of postchemotherapy LVD and HFrEF was relatively low (9.4% and 2.5%, respectively). Among patients with postchemotherapy LVD or HFrEF, those referred to cardiology had a significantly higher prescription rate of a GDMT. CONCLUSIONS: EHR data can efficiently populate a real-time, pragmatic cardio-oncology registry as a byproduct of clinical care for health care delivery investigations.
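The registry's two phenotype definitions translate directly into code. A sketch; the real registry's Problem List matching is presumably more robust than exact string comparison:

```python
def classify_lv_status(lvef: float, problem_list: list) -> dict:
    """Apply the abstract's definitions: left ventricular dysfunction (LVD)
    is LVEF < 50%; HFrEF additionally requires a Problem List entry of
    systolic congestive heart failure or dilated cardiomyopathy."""
    lvd = lvef < 50
    hf_terms = {"systolic congestive heart failure", "dilated cardiomyopathy"}
    hfref = lvd and any(term in problem_list for term in hf_terms)
    return {"LVD": lvd, "HFrEF": hfref}
```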

13.
Appl Clin Inform ; 12(1): 182-189, 2021 01.
Article in English | MEDLINE | ID: mdl-33694144

ABSTRACT

OBJECTIVE: Clinical decision support (CDS) can contribute to quality and safety. Prior work has shown that errors in CDS systems are common and can lead to unintended consequences. Many CDS systems use Boolean logic, which can be difficult for CDS analysts to specify accurately. We set out to determine the prevalence of certain types of Boolean logic errors in CDS statements. METHODS: Nine health care organizations extracted Boolean logic statements from their Epic electronic health record (EHR). We developed an open-source software tool, which implemented the Espresso logic minimization algorithm, to identify three classes of logic errors. RESULTS: Participating organizations submitted 260,698 logic statements, of which 44,890 were minimized by Espresso. We found errors in 209 of them. Every participating organization had at least two errors, and all organizations reported that they would act on the feedback. DISCUSSION: An automated algorithm can readily detect specific categories of Boolean CDS logic errors. These errors represent a minority of CDS errors, but very likely require correction to avoid patient safety issues. This process found only a few errors at each site, but the problem appears to be widespread, affecting all participating organizations. CONCLUSION: Both CDS implementers and EHR vendors should consider implementing similar algorithms as part of the CDS authoring process to reduce the number of errors in their CDS interventions.
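Among the error classes such a tool can target are statements that always fire or can never fire. The published tool uses Espresso minimization; for small expressions, a brute-force truth-table check illustrates the idea:

```python
from itertools import product

def logic_error(variables, expr):
    """Evaluate a Boolean expression over every assignment of its variables
    and flag tautologies ("always-true") and contradictions ("always-false"),
    two classes of CDS logic error. Returns None when the expression is
    satisfiable but not trivially so. Exponential in variable count, so
    suitable only for small expressions (Espresso scales far better)."""
    results = {expr(dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables))}
    if results == {True}:
        return "always-true"
    if results == {False}:
        return "always-false"
    return None
```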


Subject(s)
Decision Support Systems, Clinical , Logic , Electronic Health Records , Humans , Software
14.
Diabetologia ; 64(7): 1583-1594, 2021 07.
Article in English | MEDLINE | ID: mdl-33715025

ABSTRACT

AIMS/HYPOTHESIS: Type 2 diabetes is a heterogeneous disease process with variable trajectories of CVD risk. We aimed to evaluate four phenomapping strategies and their ability to stratify CVD risk in individuals with type 2 diabetes and to identify subgroups who may benefit from specific therapies. METHODS: Participants with type 2 diabetes and free of baseline CVD in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial were included in this study (N = 6466). Clustering using Gaussian mixture models, latent class analysis, finite mixture models (FMMs) and principal component analysis was compared. Clustering variables included demographics, medical and social history, laboratory values and diabetes complications. The interaction between the phenogroup and intensive glycaemic, combination lipid and intensive BP therapy for the risk of the primary outcome (composite of fatal myocardial infarction, non-fatal myocardial infarction or unstable angina) was evaluated using adjusted Cox models. The phenomapping strategies were independently assessed in an external validation cohort (Look Action for Health in Diabetes [Look AHEAD] trial: n = 4211; and Bypass Angioplasty Revascularisation Investigation 2 Diabetes [BARI 2D] trial: n = 1495). RESULTS: Over 9.1 years of follow-up, 789 (12.2%) participants had a primary outcome event. FMM phenomapping with three phenogroups was the best-performing clustering strategy in both the derivation and validation cohorts as determined by Bayesian information criterion, Dunn index and improvement in model discrimination. Phenogroup 1 (n = 663, 10.3%) had the highest burden of comorbidities and diabetes complications, phenogroup 2 (n = 2388, 36.9%) had an intermediate comorbidity burden and lowest diabetes complications, and phenogroup 3 (n = 3415, 52.8%) had the fewest comorbidities and intermediate burden of diabetes complications. 
Significant interactions were observed between phenogroups and treatment interventions including intensive glycaemic control (p-interaction = 0.042) and combination lipid therapy (p-interaction < 0.001) in the ACCORD, intensive lifestyle intervention (p-interaction = 0.002) in the Look AHEAD and early coronary revascularisation (p-interaction = 0.003) in the BARI 2D trial cohorts for the risk of the primary composite outcome. Favourable reduction in the risk of the primary composite outcome with these interventions was noted in low-risk participants of phenogroup 3 but not in other phenogroups. Compared with phenogroup 3, phenogroup 1 participants were more likely to have severe/symptomatic hypoglycaemic events and medication non-adherence on follow-up in the ACCORD and Look AHEAD trial cohorts. CONCLUSIONS/INTERPRETATION: Clustering using FMMs was the optimal phenomapping strategy to identify replicable subgroups of patients with type 2 diabetes with distinct clinical characteristics, CVD risk and response to therapies.
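The number of phenogroups here was chosen partly by Bayesian information criterion. A sketch of that selection step; the log-likelihoods and per-cluster parameter count below are hypothetical, not fitted values from the trials:

```python
import math

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian information criterion: penalized negative fit, where lower
    values indicate a better parsimony/fit trade-off across candidate
    numbers of phenogroups."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fitted log-likelihoods for 2-4 clusters on n = 6466
# participants; 10 parameters per cluster is likewise illustrative.
candidates = {2: -5120.0, 3: -5040.0, 4: -5035.0}
scores = {k: bic(ll, n_params=10 * k, n_obs=6466) for k, ll in candidates.items()}
best = min(scores, key=scores.get)  # number of phenogroups with the lowest BIC
```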


Subject(s)
Atherosclerosis/diagnosis , Atherosclerosis/etiology , Diabetes Mellitus, Type 2/diagnosis , Aged , Atherosclerosis/epidemiology , Biological Variation, Population , Cardiometabolic Risk Factors , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/epidemiology , Cardiovascular Diseases/etiology , Cluster Analysis , Cohort Studies , Diabetes Mellitus, Type 2/complications , Diabetes Mellitus, Type 2/epidemiology , Diabetes Mellitus, Type 2/therapy , Diabetic Angiopathies/diagnosis , Diabetic Angiopathies/epidemiology , Diabetic Angiopathies/etiology , Female , Follow-Up Studies , Humans , Male , Middle Aged , Phenotype , Prognosis , Risk Assessment/methods , Risk Factors , Statistics as Topic/methods , Treatment Outcome , United States/epidemiology
15.
ACR Open Rheumatol ; 3(3): 164-172, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33570251

ABSTRACT

OBJECTIVE: Rheumatoid arthritis (RA) disease activity assessment is critical for treatment decisions and treat-to-target (T2T) outcomes. Utilization of the electronic medical record (EMR) and techniques to improve the routine capture of disease activity measures in clinical practice are not well described. We leveraged a Lean Six Sigma (LSS) approach, a data-driven five-step process improvement and problem-solving methodology, coupled with EMR modifications to evaluate improvement in disease activity documentation and patient outcomes. METHODS: An RA registry was established, and structured fields for the Routine Assessment of Patient Index Data (RAPID3) and Clinical Disease Activity Index (CDAI) were built in the EMR, along with a dashboard to display provider performance rates. An initial rapid-cycle improvement (RCI) intervention was launched, and subsequent LSS improvement cycles helped standardize clinic workflow, modify provider behaviors, and motivate better documentation practices. Trends related to CDAI score categories were compared over time using run charts. RESULTS: Our project included 1,322 patients with RA and 10,241 encounters between April 2016 and December 2019. With the RCI intervention, RAPID3 completion rates increased from 16% to 50%, and CDAI rates from 15% to 44%. After the LSS intervention, the RAPID3 rate increased to more than 90% (sustained at 85%), and the CDAI rate increased to more than 80% (sustained at 72%). The proportion of patients in the low disease activity/remission category increased from 54% to 66% (p < 0.001), and the proportion in the high disease activity category decreased from 15% to 7% (p < 0.001), demonstrating improved T2T outcomes. CONCLUSION: Combining EMR modifications with systems redesign using an LSS approach led to marked and sustained improvement in disease activity documentation and T2T outcomes.

16.
J Am Med Inform Assoc ; 28(5): 899-906, 2021 04 23.
Article in English | MEDLINE | ID: mdl-33566093

ABSTRACT

OBJECTIVE: The electronic health record (EHR) data deluge makes data retrieval more difficult, escalating cognitive load and exacerbating clinician burnout. New auto-summarization techniques are needed. The study goal was to determine if problem-oriented view (POV) auto-summaries improve data retrieval workflows. We hypothesized that POV users would perform tasks faster, make fewer errors, be more satisfied with EHR use, and experience less cognitive load as compared with users of the standard view (SV). METHODS: Simple data retrieval tasks were performed in an EHR simulation environment. A randomized block design was used. In the control group (SV), subjects retrieved lab results and medications by navigating to corresponding sections of the electronic record. In the intervention group (POV), subjects clicked on the name of the problem and immediately saw lab results and medications relevant to that problem. RESULTS: With POV, mean completion time was faster (173 seconds for POV vs 205 seconds for SV; P < .0001), the error rate was lower (3.4% for POV vs 7.7% for SV; P = .0010), user satisfaction was greater (System Usability Scale score 58.5 for POV vs 41.3 for SV; P < .0001), and cognitive task load was less (NASA Task Load Index score 0.72 for POV vs 0.99 for SV; P < .0001). DISCUSSION: The study demonstrates that using a problem-based auto-summary has a positive impact on 4 aspects of EHR data retrieval, including cognitive load. CONCLUSION: EHRs have brought on a data deluge, with increased cognitive load and physician burnout. To mitigate these increases, further development and implementation of auto-summarization functionality and the requisite knowledge base are needed.


Subject(s)
Data Display , Electronic Health Records , Medical Records, Problem-Oriented , Humans , Information Storage and Retrieval , User-Computer Interface , Workflow
17.
J Infect ; 82(1): 41-47, 2021 01.
Article in English | MEDLINE | ID: mdl-33038385

ABSTRACT

BACKGROUND: We created an electronic health record-based registry using automated data extraction tools to study the epidemiology of bloodstream infections (BSI) in solid organ transplant recipients. The overarching goal was to determine the usefulness of an electronic health record-based registry built with data extraction tools for clinical research in solid organ transplantation. METHODS: We performed a retrospective single-center cohort study of adult solid organ transplant recipients from 2010 to 2015. Extraction tools were used to retrieve data from the electronic health record, which were integrated with national data sources. Electronic health records of subjects with positive blood cultures were manually adjudicated using consensus definitions. One-year cumulative incidence, risk factors for BSI acquisition, and 1-year mortality were analyzed by the Kaplan-Meier method and Cox modeling, and 30-day mortality by logistic regression. RESULTS: In 917 solid organ transplant recipients, the cumulative incidence of BSI was 8.4% (95% confidence interval 6.8-10.4), with central line-associated BSI as the most common source. The proportion of multidrug-resistant isolates increased from 0% in 2010 to 47% in 2015 (p = 0.03). BSI was the strongest risk factor for 1-year mortality (HR = 8.44; 95% CI 4.99-14.27; p < 0.001). In 11 of 14 deaths, BSI was the main or a contributory cause in patients with non-rapidly fatal underlying conditions. CONCLUSIONS: Our study illustrates the usefulness of an electronic health record-based registry using automated extraction tools for clinical research in the field of solid organ transplantation. BSI reduces the 1-year survival of solid organ transplant recipients, and the most common sources of BSI in our study are preventable.
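The abstract's cumulative-incidence and survival estimates come from the Kaplan-Meier method, which handles censored follow-up by multiplying conditional survival probabilities at each event time. A minimal product-limit estimator on toy data (not the study data) illustrates the idea:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimates.

    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (time, survival probability) at each event time.
    """
    data = sorted(zip(times, events))
    n = len(data)
    s, curve, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        # count events tied at this time
        d, j = 0, i
        while j < n and data[j][0] == t:
            d += data[j][1]
            j += 1
        at_risk = n - i  # everyone still under follow-up at time t
        if d:
            s *= 1.0 - d / at_risk
            curve.append((t, s))
        i = j
    return curve

# Toy cohort of 4 subjects: events at t=1 and t=3, censoring at t=2 and t=4
curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0])
print(curve)  # [(1, 0.75), (3, 0.375)]
```

Cumulative incidence at time t is then `1 - survival`; note how the censored subject at t=2 shrinks the risk set before the second event.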


Subject(s)
Bacteremia , Organ Transplantation , Sepsis , Adult , Bacteremia/epidemiology , Cohort Studies , Humans , Organ Transplantation/adverse effects , Proof of Concept Study , Registries , Retrospective Studies , Risk Factors , Sepsis/epidemiology
18.
Eur J Heart Fail ; 22(1): 148-158, 2020 01.
Article in English | MEDLINE | ID: mdl-31637815

ABSTRACT

AIM: To identify distinct phenotypic subgroups in a high-dimensional, mixed-data cohort of individuals with heart failure (HF) with preserved ejection fraction (HFpEF) using unsupervised clustering analysis. METHODS AND RESULTS: The study included all Treatment of Preserved Cardiac Function Heart Failure with an Aldosterone Antagonist (TOPCAT) participants from the Americas (n = 1767). In the subset of participants with available echocardiographic data (derivation cohort, n = 654), we characterized three mutually exclusive phenogroups of HFpEF participants using penalized finite mixture model-based clustering analysis on 61 mixed-data phenotypic variables. Phenogroup 1 had a higher burden of co-morbidities, natriuretic peptides, and abnormalities in left ventricular structure and function; phenogroup 2 had a lower prevalence of cardiovascular and non-cardiac co-morbidities but a higher burden of diastolic dysfunction; and phenogroup 3 had lower natriuretic peptide levels, an intermediate co-morbidity burden, and the most favourable diastolic function profile. In adjusted Cox models, participants in phenogroup 1 (vs. phenogroup 3) had significantly higher risk for all adverse clinical events including the primary composite endpoint, all-cause mortality, and HF hospitalization. Phenogroup 2 (vs. phenogroup 3) was significantly associated with a higher risk of HF hospitalization but a lower risk of atherosclerotic events (myocardial infarction, stroke, or cardiovascular death), and a comparable risk of mortality. Similar patterns of association were also observed in the non-echocardiographic TOPCAT cohort (internal validation cohort, n = 1113) and an external cohort of patients with HFpEF [Phosphodiesterase-5 Inhibition to Improve Clinical Status and Exercise Capacity in Heart Failure with Preserved Ejection Fraction (RELAX) trial cohort, n = 198], with the highest risk of adverse outcome noted in phenogroup 1 participants.
CONCLUSIONS: Machine learning-based cluster analysis can identify phenogroups of patients with HFpEF with distinct clinical characteristics and long-term outcomes.
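In mixture-model-based clustering of the kind used above, each participant is assigned to the phenogroup with the highest posterior ("responsibility") under the fitted mixture. The study fit a penalized finite mixture model over 61 mixed-type variables; the sketch below shrinks this to a single hypothetical continuous variable with made-up Gaussian component parameters, purely to show the assignment step:

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def responsibilities(x, weights, mus, sds):
    """Posterior probability that observation x belongs to each component."""
    joint = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sds)]
    total = sum(joint)
    return [j / total for j in joint]

def assign_phenogroup(x, weights, mus, sds):
    r = responsibilities(x, weights, mus, sds)
    return max(range(len(r)), key=r.__getitem__)

# Hypothetical 1-D "burden" mixture with three well-separated components
weights, mus, sds = [0.2, 0.3, 0.5], [8.0, 4.0, 1.0], [1.0, 1.0, 1.0]
print(assign_phenogroup(7.5, weights, mus, sds))  # -> 0 (highest-burden component)
```

The same posterior logic generalizes to mixed data by replacing the Gaussian density with a product of per-variable densities appropriate to each variable type.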


Subject(s)
Heart Failure , Cluster Analysis , Heart Failure/epidemiology , Humans , Machine Learning , Mineralocorticoid Receptor Antagonists , Prognosis , Stroke Volume
19.
Diabetes Care ; 42(12): 2298-2306, 2019 12.
Article in English | MEDLINE | ID: mdl-31519694

ABSTRACT

OBJECTIVE: To develop and validate a novel, machine learning-derived model to predict the risk of heart failure (HF) among patients with type 2 diabetes mellitus (T2DM). RESEARCH DESIGN AND METHODS: Using data from 8,756 patients free of HF at baseline, with <10% missing data, and enrolled in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial, we used random survival forest (RSF) methods, a nonparametric decision tree machine learning approach, to identify predictors of incident HF. The RSF model was externally validated in a cohort of individuals with T2DM using the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). RESULTS: Over a median follow-up of 4.9 years, 319 patients (3.6%) developed incident HF. The RSF models demonstrated better discrimination than the best-performing Cox-based method (C-index 0.77 [95% CI 0.75-0.80] vs. 0.73 [0.70-0.76]) and had acceptable calibration (Hosmer-Lemeshow statistic χ2 = 9.63, P = 0.29) in the internal validation data set. From the identified predictors, an integer-based risk score for 5-year HF incidence was created: the WATCH-DM (Weight [BMI], Age, hyperTension, Creatinine, HDL-C, Diabetes control [fasting plasma glucose], QRS Duration, MI, and CABG) risk score. Each 1-unit increment in the risk score was associated with a 24% higher relative risk of HF within 5 years. The cumulative 5-year incidence of HF increased in a graded fashion from 1.1% in quintile 1 (WATCH-DM score ≤7) to 17.4% in quintile 5 (WATCH-DM score ≥14). In the external validation cohort, the RSF-based risk prediction model and the WATCH-DM risk score performed well with good discrimination (C-index = 0.74 and 0.70, respectively), acceptable calibration (P ≥0.20 for both), and broad risk stratification (5-year HF risk ranging from 2.5% to 18.7% across quintiles 1-5).
CONCLUSIONS: We developed and validated a novel, machine learning-derived risk score that integrates readily available clinical, laboratory, and electrocardiographic variables to predict the risk of HF among outpatients with T2DM.
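The abstract reports a 24% higher relative risk of HF per 1-unit increment in the WATCH-DM score. If one assumes (as is standard for per-unit risk-ratio estimates) that this effect compounds multiplicatively across points, the implied relative risk between any two scores is a simple power calculation; the point assignments themselves are not reproduced here:

```python
def relative_risk(score: int, reference: int, rr_per_point: float = 1.24) -> float:
    """Relative 5-year HF risk implied by a per-point risk ratio.

    Assumes the reported 24% higher relative risk per 1-unit increment
    compounds multiplicatively across score points.
    """
    return rr_per_point ** (score - reference)

# Quintile 5 threshold (score 14) vs quintile 1 threshold (score 7)
print(round(relative_risk(14, 7), 2))  # -> 4.51 under this assumption
```

That roughly 4.5-fold implied gradient is of the same order as the observed 1.1% vs 17.4% incidence spread across quintiles, though the two figures are not directly interchangeable.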


Subject(s)
Diabetes Mellitus, Type 2/complications , Heart Failure/etiology , Hospitalization/statistics & numerical data , Machine Learning , Risk Assessment/methods , Aged , Clinical Trials as Topic , Cohort Studies , Female , Follow-Up Studies , Heart Failure/epidemiology , Humans , Incidence , Male , Middle Aged , Outpatients , Predictive Value of Tests , Reproducibility of Results , Risk Factors , Time Factors
20.
J Am Med Inform Assoc ; 26(11): 1344-1354, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31512730

ABSTRACT

OBJECTIVE: We sought to demonstrate applicability of user stories, progressively elaborated by testable acceptance criteria, as lightweight requirements for agile development of clinical decision support (CDS). MATERIALS AND METHODS: User stories employed the template: As a [type of user], I want [some goal] so that [some reason]. From the "so that" section, CDS benefit measures were derived. Detailed acceptance criteria were elaborated through ensuing conversations. We estimated user story size with "story points," and depicted multiple user stories with a use case diagram or feature breakdown structure. Large user stories were split to fit into 2-week iterations. RESULTS: One example user story was: As a rheumatologist, I want to be advised if my patient with rheumatoid arthritis is not on a disease-modifying anti-rheumatic drug (DMARD), so that they receive optimal therapy and can experience symptom improvement. This yielded a process measure (DMARD use), and an outcome measure (Clinical Disease Activity Index). Following implementation, the DMARD nonuse rate decreased from 3.7% to 1.4%. The proportion of patients with a high Clinical Disease Activity Index decreased from 13.7% to 7%. For a thromboembolism prevention CDS project, diagrams organized multiple user stories. DISCUSSION: User stories written in the clinician's voice aid CDS governance and lead naturally to measures of CDS effectiveness. Estimation of relative story size helps plan CDS delivery dates. User stories prove to be practical even on larger projects. CONCLUSIONS: User stories concisely communicate the who, what, and why of a CDS request, and serve as lightweight requirements for agile development to meet the demand for increasingly diverse CDS.
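The user-story template above maps cleanly onto a small data structure: the role/goal/reason slots, a story-point size estimate, and acceptance criteria elaborated later. A minimal sketch (the field names and the 5-point estimate are illustrative, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """Lightweight CDS requirement in the 'As a..., I want..., so that...' template."""
    role: str
    goal: str
    reason: str
    story_points: int = 0  # relative size estimate used for iteration planning
    acceptance_criteria: list = field(default_factory=list)  # elaborated in later conversations

    def render(self) -> str:
        return f"As a {self.role}, I want {self.goal} so that {self.reason}."

# Hypothetical story mirroring the DMARD example in the abstract
story = UserStory(
    role="rheumatologist",
    goal="to be advised if my patient with rheumatoid arthritis is not on a DMARD",
    reason="they receive optimal therapy",
    story_points=5,
)
print(story.render())
```

Summing `story_points` across the stories selected for an iteration is one simple way to check that a large story has been split small enough to fit a 2-week cycle.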


Subject(s)
Data Collection , Decision Support Systems, Clinical , Narration , Electronic Health Records , Humans