Results 1 - 20 of 69
1.
Ann Surg ; 276(1): 180-185, 2022 07 01.
Article in English | MEDLINE | ID: mdl-33074897

ABSTRACT

OBJECTIVE: To demonstrate that a semi-automated approach to health data abstraction provides significant efficiencies and high accuracy. BACKGROUND: Surgical outcome abstraction remains laborious and a barrier to the sustainment of quality improvement registries like ACS-NSQIP. A supervised machine learning algorithm developed for detecting surgical site infections (SSIs) using structured and unstructured electronic health record data was tested for semi-automated SSI abstraction. METHODS: A Lasso-penalized logistic regression model was trained on 2011-2013 data (baseline performance measured with 10-fold cross-validation). A cutoff probability score established from the training data divided the subsequent evaluation dataset into "negative" and "possible" SSI groups, with manual data abstraction performed only on the "possible" group. We evaluated performance on data from 2014, 2015, and both years combined. RESULTS: Overall, 6188 patients were in the 2011-2013 training dataset and 5132 patients in the 2014-2015 evaluation dataset. With the semi-automated approach, applying the cutoff score decreased the amount of manual abstraction by >90%, resulting in <1% false negatives in the "negative" group and a sensitivity of 82%. A blinded review of 10% of the "possible" group, considering only the features selected by the algorithm, showed high agreement with the gold standard based on full chart abstraction, pointing toward additional efficiency because abstractors can review limited, salient portions of the chart. CONCLUSION: Semi-automated, machine learning-aided SSI abstraction greatly accelerates the abstraction process and achieves very good performance. This approach could be translated to other postoperative outcomes and reduce cost barriers for wider ACS-NSQIP adoption.
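The semi-automated workflow described above can be illustrated with a minimal Python sketch: an L1-penalized (lasso) logistic regression is trained, out-of-fold probabilities from 10-fold cross-validation drive the choice of a cutoff, and only cases above the cutoff are routed to manual review. The data, feature set, and false-negative tolerance below are synthetic assumptions, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical feature matrix X (structured + text-derived EHR features) and
# labels y (1 = SSI on manual review); stands in for the 2011-2013 cohort.
rng = np.random.default_rng(0)
X = rng.normal(size=(6188, 50))
y = rng.binomial(1, 0.05, size=6188)

# L1 (lasso) penalized logistic regression; C controls the penalty strength.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)

# Out-of-fold probabilities from 10-fold cross-validation (baseline performance).
oof_probs = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]

# Pick the largest cutoff that keeps false negatives below ~1% of true SSIs;
# cases below the cutoff are labeled "negative", the rest "possible" and sent
# for manual review.
def fn_rate(cutoff):
    return ((oof_probs < cutoff) & (y == 1)).sum() / max(y.sum(), 1)

candidates = np.quantile(oof_probs, np.linspace(0.01, 0.99, 99))
cutoff = max([c for c in candidates if fn_rate(c) <= 0.01], default=candidates[0])

model.fit(X, y)
possible = model.predict_proba(X)[:, 1] >= cutoff
print(f"cutoff = {cutoff:.3f}; flagged for manual review: {possible.mean():.1%}")
```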


Subject(s)
Machine Learning , Surgical Wound Infection , Algorithms , Electronic Health Records , Humans , Quality Improvement , Surgical Wound Infection/diagnosis
2.
Crit Care Med ; 50(5): 799-809, 2022 05 01.
Article in English | MEDLINE | ID: mdl-34974496

ABSTRACT

OBJECTIVES: Sepsis remains a leading and preventable cause of hospital utilization and mortality in the United States. Despite updated guidelines, the optimal definition of sepsis as well as the optimal timing of bundled treatment remain uncertain. Identifying patients with infection who benefit from early treatment is a necessary step for tailored interventions. In this study, we aimed to characterize clinical predictors of time-to-antibiotics among patients with severe bacterial infection and to model the effect of delay on risk-adjusted outcomes across different sepsis definitions. DESIGN: A multicenter retrospective observational study. SETTING: A seven-hospital network including an academic tertiary care center. PATIENTS: Eighteen thousand three hundred fifteen patients admitted with severe bacterial illness, with or without sepsis as defined by either acute organ dysfunction (AOD) or systemic inflammatory response syndrome (SIRS) positivity. MEASUREMENTS AND MAIN RESULTS: The primary exposure was time to antibiotics. We identified patient predictors of time-to-antibiotics, including demographics, chronic diagnoses, vital signs, and laboratory results, and determined the impact of delay on a composite of in-hospital death or a length of stay of more than 10 days. The distribution of time-to-antibiotics was similar across patients with and without sepsis. For all patients, a J-curve relationship between time-to-antibiotics and outcomes was observed, primarily driven by length of stay among patients without AOD. Patient characteristics provided good to excellent prediction of time-to-antibiotics irrespective of the presence of sepsis. Reduced time-to-antibiotics was associated with improved outcomes for all time points beyond 2.5 hours from presentation, across sepsis definitions. CONCLUSIONS: Antibiotic timing is a function of patient factors regardless of sepsis criteria. Similarly, we show that early administration of antibiotics is associated with improved outcomes in all patients with severe bacterial illness. Our findings suggest that identifying infection is a rate-limiting and actionable step that can improve outcomes in septic and nonseptic patients.
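One way to model a J-shaped relationship between antibiotic delay and a composite outcome, as described above, is a logistic regression with a flexible spline term for time-to-antibiotics. The sketch below is a hedged illustration on synthetic data; the variable names, covariates, and spline choice are assumptions, not the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the admission cohort; all column names are hypothetical.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "tta_hours": rng.gamma(shape=2.0, scale=2.0, size=n),   # time to antibiotics
    "age": rng.normal(65, 15, n),
    "lactate": rng.gamma(2.0, 1.0, n),
})
# Composite outcome: in-hospital death or length of stay > 10 days (simulated
# with a J/U-shaped dependence on time-to-antibiotics).
logit_true = -3 + 0.02 * (df["tta_hours"] - 3) ** 2 + 0.3 * df["lactate"]
df["bad_outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

# A B-spline term for time-to-antibiotics lets the fitted risk curve bend
# (J-curve) instead of forcing a linear effect; covariates adjust for risk.
fit = smf.logit("bad_outcome ~ bs(tta_hours, df=4) + age + lactate", data=df).fit(disp=0)
print(fit.summary().tables[1])
```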


Subject(s)
Bacterial Infections , Sepsis , Shock, Septic , Anti-Bacterial Agents/therapeutic use , Bacterial Infections/drug therapy , Hospital Mortality , Hospitalization , Humans , Retrospective Studies , United States
3.
Ann Surg ; 272(1): 32-39, 2020 07.
Article in English | MEDLINE | ID: mdl-32224733

ABSTRACT

OBJECTIVE: This study sought to compare trends in the development of cirrhosis between patients with nonalcoholic fatty liver disease (NAFLD) who underwent bariatric surgery and a well-matched group of nonsurgical controls. SUMMARY OF BACKGROUND DATA: Patients with NAFLD who undergo bariatric surgery generally have improvements in liver histology. However, the long-term effect of bariatric surgery on clinically relevant liver outcomes has not been investigated. METHODS: From a large insurance database, patients with a new NAFLD diagnosis and at least 2 years of continuous enrollment before and after diagnosis were identified. Patients with traditional contraindications to bariatric surgery were excluded. Patients who underwent bariatric surgery were identified and matched 1:2 with patients who did not undergo bariatric surgery based on age, sex, and comorbid conditions. Kaplan-Meier analysis and Cox proportional hazards modeling were used to evaluate differences in progression from NAFLD to cirrhosis. RESULTS: A total of 2942 NAFLD patients who underwent bariatric surgery were identified and matched with 5884 NAFLD patients who did not undergo surgery. Cox proportional hazards modeling found that bariatric surgery was independently associated with a decreased risk of developing cirrhosis (hazard ratio 0.31, 95% confidence interval 0.19-0.52). Male gender was associated with an increased risk of cirrhosis (hazard ratio 2.07, 95% confidence interval 1.31-3.27). CONCLUSIONS: Patients with NAFLD who undergo bariatric surgery are at a decreased risk of progression to cirrhosis compared with well-matched controls. Bariatric surgery should be considered as a treatment strategy for otherwise eligible patients with NAFLD. Future bariatric surgery guidelines should include NAFLD as a comorbid indication when determining eligibility.
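A minimal sketch of the survival analysis described here, using the lifelines library: a Cox proportional hazards model fit to a matched cohort, with the hazard ratio for bariatric surgery read from the exp(coef) column. The data below are simulated and the covariate names are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated matched cohort (1 surgical : 2 nonsurgical); covariate names are hypothetical.
rng = np.random.default_rng(2)
n = 2942 + 5884
df = pd.DataFrame({
    "bariatric_surgery": np.r_[np.ones(2942), np.zeros(5884)].astype(int),
    "male": rng.binomial(1, 0.4, n),
    "age": rng.normal(48, 11, n),
})
# Simulated time to cirrhosis (years) with a protective surgery effect, censored at 10 years.
hazard = 0.01 * np.exp(-1.17 * df["bariatric_surgery"] + 0.73 * df["male"])
time = rng.exponential(1 / hazard)
df["cirrhosis"] = (time < 10).astype(int)
df["time_years"] = np.clip(time, None, 10)

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="cirrhosis")
cph.print_summary()  # the exp(coef) column gives hazard ratios with 95% CIs
```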


Subject(s)
Bariatric Surgery , Liver Cirrhosis/etiology , Liver Cirrhosis/prevention & control , Non-alcoholic Fatty Liver Disease/complications , Obesity, Morbid/surgery , Adolescent , Adult , Aged , Disease Progression , Female , Humans , Male , Middle Aged , Retrospective Studies , Risk
4.
J Gen Intern Med ; 35(5): 1413-1418, 2020 05.
Article in English | MEDLINE | ID: mdl-32157649

ABSTRACT

BACKGROUND: Predicting death in a cohort of clinically diverse, multi-condition hospitalized patients is difficult, which frequently hinders timely serious illness care conversations. Prognostic models that can determine 6-month death risk at the time of hospital admission can improve access to serious illness care conversations. OBJECTIVE: To determine whether the demographic, vital sign, and laboratory data from the first 48 h of a hospitalization can be used to accurately quantify 6-month mortality risk. DESIGN: This is a retrospective study using electronic medical record data linked with the state death registry. PARTICIPANTS: 158,323 hospitalized patients within a 6-hospital network over a 6-year period. MAIN MEASURES: The first set of vital signs, complete blood count, basic and complete metabolic panel, serum lactate, pro-BNP, troponin-I, INR, aPTT, demographic information, and associated ICD codes. The outcome of interest was death within 6 months. KEY RESULTS: Model performance was measured on the validation dataset. A random forest model, the mini serious illness algorithm (min-SIA), used 8 variables from the initial 48 h of hospitalization and predicted death within 6 months with an AUC of 0.92 (0.91-0.93). Red cell distribution width was the most important prognostic variable. The min-SIA was very well calibrated and estimated the probability of death to within 10% of the actual value. Its discriminative ability was significantly better than historical estimates of clinician performance. CONCLUSION: The min-SIA can identify patients at high risk of 6-month mortality at the time of hospital admission and can be used to improve access to timely serious illness care conversations in high-risk patients.
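A hedged sketch of the kind of random forest prognostic model described above: train on admission-window features, report the validation AUC and feature importances, and check calibration in probability bins. The 8 features and all data here are synthetic placeholders, not the min-SIA variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

# Synthetic stand-in for 8 admission variables (e.g., red cell distribution
# width, labs, vitals, age); feature meanings and data are hypothetical.
rng = np.random.default_rng(3)
X = rng.normal(size=(20000, 8))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2))))

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=5, random_state=0)
rf.fit(X_tr, y_tr)

probs = rf.predict_proba(X_va)[:, 1]
print("validation AUC:", round(roc_auc_score(y_va, probs), 3))
print("feature importances:", np.round(rf.feature_importances_, 3))

# Calibration: compare observed vs. predicted risk within probability bins.
obs, pred = calibration_curve(y_va, probs, n_bins=10)
print("max |observed - predicted| across bins:", round(np.max(np.abs(obs - pred)), 3))
```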


Subject(s)
Algorithms , Hospitalization , Cohort Studies , Hospital Mortality , Hospitals , Humans , Retrospective Studies , Risk Assessment
5.
BMC Med Inform Decis Mak ; 20(1): 6, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31914992

ABSTRACT

BACKGROUND: The ubiquity of electronic health records (EHR) offers an opportunity to observe trajectories of laboratory results and vital signs over long periods of time. This study assessed the value of risk factor trajectories available in the electronic health record for predicting incident type 2 diabetes. STUDY DESIGN AND METHODS: Analysis was based on a large 13-year retrospective cohort of 71,545 adult, non-diabetic patients with baseline in 2005 and a median follow-up time of 8 years. The trajectories of fasting plasma glucose, lipids, BMI, and blood pressure were computed over three time frames (2000-2001, 2002-2003, 2004) before baseline. A novel method, Cumulative Exposure (CE), was developed and evaluated using Cox proportional hazards regression to assess the risk of incident type 2 diabetes. We used the Framingham Diabetes Risk Scoring (FDRS) Model as the control. RESULTS: The new model outperformed the FDRS Model (0.802 vs. 0.660; p < 2e-16). Cumulative exposure measured over different periods showed that even short episodes of hyperglycemia increase the risk of developing diabetes. Returning to normoglycemia moderates the risk but does not fully eliminate it. The longer an individual maintains glycemic control after a hyperglycemic episode, the lower the subsequent risk of diabetes. CONCLUSION: Incorporating risk factor trajectories substantially increases the ability of clinical decision support risk models to predict the onset of type 2 diabetes and provides information about how risk changes over time.
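A minimal illustration of a cumulative-exposure feature of the kind described above: the area of fasting glucose above a normoglycemic threshold, integrated over pre-baseline measurements. The exact CE definition used in the study may differ; the threshold, units, and example values are assumptions.

```python
import numpy as np
import pandas as pd

# Illustrative cumulative-exposure feature: area of fasting plasma glucose above
# a normoglycemic threshold, integrated (trapezoid rule) over pre-baseline labs.
THRESHOLD = 100.0  # mg/dL, an assumed cut-point

def cumulative_exposure(times_years, glucose_mg_dl, threshold=THRESHOLD):
    """Area of max(glucose - threshold, 0) over time, in (mg/dL) x years."""
    t = np.asarray(times_years, dtype=float)
    excess = np.clip(np.asarray(glucose_mg_dl, dtype=float) - threshold, 0, None)
    return float(np.sum(np.diff(t) * (excess[1:] + excess[:-1]) / 2))

# One patient's fasting glucose across the 2000-2001, 2002-2003, and 2004 windows.
labs = pd.DataFrame({
    "t": [0.0, 1.0, 2.5, 3.5, 4.5],   # years since first measurement, ordered
    "fpg": [96, 104, 118, 101, 99],   # mg/dL
})
print("cumulative exposure:", round(cumulative_exposure(labs["t"], labs["fpg"]), 1))
```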


Subject(s)
Diabetes Mellitus, Type 2/diagnosis , Diabetes Mellitus, Type 2/prevention & control , Adult , Blood Glucose , Female , Humans , Male , Middle Aged , Prognosis , Proportional Hazards Models , Retrospective Studies , Risk Factors
6.
Endocr Pract ; 25(6): 545-553, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30865535

ABSTRACT

Objective: Early identification and management of prediabetes is critical to prevent progression to diabetes. We aimed to assess whether prediabetes is appropriately recognized and managed among patients with impaired fasting glucose (IFG). Methods: We carried out an observational study of Olmsted County residents evaluated at the Mayo Clinic between 1999 and 2017. We randomly selected 108 subjects meeting biochemical criteria for IFG and 105 normoglycemic subjects. We reviewed their health records at baseline (1999-2004) and during follow-up (2005-2017), collecting demographic and clinical data including vital signs, diagnoses, laboratory results, and medications associated with cardiovascular comorbidities. The main outcome was documentation of any recognition of prediabetes and of management recommendations (lifestyle changes and/or medications). Results: At baseline (1999-2004), 26.85% (29/108) of subjects with IFG were recognized as having prediabetes, and of these 75.86% (22/29) received management recommendations, with 6.9% (2/29) receiving metformin. During follow-up (2005-2017), 26.67% (28/105) of the initial cohort of normoglycemic subjects developed incident IFG; of these, 85.71% (24/28) were recognized as having prediabetes, and 58.33% (14/24) received management recommendations. During the entire study period, 62.50% (85/136) were recognized as having prediabetes, of whom 75.29% (64/85) had documented management recommendations. High body mass index (BMI ≥35) was associated with increased recognition (odds ratio [OR] 3.66; confidence interval [CI] 1.065, 12.500; P = .0395), and normal BMI (<25) with a lack of recognition (OR 0.146; CI 0.189, 0.966; P = .0413). Conclusion: Despite evidence supporting the efficacy of lifestyle changes and medications in managing prediabetes, this condition is not fully recognized in routine clinical practice. Increased awareness of diagnostic criteria and appropriate management are essential to enhance diabetes prevention. Abbreviations: BMI = body mass index; CI = confidence interval; EHR = electronic health records; FBG = fasting blood glucose; IFG = impaired fasting glucose; IGT = impaired glucose tolerance; OR = odds ratio.


Subject(s)
Glucose Intolerance , Prediabetic State , Blood Glucose , Cohort Studies , Fasting , Humans
7.
J Med Syst ; 43(7): 185, 2019 May 17.
Article in English | MEDLINE | ID: mdl-31098679

ABSTRACT

Although machine learning models are increasingly being developed for clinical decision support for patients with type 2 diabetes, the adoption of these models into clinical practice remains limited. Currently, machine learning (ML) models are constructed on local healthcare systems and validated internally, with no expectation that they will validate externally; thus, they are rarely transferable to a different healthcare system. In this work, we aim to demonstrate (1) that even a complex ML model built on a national cohort can be transferred to two local healthcare systems, and (2) that a model constructed on a local healthcare system's cohort is difficult to transfer; we also (3) examine the impact of training cohort size on transferability and (4) discuss criteria for external validity. We built a model using our previously published Multi-Task Learning-based methodology on a national cohort extracted from the OptumLabs® Data Warehouse and transferred the model to two local healthcare systems (the University of Minnesota Medical Center and Mayo Clinic) for external evaluation. The model remained valid when applied to the local patient populations and performed as well as locally constructed models (concordance: 0.73-0.92), demonstrating transferability. The performance of the locally constructed models dropped substantially when each was applied to the other healthcare system (concordance: 0.62-0.90). We believe that our modeling approach, in which a model is learned from a national cohort and externally validated, produces a transferable model, allowing patients at smaller healthcare systems to benefit from precision medicine.


Subject(s)
Decision Support Systems, Clinical , Diabetes Complications/drug therapy , Diabetes Mellitus, Type 2/complications , Machine Learning , Precision Medicine , Adult , Aged , Female , Humans , Male , Middle Aged , Prognosis
8.
Crit Care Med ; 46(4): 500-505, 2018 04.
Article in English | MEDLINE | ID: mdl-29298189

ABSTRACT

OBJECTIVES: To specify when delays in carrying out specific 3-hour bundle Surviving Sepsis Campaign guideline recommendations for severe sepsis or septic shock become harmful and impact mortality. DESIGN: Retrospective cohort study. SETTING: One health system composed of six hospitals and 45 clinics in a Midwest state, from January 1, 2011, to July 31, 2015. PATIENTS: All adult patients hospitalized with a billing diagnosis of severe sepsis or septic shock. INTERVENTIONS: Four 3-hour Surviving Sepsis Campaign guideline recommendations: 1) obtain blood culture before antibiotics, 2) obtain lactate level, 3) administer broad-spectrum antibiotics, and 4) administer 30 mL/kg of crystalloid fluid for hypotension (mean arterial pressure < 65 mm Hg) or lactate > 4 mmol/L. MEASUREMENTS AND MAIN RESULTS: To determine the effect of t minutes of delay in carrying out each intervention, propensity score matching on baseline characteristics compensated for differences in health status. The average treatment effect in the treated (ATT) was computed as the average difference in outcomes between those treated after shorter versus longer delays. To estimate the uncertainty associated with the ATT metric and construct 95% CIs, bootstrap estimation with 1,000 replications was performed. Of 5,072 patients with severe sepsis or septic shock, 1,412 (27.8%) died in hospital. The majority of patients had the four 3-hour bundle recommendations initiated within 3 hours. The statistically significant time after which a delay increased the risk of death was as follows for each recommendation: lactate, 20.0 minutes; blood culture, 50.0 minutes; crystalloids, 100.0 minutes; and antibiotic therapy, 125.0 minutes. CONCLUSIONS: For each guideline recommendation, shorter delays were associated with better outcomes. There was no evidence that 3 hours is safe; even very short delays adversely impact outcomes. These findings demonstrate a new approach to incorporating time t when analyzing the impact of delay on outcomes and provide new evidence for clinical practice and research.
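A hedged sketch of the average-treatment-effect-in-the-treated (ATT) estimation described above: propensity scores from logistic regression, 1:1 nearest-neighbor matching, and a bootstrap for the 95% CI. The data are synthetic, "treated" stands in for "intervention within t minutes", and, for brevity, the propensity model is not refit inside each bootstrap replication.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
n = 5000
X = rng.normal(size=(n, 6))                                    # baseline characteristics
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))          # e.g., delay <= t minutes
death = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 1] - 0.3 * treated - 2))))

# 1) Propensity score: probability of "short delay" given baseline covariates.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2) ATT via 1:1 nearest-neighbor matching of treated to controls on the score.
def att(idx):
    t_idx = idx[treated[idx] == 1]
    c_idx = idx[treated[idx] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
    _, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
    return death[t_idx].mean() - death[c_idx[match[:, 0]]].mean()

idx_all = np.arange(n)
print("ATT:", round(att(idx_all), 4))

# 3) Bootstrap (1,000 replications) for a 95% CI on the ATT.
boots = [att(rng.choice(idx_all, size=n, replace=True)) for _ in range(1000)]
print("95% CI:", np.round(np.percentile(boots, [2.5, 97.5]), 4))
```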


Subject(s)
Hospital Mortality/trends , Patient Care Bundles/statistics & numerical data , Sepsis/mortality , Sepsis/therapy , Time-to-Treatment/statistics & numerical data , Aged , Anti-Bacterial Agents/administration & dosage , Blood Culture , Crystalloid Solutions/administration & dosage , Female , Humans , Lactic Acid/blood , Male , Middle Aged , Practice Guidelines as Topic , Propensity Score , Retrospective Studies , Shock, Septic/mortality , Shock, Septic/therapy , Time Factors , Time-to-Treatment/standards
9.
J Gen Intern Med ; 33(6): 921-928, 2018 06.
Article in English | MEDLINE | ID: mdl-29383551

ABSTRACT

BACKGROUND: Predicting death in a cohort of clinically diverse, multicondition hospitalized patients is difficult. Prognostic models that use electronic medical record (EMR) data to determine 1-year death risk can improve end-of-life planning and risk adjustment for research. OBJECTIVE: To determine whether the final set of demographic, vital sign, and laboratory data from a hospitalization can be used to accurately quantify 1-year mortality risk. DESIGN: A retrospective study using electronic medical record data linked with the state death registry. PARTICIPANTS: A total of 59,848 hospitalized patients within a six-hospital network over a 4-year period. MAIN MEASURES: The last set of vital signs, complete blood count, basic and complete metabolic panel, demographic information, and ICD codes. The outcome of interest was death within 1 year. KEY RESULTS: Model performance was measured on the validation dataset. Random forest (RF) models outperformed logistic regression (LR) models in discriminative ability. An RF model that used the last set of demographic, vital sign, and laboratory data from the final 48 h of hospitalization had an AUC of 0.86 (0.85-0.87) for predicting death within a year. Age, blood urea nitrogen, platelet count, hemoglobin, and creatinine were the most important variables in the RF model. Models that used comorbidity variables alone had the lowest AUC. In groups of patients with a high probability of death, RF models underestimated the probability by less than 10%. CONCLUSION: The last set of EMR data from a hospitalization can be used to accurately estimate the risk of 1-year mortality within a cohort of multicondition hospitalized patients.
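A short sketch comparing random forest and logistic regression discrimination on a held-out validation split, mirroring the RF-versus-LR comparison above; the synthetic data include a nonlinear term so the forest has an edge. All feature meanings and values are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the "last set of vitals/labs" snapshot.
rng = np.random.default_rng(5)
X = rng.normal(size=(20000, 20))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X[:, 0] + X[:, 1] ** 2 / 4 - 3))))

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)
for name, clf in [("random forest", RandomForestClassifier(n_estimators=300, random_state=0)),
                  ("logistic regression", LogisticRegression(max_iter=2000))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1])
    print(f"{name}: validation AUC = {auc:.3f}")
```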


Subject(s)
Electronic Health Records/standards , Hospitalization , Machine Learning/standards , Models, Theoretical , Mortality , Proof of Concept Study , Adult , Aged , Aged, 80 and over , Cohort Studies , Data Analysis , Electronic Health Records/trends , Female , Forecasting , Hospitalization/trends , Humans , Machine Learning/trends , Male , Middle Aged , Mortality/trends , Reproducibility of Results , Retrospective Studies , Risk Factors
10.
J Gen Intern Med ; 33(9): 1447-1453, 2018 09.
Article in English | MEDLINE | ID: mdl-29845466

ABSTRACT

BACKGROUND: Studying diagnostic error at the population level requires an understanding of how diagnoses change over time. OBJECTIVE: To use inter-hospital transfers to examine the frequency of changes in diagnosis and their impact on patient risk, and to assess whether health information exchange (HIE) can improve patient safety by enhancing diagnostic accuracy. DESIGN: Diagnosis coding before and after hospital transfer was merged with responses from the American Hospital Association Annual Survey for a cohort of patients transferred between hospitals to identify predictors of mortality. PARTICIPANTS: 180,337 patients 18 years or older transferred between 473 acute care hospitals in NY, FL, IA, UT, and VT from 2011 to 2013. MAIN MEASURES: We identified discordant Elixhauser comorbidities before and after transfer to determine their frequency and developed a weighted score of diagnostic discordance to predict mortality. This score was included in a multivariate model with inpatient mortality as the dependent variable. We investigated whether adoption of HIE functionality, as reported by hospitals, reduced diagnostic discordance and inpatient mortality. KEY RESULTS: Discordance in diagnoses occurred in 85.5% of all patients. Seventy-three percent of patients gained a new diagnosis following transfer, while 47% of patients lost a diagnosis. Diagnostic discordance was associated with increased adjusted inpatient mortality (OR 1.11, 95% CI 1.10-1.11, p < 0.001) and allowed for improved mortality prediction. Bilateral hospital HIE participation was associated with a reduced diagnostic discordance index (3.69% vs. 1.87%, p < 0.001) and decreased inpatient mortality (OR 0.88, 95% CI 0.89-0.99, p < 0.001). CONCLUSIONS: Diagnostic discordance commonly occurred during inter-hospital transfers and was associated with increased inpatient mortality. Health information exchange adoption was associated with decreased discordance and improved patient outcomes.
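A toy illustration of a weighted diagnostic discordance score of the kind described above: comorbidity flags before and after transfer are compared, and changed flags are summed with per-comorbidity weights. The comorbidity subset and weights below are made up for illustration and are not the study's.

```python
import pandas as pd

# Hypothetical Elixhauser comorbidity flags before/after transfer for two patients;
# per-comorbidity weights would normally come from a mortality model (made up here).
comorbidities = ["chf", "renal_failure", "liver_disease", "metastatic_cancer"]
weights = pd.Series({"chf": 1.2, "renal_failure": 0.9,
                     "liver_disease": 1.5, "metastatic_cancer": 2.3})

pre = pd.DataFrame([[1, 0, 0, 0], [0, 1, 1, 0]], columns=comorbidities)
post = pd.DataFrame([[1, 1, 0, 0], [0, 1, 0, 0]], columns=comorbidities)

# A diagnosis is discordant if its flag changes across the transfer (gained or lost).
discordant = (pre != post).astype(int)
score = discordant.mul(weights, axis=1).sum(axis=1)  # weighted discordance index
print(pd.DataFrame({"n_discordant": discordant.sum(axis=1), "weighted_score": score}))
```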


Subject(s)
Diagnosis , Diagnostic Errors/prevention & control , Health Information Exchange/standards , Patient Transfer , Risk Management , Adult , Female , Hospital Mortality , Humans , Inpatients , International Classification of Diseases , Male , Patient Transfer/methods , Patient Transfer/standards , Patient Transfer/statistics & numerical data , Prognosis , Quality Improvement , Risk Management/methods , Risk Management/organization & administration , United States
11.
Nurs Res ; 67(4): 331-340, 2018.
Article in English | MEDLINE | ID: mdl-29877986

ABSTRACT

BACKGROUND: Liver transplants account for a high number of procedures with major investments from all stakeholders involved; however, few studies address the pretransplant heterogeneity of the liver transplant population that is predictive of posttransplant survival. OBJECTIVE: The aim of the study was to identify novel and meaningful patient clusters predictive of mortality that explain the heterogeneity of the liver transplant population, taking a holistic approach. METHODS: A retrospective cohort study of 344 adult patients who underwent liver transplantation between 2008 and 2014. Predictors were summarized severity scores for comorbidities and other suboptimal health states grouped into 11 body systems, the primary reason for transplantation, demographic/environmental factors, and the Model for End-Stage Liver Disease score. Logistic regression was used to compute the severity scores, hierarchical clustering with a weighted Euclidean distance for clustering, Lasso-penalized regression for characterizing the clusters, and Kaplan-Meier analysis to compare survival across the clusters. RESULTS: Cluster 1 included patients with more severe circulatory problems. Cluster 2 represented older patients with more severe primary disease, whereas Cluster 3 contained the healthiest patients. Clusters 4 and 5 represented patients with musculoskeletal (e.g., pain) and endocrine (e.g., malnutrition) problems, respectively. Mortality differed significantly between clusters (p < .001). CONCLUSIONS: This study developed a novel methodology to address heterogeneous, high-dimensional liver transplant population characteristics predictive of survival in a single study. A holistic approach to data modeling, including additional psychosocial risk factors, has the potential to address nursing challenges in liver transplant care and research.
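A minimal sketch of hierarchical clustering with a weighted Euclidean distance, as named in the methods above: scaling each body-system severity score by the square root of its weight makes ordinary Euclidean distance equivalent to the weighted metric. The weights, cluster count, and data below are illustrative assumptions, not the study's.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical per-patient severity scores for 11 body systems (synthetic data).
rng = np.random.default_rng(6)
X = rng.gamma(shape=2.0, scale=1.0, size=(344, 11))

# Illustrative weights. Multiplying each column by sqrt(w) makes plain Euclidean
# distance equal to the weighted Euclidean distance with weights w.
w = np.array([2.0, 1.0, 1.0, 1.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
dists = pdist(X * np.sqrt(w), metric="euclidean")

Z = linkage(dists, method="ward")                  # agglomerative clustering
clusters = fcluster(Z, t=5, criterion="maxclust")  # cut the tree into 5 clusters
print("patients per cluster:", np.bincount(clusters)[1:])
```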


Subject(s)
Cluster Analysis , Liver Transplantation/mortality , Adult , Aged , Cohort Studies , Comorbidity/trends , Female , Humans , Injury Severity Score , Kaplan-Meier Estimate , Logistic Models , Male , Middle Aged , Midwestern United States , Multivariate Analysis , Proportional Hazards Models , Registries/statistics & numerical data , Retrospective Studies , Risk Factors , Survival Analysis
12.
Infection ; 45(3): 291-298, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27866368

ABSTRACT

BACKGROUND: Physicians frequently rely on the systemic inflammatory response syndrome (SIRS) criteria to detect bloodstream infections (BSIs). We evaluated the diagnostic performance of procalcitonin (PCT) in detecting BSI in patients with and without SIRS. METHODS: We tested the association between BSI, serum PCT levels, contemporaneous SIRS scores, and serum lactate using logistic regression in a dataset of 4279 patients. The diagnostic performance of these variables was assessed. RESULTS: In multivariate regression analysis, only log(PCT) was independently associated with BSI (p < 0.05). The mean area under the curve (AUC) of PCT for detecting BSI (0.683; 95% CI 0.65-0.71) was significantly higher than that of serum lactate (0.615; 95% CI 0.58-0.64) and the SIRS score (0.562; 95% CI 0.53-0.58). The AUC of PCT did not differ significantly by SIRS status. A PCT of less than 0.1 ng/mL had a negative predictive value (NPV) of 97.4% and 96.2% for BSI in SIRS-negative and SIRS-positive patients, respectively. A PCT of greater than 10 ng/mL had a likelihood ratio (LR) of 6.22 for BSI in SIRS-negative patients. The probability of BSI increased exponentially with rising PCT levels regardless of SIRS status. CONCLUSION: The performance of PCT for the diagnosis of BSI was not affected by SIRS status. Only PCT was independently associated with BSI, whereas the SIRS criteria and serum lactate were not. A low PCT value may be used to identify patients at low risk of BSI in both settings. An elevated PCT value, even in a SIRS-negative patient, should prompt a careful search for BSI.
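A hedged sketch of the marker comparison described above: AUCs for log(PCT), lactate, and the SIRS score against a BSI label, plus the negative predictive value of a PCT < 0.1 ng/mL rule-out threshold. The distributions below are synthetic and will not reproduce the reported values.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for the three candidate markers and the BSI label.
rng = np.random.default_rng(7)
n = 4279
bsi = rng.binomial(1, 0.1, n)
pct = rng.lognormal(mean=-2 + 1.2 * bsi, sigma=1.5)        # ng/mL
lactate = rng.lognormal(mean=0.3 + 0.3 * bsi, sigma=0.5)   # mmol/L
sirs_score = rng.integers(0, 5, n) + bsi                    # criteria met

for name, marker in [("log(PCT)", np.log(pct)), ("lactate", lactate),
                     ("SIRS score", sirs_score)]:
    print(f"{name}: AUC = {roc_auc_score(bsi, marker):.3f}")

# Negative predictive value of a PCT < 0.1 ng/mL rule-out threshold.
low_pct = pct < 0.1
npv = (bsi[low_pct] == 0).mean()
print(f"NPV of PCT < 0.1 ng/mL: {npv:.1%}")
```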


Subject(s)
Bacteremia/diagnosis , Calcitonin/blood , Lactic Acid/blood , Systemic Inflammatory Response Syndrome/etiology , Aged , Aged, 80 and over , Area Under Curve , Bacteremia/microbiology , Female , Humans , Male , Middle Aged , Minnesota , Retrospective Studies
13.
J Biomed Inform ; 68: 112-120, 2017 04.
Article in English | MEDLINE | ID: mdl-28323112

ABSTRACT

Proper handling of missing data is important for many secondary uses of electronic health record (EHR) data. Data imputation methods can be used to handle missing data, but their use in analyzing EHR data is limited, and their specific efficacy for postoperative complication detection is unclear. Several data imputation methods were used to develop models for automated detection of three types of surgical site infection (SSI) (superficial, deep, and organ space) and overall SSI, using American College of Surgeons National Surgical Quality Improvement Program (NSQIP) Registry 30-day SSI occurrence data as the reference standard. Models with missing data imputation almost always outperformed reference models without imputation that included only cases with complete data, achieving very good average area under the curve values for detection of overall SSI. Missing data imputation appears to be an effective means of improving postoperative SSI detection using EHR clinical data.
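A minimal sketch of the comparison described above: models trained after imputation on all rows versus a complete-case reference model, scored by cross-validated AUC. The imputers shown (mean and iterative) are generic scikit-learn choices and not necessarily the methods evaluated in the study; all data are synthetic.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import SimpleImputer, IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic EHR-like feature matrix with ~20% of values missing at random;
# labels stand in for registry 30-day SSI occurrence.
rng = np.random.default_rng(8)
X = rng.normal(size=(3000, 15))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + X[:, 1] - 2))))
X[rng.random(X.shape) < 0.2] = np.nan

# Reference model: complete cases only (rows with no missing values).
complete = ~np.isnan(X).any(axis=1)
ref_auc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[complete], y[complete], cv=5, scoring="roc_auc").mean()

# Imputation models use all rows; the imputer is fit inside each CV fold.
for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("iterative", IterativeImputer(random_state=0))]:
    pipe = make_pipeline(imputer, LogisticRegression(max_iter=1000))
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name} imputation AUC: {auc:.3f}")
print(f"complete-case AUC: {ref_auc:.3f}")
```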


Subject(s)
Data Collection/standards , Electronic Health Records/standards , Quality Improvement , Surgical Wound Infection , Automation , Humans , Registries
14.
Prog Transplant ; 27(1): 98-106, 2017 03.
Article in English | MEDLINE | ID: mdl-27888279

ABSTRACT

OBJECTIVE: Liver transplantation is a costly and risky procedure, with 25,050 procedures performed worldwide in 2013 and 6,729 performed in the United States in 2014. Given the scarcity of organs and uncertainty regarding prognosis, few studies address the variety of pretransplant risk factors that might contribute to predicting patient survival and, therefore, to developing models that take a holistic view of transplant patients. This critical review aimed to identify predictors of liver transplant patient survival included in large-scale studies and to assess gaps in risk factors from a holistic perspective using the Wellbeing Model and the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement. DATA SOURCE: Searches of the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Medline, and PubMed from the 1980s to July 2014. STUDY SELECTION: Original longitudinal large-scale studies of 500 or more subjects, published in English, Spanish, or Portuguese, that described predictors of patient survival after deceased donor liver transplantation. DATA EXTRACTION: Predictors were extracted from 26 studies that met the inclusion criteria. DATA SYNTHESIS: Each article was reviewed and predictors were categorized using a holistic framework, the Wellbeing Model (health, community, environment, relationship, purpose, and security dimensions). CONCLUSIONS: The majority (69.7%) of the predictors represented the Wellbeing Model health dimension. No predictors represented the purpose or relationship dimensions, nor emotional, mental, or spiritual health. This review showed that predictors of liver transplant survival have been studied rigorously; however, the reported significant results were inconsistent across studies, and further research is needed to examine liver transplantation from a whole-person perspective.


Subject(s)
Liver Transplantation/mortality , Survival Rate , Graft Survival , Humans , Risk Factors , United States
15.
J Med Syst ; 41(10): 161, 2017 Sep 02.
Article in English | MEDLINE | ID: mdl-28866768

ABSTRACT

Commonly used drugs in the hospital setting can cause QT prolongation and trigger life-threatening arrhythmias. We evaluated changes in prescribing behavior after the implementation of a clinical decision support system to prevent the use of QT-prolonging medications in the hospital setting. We conducted a quasi-experimental study before and after the implementation of a clinical decision support system integrated into the electronic medical record (the QT-alert system). This system detects patients at risk of significant QT prolongation (QTc > 500 ms) and alerts providers ordering QT-prolonging drugs. We reviewed the electronic health record to assess providers' responses, which were classified as "action taken" (QT drug avoided, QT drug changed, other QT drug(s) avoided, ECG monitoring, electrolyte monitoring, QT issue acknowledged, other actions) or "no action taken". Approximately 15.5% (95/612) of the alerts were followed by a provider action in the pre-intervention phase, compared with 21% (228/1085) in the post-intervention phase (p=0.006). The most common actions taken in the pre-intervention versus post-intervention phase were ECG monitoring (8% vs. 13%, p=0.002) and QT issue acknowledgment (2.1% vs. 4.1%, p=0.03). Notably, there was no significant difference for the other actions, including QT drug avoided (p=0.8), QT drug changed (p=0.06), and other QT drug(s) avoided (p=0.3). Our study demonstrated that the QT-alert system prompted a higher proportion of providers to take action on patients at risk of complications. However, the overall impact was modest, underscoring the need to educate providers and optimize clinical decision support to further reduce drug-induced QT prolongation.
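The pre- versus post-intervention action rates reported above can be checked with a two-proportion test; the sketch below reproduces p ≈ 0.006 from the reported counts, although the authors' exact test may have differed (e.g., a chi-square test).

```python
from statsmodels.stats.proportion import proportions_ztest

# Provider action counts from the abstract: 95/612 alerts pre-intervention
# vs. 228/1085 post-intervention.
actions = [95, 228]
alerts = [612, 1085]
stat, pvalue = proportions_ztest(count=actions, nobs=alerts)
print(f"pre: {actions[0]/alerts[0]:.1%}, post: {actions[1]/alerts[1]:.1%}, p = {pvalue:.3f}")
```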


Subject(s)
Decision Support Systems, Clinical , Arrhythmias, Cardiac , Electrocardiography , Humans , Long QT Syndrome , Torsades de Pointes
16.
J Gen Intern Med ; 31(5): 502-8, 2016 May.
Article in English | MEDLINE | ID: mdl-26850412

ABSTRACT

BACKGROUND: The association between statin use and the risk of diabetes and increased mortality within the same population has been a source of controversy, and such concerns may underestimate the value of statins for patients at risk. OBJECTIVE: We aimed to assess whether statin use increases the risk of developing diabetes or affects overall mortality among normoglycemic patients and patients with impaired fasting glucose (IFG). DESIGN AND PARTICIPANTS: Observational cohort study of 13,508 normoglycemic patients (n = 4460; 33% taking statins) and 4563 IFG patients (n = 1865; 41% taking statins) among residents of Olmsted County, Minnesota, with clinical data in the Mayo Clinic electronic medical record and at least one outpatient fasting glucose test between 1999 and 2004. Demographics, vital signs, tobacco use, laboratory results, medications, and comorbidities were obtained by electronic search for the period 1999-2004. Results were analyzed with Cox proportional hazards models, and the risks of incident diabetes and mortality were analyzed with survival curves using the Kaplan-Meier method. MAIN MEASURES: The main endpoints were a new clinical diagnosis of diabetes mellitus and total mortality. KEY RESULTS: After a mean of 6 years of follow-up, statin use was associated with an increased risk of incident diabetes in both the normoglycemic (HR 1.19; 95% CI, 1.05 to 1.35; p = 0.007) and IFG groups (HR 1.24; 95% CI, 1.11 to 1.38; p = 0.0001). At the same time, overall mortality decreased with statin use in both normoglycemic (HR 0.70; 95% CI, 0.66 to 0.80; p < 0.0001) and IFG patients (HR 0.77; 95% CI, 0.64 to 0.91; p = 0.0029). CONCLUSION: In general, recommendations for statin use should not be affected by concerns over an increased risk of developing diabetes, since the benefit of reduced mortality clearly outweighs this small (19-24%) increase in relative risk.
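A hedged sketch of the time-to-event comparison described above: Kaplan-Meier curves for incident diabetes by statin use and a log-rank test, using the lifelines library on simulated follow-up data. The event rates and hazard ratio built into the simulation are assumptions, not the study's data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic follow-up data; group sizes and effect size are illustrative.
rng = np.random.default_rng(9)
n = 13508
statin = rng.binomial(1, 0.33, n)
# Simulated time to incident diabetes (years); statin users get a slightly higher hazard.
time = rng.exponential(scale=1 / (0.02 * np.exp(0.17 * statin)))
event = (time < 6).astype(int)        # observed within ~6 years of follow-up
time = np.clip(time, None, 6)

kmf = KaplanMeierFitter()
for label, mask in [("statin", statin == 1), ("no statin", statin == 0)]:
    kmf.fit(time[mask], event[mask], label=label)
    print(label, "6-year diabetes-free probability:",
          round(float(kmf.survival_function_.iloc[-1, 0]), 3))

res = logrank_test(time[statin == 1], time[statin == 0],
                   event_observed_A=event[statin == 1],
                   event_observed_B=event[statin == 0])
print("log-rank p-value:", round(res.p_value, 4))
```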


Subject(s)
Blood Glucose/metabolism , Diabetes Mellitus, Type 2/chemically induced , Hydroxymethylglutaryl-CoA Reductase Inhibitors/adverse effects , Adolescent , Adult , Aged , Databases, Factual , Diabetes Mellitus, Type 2/blood , Diabetes Mellitus, Type 2/epidemiology , Drug Utilization/statistics & numerical data , Fasting/blood , Female , Follow-Up Studies , Humans , Hydroxymethylglutaryl-CoA Reductase Inhibitors/administration & dosage , Incidence , Kaplan-Meier Estimate , Male , Middle Aged , Minnesota/epidemiology , Mortality , Risk Assessment/methods , Young Adult
17.
Nurs Res ; 64(4): 235-45, 2015.
Article in English | MEDLINE | ID: mdl-26126059

ABSTRACT

BACKGROUND: Mobility is critical for self-management. Understanding factors associated with improvement in mobility during home healthcare can help nurses tailor interventions to improve mobility outcomes and keep patients safely at home. OBJECTIVES: The aims were to (a) identify patient and support system factors associated with mobility improvement during home care, (b) evaluate the consistency of factors across groups defined by mobility status at the start of home care, and (c) identify patterns of factors associated with improvement and no improvement in mobility within each group. METHODS: Outcome and Assessment Information Set (OASIS) data were used, extracted from a national convenience sample of 270,634 patient records collected from October 1, 2008, to December 31, 2009, from 581 Medicare-certified home healthcare agencies. Patients were placed into groups based on mobility scores at admission. Odds ratios were used to index associations of factors with improvement at discharge. Discriminative pattern mining was used to discover patterns associated with improvement in mobility. RESULTS: Overall, mobility improved for 49.4% of patients; improvement occurred most frequently (80%) among patients who were able, at admission, to walk only with the supervision or assistance of another person at all times. Numerous factors associated with improvement in the mobility outcome were similar across groups (except for patients who were chairfast but able to wheel themselves independently); however, the number, strength, and direction of associations varied. In most groups, the data mining-discovered patterns of factors associated with the mobility outcome comprised combinations of functional and cognitive status and the type and amount of help required at home. DISCUSSION: This study provides new, data mining-based information about how factors associated with improvement in mobility group together and vary by mobility at admission. These approaches have the potential to provide new insights for clinicians to tailor interventions for improvement of mobility.


Subject(s)
Data Mining , Home Care Services , Mobility Limitation , Outcome Assessment, Health Care/statistics & numerical data , Walking/physiology , Activities of Daily Living , Adult , Age Factors , Aged , Aged, 80 and over , Cluster Analysis , Databases, Factual , Female , Humans , Male , Medicare , Middle Aged , Recovery of Function/physiology , Retrospective Studies , Risk Factors , United States , Young Adult
19.
Stud Health Technol Inform ; 315: 279-283, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39049268

ABSTRACT

We developed a method of using Clinically Aligned Pain Assessment (CAPA) measures to reconstruct the Numeric Rating Scale (NRS). We used an observational retrospective cohort study design with prospective validation, using de-identified adult patient data derived from a major health system. Data from 2011-2017 were used for development and data from 2018-2020 for validation. All included patients had at least one NRS and CAPA measurement recorded at the same time. An ordinal regression model was built with the CAPA components to predict NRS scores. We identified 6,414 and 3,543 simultaneous NRS-CAPA pairs in the development and validation datasets, respectively. All CAPA components were significantly related to NRS, with an RMSE of 1.938 and Somers' D of 0.803 on the development dataset, and an RMSE of 2.1 and Somers' D of 0.74 when prospectively validated. Our model accurately reconstructed the NRS based on CAPA and was exact when the NRS was in the range [0, 7].
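A minimal sketch of an ordinal (proportional-odds) regression mapping CAPA components to NRS, evaluated with RMSE and Somers' D, in the spirit of the model described above. The CAPA component names, their coding, and the data below are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import somersd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic paired data: CAPA components coded as ordered integers and an NRS
# score (0-10); component names and coding are assumptions, not the instrument's.
rng = np.random.default_rng(10)
n = 6414
capa = pd.DataFrame({
    "comfort": rng.integers(0, 4, n),
    "pain_change": rng.integers(0, 3, n),
    "pain_control": rng.integers(0, 3, n),
})
latent = 1.5 * capa["comfort"] + capa["pain_change"] + capa["pain_control"]
nrs = np.clip(np.round(latent + rng.normal(0, 1.5, n)), 0, 10).astype(int)

# Proportional-odds (ordinal logistic) regression of NRS on the CAPA components.
fit = OrderedModel(nrs, capa, distr="logit").fit(method="bfgs", disp=False)

# Predicted NRS = category with the highest predicted probability.
probs = np.asarray(fit.predict(capa))
categories = np.sort(np.unique(nrs))
pred = categories[probs.argmax(axis=1)]

rmse = float(np.sqrt(np.mean((pred - nrs) ** 2)))
print("RMSE:", round(rmse, 3), "| Somers' D:", round(somersd(nrs, pred).statistic, 3))
```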


Subject(s)
Electronic Health Records , Pain Measurement , Humans , Prospective Studies , Male , Female , Retrospective Studies , Adult , Middle Aged , Reproducibility of Results , Pain/diagnosis
20.
Artif Intell Med ; 154: 102899, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38843692

ABSTRACT

Predictive modeling is becoming an essential tool for clinical decision support, but health systems with smaller sample sizes may construct suboptimal or overly specific models. Models become over-specific when, besides true physiological effects, they also incorporate potentially volatile site-specific artifacts. These artifacts can change suddenly and can render the model unsafe. To obtain safer models, health systems with inadequate sample sizes may adopt one of the following options. First, they can use a generic model, such as one purchased from a vendor, but such a model is often not sufficiently specific to the patient population and is thus suboptimal. Second, they can participate in a research network; paradoxically, though, sites with smaller datasets contribute correspondingly less to the joint model, again rendering the final model suboptimal for them. Lastly, they can use transfer learning, starting from a model trained on a large dataset and updating this model to the local population; this strategy can also result in a model that is over-specific. In this paper we present the consensus modeling paradigm, which uses the help of a large (source) site to reach a consensus model at the small (target) site. We evaluate the approach on predicting postoperative complications at two health systems with 9,044 and 38,045 patients (rare outcomes with roughly a 1% positive rate), and conduct a simulation study to understand the performance of consensus modeling relative to the other three approaches as a function of the available training sample size at the target site. We found that consensus modeling exhibited the least over-specificity at either the source or target site and achieved the highest combined predictive performance.
