Results 1 - 2 of 2
1.
BMC Med Inform Decis Mak; 23(1): 207, 2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37814311

ABSTRACT

BACKGROUND: There are many Machine Learning (ML) models that predict acute kidney injury (AKI) in hospitalised patients. While a primary goal of these models is to support clinical decision-making, the adoption of inconsistent methods of estimating baseline serum creatinine (sCr) may result in a poor understanding of these models' effectiveness in clinical practice. Until now, the performance of such models with different baselines has not been compared on a single dataset. Additionally, AKI prediction models are known to have a high rate of false positive (FP) events regardless of the baseline method. This warrants further exploration of FP events to provide insight into their potential underlying causes. OBJECTIVE: The first aim of this study was to assess the variance in performance of ML models using three methods of estimating baseline sCr on a retrospective dataset. The second aim was to conduct an error analysis to gain insight into the underlying factors contributing to FP events. MATERIALS AND METHODS: Intensive Care Unit (ICU) patients from the Medical Information Mart for Intensive Care (MIMIC)-IV dataset were studied, and the KDIGO (Kidney Disease: Improving Global Outcomes) definition was used to identify AKI episodes. Three methods of estimating baseline sCr were defined: (1) the minimum sCr, (2) the Modification of Diet in Renal Disease (MDRD) equation combined with the minimum sCr, and (3) the MDRD equation combined with the mean of preadmission sCr. For the first aim, a suite of ML models was developed for each baseline and the performance of the models was assessed. An analysis of variance was performed to assess whether the eXtreme Gradient Boosting (XGB) models differed significantly across the baselines. To address the second aim, Explainable AI (XAI) methods were used to analyse the errors of the XGB model built with Baseline 3.
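The three baseline sCr methods described above can be sketched in Python. This is an illustrative reconstruction, not the authors' code: the MDRD back-calculation assumes a "normal" eGFR of 75 mL/min/1.73 m² (a common convention when no true baseline sCr is available), and the exact combination rules used in the paper may differ.

```python
def mdrd_baseline_scr(age, female, black=False, assumed_egfr=75.0):
    """Back-calculate a baseline sCr (mg/dL) from the MDRD equation,
    assuming a normal eGFR (75 mL/min/1.73 m^2 is a common choice)."""
    factor = 175.0 * age ** -0.203
    if female:
        factor *= 0.742
    if black:
        factor *= 1.212
    # MDRD: eGFR = factor * sCr^-1.154  =>  sCr = (factor / eGFR)^(1/1.154)
    return (factor / assumed_egfr) ** (1.0 / 1.154)

def baseline_1(inpatient_scr):
    """Baseline 1: minimum observed sCr."""
    return min(inpatient_scr)

def baseline_2(inpatient_scr, age, female):
    """Baseline 2: the smaller of the minimum observed sCr and the
    MDRD back-calculated value."""
    return min(min(inpatient_scr), mdrd_baseline_scr(age, female))

def baseline_3(preadmission_scr, age, female):
    """Baseline 3: mean of preadmission sCr when available, otherwise
    the MDRD back-calculated value (assumed combination rule)."""
    if preadmission_scr:
        return sum(preadmission_scr) / len(preadmission_scr)
    return mdrd_baseline_scr(age, female)
```

Under KDIGO, an AKI episode is then flagged when sCr rises by at least 0.3 mg/dL within 48 hours or to at least 1.5 times the chosen baseline within 7 days, which is why the baseline method directly shifts the labels the ML models learn from.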
RESULTS: Regarding the first aim, we observed variance in the discriminative metrics and calibration errors of the ML models when different baseline methods were adopted. Using Baseline 1 resulted in a 14% reduction in the F1 score relative to both Baseline 2 and Baseline 3; no significant difference was observed between Baseline 2 and Baseline 3. For the second aim, the FP cohort was analysed using XAI methods, which led to relabelling the data with the mean of sCr in the 180 to 0 days pre-ICU as the preferred baseline method. The XGB model trained on the relabelled data achieved an AUC of 0.85, recall of 0.63, precision of 0.54 and F1 score of 0.58. The cohort comprised 31,586 admissions, of which 5,473 (17.32%) had AKI. CONCLUSION: In the absence of a widely accepted method of estimating baseline sCr, AKI prediction studies need to consider the impact of different baseline methods on the effectiveness of ML models and their potential implications in real-world implementations. XAI methods can be effective in providing insight into the occurrence of prediction errors, which can potentially augment the success rate of ML implementations in routine care.
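As a quick consistency check, the reported F1 score of 0.58 follows from the reported precision and recall, since F1 is their harmonic mean:

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.54, 0.63), 2))  # 0.58
```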


Subject(s)
Acute Kidney Injury; Models, Statistical; Humans; Creatinine; Retrospective Studies; Prognosis
2.
Int J Med Inform; 162: 104758, 2022 Apr 02.
Article in English | MEDLINE | ID: mdl-35398812

ABSTRACT

BACKGROUND: Machine learning (ML) is a subset of Artificial Intelligence (AI) that is used to predict, and potentially prevent, adverse patient outcomes. There is increasing interest in applying these models in digital hospitals to improve clinical decision-making and chronic disease management, particularly for patients with diabetes. The potential of ML models using electronic medical records (EMR) to improve the clinical care of hospitalised patients with diabetes is currently unknown. OBJECTIVE: The aim was to systematically identify and critically review the published literature examining the development and validation of ML models that use EMR data to improve the care of hospitalised adult patients with diabetes. METHODS: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Four databases were searched (Embase, PubMed, IEEE and Web of Science) for studies published between January 2010 and January 2022, and the reference lists of eligible articles were manually searched. Articles that examined adults and both developed and validated ML models using EMR data were included; studies conducted in primary care and community care settings were excluded. Studies were independently screened and data were extracted using Covidence® systematic review software. For data extraction and critical appraisal, the Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies (CHARMS) was followed. Risk of bias was assessed using the Prediction model Risk Of Bias Assessment Tool (PROBAST). Quality of reporting was assessed by adherence to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guideline. The IJMEDI checklist was followed to assess the quality of the ML models and the reproducibility of their outcomes. The external validation methodology of the studies was also appraised.
RESULTS: Of the 1,317 studies screened, twelve met the inclusion criteria. Eight studies developed ML models to predict dysglycaemic episodes in hospitalised patients with diabetes, one study developed an ML model to predict total insulin dosage, two studies predicted the risk of readmission, and one study improved the prediction of hospital readmission for inpatients with diabetes. The included studies were heterogeneous with regard to ML types, cohorts, input predictors, sample sizes, performance and validation metrics, and clinical outcomes. Two studies adhered to the TRIPOD guideline. The methodological reporting of all the studies was evaluated as being at high risk of bias, and the quality of the ML models in all studies was assessed as poor. Robust external validation was not performed in any of the studies, and no models were implemented or evaluated in routine clinical care. CONCLUSIONS: This review identified a limited number of ML models developed to improve the inpatient management of diabetes. No ML models were implemented in real hospital settings. Future research needs to enhance the development, reporting and validation steps to enable the integration of ML models into routine clinical care.
