Results 1 - 3 of 3
1.
J Biomed Inform; 146: 104504, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37742782

ABSTRACT

OBJECTIVE: To review and critically appraise published and preprint reports of prognostic models of in-hospital mortality of patients in the intensive-care unit (ICU) based on neural representations (embeddings) of clinical notes.

METHODS: PubMed and arXiv were searched up to August 1, 2022. At least two reviewers independently selected the studies that developed a prognostic model of in-hospital mortality of intensive-care patients using free text represented as embeddings, and extracted data using the CHARMS checklist. Risk of bias was assessed with PROBAST, and reporting was assessed against the TRIPOD guideline. To characterize the machine learning components used in the models, we present a new descriptive framework based on the different techniques used to represent text and to generate predictions from text. The study protocol was registered in the PROSPERO database (CRD42022354602).

RESULTS: Eighteen of 2,825 studies were included. All studies used the publicly available MIMIC dataset. Context-independent word embeddings were widely used. All studies reported model discrimination (AUROC 0.75-0.96), but measures of calibration were scarce. Seven studies used both structured clinical variables and notes, and model discrimination improved when notes were added to the variables. None of the models was externally validated, and internal validation was often limited to a simple train/test split. Our critical appraisal demonstrated a high risk of bias in all studies and raised concerns about their applicability in clinical practice.

CONCLUSION: All studies used a neural architecture for prediction and were based on a single publicly available dataset. Clinical notes were reported to improve predictive performance when used in addition to clinical variables alone. Most studies had methodological, reporting, and applicability issues. We recommend reporting both model discrimination and calibration, using additional data sources, and adopting more robust evaluation strategies, including prospective and external validation. Finally, sharing data and code is encouraged to improve study reproducibility.
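The review's central recommendation is to report both discrimination and calibration. The following minimal sketch shows what that reporting looks like in practice, assuming scikit-learn; the embeddings, labels, and logistic-regression head are hypothetical stand-ins, not the pipeline of any reviewed study.

```python
# Sketch: report discrimination (AUROC) and calibration (Brier score,
# reliability curve) for a mortality model built on note embeddings.
# Random data stands in for real embeddings and mortality labels.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))   # stand-in for 128-dim note embeddings
y = rng.integers(0, 2, size=1000)  # stand-in in-hospital mortality labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# Discrimination: how well the model ranks non-survivors above survivors.
print(f"AUROC: {roc_auc_score(y_te, proba):.3f}")
# Calibration: how close predicted probabilities are to observed rates.
print(f"Brier score: {brier_score_loss(y_te, proba):.3f}")
obs_rate, mean_pred = calibration_curve(y_te, proba, n_bins=10)
```

A simple train/test split is used here only for brevity; the review explicitly recommends stronger evaluation, such as cross-validation plus prospective or external validation.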

2.
Comput Biol Med; 163: 107146, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37356293

ABSTRACT

BACKGROUND: Subgroup discovery (SGD) is the automated partitioning of data into complex subgroups. Various SGD methods have been applied in the medical domain, but none have been extensively evaluated. We assessed the numerical and clinical quality of SGD methods.

METHODS: We applied the improved Subgroup Set Discovery (SSD++), Patient Rule Induction Method (PRIM), and APRIORI-Subgroup Discovery (APRIORI-SD) algorithms to obtain patient subgroups from observational data on 14,548 COVID-19 patients admitted to 73 Dutch intensive care units. Hospital mortality was the clinical outcome. Numerical significance of the subgroups was assessed with information-theoretic measures; clinical significance was assessed by comparing variable importance at the population and subgroup levels and by expert evaluation.

RESULTS: The tested algorithms varied widely in the total number of discovered subgroups (5-62), the number of selected variables, and the predictive value of the subgroups. Qualitative assessment showed that the discovered subgroups make clinical sense. SSD++ found the most subgroups (n = 62), which added predictive value and generally showed high potential for clinical use. APRIORI-SD and PRIM found fewer subgroups (n = 5 and n = 6), which did not add predictive value and were clinically less relevant.

CONCLUSION: Automated SGD methods find clinical subgroups that are relevant both quantitatively (they add predictive value) and qualitatively (intensivists consider the subgroups significant). Different methods yield different subgroups with varying degrees of predictive performance and clinical quality. External validation is needed to generalize the results to other populations, and future research should explore which algorithm performs best in other settings.


Subject(s)
COVID-19, Humans, Hospitalization, Intensive Care Units, Hospital Mortality, Algorithms
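The core idea these algorithms share can be illustrated in a few lines. The toy sketch below scores every single-condition subgroup by weighted relative accuracy (WRAcc), a standard subgroup-quality measure; it is not SSD++, PRIM, or APRIORI-SD themselves, and the features and outcome are hypothetical.

```python
# Toy depth-1 subgroup search: score each candidate subgroup by
# WRAcc = coverage * (subgroup outcome rate - population outcome rate).
# Higher WRAcc means a larger subgroup with a more unusual mortality rate.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age_over_70": rng.integers(0, 2, 500).astype(bool),  # hypothetical feature
    "mech_vent":   rng.integers(0, 2, 500).astype(bool),  # hypothetical feature
    "mortality":   rng.integers(0, 2, 500).astype(bool),  # hypothetical outcome
})

def wracc(subgroup_mask: pd.Series, outcome: pd.Series) -> float:
    """Weighted relative accuracy of the subgroup defined by the mask."""
    coverage = subgroup_mask.mean()
    if coverage == 0:
        return 0.0
    return coverage * (outcome[subgroup_mask].mean() - outcome.mean())

# Exhaustively score all single-condition subgroups.
for col in ["age_over_70", "mech_vent"]:
    for val in (True, False):
        mask = df[col] == val
        print(f"{col} == {val}: WRAcc = {wracc(mask, df['mortality']):+.4f}")
```

Real SGD algorithms differ mainly in how they search the (much larger) space of multi-condition subgroups and how they penalize redundant or overlapping ones, which is why the three methods in the study returned such different subgroup sets.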
3.
Int J Med Inform; 160: 104688, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35114522

ABSTRACT

BACKGROUND: Building machine learning (ML) models in healthcare may suffer from time-consuming and potentially biased manual pre-selection of predictors, which can result in a limited or trivial choice of suitable models. We aimed to compare the predictive performance of automated ML model building (AutoML) against expert-based predictor pre-selection followed by logistic regression, for in-hospital mortality prediction in COVID-19 patients triaged at ICU admission.

METHODS: We conducted an observational study of all COVID-19 patients admitted to Dutch ICUs between February and July 2020. We included 2,690 COVID-19 patients from 70 ICUs participating in the Dutch National Intensive Care Evaluation (NICE) registry. The main outcome measure was in-hospital mortality. We assessed model performance of AutoML (at admission and after 24 h, respectively) against the more traditional approach of predictor pre-selection and logistic regression.

FINDINGS: The AutoML models with variables available at admission showed fair discrimination (average AUROC = 0.75-0.76 (sdev = 0.03), PPV = 0.70-0.76 (sdev = 0.1) at a cut-off of 0.3, the observed mortality rate) and good calibration. This performance is on par with a logistic regression model with patient variables selected by three experts (average AUROC = 0.78 (sdev = 0.03) and PPV = 0.79 (sdev = 0.2)). Extending the models with variables available at 24 h after admission yielded higher predictive performance (average AUROC = 0.77-0.79 (sdev = 0.03) and PPV = 0.79-0.80 (sdev = 0.10-0.17)).

CONCLUSIONS: AutoML delivers prediction models with fair discriminatory performance and good calibration and accuracy, as good as regression models with expert-based predictor pre-selection. Given the restricted availability of data in an ICU quality registry, extending the models with variables available at 24 h after admission showed a small but significant performance increase.


Subject(s)
COVID-19, Triage, Hospital Mortality, Humans, Intensive Care Units, Netherlands/epidemiology, Prognosis, Retrospective Studies, SARS-CoV-2
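The comparison in this study has a simple skeleton: a logistic regression restricted to expert-picked predictors versus an automated search over models and all predictors. The sketch below reproduces that skeleton with scikit-learn, using a small GridSearchCV as a stand-in for a full AutoML tool; the data, column names, and expert picks are illustrative assumptions, and only the 0.3 cut-off comes from the paper.

```python
# Sketch: expert-selected predictors + logistic regression versus an
# automated model search, both scored by AUROC and PPV at cut-off 0.3.
# Random data stands in for the NICE registry; GridSearchCV stands in
# for a real AutoML system.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(2)
X = pd.DataFrame(rng.normal(size=(2690, 20)),
                 columns=[f"var_{i}" for i in range(20)])  # hypothetical predictors
y = rng.integers(0, 2, 2690)                               # hypothetical mortality
expert_vars = ["var_0", "var_1", "var_2"]                  # hypothetical expert picks

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def report(name, proba):
    auroc = roc_auc_score(y_te, proba)
    ppv = precision_score(y_te, proba >= 0.3)  # PPV at the 0.3 cut-off
    print(f"{name}: AUROC={auroc:.2f}, PPV={ppv:.2f}")

# Expert-based baseline: logistic regression on pre-selected predictors.
lr = LogisticRegression(max_iter=1000).fit(X_tr[expert_vars], y_tr)
report("expert LR", lr.predict_proba(X_te[expert_vars])[:, 1])

# Automated alternative: model search over all available predictors.
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      {"n_estimators": [50, 100], "max_depth": [2, 3]},
                      scoring="roc_auc", cv=3).fit(X_tr, y_tr)
report("auto search", search.predict_proba(X_te)[:, 1])
```

A production AutoML system (e.g., auto-sklearn or TPOT) would search a far larger space of preprocessing steps, model families, and hyperparameters, but the evaluation logic, the same held-out metrics applied to both arms, is the part the study's conclusion rests on.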