Results 1 - 2 of 2
1.
J Med Internet Res; 24(6): e34295, 2022 Jun 7.
Article in English | MEDLINE | ID: mdl-35502887

ABSTRACT

BACKGROUND: Machine learning algorithms are currently used in a wide array of clinical domains to produce models that can predict clinical risk events. Most models are developed and evaluated with retrospective data, very few are evaluated in a live clinical workflow, and even fewer report performance across different hospitals. In this study, we provide detailed evaluations of clinical risk prediction models in live clinical workflows for three different use cases in three different hospitals.

OBJECTIVE: The main objective of this study was to evaluate clinical risk prediction models in live clinical workflows and compare their performance in this setting with their performance on retrospective data. We also aimed to generalize the results by applying our investigation to three different use cases in three different hospitals.

METHODS: We trained clinical risk prediction models for three use cases (ie, delirium, sepsis, and acute kidney injury) in three different hospitals with retrospective data. We used machine learning and, specifically, deep learning to train models based on the Transformer architecture. The models were trained with a calibration tool common to all hospitals and use cases: they shared a common design but were calibrated with each hospital's own data. The models were deployed in the three hospitals and used in daily clinical practice. The predictions made by these models were logged and correlated with the diagnosis at discharge. We compared their performance with evaluations on retrospective data and conducted cross-hospital evaluations.

RESULTS: The performance of the prediction models on data from live clinical workflows was similar to their performance on retrospective data: the average area under the receiver operating characteristic curve (AUROC) decreased slightly, by 0.6 percentage points (from 94.8% to 94.2% at discharge). The cross-hospital evaluations exhibited severely reduced performance: the average AUROC decreased by 8 percentage points (from 94.2% to 86.3% at discharge), which underlines the importance of calibrating a model with data from its deployment hospital.

CONCLUSIONS: Calibrating the prediction model with data from each deployment hospital led to good performance in live settings. The performance degradation in the cross-hospital evaluation exposed the limitations of developing one generic model for different hospitals. Designing a generic model-development process that generates a specialized prediction model for each hospital ensures model performance across different hospitals.
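The abstract names the Transformer as the shared model architecture but gives no implementation detail. The following is a minimal PyTorch sketch of a Transformer-based risk classifier over coded clinical event sequences; the input representation (padded integer event codes per encounter), the dimensions, and all names are illustrative assumptions, not the authors' design.

    import torch
    import torch.nn as nn

    class RiskTransformer(nn.Module):
        """Illustrative Transformer encoder producing one risk probability
        per encounter. Hyperparameters are placeholders, not from the paper."""
        def __init__(self, vocab_size, d_model=64, nhead=4, num_layers=2, max_len=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
            self.pos = nn.Embedding(max_len, d_model)  # learned positions
            layer = nn.TransformerEncoderLayer(d_model, nhead,
                                               dim_feedforward=128, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers)
            self.head = nn.Linear(d_model, 1)          # one risk logit

        def forward(self, codes):
            # codes: (batch, seq_len) integer event codes, 0 = padding
            positions = torch.arange(codes.size(1), device=codes.device)
            x = self.embed(codes) + self.pos(positions)
            pad = codes.eq(0)                          # mask padded positions
            h = self.encoder(x, src_key_padding_mask=pad)
            h = h.masked_fill(pad.unsqueeze(-1), 0.0)  # mean-pool real tokens only
            pooled = h.sum(dim=1) / (~pad).sum(dim=1, keepdim=True).clamp(min=1)
            return torch.sigmoid(self.head(pooled)).squeeze(-1)  # risk in [0, 1]

    model = RiskTransformer(vocab_size=1000)
    codes = torch.randint(1, 1000, (8, 32))            # 8 synthetic encounters
    risk = model(codes)                                # shape (8,), one score each

Per-hospital calibration in the paper's sense could then amount to fitting (or fine-tuning) one such model on each hospital's own retrospective data while keeping the design fixed.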


Subject(s)
Electronic Health Records, Machine Learning, Hospitals, Humans, ROC Curve, Retrospective Studies
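The reported comparison reduces to computing AUROC on logged live predictions against diagnosis-at-discharge labels, once for the on-site calibrated model and once for a model calibrated at another hospital. A minimal sketch with scikit-learn; the label and score arrays here are synthetic placeholders standing in for the logged predictions, not the study's data.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=500)  # 1 = condition coded at discharge
    # Placeholder scores: the on-site model separates classes better by construction
    own = np.clip(0.6 * labels + rng.normal(0.3, 0.20, 500), 0, 1)
    cross = np.clip(0.3 * labels + rng.normal(0.4, 0.25, 500), 0, 1)

    auroc_own = roc_auc_score(labels, own)      # model calibrated on-site
    auroc_cross = roc_auc_score(labels, cross)  # model from another hospital
    print(f"own-hospital AUROC:   {auroc_own:.3f}")
    print(f"cross-hospital AUROC: {auroc_cross:.3f}")
    print(f"difference: {(auroc_own - auroc_cross) * 100:.1f} percentage points")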
2.
J Clin Anesth; 75: 110473, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34333447

ABSTRACT

Delirium is a highly relevant complication of surgical interventions. Current research indicates that despite increased awareness of delirium, it is often overlooked. We implemented an AI-based tool to monitor for delirium in cardiac surgery patients at our specialist clinic. This appears to be a promising approach to improving the detection of delirium, especially for underrecognized forms and on peripheral wards without intensive screening. We present a case in which the AI tool identified delirium that was subsequently confirmed by our routine screening and specialist evaluation.


Subject(s)
Cardiac Surgical Procedures, Delirium, Artificial Intelligence, Cardiac Surgical Procedures/adverse effects, Delirium/diagnosis, Delirium/etiology, Hospitals, Humans, Mass Screening, Risk Factors