Interpreting a recurrent neural network's predictions of ICU mortality risk.
J Biomed Inform 2021 Feb; 114: 103672.
Article in English | MEDLINE | ID: mdl-33422663
Deep learning has demonstrated success in many applications; however, its use in healthcare has been limited by a lack of transparency into how models generate predictions. Algorithms such as Recurrent Neural Networks (RNNs), when applied to Electronic Medical Records (EMR), introduce additional barriers to transparency because of the sequential processing of the RNN and the multi-modal nature of EMR data. This work seeks to improve transparency by: 1) introducing Learned Binary Masks (LBM) as a method for identifying which EMR variables contributed to an RNN model's risk of mortality (ROM) predictions for critically ill children; and 2) applying KernelSHAP for the same purpose. Given an individual patient, LBM and KernelSHAP each generate an attribution matrix that shows the contribution of each input feature to the RNN's sequence of predictions for that patient. Attribution matrices can be aggregated in many ways to facilitate different levels of analysis of the RNN model and its predictions. Three methods of aggregation and analysis are presented: 1) over volatile time periods within individual patient predictions, 2) over populations of ICU patients sharing specific diagnoses, and 3) across the general population of critically ill children.
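The attribution matrix described above has one row per timestep and one column per EMR feature. The sketch below illustrates that shape with a simple occlusion-style perturbation on a toy stand-in model; it is not the paper's LBM or KernelSHAP implementation, and the model, weights, and aggregation window are all hypothetical, chosen only to show how per-patient matrices can be aggregated over time windows as the abstract describes.

```python
import numpy as np

# Hypothetical stand-in for an RNN risk-of-mortality model: at each
# timestep, a logistic function of the feature vector yields a score
# in (0, 1). Illustration only; not the model from the article.
rng = np.random.default_rng(0)
T, F = 6, 4                      # timesteps x EMR features
weights = rng.normal(size=F)

def risk_sequence(x):
    """Map a (T, F) patient record to a (T,) sequence of risk scores."""
    return 1.0 / (1.0 + np.exp(-x @ weights))

def attribution_matrix(x):
    """Occlusion-style attribution: zero out one feature at one
    timestep and record the change in that timestep's prediction.
    Returns a (T, F) matrix, the same shape a per-patient attribution
    matrix from LBM or KernelSHAP would have."""
    base = risk_sequence(x)
    attr = np.zeros((T, F))
    for t in range(T):
        for f in range(F):
            x_masked = x.copy()
            x_masked[t, f] = 0.0
            attr[t, f] = base[t] - risk_sequence(x_masked)[t]
    return attr

patient = rng.normal(size=(T, F))
A = attribution_matrix(patient)

# One aggregation in the spirit of the abstract: total absolute
# contribution of each feature over a chosen time window.
window_importance = np.abs(A[2:5]).sum(axis=0)
print(A.shape, window_importance.shape)
```

Population-level aggregations, as in analyses 2) and 3) of the abstract, would average or otherwise combine such matrices across many patients.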
Keywords:
Full text: 1
Collections: 01-international
Database: MEDLINE
Main subject: Algorithms / Neural Networks, Computer
Study type: Etiology_studies / Prognostic_studies / Risk_factors_studies
Limit: Child / Humans
Language: En
Publication year: 2021
Document type: Article