Explaining deep learning for ECG analysis: Building blocks for auditing and knowledge discovery.
Comput Biol Med 2024 Jun; 176: 108525.
Article in English | MEDLINE | ID: mdl-38749322
ABSTRACT
Deep neural networks have become increasingly popular for analyzing ECG data because of their ability to accurately identify cardiac conditions and hidden clinical factors. However, the lack of transparency due to the black-box nature of these models is a common concern. To address this issue, explainable AI (XAI) methods can be employed. In this study, we present a comprehensive analysis of post-hoc XAI methods, investigating both the glocal (local attributions aggregated over multiple samples) and global (concept-based XAI) perspectives. Using a set of sanity checks, we identify saliency as the most sensible attribution method. We then provide a dataset-wide analysis across entire patient subgroups, going beyond anecdotal evidence to establish the first quantitative evidence for the alignment of model behavior with cardiologists' decision rules. Furthermore, we demonstrate how these XAI techniques can be used for knowledge discovery, such as identifying subtypes of myocardial infarction. We believe the proposed methods can serve as building blocks for a complementary assessment of internal validity during a certification process, as well as for knowledge discovery in the field of ECG analysis.
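The "glocal" perspective described in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes a hypothetical, untrained PyTorch 1D CNN over 12-lead ECGs and random placeholder data, and uses vanilla gradient saliency (the attribution method the abstract singles out), averaged over a subgroup of samples. In practice, samples would need temporal alignment (e.g. around the R peak) before attributions are averaged across time.

```python
# Minimal sketch of glocal XAI: per-sample gradient saliency aggregated
# over a patient subgroup. Model architecture, shapes, and class index
# are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

# Hypothetical 1D CNN for 12-lead ECG classification (12 leads x 1000 samples).
model = nn.Sequential(
    nn.Conv1d(12, 32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 5),  # e.g. 5 diagnostic classes
)
model.eval()

def saliency(x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Vanilla gradient saliency: |d logit_c / d input| per sample."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[:, target_class].sum().backward()
    return x.grad.abs()  # shape: (batch, leads, time)

# Glocal view: aggregate local attributions over a subgroup of samples,
# e.g. all ECGs labeled with a given infarction subtype.
subgroup = torch.randn(64, 12, 1000)       # placeholder for real subgroup ECGs
attr = saliency(subgroup, target_class=2)  # per-sample local attributions
glocal_map = attr.mean(dim=0)              # mean attribution per lead and time
print(glocal_map.shape)                    # torch.Size([12, 1000])
```

Averaging attributions rather than inspecting single examples is what lets such a map be compared quantitatively, lead by lead and segment by segment, against cardiologists' decision rules for a whole patient subgroup.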
Collection: 01-internacional
Database: MEDLINE
Main subject: Electrocardiography / Deep Learning
Limits: Humans
Language: English
Journal: Comput Biol Med
Year: 2024
Document type: Article
Country of publication: United States