Deep learning for liver tumor diagnosis part II: convolutional neural network interpretation using radiologic imaging features.
Wang, Clinton J; Hamm, Charlie A; Savic, Lynn J; Ferrante, Marc; Schobert, Isabel; Schlachter, Todd; Lin, MingDe; Weinreb, Jeffrey C; Duncan, James S; Chapiro, Julius; Letzen, Brian.
Affiliation
  • Wang CJ; Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.
  • Hamm CA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.
  • Savic LJ; Institute of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität, and Berlin Institute of Health, 10117, Berlin, Germany.
  • Ferrante M; Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.
  • Schobert I; Institute of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität, and Berlin Institute of Health, 10117, Berlin, Germany.
  • Schlachter T; Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.
  • Lin M; Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.
  • Weinreb JC; Institute of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität, and Berlin Institute of Health, 10117, Berlin, Germany.
  • Duncan JS; Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.
  • Chapiro J; Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.
  • Letzen B; Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.
Eur Radiol; 29(7): 3348-3357, 2019 Jul.
Article in En | MEDLINE | ID: mdl-31093705
OBJECTIVES: To develop a proof-of-concept "interpretable" deep learning prototype that justifies aspects of its predictions from a pre-trained hepatic lesion classifier.

METHODS: A convolutional neural network (CNN) was engineered and trained to classify six hepatic tumor entities using 494 lesions on multi-phasic MRI, as described in Part I. A subset of each lesion class was labeled with up to four key imaging features per lesion. A post hoc algorithm inferred the presence of these features in a test set of 60 lesions by analyzing activation patterns of the pre-trained CNN model. Feature maps were generated to highlight the regions in the original image that correspond to particular features. Additionally, relevance scores were assigned to each identified feature, denoting that feature's relative contribution to the predicted lesion classification.

RESULTS: The interpretable deep learning system achieved 76.5% positive predictive value and 82.9% sensitivity in identifying the correct radiological features present in each test lesion. The model misclassified 12% of lesions. Correct features were identified less often in misclassified lesions than in correctly classified lesions (60.4% vs. 85.6%). Feature maps were consistent with the original image voxels contributing to each imaging feature. Feature relevance scores tended to reflect the most prominent imaging criteria for each class.

CONCLUSIONS: This interpretable deep learning system demonstrates proof of principle for illuminating portions of a pre-trained deep neural network's decision-making by analyzing inner layers and automatically describing the features contributing to its predictions.

KEY POINTS:
  • An interpretable deep learning system prototype can explain aspects of its decision-making by identifying relevant imaging features and showing where these features are found on an image, facilitating clinical translation.
  • By providing feedback on the importance of various radiological features in performing differential diagnosis, interpretable deep learning systems have the potential to interface with standardized reporting systems such as LI-RADS, validating ancillary features and improving clinical practicality.
  • An interpretable deep learning system could potentially add quantitative data to radiologic reports and serve radiologists with evidence-based decision support.
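The abstract gives no implementation details, but the post hoc interpretation it describes (probing a pre-trained CNN's activations for labeled radiological features, generating feature maps, and assigning relevance scores) can be illustrated with a minimal, hypothetical Python/PyTorch sketch. This is not the authors' code: the backbone accessor (model.backbone), the per-feature linear probes, and the class_weights mapping are all assumed names introduced here for illustration.

    # Hypothetical sketch of post hoc CNN interpretation, not the published method.
    # Assumes: a trained lesion classifier exposing conv activations via
    # `model.backbone`, and one nn.Linear(C, 1) "probe" per labeled imaging
    # feature, trained separately on those activations.
    import torch
    import torch.nn.functional as F

    def infer_features(model, image, probes, threshold=0.5):
        """Infer which imaging features are present by probing the pre-trained
        CNN's internal activations (all names hypothetical)."""
        model.eval()
        with torch.no_grad():
            acts = model.backbone(image)        # conv feature maps: (1, C, H, W)
            pooled = acts.mean(dim=(2, 3))      # global average pool: (1, C)
            present, scores = [], {}
            for name, probe in probes.items():  # one linear probe per feature
                p = torch.sigmoid(probe(pooled)).item()
                scores[name] = p
                if p >= threshold:
                    present.append(name)
        return present, scores

    def feature_map(model, image, probe):
        """CAM-style map localizing one feature: weight each conv channel by
        the probe's weights, sum, rectify, and upsample to image size."""
        with torch.no_grad():
            acts = model.backbone(image)        # (1, C, H, W)
            w = probe.weight.view(1, -1, 1, 1)  # (1, C, 1, 1)
            cam = F.relu((acts * w).sum(dim=1, keepdim=True))
            cam = F.interpolate(cam, size=image.shape[-2:],
                                mode="bilinear", align_corners=False)
            return cam / (cam.max() + 1e-8)     # normalize to [0, 1]

    def relevance_scores(scores, class_weights):
        """Relative contribution of each detected feature to the predicted
        class, approximated as feature score x class weight, normalized."""
        raw = {k: v * class_weights.get(k, 0.0) for k, v in scores.items()}
        total = sum(raw.values()) or 1.0
        return {k: v / total for k, v in raw.items()}

Under these assumptions, infer_features corresponds to identifying which radiological features are present, feature_map to highlighting where each feature appears in the image, and relevance_scores to ranking each feature's contribution to the predicted lesion class.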
Full text: 1 Database: MEDLINE Main subject: Neural Networks, Computer / Carcinoma, Hepatocellular / Deep Learning / Liver Neoplasms Study type: Diagnostic_studies / Observational_studies / Prognostic_studies / Risk_factors_studies Limits: Adult / Aged / Female / Humans / Male / Middle aged Language: En Publication year: 2019 Document type: Article