Transparency of deep neural networks for medical image analysis: A review of interpretability methods.
Salahuddin, Zohaib; Woodruff, Henry C; Chatterjee, Avishek; Lambin, Philippe.
Affiliation
  • Salahuddin Z; The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands. Electronic address: z.salahuddin@maastrichtuniversity.nl.
  • Woodruff HC; The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands.
  • Chatterjee A; The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands.
  • Lambin P; The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands.
Comput Biol Med ; 140: 105111, 2021 Dec 04.
Article in En | MEDLINE | ID: mdl-34891095
Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Deep neural networks have shown performance equal to or better than that of clinicians in many tasks, owing to the rapid increase in available data and computational power. In order to conform to the principles of trustworthy AI, it is essential that the AI system be transparent, robust, and fair, and that it ensure accountability. Current deep neural solutions are referred to as black boxes due to a lack of understanding of the specifics of their decision-making process. Therefore, there is a need to ensure the interpretability of deep neural networks before they can be incorporated into the routine clinical workflow. In this narrative review, we used systematic keyword searches and domain expertise to identify nine different types of interpretability methods that have been used for understanding deep learning models in medical image analysis applications, categorized by the type of explanation generated and by technical similarity. Furthermore, we report the progress made towards evaluating the explanations produced by various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions concerning the interpretability of deep neural networks for medical image analysis.
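One of the most common post-hoc interpretability methods surveyed in reviews of this kind is the gradient-based saliency map ("vanilla gradients"), which highlights the input pixels to which a model's prediction is most sensitive. The following is a minimal, self-contained sketch of the idea: the one-layer sigmoid "classifier" and its weights are stand-ins for a trained network (real use would backpropagate through a deep model), and all names here are illustrative, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend-trained weights for a one-layer classifier over an 8x8 "image".
# In practice these would come from a trained deep network.
W = rng.normal(size=(8, 8))

def predict(img):
    # Model output: probability that the image belongs to the positive class.
    return sigmoid(np.sum(W * img))

def saliency(img):
    # For sigmoid(sum(W * img)), the gradient of the output with respect
    # to each pixel is p * (1 - p) * W; its absolute value highlights the
    # pixels the prediction is most sensitive to.
    p = predict(img)
    return np.abs(p * (1.0 - p) * W)

img = rng.normal(size=(8, 8))
sal = saliency(img)
print(sal.shape)  # (8, 8)
# Location of the most influential pixel for this prediction:
print(np.unravel_index(np.argmax(sal), sal.shape))
```

For a deep network the same quantity is obtained by automatic differentiation rather than a closed-form gradient; the resulting map is typically overlaid on the medical image as a heatmap for clinician inspection.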
Full text: 1 Collections: 01-international Database: MEDLINE Study type: Guideline / Prognostic_studies Language: En Publication year: 2021 Document type: Article