Explaining a Deep Learning Based Breast Ultrasound Image Classifier with Saliency Maps.
Byra, Michal; Dobruch-Sobczak, Katarzyna; Piotrzkowska-Wroblewska, Hanna; Klimonda, Ziemowit; Litniewski, Jerzy.
Affiliation
  • Byra M; Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland.
  • Dobruch-Sobczak K; Radiology Department II, Maria Sklodowska-Curie National Research Institute of Oncology, Warsaw, Poland.
  • Piotrzkowska-Wroblewska H; Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland.
  • Klimonda Z; Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland.
  • Litniewski J; Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland.
J Ultrason ; 22(89): 70-75, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35811586
ABSTRACT
Aim of the study:

Deep neural networks have achieved good performance in breast mass classification in ultrasound imaging. However, their use in clinical practice is still limited by the lack of explainability of the networks' decisions. In this study, to address the explainability problem, we generated saliency maps indicating the ultrasound image regions important for the network's classification decisions.

Material and methods:

Ultrasound images were collected from 272 breast masses, including 123 malignant and 149 benign. Transfer learning was applied to develop a deep network for breast mass classification. Next, the class activation mapping technique was used to generate a saliency map for each image. Breast mass images were divided into three regions: the breast mass region, the peritumoral region surrounding the breast mass, and the region below the breast mass. The pointing game metric was used to quantitatively assess the overlap between the saliency maps and the three selected US image regions.
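The pointing game metric described above counts a "hit" whenever the peak of a saliency map falls inside a chosen image region, and reports the hit rate over a set of images. The sketch below is a minimal illustration of that idea, assuming saliency maps and region masks are supplied as 2D arrays; the function names are hypothetical, not from the paper's code.

```python
import numpy as np

def pointing_game_hit(saliency, region_mask):
    """Return True if the saliency map's peak falls inside the region mask.

    saliency: 2D array of saliency values (e.g. a class activation map).
    region_mask: 2D boolean array marking a region of interest
                 (e.g. the mass, peritumoral, or below-mass region).
    """
    peak = np.unravel_index(np.argmax(saliency), saliency.shape)
    return bool(region_mask[peak])

def pointing_game_score(saliencies, masks):
    """Hit rate: fraction of images whose saliency peak lands in the region."""
    hits = sum(pointing_game_hit(s, m) for s, m in zip(saliencies, masks))
    return hits / len(saliencies)
```

In practice the saliency maps would be upsampled class activation maps aligned with the US image, and the masks would come from the manual delineation of the three regions.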

Results:

The deep learning classifier achieved an area under the receiver operating characteristic curve of 0.887, with an accuracy of 0.835, sensitivity of 0.801, and specificity of 0.868. For the correctly classified test US images, analysis of the saliency maps revealed that the network's decisions could be associated with the three selected regions in 71% of cases.
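The four reported metrics can all be derived from the classifier's predicted probabilities and the ground-truth labels. The sketch below shows one standard way to compute them, assuming binary labels (1 = malignant), untied scores, and a 0.5 decision threshold; it is an illustration, not the paper's evaluation code.

```python
import numpy as np

def binary_metrics(y_true, scores, threshold=0.5):
    """Accuracy, sensitivity, specificity, and AUC for a binary classifier.

    y_true: 0/1 labels (1 = malignant); scores: predicted probabilities.
    AUC uses the rank-sum (Mann-Whitney) formulation; ties are not handled.
    """
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pred = (scores >= threshold).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    tn = np.sum((pred == 0) & (y_true == 0))
    fp = np.sum((pred == 1) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    ranks = scores.argsort().argsort() + 1  # 1-based ranks of each score
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    auc = (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # true positive rate on malignant masses
        "specificity": tn / (tn + fp),   # true negative rate on benign masses
        "auc": auc,
    }
```

Sensitivity and specificity depend on the chosen threshold, while the AUC summarizes performance across all thresholds, which is why the paper reports both.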

Conclusions:

Our study is an important step toward a better understanding of deep learning models developed for breast mass diagnosis. We demonstrated that the decisions made by the network can be related to the appearance of certain tissue regions in breast mass US images.
Full text: 1 Collection: 01-international Database: MEDLINE Study type: Prognostic_studies Language: English Journal: J Ultrason Year: 2022 Document type: Article Country of affiliation: Poland