Interpretable deep learning approach for oral cancer classification using guided attention inference network.
Figueroa, Kevin Chew; Song, Bofan; Sunny, Sumsum; Li, Shaobai; Gurushanth, Keerthi; Mendonca, Pramila; Mukhia, Nirza; Patrick, Sanjana; Gurudath, Shubha; Raghavan, Subhashini; Imchen, Tsusennaro; Leivon, Shirley T; Kolur, Trupti; Shetty, Vivek; Bushan, Vidya; Ramesh, Rohan; Pillai, Vijay; Wilder-Smith, Petra; Sigamani, Alben; Suresh, Amritha; Kuriakose, Moni Abraham; Birur, Praveen; Liang, Rongguang.
Affiliation
  • Figueroa KC; The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States.
  • Song B; The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States.
  • Sunny S; Mazumdar Shaw Medical Centre, Bangalore, Karnataka, India.
  • Li S; The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States.
  • Gurushanth K; KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India.
  • Mendonca P; Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India.
  • Mukhia N; KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India.
  • Patrick S; Biocon Foundation, Bangalore, Karnataka, India.
  • Gurudath S; KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India.
  • Raghavan S; KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India.
  • Imchen T; Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India.
  • Leivon ST; Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India.
  • Kolur T; Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India.
  • Shetty V; Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India.
  • Bushan V; Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India.
  • Ramesh R; Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India.
  • Pillai V; Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India.
  • Wilder-Smith P; University of California, Irvine, Beckman Laser Institute & Medical Clinic, Irvine, California, United States.
  • Sigamani A; Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India.
  • Suresh A; Mazumdar Shaw Medical Centre, Bangalore, Karnataka, India.
  • Kuriakose MA; Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India.
  • Birur P; Mazumdar Shaw Medical Centre, Bangalore, Karnataka, India.
  • Liang R; Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India.
J Biomed Opt; 27(1), 2022 Jan.
Article in English | MEDLINE | ID: mdl-35023333
ABSTRACT

SIGNIFICANCE:

Convolutional neural networks (CNNs) show potential for the automated classification of different cancer lesions. However, their lack of interpretability and explainability makes their predictions difficult to understand. Furthermore, because the network has no incentive to focus solely on the correct subjects to be detected, a CNN's attention may incorrectly concentrate on areas surrounding the salient object rather than on the object to be recognized. This limits the reliability of CNNs, especially in biomedical applications.

AIM:

Develop a deep learning training approach that provides understandability of its predictions and directly guides the network to concentrate its attention on, and accurately delineate, cancerous regions of the image.

APPROACH:

We utilized Selvaraju et al.'s gradient-weighted class activation mapping (Grad-CAM) to inject interpretability and explainability into CNNs. We adopted a two-stage training process with data augmentation techniques and Li et al.'s guided attention inference network (GAIN) to train on images captured using our customized mobile oral screening devices. The GAIN architecture consists of three streams of network training: a classification stream, an attention mining stream, and a bounding box stream. By adopting the GAIN training architecture and treating the attention maps as reliable priors, we jointly optimized the classification and segmentation accuracy of our CNN to develop attention maps with more complete and accurate segmentation.
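The Grad-CAM computation underlying this approach can be sketched as follows. This is a minimal illustration only: the tiny CNN, layer sizes, and function names here are placeholders, not the authors' actual screening network. Grad-CAM weights the last convolutional feature maps by the global-average-pooled gradients of the target class score, then applies a ReLU to keep only regions that positively support the class.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-in model, NOT the authors' network. The key property
# Grad-CAM needs is access to the last convolutional feature maps.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        fmaps = self.features(x)          # (B, 16, H, W) conv feature maps
        pooled = fmaps.mean(dim=(2, 3))   # global average pooling
        return self.classifier(pooled), fmaps

def grad_cam(model, x, target_class):
    logits, fmaps = model(x)
    fmaps.retain_grad()                   # keep gradients of non-leaf tensor
    model.zero_grad()
    logits[0, target_class].backward()
    # Channel importance weights: global-average-pooled gradients.
    weights = fmaps.grad.mean(dim=(2, 3), keepdim=True)
    # Weighted sum over channels, ReLU to keep positive evidence only.
    cam = F.relu((weights * fmaps).sum(dim=1))   # (B, H, W)
    cam = cam / (cam.max() + 1e-8)               # normalize to [0, 1]
    return cam.detach()

model = TinyCNN()
x = torch.rand(1, 3, 32, 32)
cam = grad_cam(model, x, target_class=1)
print(cam.shape)  # torch.Size([1, 32, 32])
```

In the GAIN training scheme, maps like `cam` are not only visualized after training but also fed back as a training signal (attention mining), penalizing the network when masking out the highlighted region fails to suppress the class score.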

RESULTS:

The network's attention map helps us understand what the network is focusing on during its decision-making process. The results also show that the proposed method can guide the trained neural network to highlight and focus its attention on the correct lesion areas in the images when making a decision, rather than on related but incorrect regions.

CONCLUSIONS:

We demonstrate the effectiveness of our approach for more interpretable and reliable oral potentially malignant lesion and malignant lesion classification.

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Mouth Neoplasms / Deep Learning Study type: Prognostic_studies Limit: Humans Language: En Journal: J Biomed Opt Journal subject: Biomedical Engineering / Ophthalmology Year: 2022 Document type: Article Country of affiliation: United States
