An attribution graph-based interpretable method for CNNs.
Zheng, Xiangwei; Zhang, Lifeng; Xu, Chunyan; Chen, Xuanchi; Cui, Zhen.
Affiliation
  • Zheng X; School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China; State Key Laboratory of High-end Server & Storage Technology, Jinan, 250300, Shandong, China. Electronic address: xwzhengcn@163.com.
  • Zhang L; School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China; State Key Laboratory of High-end Server & Storage Technology, Jinan, 250300, Shandong, China. Electronic address: lifengzhangsdnu@163.com.
  • Xu C; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, Jiangsu, China; State Key Laboratory of High-end Server & Storage Technology, Jinan, 250300, Shandong, China. Electronic address: cyx@njust.edu.cn.
  • Chen X; School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China; State Key Laboratory of High-end Server & Storage Technology, Jinan, 250300, Shandong, China. Electronic address: 1282834189@qq.com.
  • Cui Z; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, 210094, Jiangsu, China; State Key Laboratory of High-end Server & Storage Technology, Jinan, 250300, Shandong, China. Electronic address: zhen.cui@njust.edu.cn.
Neural Netw ; 179: 106597, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39128275
ABSTRACT
Convolutional Neural Networks (CNNs) have demonstrated outstanding performance in various domains, such as face recognition, object detection, and image segmentation. However, the lack of transparency and limited interpretability inherent in CNNs pose challenges in fields such as medical diagnosis, autonomous driving, finance, and military applications. Several studies have explored the interpretability of CNNs and proposed various post-hoc interpretable methods. The majority of these methods are feature-based, focusing on the influence of input variables on outputs; few analyze the parameters of CNNs and their overall structure. To explore the structure of CNNs and intuitively comprehend the role of their internal parameters, we propose an Attribution Graph-based Interpretable method for CNNs (AGIC), which models the overall structure of CNNs as graphs and provides interpretability from global and local perspectives. The runtime parameters of CNNs and the feature maps of each image sample are used to construct attribution graphs (At-GCs), in which the convolutional kernels are represented as nodes and the SHAP values between kernel outputs are assigned as edge weights. These At-GCs are then employed to pretrain a newly designed heterogeneous graph encoder based on Deep Graph Infomax (DGI). To comprehensively examine the overall structure of CNNs, the pretrained encoder is used for two types of interpretable tasks: (1) a classifier is attached to the pretrained encoder for the classification of At-GCs, revealing the dependency of an At-GC's topological characteristics on the image sample categories, and (2) a scoring aggregation (SA) network is constructed to assess the importance of each node in the At-GCs, thus reflecting the relative importance of kernels in CNNs.
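The At-GC construction described above can be sketched as follows. This is a toy illustration, not the authors' implementation: the layer sizes are arbitrary and the edge weights are random stand-ins for the SHAP values that a real attribution run over the network would produce.

```python
import numpy as np

# Sketch of an attribution graph (At-GC): one node per convolutional
# kernel, directed edges between kernels of consecutive layers weighted
# by attribution scores. Stand-in values replace real SHAP attributions.
rng = np.random.default_rng(0)

kernels_per_layer = [4, 6]          # two toy conv layers
n_nodes = sum(kernels_per_layer)    # one graph node per kernel

# Adjacency matrix: entry (i, j) holds the attribution of kernel i's
# output to kernel j in the next layer; all other entries stay zero.
adj = np.zeros((n_nodes, n_nodes))
offset = 0
for n_src, n_dst in zip(kernels_per_layer, kernels_per_layer[1:]):
    attributions = rng.normal(size=(n_src, n_dst))   # stand-in SHAP values
    adj[offset:offset + n_src,
        offset + n_src:offset + n_src + n_dst] = attributions
    offset += n_src

print(adj.shape)
```

One such graph would be built per image sample, so that the collection of At-GCs can serve as the pretraining corpus for the graph encoder.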
The experimental results indicate that the topological characteristics of At-GCs depend on the sample category used in their construction, revealing that kernels in CNNs exhibit distinct combined activation patterns when processing different image categories. Meanwhile, kernels that receive high scores from the SA network are crucial for feature extraction, whereas low-scoring kernels can be pruned without affecting model performance, thereby enhancing the interpretability of CNNs.
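The pruning consequence of the scoring result can be sketched as below. This is an illustrative example, not the paper's SA network: the filter shapes, scores, and keep ratio are hypothetical, and pruning is shown simply as zeroing out the filters of low-scoring kernels.

```python
import numpy as np

# Sketch: given per-kernel importance scores (stand-ins for SA-network
# outputs), keep the top-scoring fraction of kernels and prune the rest.
rng = np.random.default_rng(1)

n_kernels = 8
weights = rng.normal(size=(n_kernels, 3, 3, 3))  # toy conv filters
scores = rng.random(n_kernels)                   # stand-in importance scores

keep_ratio = 0.5
threshold = np.quantile(scores, 1 - keep_ratio)
mask = scores >= threshold                       # True for kernels to keep

# Zero out the filters of pruned (low-scoring) kernels.
pruned = weights * mask[:, None, None, None]
print(int(mask.sum()), "kernels kept")
```

In practice one would fine-tune the pruned network and verify that accuracy is preserved, which is the behavior the experiments above report for low-scoring kernels.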
Subject(s)
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Neural Networks, Computer Limit: Humans Language: English Journal: Neural Netw Journal subject: NEUROLOGY Year: 2024 Document type: Article
