R-Cut: Enhancing Explainability in Vision Transformers with Relationship Weighted Out and Cut.
Niu, Yingjie; Ding, Ming; Ge, Maoning; Karlsson, Robin; Zhang, Yuxiao; Carballo, Alexander; Takeda, Kazuya.
Affiliation
  • Niu Y; Graduate School of Informatics, Nagoya University, Nagoya 464-8603, Japan.
  • Ding M; Graduate School of Informatics, Nagoya University, Nagoya 464-8603, Japan.
  • Ge M; Graduate School of Informatics, Nagoya University, Nagoya 464-8603, Japan.
  • Karlsson R; Graduate School of Informatics, Nagoya University, Nagoya 464-8603, Japan.
  • Zhang Y; Graduate School of Informatics, Nagoya University, Nagoya 464-8603, Japan.
  • Carballo A; Graduate School of Informatics, Nagoya University, Nagoya 464-8603, Japan.
  • Takeda K; Graduate School of Engineering, Gifu University, Gifu 501-1112, Japan.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732800
ABSTRACT
Transformer-based models have gained popularity in natural language processing (NLP) and are now widely used in computer vision tasks and in multi-modal models such as GPT-4. This paper presents a novel method for enhancing the explainability of transformer-based image classification models. By providing visualizations of class-specific maps, our method aims to improve trust in classification results and help users gain a deeper understanding of the model for downstream tasks. We introduce two modules: "Relationship Weighted Out" and "Cut". The "Relationship Weighted Out" module extracts class-specific information from intermediate layers, highlighting the relevant features. The "Cut" module then performs fine-grained feature decomposition, taking into account factors such as position, texture, and color. Integrating the two modules yields dense, class-specific visual explainability maps. We validate our method with extensive qualitative and quantitative experiments on the ImageNet dataset. We also conduct extensive experiments on the LRN dataset, which is designed for autonomous-driving danger alerts, to evaluate the explainability of our method in scenes with complex backgrounds. The results show a significant improvement over previous methods. Finally, ablation experiments confirm the contribution of each module, further supporting the effectiveness of the proposed approach.
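To make the two-stage idea concrete, the following is a minimal, self-contained NumPy sketch of the same general recipe: weight intermediate-layer patch tokens by their relationship to a class representation, then partition the resulting patch-affinity graph with a spectral, normalized-cut-style split. All shapes, the similarity-based weighting, and the Fiedler-vector cut are illustrative assumptions on dummy data, not the authors' actual R-Cut implementation.

```python
# Illustrative sketch only: dummy ViT-like tokens, an assumed class-relationship
# weighting, and a standard spectral cut. Not the paper's R-Cut code.
import numpy as np

rng = np.random.default_rng(0)

# Dummy intermediate-layer tokens: one [CLS] token plus a 14x14 grid of patch
# tokens, each with a 64-dim embedding (shapes are assumptions).
num_patches, dim = 14 * 14, 64
cls_token = rng.standard_normal(dim)
patch_tokens = rng.standard_normal((num_patches, dim))

# --- "Relationship Weighted Out" (illustrative) ---
# Weight each patch token by its similarity to the class representation so
# that patches related to the target class dominate the resulting map.
rel = patch_tokens @ cls_token            # (196,) class-relationship scores
rel = np.maximum(rel, 0.0)                # keep only positive evidence
weighted = patch_tokens * rel[:, None]    # class-weighted patch features

# --- "Cut" (illustrative, normalized-cut style) ---
# Build a patch affinity graph from the weighted features and split it with
# the Fiedler vector of the symmetric normalized Laplacian.
f = weighted / (np.linalg.norm(weighted, axis=1, keepdims=True) + 1e-8)
W = np.maximum(f @ f.T, 0.0)              # nonnegative patch affinities
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-8))
L = np.eye(num_patches) - D_inv_sqrt @ W @ D_inv_sqrt
eigvals, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                   # second-smallest eigenvector

# Binary foreground/background split and a dense class-specific map,
# both reshaped to the 14x14 patch grid.
mask = (fiedler > 0).reshape(14, 14)
saliency = (rel / (rel.max() + 1e-8)).reshape(14, 14)
print("foreground patches:", int(mask.sum()), "/", num_patches)
```

In a real pipeline the dummy tokens would be replaced by features hooked from an intermediate transformer layer, and the per-patch map would be upsampled to the input resolution; the sketch only shows how class-conditioned weighting and a graph cut compose.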
Keywords

Full text: 1 Databases: MEDLINE Language: English Journal: Sensors (Basel) Year: 2024 Document type: Article Country of affiliation: Japan
