SelfGCN: Graph Convolution Network With Self-Attention for Skeleton-Based Action Recognition.
IEEE Trans Image Process; 33: 4391-4403, 2024.
Article in English | MEDLINE | ID: mdl-39083390
ABSTRACT
Graph Convolutional Networks (GCNs) are widely used for skeleton-based action recognition and have achieved remarkable performance. Due to the locality of graph convolution, GCNs can only exploit short-range node dependencies and fail to model long-range node relationships. In addition, existing graph-convolution-based methods typically use a uniform skeleton topology for all frames, which limits their feature-learning ability. To address these issues, we present the Graph Convolution Network with Self-Attention (SelfGCN), which consists of a mixing features across self-attention and graph convolution (MFSG) module and a temporal-specific spatial self-attention (TSSA) module. The MFSG module models local and global relationships between joints by executing graph convolution and self-attention branches in parallel; its bi-directional interactive learning strategy exploits complementary cues in the channel and spatial dimensions across the two branches. The TSSA module uses self-attention to learn the spatial relationships between joints in each frame of a skeleton sequence, modeling the spatial features unique to individual frames. We conduct extensive experiments on three popular benchmark datasets: NTU RGB+D, NTU RGB+D 120, and Northwestern-UCLA. The experimental results demonstrate that our method matches or exceeds the best reported accuracies on all three benchmarks. Our project website is available at https://github.com/SunPengP/SelfGCN.
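Based only on the abstract, the following is a minimal PyTorch sketch of an MFSG-style block that runs a graph-convolution branch and a self-attention branch over the skeleton joints in parallel and fuses their outputs. The tensor layout (N, C, T, V), the learnable adjacency, the simple additive fusion, and the name ParallelGCNSelfAttention are illustrative assumptions, not the authors' implementation (see the project repository for that).

# Minimal sketch of a parallel GCN + self-attention block over skeleton joints,
# loosely following the abstract's description of the MFSG module. The tensor
# layout (N, C, T, V), the fusion by summation, and all layer sizes are
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class ParallelGCNSelfAttention(nn.Module):
    def __init__(self, in_channels, out_channels, num_joints, num_heads=4):
        super().__init__()
        # Learnable joint adjacency, initialized uniformly (assumption).
        self.adjacency = nn.Parameter(torch.full((num_joints, num_joints),
                                                 1.0 / num_joints))
        self.gcn_proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # Self-attention over the joint axis to capture long-range relations.
        self.attn = nn.MultiheadAttention(out_channels, num_heads,
                                          batch_first=True)
        self.attn_proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.norm = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        # x: (N, C, T, V) = (batch, channels, frames, joints)
        n, _, t, v = x.shape

        # Graph-convolution branch: aggregate neighboring joints via adjacency.
        gcn = self.gcn_proj(torch.einsum('nctv,vw->nctw', x, self.adjacency))

        # Self-attention branch: attend across joints within each frame.
        a = self.attn_proj(x)                            # (N, C', T, V)
        a = a.permute(0, 2, 3, 1).reshape(n * t, v, -1)  # (N*T, V, C')
        a, _ = self.attn(a, a, a)
        a = a.reshape(n, t, v, -1).permute(0, 3, 1, 2)   # (N, C', T, V)

        # Fuse the local (GCN) and global (attention) cues; summation is a
        # placeholder for the paper's bi-directional interactive learning.
        return self.norm(gcn + a)


# Example: batch of 2 sequences, 3 input channels, 64 frames, 25 joints.
block = ParallelGCNSelfAttention(in_channels=3, out_channels=64, num_joints=25)
out = block(torch.randn(2, 3, 64, 25))
print(out.shape)  # torch.Size([2, 64, 64, 25])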
Full text: 1
Database: MEDLINE
Language: English
Journal: IEEE Trans Image Process
Journal subject: MEDICAL INFORMATICS
Year: 2024
Document type: Article