A human activity recognition method based on Vision Transformer.
Han, Huiyan; Zeng, Hongwei; Kuang, Liqun; Han, Xie; Xue, Hongxin.
Affiliation
  • Han H; School of Computer Science and Technology, North University of China, Taiyuan, 030051, China. 20050537@nuc.edu.cn.
  • Zeng H; Shanxi Key Laboratory of Machine Vision and Virtual Reality, Taiyuan, 030051, China. 20050537@nuc.edu.cn.
  • Kuang L; Shanxi Vision Information Processing and Intelligent Robot Engineering Research Center, Taiyuan, 030051, China. 20050537@nuc.edu.cn.
  • Han X; School of Computer Science and Technology, North University of China, Taiyuan, 030051, China.
  • Xue H; Shanxi Key Laboratory of Machine Vision and Virtual Reality, Taiyuan, 030051, China.
Sci Rep; 14(1): 15310, 2024 Jul 03.
Article in En | MEDLINE | ID: mdl-38961136
ABSTRACT
Human activity recognition has a wide range of applications in fields such as video surveillance, virtual reality and intelligent human-computer interaction, and it has emerged as a significant research area in computer vision. Graph convolutional networks (GCNs) have recently been widely used in these fields and achieve strong performance. However, challenges remain, including the over-smoothing problem caused by stacked graph convolutions and insufficient semantic correlation for capturing large movements across time sequences. The Vision Transformer (ViT) has been applied to many 2D and 3D image tasks with impressive results. In our work, we propose a novel human activity recognition method based on ViT (HAR-ViT). We integrate the enhanced adaptive graph convolutional layer (eAGCL) from 2s-AGCN into ViT so that it can process spatio-temporal data (3D skeletons) and make full use of spatial features. The position encoder module orders the non-sequential information, while the transformer encoder efficiently compresses sequence features to speed up computation. Human activity recognition is accomplished through a multi-layer perceptron (MLP) classifier. Experimental results demonstrate that the proposed method achieves state-of-the-art (SOTA) performance on three widely used datasets: NTU RGB+D 60, NTU RGB+D 120 and Kinetics-Skeleton 400.
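To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the abstract's architecture: an adaptive graph convolution over skeleton joints (eAGCL-like) feeding a Transformer encoder with positional encoding and an MLP classification head. All layer names, shapes and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AdaptiveGraphConv(nn.Module):
    """Simplified adaptive graph convolution over skeleton joints (eAGCL-style sketch)."""

    def __init__(self, in_channels, out_channels, num_joints):
        super().__init__()
        # Learnable adjacency lets the model refine joint connectivity during training.
        self.adj = nn.Parameter(torch.eye(num_joints))
        self.proj = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        # x: (batch, frames, joints, channels)
        x = torch.einsum("btjc,jk->btkc", x, self.adj)  # mix joint features via adjacency
        return self.proj(x)


class HARViT(nn.Module):
    """Hypothetical HAR-ViT-like model: graph conv -> positional encoding -> Transformer -> MLP."""

    def __init__(self, num_joints=25, in_channels=3, embed_dim=128,
                 num_layers=4, num_heads=8, num_classes=60, max_frames=300):
        super().__init__()
        self.gcn = AdaptiveGraphConv(in_channels, embed_dim, num_joints)
        # One token per frame: joint features are pooled, then positional encoding is added.
        self.pos_embed = nn.Parameter(torch.zeros(1, max_frames, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        self.mlp_head = nn.Sequential(
            nn.LayerNorm(embed_dim),
            nn.Linear(embed_dim, num_classes),
        )

    def forward(self, x):
        # x: (batch, frames, joints, channels) 3D skeleton sequence
        feats = self.gcn(x).mean(dim=2)              # (batch, frames, embed_dim)
        feats = feats + self.pos_embed[:, : feats.size(1)]
        feats = self.encoder(feats)                  # temporal self-attention
        return self.mlp_head(feats.mean(dim=1))      # sequence-level class scores


if __name__ == "__main__":
    model = HARViT()
    skeleton = torch.randn(2, 300, 25, 3)  # batch of 2 NTU-style skeleton sequences
    print(model(skeleton).shape)           # torch.Size([2, 60])
```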

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Human Activities Limits: Humans Language: En Journal: Sci Rep / Sci. rep. (Nat. Publ. Group) / Scientific reports (Nature Publishing Group) Year: 2024 Document type: Article Affiliation country: China Country of publication: United Kingdom