A Global and Local Feature fused CNN architecture for the sEMG-based hand gesture recognition.
Xiong, Baoping; Chen, Wensheng; Niu, Yinxi; Gan, Zhenhua; Mao, Guojun; Xu, Yong.
Affiliation
  • Xiong B; Computer Science and Mathematics, Fujian University of Technology, Fujian 350116, China.
  • Chen W; Computer Science and Mathematics, Fujian University of Technology, Fujian 350116, China.
  • Niu Y; Computer Science and Mathematics, Fujian University of Technology, Fujian 350116, China.
  • Gan Z; Computer Science and Mathematics, Fujian University of Technology, Fujian 350116, China.
  • Mao G; Computer Science and Mathematics, Fujian University of Technology, Fujian 350116, China.
  • Xu Y; Computer Science and Mathematics, Fujian University of Technology, Fujian 350116, China. Electronic address: y.xu@fjut.edu.cn.
Comput Biol Med ; 166: 107497, 2023 Sep 18.
Article in En | MEDLINE | ID: mdl-37783073
Deep learning methods have been widely used for the classification of hand gestures from sEMG signals. Existing deep learning architectures capture only local spatial information and have limited ability to extract global temporal dependencies, which constrains model performance. In this paper, we propose a Global and Local Feature fused CNN (GLF-CNN) model that extracts features both globally and locally from sEMG signals to improve hand gesture classification. The model contains two independent branches that extract local and global features respectively and fuses them to learn more diversified features, effectively improving the stability of gesture recognition. It also has a lower computational cost than existing approaches. We conduct experiments on five benchmark databases: NinaPro DB4, NinaPro DB5, BioPatRec DB1-DB3, and the Mendeley Data. The proposed model achieved the highest average accuracy of 88.34% on these databases, with a 9.96% average accuracy improvement and a 50% reduction in variance compared to models with the same number of parameters. Moreover, the classification accuracies on BioPatRec DB1, BioPatRec DB3, and the Mendeley Data are 91.4%, 91.0%, and 88.6% respectively, corresponding to improvements of 13.2%, 41.5%, and 12.2% over the respective state-of-the-art models. The experimental results demonstrate that the proposed model effectively enhances robustness, with improved gesture recognition performance and generalization ability. It offers a new approach to prosthetic control and human-machine interaction.
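The two-branch design described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (`local_branch`, `global_branch`, `glf_fuse`), the moving-average stand-in for learned convolution kernels, and the mean-pooling stand-in for the global branch are all illustrative assumptions; in the paper the branches are learned CNN layers.

```python
import numpy as np

def local_branch(x, kernel_size=3):
    """Local feature extraction: a sliding window over time per channel
    (a moving average stands in for learned convolution kernels)."""
    n_channels, n_samples = x.shape
    pad = kernel_size // 2
    xp = np.pad(x, ((0, 0), (pad, pad)), mode="edge")
    # One local summary per time step, preserving the (channels, time) shape
    return np.stack(
        [xp[:, t:t + kernel_size].mean(axis=1) for t in range(n_samples)],
        axis=1,
    )

def global_branch(x):
    """Global feature extraction: summarize the entire temporal window
    per channel, then broadcast the summary back over time."""
    g = x.mean(axis=1, keepdims=True)        # (channels, 1) global context
    return np.repeat(g, x.shape[1], axis=1)  # (channels, time)

def glf_fuse(x):
    """Fuse local and global features by concatenating along channels."""
    return np.concatenate([local_branch(x), global_branch(x)], axis=0)

# Example: an 8-channel sEMG window of 200 samples
window = np.random.randn(8, 200)
fused = glf_fuse(window)
print(fused.shape)  # (16, 200): local and global features stacked
```

The fused representation exposes both fine-grained temporal detail and whole-window context to whatever classifier follows, which is the intuition behind combining the two branches.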
Full text: 1 Collection: 01-internacional Database: MEDLINE Language: En Journal: Comput Biol Med Year: 2023 Document type: Article