SAST-GCN: Segmentation Adaptive Spatial Temporal-Graph Convolutional Network for P3-Based Video Target Detection.
Lu, Runnan; Zeng, Ying; Zhang, Rongkai; Yan, Bin; Tong, Li.
Affiliation
  • Lu R; Henan Key Laboratory of Imaging and Intelligent Processing, People's Liberation Army of China (PLA) Strategic Support Force Information Engineering University, Zhengzhou, China.
  • Zeng Y; Henan Key Laboratory of Imaging and Intelligent Processing, People's Liberation Army of China (PLA) Strategic Support Force Information Engineering University, Zhengzhou, China.
  • Zhang R; Key Laboratory for Neuroinformation of Ministry of Education, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China.
  • Yan B; Henan Key Laboratory of Imaging and Intelligent Processing, People's Liberation Army of China (PLA) Strategic Support Force Information Engineering University, Zhengzhou, China.
  • Tong L; Henan Key Laboratory of Imaging and Intelligent Processing, People's Liberation Army of China (PLA) Strategic Support Force Information Engineering University, Zhengzhou, China.
Front Neurosci; 16: 913027, 2022.
Article in En | MEDLINE | ID: mdl-35720707
ABSTRACT
Detecting video-induced P3 is crucial to building a video target detection system based on the brain-computer interface. However, studies have shown that the brain response patterns corresponding to video-induced P3 are dynamic and determined by the interaction of multiple brain regions. This paper proposes a segmentation adaptive spatial-temporal graph convolutional network (SAST-GCN) for P3-based video target detection. To make full use of the dynamic characteristics of the P3 signal, the data are segmented according to the processing stages of the video-induced P3, and brain network connections are constructed for each segment. Then, the spatial-temporal features of the EEG data are extracted by adaptive spatial-temporal graph convolution to discriminate targets from non-targets in the video. In particular, a style-based recalibration module is added to select feature maps with higher contributions and to strengthen the feature-extraction ability of the network. The experimental results demonstrate the superiority of the proposed model over the baseline methods. The ablation experiments further indicate that segmenting the data to construct the brain connections effectively improves recognition performance by more accurately reflecting the dynamic connectivity between EEG channels.
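The following is a minimal, hypothetical sketch of the pipeline the abstract describes: the EEG epoch is split into segments corresponding to P3 processing stages, each segment is passed through an adaptive spatial-temporal graph convolution block with a style-based recalibration gate, and the pooled features are classified as target versus non-target. All class names, tensor shapes, layer sizes, and the single-block-per-segment layout are assumptions for illustration; this is not the authors' implementation.

```python
# Hypothetical PyTorch sketch of a segmentation adaptive spatial-temporal GCN.
# Shapes, layer sizes, and module layout are illustrative assumptions only.
import torch
import torch.nn as nn


class StyleRecalibration(nn.Module):
    """SRM-style gate: reweight feature maps by their channel-wise mean/std."""

    def __init__(self, channels: int):
        super().__init__()
        # One small per-channel weighting over the (mean, std) style statistics.
        self.fc = nn.Conv1d(channels, channels, kernel_size=2, groups=channels)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, x):  # x: (batch, feat, nodes, time)
        mean = x.mean(dim=(2, 3))                      # (batch, feat)
        std = x.std(dim=(2, 3))                        # (batch, feat)
        style = torch.stack([mean, std], dim=-1)       # (batch, feat, 2)
        gate = torch.sigmoid(self.bn(self.fc(style)))  # (batch, feat, 1)
        return x * gate.unsqueeze(-1)                  # emphasize informative maps


class AdaptiveSTGCNBlock(nn.Module):
    """Learnable (adaptive) adjacency over EEG channels, spatial graph convolution,
    temporal convolution, then style-based recalibration."""

    def __init__(self, in_feat: int, out_feat: int, n_nodes: int, t_kernel: int = 9):
        super().__init__()
        # Adjacency is learned per segment rather than fixed a priori.
        self.adj = nn.Parameter(torch.eye(n_nodes) + 0.01 * torch.randn(n_nodes, n_nodes))
        self.spatial = nn.Conv2d(in_feat, out_feat, kernel_size=1)
        self.temporal = nn.Conv2d(out_feat, out_feat, kernel_size=(1, t_kernel),
                                  padding=(0, t_kernel // 2))
        self.recal = StyleRecalibration(out_feat)
        self.act = nn.ReLU()

    def forward(self, x):  # x: (batch, feat, nodes, time)
        a = torch.softmax(self.adj, dim=-1)            # normalized learnable graph
        x = torch.einsum('bfnt,nm->bfmt', x, a)        # propagate along graph edges
        x = self.spatial(x)                            # mix features per node
        x = self.temporal(x)                           # convolve along time
        return self.act(self.recal(x))


class SASTGCNSketch(nn.Module):
    """Segment the epoch into processing stages (e.g., pre-P3 / P3 / post-P3),
    apply one adaptive ST-GCN block per segment, classify target vs. non-target."""

    def __init__(self, n_channels: int = 64, n_segments: int = 3, hidden: int = 16):
        super().__init__()
        self.n_segments = n_segments
        self.blocks = nn.ModuleList(
            AdaptiveSTGCNBlock(1, hidden, n_channels) for _ in range(n_segments))
        self.classifier = nn.Linear(hidden * n_segments, 2)

    def forward(self, eeg):  # eeg: (batch, channels, time)
        segments = torch.chunk(eeg.unsqueeze(1), self.n_segments, dim=-1)
        feats = [blk(seg).mean(dim=(2, 3)) for blk, seg in zip(self.blocks, segments)]
        return self.classifier(torch.cat(feats, dim=1))  # logits: target / non-target


if __name__ == "__main__":
    model = SASTGCNSketch(n_channels=64, n_segments=3)
    dummy = torch.randn(8, 64, 300)   # 8 epochs, 64 EEG channels, 300 time samples
    print(model(dummy).shape)          # torch.Size([8, 2])
```

In this sketch, learning a separate adjacency matrix per segment is what makes the spatial graph "segmentation adaptive": each processing stage can express a different connectivity pattern between EEG channels instead of sharing one fixed brain network.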
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Study type: Diagnostic_studies / Prognostic_studies Language: En Journal: Front Neurosci Year of publication: 2022 Document type: Article Country of affiliation: China