ABSTRACT
Motor imagery (MI) classification based on electroencephalogram (EEG) is a widely used technique in non-invasive brain-computer interface (BCI) systems. Since EEG recordings suffer from inter-subject heterogeneity and a scarcity of labeled data, a classifier that performs MI classification independently of the subject, using only limited labeled samples, is desirable. To overcome these limitations, we propose a novel subject-independent semi-supervised deep architecture (SSDA). The proposed SSDA consists of two parts: an unsupervised and a supervised element. The training set contains both labeled and unlabeled data samples from multiple subjects. First, the unsupervised part, known as the columnar spatiotemporal auto-encoder (CST-AE), extracts latent features from all the training samples by maximizing the similarity between the original and reconstructed data. A dimensional scaling approach is employed to reduce the dimensionality of the representations while preserving their discriminability. Second, a supervised part learns a classifier from the labeled training samples using the latent features acquired in the unsupervised part. Moreover, we employ center loss in the supervised part to minimize the distance, in the embedding space, between each point in a class and its center. The model optimizes both parts of the network in an end-to-end fashion. The performance of the proposed SSDA is evaluated on test subjects not seen by the model during the training phase. To assess the performance, we use two benchmark EEG-based MI task datasets. The results demonstrate that SSDA outperforms state-of-the-art methods and that a small number of labeled training samples can be sufficient for strong classification performance.
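The abstract does not give the exact formulation of its center-loss term, but the standard form penalizes the squared distance between each embedding and its class center. Below is a minimal NumPy sketch under that assumption; the names `features`, `labels`, and `centers` are illustrative, not taken from the paper, and in the actual network the centers would be learned jointly with the encoder rather than held fixed.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Standard center loss: half the mean squared distance of each
    embedding to the center of its own class.

    features : (N, D) array of latent embeddings
    labels   : (N,)   array of integer class labels
    centers  : (K, D) array of per-class center vectors
    """
    diffs = features - centers[labels]            # deviation of each sample from its class center
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))
```

In training, this term is typically added to the supervised classification loss with a small weighting factor, pulling same-class embeddings toward a common center while the classification loss keeps different classes apart.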
Subjects
Brain-Computer Interfaces, Electroencephalography, Humans, Electroencephalography/methods, Machine Learning, Learning, Benchmarking, Algorithms, Imagination

ABSTRACT
Emotion recognition based on electroencephalography (EEG) signals has been receiving significant attention in the domains of affective computing and brain-computer interfaces (BCI). Although several deep learning methods have been proposed for the emotion recognition task, developing methods that effectively extract and use discriminative features remains a challenge. In this work, we propose the novel spatio-temporal attention neural network (STANN), which extracts discriminative spatial and temporal features of EEG signals via a parallel structure combining a multi-column convolutional neural network with an attention-based bidirectional long short-term memory network. Moreover, we explore the inter-channel relationships of EEG signals via graph signal processing (GSP) tools. Our experimental analysis demonstrates that the proposed network improves on state-of-the-art results in subject-wise binary classification of valence and arousal levels, as well as in four-class classification in the valence-arousal emotion space, when raw EEG signals or their graph representations (in an architecture coined GFT-STANN) are used as model inputs.
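The abstract's GFT-STANN variant feeds graph representations of the EEG channels to the network. The abstract does not specify the construction, but the usual GSP pipeline builds a channel graph, forms its Laplacian, and projects each multi-channel sample onto the Laplacian eigenbasis (the graph Fourier transform). A minimal NumPy sketch of that standard operation, assuming a given channel adjacency matrix (here the combinatorial Laplacian; the paper's exact graph weights are not stated):

```python
import numpy as np

def graph_fourier_transform(signal, adjacency):
    """Graph Fourier transform of a multi-channel EEG sample.

    signal    : (C, T) array, C channels by T time samples
    adjacency : (C, C) symmetric channel-adjacency matrix
    Returns the (C, T) GFT coefficients: the projection of the signal
    onto the eigenbasis of the combinatorial graph Laplacian.
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency                # combinatorial Laplacian L = D - W
    # eigh sorts eigenvalues ascending, so column 0 is the DC (constant) mode
    _, eigvecs = np.linalg.eigh(laplacian)
    return eigvecs.T @ signal                     # graph-frequency coefficients
```

A signal that is identical across channels has all its energy in the first (zero-frequency) coefficient, so the higher coefficients capture exactly the inter-channel variation that the channel graph is meant to expose.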