Results 1 - 4 of 4
1.
Sensors (Basel); 21(4), 2021 Feb 10.
Article in English | MEDLINE | ID: mdl-33578835

ABSTRACT

Emotion recognition has a wide range of potential real-world applications. Among emotion recognition data sources, electroencephalography (EEG) signals record neural activity across the human brain, providing a reliable way to recognize emotional states. Most existing EEG-based emotion recognition studies directly concatenate features extracted from all EEG frequency bands for emotion classification. This approach assumes by default that all frequency bands are equally important, yet it does not always yield optimal performance. In this paper, we present a novel multi-scale frequency bands ensemble learning (MSFBEL) method to perform emotion recognition from EEG signals. Concretely, we first re-organize all frequency bands into several local scales and one global scale. We then train a base classifier on each scale. Finally, we fuse the results of all scales with an adaptive weight learning method that automatically assigns larger weights to more important scales to further improve performance. The proposed method is validated on two public data sets. On the "SEED IV" data set, MSFBEL achieves average accuracies of 82.75%, 87.87%, and 78.27% on the three sessions under the within-session experimental paradigm. On the "DEAP" data set, it obtains an average accuracy of 74.22% for four-category classification under 5-fold cross-validation. The experimental results demonstrate that the scale of frequency bands influences the emotion recognition rate, and that the global scale, which directly concatenates all frequency bands, cannot guarantee the best emotion recognition performance. Different scales provide complementary information, and the proposed adaptive weight learning method effectively fuses them to further enhance performance.
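To make the ensemble idea concrete, here is a minimal sketch of the multi-scale frequency-band setup on synthetic data: one base classifier per scale, with per-scale predictions fused by learned weights. The scale definitions, the SVM base classifier, and the softmax-over-validation-accuracy weighting are illustrative assumptions standing in for the paper's adaptive weight learning, not its exact formulation.

```python
# Minimal sketch of multi-scale frequency-band ensemble learning (MSFBEL).
# Scale definitions and the weight-learning rule are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: 200 trials x 5 frequency bands x 12 features per band.
n_trials, n_bands, n_feat = 200, 5, 12
X = rng.normal(size=(n_trials, n_bands, n_feat))
y = rng.integers(0, 4, size=n_trials)  # four emotion classes

# Scales: each local scale is a subset of bands; the last is the global scale.
scales = [[0, 1], [2, 3], [3, 4], [0, 1, 2, 3, 4]]

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

clfs, val_acc = [], []
for bands in scales:
    # Concatenate the features of the bands belonging to this scale.
    f_tr = X_tr[:, bands, :].reshape(len(X_tr), -1)
    f_va = X_va[:, bands, :].reshape(len(X_va), -1)
    clf = SVC(probability=True).fit(f_tr, y_tr)
    clfs.append(clf)
    val_acc.append(clf.score(f_va, y_va))

# Stand-in for the adaptive weight learning: softmax over validation
# accuracy, so more reliable scales receive larger fusion weights.
w = np.exp(np.array(val_acc) / 0.1)
w /= w.sum()

# Weighted fusion of per-scale class probabilities on the validation set.
proba = sum(wi * clf.predict_proba(X_va[:, bands, :].reshape(len(X_va), -1))
            for wi, clf, bands in zip(w, clfs, scales))
print("fused accuracy:", (proba.argmax(axis=1) == y_va).mean())
```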


Subject(s)
Algorithms, Electroencephalography, Machine Learning, Brain, Emotions, Humans
2.
J Neurosci Methods; 400: 109978, 2023 Dec 1.
Article in English | MEDLINE | ID: mdl-37806390

ABSTRACT

BACKGROUND: Cross-dataset EEG emotion recognition is an extremely challenging task, since the data distributions of EEG from different datasets differ greatly, which makes universal models yield unsatisfactory results. Although many methods have been proposed to reduce cross-dataset distribution discrepancies, they neglect the following two problems. (1) Label space inconsistency: the emotional label spaces of subjects from different datasets differ; (2) Uncertainty propagation: the uncertainty of misclassified emotion samples propagates between datasets. NEW METHOD: To solve these problems, we propose a novel method called domain symmetry and predictive balance (DSPB). For label space inconsistency, a domain symmetry module makes the label spaces of the source and target domains identical by randomly selecting samples from the source domain and placing them into the target domain. For uncertainty propagation, a predictive balance module reduces the prediction scores of incorrectly classified samples and thereby effectively reduces the distribution differences between EEG from different datasets. RESULTS: Experimental results show that our method achieves an average accuracy of 61.48% on the three cross-dataset tasks. Moreover, we find that gamma is the frequency band most relevant to emotion recognition among the five bands, and that the prefrontal and temporal regions carry the most emotional information among the 62 EEG channels. COMPARISON WITH EXISTING METHODS: Compared with the partial domain adaptation method (SPDA) and the unsupervised domain adaptation method (MS-MDA), our method improves average accuracy by 15.60% and 23.11%, respectively. CONCLUSION: The data distributions of EEG from different datasets that share the same emotional labels are well aligned, which demonstrates the effectiveness of DSPB.
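The domain symmetry step lends itself to a short illustration. The sketch below, on synthetic features, simply moves a random fraction of labeled source samples into the target pool so both domains expose the same label space; the 20% sampling rate and the 310-dimensional features are assumptions, and the paper's predictive balance module is not reproduced here.

```python
# Minimal sketch of the domain-symmetry idea in DSPB: copy randomly
# selected source samples into the target domain so that every source
# label also appears on the target side. Rates and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy source/target EEG feature matrices with differing label spaces.
Xs = rng.normal(size=(300, 310))
ys = rng.integers(0, 4, size=300)   # source: four emotion labels
Xt = rng.normal(size=(200, 310))
yt = rng.integers(0, 3, size=200)   # target: only three emotion labels

# Domain symmetry: move a random 20% of source samples into the target pool.
idx = rng.choice(len(Xs), size=int(0.2 * len(Xs)), replace=False)
Xt_sym = np.vstack([Xt, Xs[idx]])
yt_sym = np.concatenate([yt, ys[idx]])

print("target label space before:", np.unique(yt))      # [0 1 2]
print("target label space after: ", np.unique(yt_sym))  # [0 1 2 3]
```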


Subject(s)
Emotions, Recognition (Psychology), Humans, Brain, Temporal Lobe, Electroencephalography
3.
Front Physiol; 12: 643202, 2021.
Article in English | MEDLINE | ID: mdl-33737889

ABSTRACT

Speech emotion recognition (SER) is a difficult and challenging task because of the affective variance between speakers. SER performance relies heavily on the features extracted from speech signals, and establishing an effective feature extraction and classification model remains challenging. In this paper, we propose a new method for SER based on a Deep Convolutional Neural Network (DCNN) and a Bidirectional Long Short-Term Memory with Attention (BLSTMwA) model (DCNN-BLSTMwA). We first preprocess the speech samples by data augmentation and dataset balancing. Second, we extract three channels of log Mel-spectrograms (static, delta, and delta-delta) as DCNN input. A DCNN model pre-trained on the ImageNet dataset then generates segment-level features, which we stack into utterance-level features for each sentence. Next, we adopt a BLSTM to learn high-level emotional features for temporal summarization, followed by an attention layer that focuses on emotionally relevant features. Finally, the learned high-level emotional features are fed into a Deep Neural Network (DNN) to predict the final emotion. Experiments on the EMO-DB and IEMOCAP databases achieve unweighted average recall (UAR) of 87.86% and 68.50%, respectively, which is better than most popular SER methods and demonstrates the effectiveness of our proposed method.
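As one concrete reading of the input pipeline, the following sketch builds the three-channel input described above with librosa: a static log Mel-spectrogram plus its delta and delta-delta, stacked like an RGB image for the DCNN. The frame and mel-filter settings are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the three-channel log Mel-spectrogram input: static channel
# plus first and second temporal derivatives. Settings are assumptions.
import numpy as np
import librosa

# Synthetic 3-second signal standing in for a speech sample.
sr = 16000
y = np.random.default_rng(0).normal(size=3 * sr).astype(np.float32)

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64,
                                     n_fft=1024, hop_length=256)
log_mel = librosa.power_to_db(mel)                # static channel
delta = librosa.feature.delta(log_mel)            # first derivative
delta2 = librosa.feature.delta(log_mel, order=2)  # second derivative

# Stack into a 3-channel "image" (channels, mels, frames) for the DCNN.
x = np.stack([log_mel, delta, delta2])
print(x.shape)  # (3, 64, n_frames)
```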

4.
Cogn Neurodyn; 14(6): 815-828, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33101533

ABSTRACT

In this paper, we present a novel method, called the four-dimensional convolutional recurrent neural network, which explicitly integrates the frequency, spatial, and temporal information of multichannel EEG signals to improve EEG-based emotion recognition accuracy. First, to preserve these three kinds of information, we transform the differential entropy features from the different channels into 4D structures for training the deep model. Then, we introduce the CRNN model, which combines a convolutional neural network (CNN) with a recurrent neural network using long short-term memory (LSTM) cells. The CNN learns frequency and spatial information from each temporal slice of the 4D input, and the LSTM extracts temporal dependence from the CNN outputs. The output of the last LSTM node is used for classification. Our model achieves state-of-the-art performance on both the SEED and DEAP datasets under intra-subject splitting. The experimental results demonstrate the effectiveness of integrating the frequency, spatial, and temporal information of EEG for emotion recognition.
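A minimal PyTorch sketch of this architecture on toy 4D inputs follows: a small CNN applied to each temporal slice, an LSTM over the resulting slice embeddings, and classification from the last LSTM step. The 9x9 electrode grid, layer widths, and band count are assumptions made for illustration, not the paper's exact configuration.

```python
# Minimal sketch of a convolutional recurrent network over 4D EEG features
# shaped (batch, time, bands, height, width). Sizes are assumptions.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_bands=5, n_classes=3):
        super().__init__()
        # CNN learns frequency/spatial structure within one temporal slice.
        self.cnn = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (N, 64)
        )
        # LSTM captures temporal dependence across slice embeddings.
        self.lstm = nn.LSTM(input_size=64, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, time, bands, 9, 9)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))  # fold time into batch for the CNN
        feats = feats.view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])         # classify from the last LSTM step

# Toy 4D input: 8 trials, 6 temporal slices, 5 bands on a 9x9 electrode grid.
x = torch.randn(8, 6, 5, 9, 9)
print(CRNN()(x).shape)  # torch.Size([8, 3])
```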
