Results 1 - 2 of 2
1.
Sensors (Basel); 24(15), 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123882

ABSTRACT

Existing emotion recognition methods fail to make full use of the information in the time, frequency, and spatial domains of EEG signals, which limits the accuracy of EEG emotion classification. To address this, this paper proposes a multi-feature, multi-frequency-band, cross-scale attention convolutional model (CATM). The model consists of a cross-scale attention module, a frequency-space attention module, a feature transition module, a temporal feature extraction module, and a depth classification module. First, the cross-scale attention convolution module extracts spatial features at different scales from the preprocessed EEG signals; the frequency-space attention module then assigns higher weights to important channels and spatial locations; next, the temporal feature extraction module extracts temporal features of the EEG signals; finally, the depth classification module classifies the EEG signals into emotions. We evaluated the proposed method on the DEAP dataset, obtaining accuracies of 99.70% and 99.74% in the valence and arousal binary classification experiments, respectively, and 97.27% in the valence-arousal four-class experiment. To assess performance with fewer channels, we also conducted 5-channel experiments: the binary classification accuracies for valence and arousal were 97.96% and 98.11%, respectively, and the valence-arousal four-class accuracy was 92.86%. The experimental results show that the proposed method outperforms other recent methods and also performs well in few-channel experiments.
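
For orientation, here is a minimal PyTorch sketch of the pipeline this abstract describes: parallel spatial convolutions at several scales, an attention gate over the concatenated features, recurrent temporal feature extraction, and a classifier. All layer sizes, kernel scales, the squeeze-and-excitation style gate, and the assumed input shape (batch, frequency bands, electrodes, time samples) are illustrative assumptions, not the paper's exact CATM configuration.

import torch
import torch.nn as nn

class CrossScaleAttentionNet(nn.Module):
    def __init__(self, n_bands=4, n_channels=32, n_classes=2):
        super().__init__()
        # Cross-scale module: parallel convolutions with different kernel
        # heights capture spatial patterns at several scales (assumed scales).
        self.scales = nn.ModuleList([
            nn.Conv2d(n_bands, 16, kernel_size=(k, 1), padding=(k // 2, 0))
            for k in (3, 5, 7)
        ])
        # Attention gate standing in for the frequency-space attention
        # module: reweights the concatenated scale features per feature map.
        feat = 16 * 3
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat, feat // 4), nn.ReLU(),
            nn.Linear(feat // 4, feat), nn.Sigmoid(),
        )
        # Temporal feature extraction: a GRU over the time axis.
        self.gru = nn.GRU(feat * n_channels, 64, batch_first=True)
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                   # x: (batch, bands, channels, time)
        feats = torch.cat([conv(x) for conv in self.scales], dim=1)
        w = self.attn(feats).unsqueeze(-1).unsqueeze(-1)
        feats = feats * w                   # attention-weighted features
        b, c, ch, t = feats.shape
        seq = feats.permute(0, 3, 1, 2).reshape(b, t, c * ch)
        _, h = self.gru(seq)                # h: (1, batch, 64)
        return self.classifier(h[-1])

model = CrossScaleAttentionNet()
logits = model(torch.randn(8, 4, 32, 128))  # -> (8, 2)

The module order follows the abstract's walkthrough; the feature transition module is omitted because the abstract does not describe what it does.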


Subject(s)
Electroencephalography; Emotions; Electroencephalography/methods; Humans; Emotions/physiology; Signal Processing, Computer-Assisted; Attention/physiology; Algorithms; Neural Networks, Computer; Arousal/physiology
2.
Brain Sci; 14(8), 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39199509

ABSTRACT

In recent years, EEG-based emotion recognition technology has made progress, but models remain inefficient and lose emotional information, and there is still room for improvement in recognition accuracy. To make full use of the emotional information in EEG and improve recognition accuracy while reducing computational costs, this paper proposes a convolutional-recurrent hybrid network with a dual-stream adaptive approach and an attention mechanism (CSA-SA-CRTNN). First, a CSAM module assigns corresponding weights to the EEG channels. Then, an adaptive dual-stream convolutional-recurrent network (SA-CRNN and MHSA-CRNN) extracts local spatial-temporal features. The extracted local features are concatenated and fed into a temporal convolutional network with a multi-head self-attention mechanism (MHSA-TCN) to capture global information. Finally, the extracted EEG information is used for emotion classification. In binary and ternary classification experiments on the DEAP dataset, the model achieved 99.26% and 99.15% accuracy for arousal and valence in binary classification and 97.69% and 98.05% in ternary classification; on the SEED dataset it achieved 98.63% accuracy, surpassing related algorithms. The model is also significantly more efficient than other models, achieving better accuracy with lower resource consumption.
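
As a rough, non-authoritative sketch of the dual-stream idea this abstract outlines, the PyTorch code below wires two parallel convolutional-recurrent streams, concatenates their outputs, and applies multi-head self-attention before classification. The stream internals, the simple learned channel gate standing in for CSAM, the use of plain self-attention in place of the full MHSA-TCN, and all dimensions are assumptions; the abstract does not give the exact CSA-SA-CRTNN architecture.

import torch
import torch.nn as nn

class ConvRecurrentStream(nn.Module):
    """One local spatial-temporal stream: Conv1d over time, then an LSTM."""
    def __init__(self, n_channels, hidden):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.conv(x).permute(0, 2, 1)    # -> (batch, time, hidden)
        out, _ = self.rnn(h)
        return out                           # (batch, time, hidden)

class DualStreamEEGNet(nn.Module):
    def __init__(self, n_channels=32, hidden=64, n_classes=2):
        super().__init__()
        # Channel gate (stand-in for CSAM): one learned weight per electrode.
        self.channel_gate = nn.Parameter(torch.ones(n_channels))
        self.stream_a = ConvRecurrentStream(n_channels, hidden)
        self.stream_b = ConvRecurrentStream(n_channels, hidden)
        # Multi-head self-attention over the concatenated streams, a
        # simplified stand-in for the abstract's MHSA-TCN global module.
        self.mhsa = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                          batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        x = x * self.channel_gate.view(1, -1, 1)
        feats = torch.cat([self.stream_a(x), self.stream_b(x)], dim=-1)
        attn_out, _ = self.mhsa(feats, feats, feats)
        return self.classifier(attn_out.mean(dim=1))  # pool over time

model = DualStreamEEGNet()
logits = model(torch.randn(8, 32, 256))      # -> (8, 2)

In this sketch the two streams are identical; in the paper they presumably differ (SA-CRNN vs. MHSA-CRNN), which is the point of the dual-stream design.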
