Results 1 - 4 of 4
1.
Hear Res ; 453: 109104, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39255528

ABSTRACT

Auditory spatial attention detection (ASAD) seeks to determine which speaker in a surround sound field a listener is focusing on, based on the listener's brain biosignals. Although existing studies have achieved ASAD from a single-trial electroencephalogram (EEG), substantial inter-subject variability makes them perform poorly in cross-subject scenarios. Moreover, most ASAD methods do not take full advantage of the topological relationships between EEG channels, which are crucial for high-quality ASAD. Recently, some advanced studies have introduced graph-based brain topology modeling into ASAD, but how to calculate the edge weights of a graph so as to better capture actual brain connectivity warrants further investigation. To address these issues, we propose a new ASAD method in this paper. First, we model a multi-channel EEG segment as a graph, where differential entropy serves as the node feature and a static adjacency matrix is generated from inter-channel mutual information to quantify brain functional connectivity. Then, the EEG graphs of different subjects are encoded into a shared embedding space through a total variation graph neural network. Meanwhile, feature distribution alignment based on multi-kernel maximum mean discrepancy is adopted to learn subject-invariant patterns. Note that, for privacy preservation, we align the EEG embeddings of different subjects to reference distributions rather than to each other. A series of experiments on open datasets demonstrates that the proposed model outperforms state-of-the-art ASAD models in cross-subject scenarios with relatively low computational complexity, and that feature distribution alignment improves the generalizability of the proposed model to new subjects.
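The graph construction described in this abstract can be sketched in a few lines. The following is a minimal NumPy illustration that assumes Gaussian signals, so both the differential entropy of each channel and the inter-channel mutual information have closed forms; the paper's exact estimators may differ, and the channel count and segment length are arbitrary.

```python
import numpy as np

def differential_entropy(x):
    """Gaussian-assumption differential entropy of one channel:
    DE = 0.5 * log(2 * pi * e * var(x))."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def mi_adjacency(eeg):
    """Static adjacency from pairwise mutual information.
    eeg: (channels, samples). Under a Gaussian assumption,
    MI(i, j) = -0.5 * log(1 - rho_ij^2), rho = Pearson correlation."""
    rho = np.corrcoef(eeg)
    rho = np.clip(rho, -0.999999, 0.999999)  # avoid log(0) on the diagonal
    mi = -0.5 * np.log(1.0 - rho ** 2)
    np.fill_diagonal(mi, 0.0)                # no self-loops
    return mi / mi.max()                     # normalise edge weights to [0, 1]

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 512))          # 8 channels, 512 samples
nodes = np.array([differential_entropy(ch) for ch in eeg])  # node features
adj = mi_adjacency(eeg)                      # symmetric edge-weight matrix
```

The node features and the static adjacency matrix together define one EEG graph per segment, which would then be fed to the graph neural network.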

2.
Neuroscience ; 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39265802

ABSTRACT

Auditory spatial attention detection (ASAD) aims to decipher the spatial locus of a listener's selective auditory attention from electroencephalogram (EEG) signals. However, current models may exhibit deficiencies in EEG feature extraction, leading to overfitting on small datasets or a decline in EEG discriminability. Furthermore, they often neglect the topological relationships between EEG channels and, consequently, brain connectivity. Although graph-based EEG modeling has been employed in ASAD, effectively incorporating both local and global connectivity remains a major challenge. To address these limitations, we propose a new ASAD model. First, time-frequency feature fusion provides a more precise and discriminative EEG representation. Second, EEG segments are treated as graphs, and graph convolution and a global attention mechanism are leveraged to capture local and global brain connections, respectively. A series of experiments is conducted in a leave-trials-out cross-validation manner. On the MAD-EEG and KUL datasets, the accuracies of the proposed model are more than 9% and 3% higher than those of the corresponding state-of-the-art models, respectively, while its accuracy on the SNHL dataset is roughly comparable to that of the state-of-the-art model. EEG time-frequency feature fusion proves indispensable in the proposed model. EEG electrodes over the frontal cortex are the most important for ASAD tasks, followed by those over the temporal lobe. Additionally, the proposed model performs well even on small datasets. This study contributes to a deeper understanding of the neural encoding underlying human hearing and attention, with potential applications in neuro-steered hearing devices.
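The two connectivity mechanisms named in this abstract can be illustrated with a toy NumPy sketch: a degree-normalised graph convolution aggregates local (neighbour-level) connections, while a softmax attention pool weights every channel and thus captures global ones. The layer shapes, the random adjacency, and the single attention query are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def graph_conv(h, adj, w):
    """One graph-convolution layer: aggregate each node's neighbours
    through a degree-normalised adjacency with self-loops, then project."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ h @ w, 0.0)        # ReLU activation

def global_attention_pool(h, q):
    """Global attention over all nodes: softmax scores weight every
    channel, capturing long-range (non-neighbour) dependencies."""
    scores = h @ q
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ h                              # (features,) graph embedding

rng = np.random.default_rng(1)
h = rng.standard_normal((64, 16))    # 64 EEG-channel nodes, 16-dim features
adj = (rng.random((64, 64)) > 0.9).astype(float)
adj = np.maximum(adj, adj.T)         # symmetric local connectivity
w = rng.standard_normal((16, 16))
q = rng.standard_normal(16)

local = graph_conv(h, adj, w)                # local brain connections
embedding = global_attention_pool(local, q)  # global brain connections
```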

3.
J Neural Eng ; 21(2)2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38588700

ABSTRACT

Objective. The instability of EEG acquisition devices may lead to information loss in the channels or frequency bands of the collected EEG. Available models may ignore this phenomenon, which leads to overfitting and low generalization. Approach. Multiple self-supervised learning tasks are introduced into the proposed model to enhance the generalization of EEG emotion recognition and reduce overfitting to some extent. Firstly, channel masking and frequency masking are introduced to simulate the information loss in certain channels and frequency bands caused by the instability of EEG acquisition, and two self-supervised feature-reconstruction tasks combining masked graph autoencoders (GAE) are constructed to enhance the generalization of the shared encoder. Secondly, to take full advantage of the complementary information contained in these two self-supervised learning tasks and ensure the reliability of feature reconstruction, a weight-sharing (WS) mechanism is introduced between the two graph decoders. Thirdly, an adaptive-weight multi-task loss (AWML) strategy based on homoscedastic uncertainty is adopted to combine the supervised loss with the two self-supervised losses to further enhance performance. Main results. Experimental results on the SEED, SEED-V, and DEAP datasets demonstrate that: (i) the proposed model generally achieves higher average emotion-classification accuracy than the various baselines in both subject-dependent and subject-independent scenarios; (ii) each key module contributes to the performance of the proposed model; (iii) it achieves higher training efficiency and a significantly lower model size and computational complexity than the state-of-the-art (SOTA) multi-task-based model; and (iv) the performance of the proposed model is only weakly influenced by the key parameters. Significance. The introduction of self-supervised learning tasks helps to enhance the generalization of the EEG emotion recognition model and reduce overfitting to some extent, and the approach can be adapted to other EEG-based classification tasks.
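As a rough illustration of two ingredients in this abstract, the sketch below applies channel masking to a channel-by-band feature array and combines several task losses with homoscedastic-uncertainty weighting (total = Σᵢ exp(−sᵢ)·Lᵢ + sᵢ, where sᵢ = log σᵢ² is learned per task). The array shapes, the fixed loss values, and the frozen log-variances are illustrative assumptions; in the actual model the losses come from the supervised and reconstruction heads and the sᵢ are trainable.

```python
import numpy as np

rng = np.random.default_rng(2)

def mask_channels(x, ratio=0.3):
    """Channel masking: zero out a random subset of EEG channels to
    simulate information loss from unstable acquisition."""
    m = x.copy()
    idx = rng.choice(x.shape[0], int(ratio * x.shape[0]), replace=False)
    m[idx] = 0.0
    return m, idx

def awml(losses, log_vars):
    """Adaptive-weight multi-task loss via homoscedastic uncertainty:
    total = sum_i exp(-s_i) * L_i + s_i. Here the s_i are plain floats
    for illustration; in training they would be learnable parameters."""
    return sum(np.exp(-s) * l + s for l, s in zip(losses, log_vars))

x = rng.standard_normal((32, 5))     # 32 channels x 5 frequency bands
masked, hidden = mask_channels(x)    # input to the masked graph autoencoder
# reconstruction target: the masked-out channels (here a trivial MSE stand-in)
recon_loss = np.mean((x[hidden] - masked[hidden]) ** 2)
total = awml([1.2, recon_loss, 0.7], [0.0, 0.0, 0.0])
```

With sᵢ = 0 the weighting reduces to a plain sum; during training each exp(−sᵢ) rebalances its task automatically while the +sᵢ term keeps the variances from growing without bound.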


Subjects
Electroencephalography, Emotions, Supervised Machine Learning, Supervised Machine Learning/standards, Datasets as Topic, Humans
4.
Neurosci Lett ; 818: 137534, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37871827

ABSTRACT

Music-oriented auditory attention detection (AAD) aims to determine which instrument in polyphonic music a listener is paying attention to by analyzing the listener's electroencephalogram (EEG). However, existing linear models cannot effectively mimic the nonlinearity of the human brain, resulting in limited performance. Thus, a nonlinear music-oriented AAD model is proposed in this paper. Firstly, an auditory feature and a musical feature are fused to represent musical sources precisely and comprehensively. Secondly, the EEG is enhanced if the music stimuli are presented in stereo. Thirdly, a neural network architecture is constructed to capture nonlinear and dynamic interactions between the EEG and the auditory stimuli. Finally, the musical source most similar to the EEG in the common embedding space is identified as the attended one. Experimental results demonstrate that the proposed model outperforms all baseline models. On 1-s decision windows, it reaches accuracies of 92.6% and 81.7% under mono duo and trio stimuli, respectively. Additionally, it can easily be extended to speech-oriented AAD. This work opens up new possibilities for studies of both brain neural-activity decoding and music information retrieval.
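The final matching step, picking the musical source whose embedding lies closest to the EEG embedding in the shared space, can be sketched as follows. Cosine similarity, the 32-dimensional embeddings, and the synthetic "attended" source are assumptions for illustration; the abstract does not name the similarity measure used.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def decode_attention(eeg_emb, source_embs):
    """Identify the attended source as the one whose embedding is most
    similar to the EEG embedding in the common space."""
    sims = [cosine(eeg_emb, s) for s in source_embs]
    return int(np.argmax(sims)), sims

rng = np.random.default_rng(3)
sources = [rng.standard_normal(32) for _ in range(3)]  # trio: 3 instruments
eeg_emb = sources[1] + 0.1 * rng.standard_normal(32)   # listener attends source 1
attended, sims = decode_attention(eeg_emb, sources)    # attended == 1
```

The same decision rule carries over unchanged to speech-oriented AAD: only the source embeddings change, not the matching step.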


Subjects
Music, Humans, Auditory Perception, Electroencephalography, Brain, Neural Networks, Computer, Acoustic Stimulation/methods