1.
Article in English | MEDLINE | ID: mdl-38598402

ABSTRACT

Canonical correlation analysis (CCA), the multivariate synchronization index (MSI), and their extended methods have been widely used for target recognition in brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs), and covariance calculation is an important step in these algorithms. Some studies have shown that embedding time-local information into the covariance can improve the recognition performance of the above algorithms. However, the improvement can only be observed in the recognition results, and the principle by which time-local information helps cannot be explained. Therefore, we propose a time-local weighted transformation (TT) recognition framework that embeds time-local information directly into the electroencephalography signal through a weighted transformation. The influence of time-local information on the SSVEP signal can then be observed in the frequency domain: low-frequency noise is suppressed at the cost of sacrificing part of the SSVEP fundamental-frequency energy, and the harmonic energy of the SSVEP is enhanced at the cost of introducing a small amount of high-frequency noise. The experimental results show that the TT recognition framework can significantly improve the recognition ability of the algorithms and the separability of the extracted features. Its enhancement effect is significantly better than that of the traditional time-local covariance extraction method, giving it considerable application potential.


Subjects
Brain-Computer Interfaces , Humans , Evoked Potentials, Visual , Pattern Recognition, Automated/methods , Recognition, Psychology , Electroencephalography/methods , Algorithms , Photic Stimulation
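The abstract above describes CCA-based SSVEP target recognition with time-local information embedded into the EEG through a weighted transformation. As a hedged illustration only: below is a minimal numpy sketch of standard CCA scoring against sine-cosine references, with an optional element-wise time weighting. The `weight` argument is a placeholder assumption; the paper's actual TT transform is not specified in the abstract.

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between X (channels x samples)
    and Y (references x samples), via the whitened cross-covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    Cxx = X @ X.T / X.shape[1]
    Cyy = Y @ Y.T / Y.shape[1]
    Cxy = X @ Y.T / X.shape[1]

    def inv_sqrt(C):
        # Inverse matrix square root with a small ridge for stability.
        w, V = np.linalg.eigh(C + 1e-10 * np.eye(C.shape[0]))
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(K, compute_uv=False)[0]

def sincos_refs(f, fs, n, harmonics=2):
    """Sine-cosine reference set at frequency f and its harmonics."""
    t = np.arange(n) / fs
    return np.vstack([fn(2 * np.pi * h * f * t)
                      for h in range(1, harmonics + 1)
                      for fn in (np.sin, np.cos)])

def recognize(eeg, freqs, fs, weight=None):
    """Pick the stimulus frequency whose references correlate most
    with the (optionally time-weighted) EEG epoch."""
    if weight is not None:       # hypothetical time-local weighting
        eeg = eeg * weight       # broadcasts over channels
    scores = [cca_corr(eeg, sincos_refs(f, fs, eeg.shape[1]))
              for f in freqs]
    return freqs[int(np.argmax(scores))]
```

On a simulated 10 Hz epoch, the highest canonical correlation is obtained with the matching reference set.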
2.
J Neural Eng ; 21(4)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38848710

ABSTRACT

Objective. Event-related potentials (ERPs) are cerebral responses to cognitive processes, also referred to as cognitive potentials. Accurately decoding ERPs can help to advance research on brain-computer interfaces (BCIs). The spatial pattern of an ERP varies with time. In recent years, convolutional neural networks (CNNs) have shown promising results in electroencephalography (EEG) classification, specifically for ERP-based BCIs. Approach. This study proposes an auto-segmented multi-time-window dual-scale neural network (AWDSNet). The combination of a multi-window design and a lightweight base network gives AWDSNet good performance at an acceptable computational cost. For each individual, we create a time-window set by calculating the correlation of signed R-squared values, which enables us to determine the length and number of windows automatically. The signal data are segmented based on the obtained window sets in a sub-plus-global mode. The multi-window data are then fed into a dual-scale CNN model, where the sizes of the convolution kernels are determined by the window sizes. The dual-scale spatiotemporal convolution focuses on feature details while still having a sufficiently large receptive length, and grouped parallelism curbs the increase in parameter count that comes with dual scaling. Main results. We evaluated the performance of AWDSNet on a public dataset and a self-collected dataset, comparing it with four popular methods: EEGNet, DeepConvNet, EEG-Inception, and PPNN. The experimental results show that AWDSNet achieves excellent classification performance with acceptable computational complexity. Significance. These results indicate that AWDSNet has great potential for applications in ERP decoding.


Subjects
Brain-Computer Interfaces , Electroencephalography , Evoked Potentials , Neural Networks, Computer , Humans , Electroencephalography/methods , Evoked Potentials/physiology , Male , Adult , Female , Young Adult , Time Factors
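The window-selection step above relies on signed R-squared values. A minimal numpy sketch of the signed point-biserial r² statistic commonly used in ERP work to quantify per-time-point discriminability between two classes (the exact formulation AWDSNet uses is not given in the abstract, so treat this as an assumption):

```python
import numpy as np

def signed_r_squared(x, y):
    """Signed r^2 discriminability of one feature x between two
    classes labeled in y (0/1): the squared point-biserial
    correlation, signed by the direction of the class difference."""
    x = np.asarray(x, float)
    y = np.asarray(y)
    x1, x0 = x[y == 1], x[y == 0]
    n1, n0 = len(x1), len(x0)
    diff = x1.mean() - x0.mean()
    # Point-biserial correlation with the binary label.
    r = diff * np.sqrt(n1 * n0) / (len(x) * x.std())
    return np.sign(diff) * r ** 2
```

Computing this statistic at every sample of an ERP epoch and grouping contiguous high-magnitude runs is one plausible way to derive the length and number of time windows automatically.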
3.
IEEE Trans Cybern ; PP, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38713574

ABSTRACT

Event-related potentials (ERPs) reflect neurophysiological changes of the brain in response to external events, and their underlying complex spatiotemporal feature information is governed by ongoing oscillatory activity within the brain. Deep learning methods have been increasingly adopted for ERP-based brain-computer interfaces (BCIs) due to their excellent feature representation abilities, which allow for deep analysis of this oscillatory activity. Features with higher spatiotemporal frequencies usually represent detailed and localized information, while features with lower spatiotemporal frequencies usually represent global structures. Mining EEG features from multiple spatiotemporal frequencies is therefore conducive to obtaining more discriminative information. A multiscale feature fusion octave convolution neural network (MOCNN) is proposed in this article. MOCNN divides the ERP signals into high-, medium- and low-frequency components corresponding to different resolutions and processes them in different branches. By adding mid- and low-frequency components, the feature information used by MOCNN can be enriched and the required amount of computation can be reduced. After successive feature mapping using temporal and spatial convolutions, MOCNN realizes interactive learning among different components through the exchange of feature information among branches. Classification is accomplished by feeding the fused deep spatiotemporal features from the various components into a fully connected layer. The results, obtained on two public datasets and a self-collected ERP dataset, show that MOCNN can achieve state-of-the-art ERP classification performance. In this study, the generalized concept of octave convolution is introduced into the field of ERP-BCI research, which allows effective spatiotemporal features to be extracted from multiscale networks through branch-width optimization and information interaction at various scales.
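MOCNN's split of the signal into high-, medium- and low-frequency branches can be illustrated, loosely, by an explicit FFT band split. This is an analogy for intuition only, not MOCNN's learned octave decomposition, and the band edges below are arbitrary assumptions:

```python
import numpy as np

def band_split(x, fs, edges=(4.0, 12.0)):
    """Split a single-channel signal into low/mid/high components by
    zeroing FFT bins outside each band; the parts sum back to x."""
    n = len(x)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, 1 / fs)
    lo, hi = edges
    masks = (f < lo, (f >= lo) & (f < hi), f >= hi)
    # The three masks partition the spectrum, so the components
    # reconstruct the original signal exactly.
    return [np.fft.irfft(np.where(m, X, 0), n) for m in masks]
```

A slow 2 Hz oscillation lands entirely in the low branch, while transient detail would appear in the high branch; processing such branches at different resolutions is the intuition behind octave convolution.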

4.
J Neural Eng ; 21(3)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38885683

ABSTRACT

Objective. In brain-computer interfaces (BCIs) that utilize motor imagery (MI), minimizing calibration time has become increasingly critical for real-world applications. Recently, transfer learning (TL) has been shown to effectively reduce the calibration time of MI-BCIs. However, variations in data distribution among subjects can significantly influence the performance of TL in MI-BCIs. Approach. We propose a cross-dataset adaptive domain-selection transfer learning framework that integrates domain selection, data alignment, and an enhanced common spatial pattern (CSP) algorithm. Our approach uses a large dataset of 109 subjects as the source domain. We begin by identifying non-BCI-illiterate subjects in this dataset, then determine the source-domain subjects most closely aligned with the target subjects using maximum mean discrepancy. After Euclidean alignment processing, features are extracted by multiple composite CSP. The final classification is carried out using a support vector machine. Main results. Our findings indicate that the proposed technique outperforms existing methods, achieving classification accuracies of 75.05% and 76.82% in the two cross-dataset experiments, respectively. Significance. By reducing the need for extensive training data while maintaining high accuracy, our method optimizes the practical implementation of MI-BCIs.


Subjects
Brain-Computer Interfaces , Imagination , Transfer, Psychology , Humans , Imagination/physiology , Transfer, Psychology/physiology , Support Vector Machine , Electroencephalography/methods , Movement/physiology , Algorithms , Machine Learning , Databases, Factual , Male
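The data-alignment step above uses Euclidean alignment, a standard technique in cross-subject EEG transfer learning: each subject's trials are whitened by the inverse square root of that subject's mean trial covariance, so that the average covariance becomes the identity for every subject and distributions become more comparable. A minimal numpy sketch:

```python
import numpy as np

def euclidean_align(trials):
    """Euclidean alignment of one subject's trials (each a
    channels x samples array): whiten by the inverse square root
    of the mean trial covariance."""
    covs = [X @ X.T / X.shape[1] for X in trials]
    R = np.mean(covs, axis=0)
    # Inverse matrix square root via eigendecomposition.
    w, V = np.linalg.eigh(R)
    R_inv_sqrt = V @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-12, None))) @ V.T
    return [R_inv_sqrt @ X for X in trials]
```

After alignment, the mean covariance of each subject's trials is the identity matrix, which is what makes source and target domains directly comparable before CSP feature extraction.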
5.
IEEE Trans Biomed Eng ; PP, 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39120991

ABSTRACT

In steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), various spatial filtering methods based on individual calibration data have been proposed to alleviate the interference of spontaneous activity in SSVEP signals and thereby enhance SSVEP detection performance. However, the necessary calibration procedures take time, cause visual fatigue, and reduce usability. For the calibration-free scenario, we propose a cross-subject frequency identification method based on transfer superimposed theory for SSVEP frequency decoding. First, a multi-channel signal decomposition model was constructed. Next, we used a cross least-squares iterative method to create subject-specific transfer spatial filters as well as source-subject transfer superposition templates. Then, we identified common knowledge among the source subjects using a prototype spatial filter to build common transfer spatial filters and common impulse responses. Next, we reconstructed a global transfer superimposition template with SSVEP frequency characteristics. Finally, an ensemble cross-subject transfer learning method was proposed for SSVEP frequency recognition by combining the source-subject transfer mode, the global transfer mode, and the sine-cosine reference template. Offline tests on two public datasets show that the proposed method significantly outperforms the FBCCA, TTCCA, and CSSFT methods. More importantly, the proposed method can be used directly for online SSVEP recognition without calibration. The proposed algorithm is also robust, which is important for practical BCIs.
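One of the baselines named above, FBCCA, combines CCA correlations from several filter-bank sub-bands using fixed weights of the form w(n) = n^(-a) + b, so that lower sub-bands, which carry more fundamental-frequency energy, contribute more. A minimal numpy sketch of that weighting; the default a = 1.25, b = 0.25 are the commonly cited values, and the per-sub-band correlations here are placeholders:

```python
import numpy as np

def fbcca_weights(n_bands, a=1.25, b=0.25):
    """FBCCA sub-band weights w(n) = n^(-a) + b: monotonically
    decreasing, so earlier (lower-frequency) sub-bands dominate."""
    n = np.arange(1, n_bands + 1)
    return n ** (-a) + b

def fbcca_score(subband_rhos, weights):
    """Combine per-sub-band CCA correlations into one target score
    as the weighted sum of squared correlations."""
    return float(np.sum(weights * np.asarray(subband_rhos) ** 2))
```

The candidate frequency with the highest combined score is selected; this is the kind of training-free baseline the proposed transfer method is compared against.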
