Results 1 - 2 of 2
1.
Sensors (Basel). 2021 Sep 24;21(19).
Article in English | MEDLINE | ID: mdl-34640699

ABSTRACT

Intracortical brain-computer interfaces (iBCIs) translate neural activity into control commands, allowing paralyzed persons to control devices with their brain signals. Recurrent neural networks (RNNs) are widely used as neural decoders because they can learn neural response dynamics from continuous neural activity. Nevertheless, input neural activity that is excessively long or short may degrade an RNN's decoding performance. Building on a temporal attention module that exploits relations among features over time, we propose a temporal attention-aware timestep selection (TTS) method that improves the interpretability of the salience of each timestep in the input neural activity. Furthermore, TTS determines the appropriate input length for accurate neural decoding. Experimental results show that the proposed TTS efficiently selects 28 essential timesteps for RNN-based neural decoders, outperforming state-of-the-art neural decoders on two nonhuman primate datasets (R² = 0.76 ± 0.05 for monkey Indy and CC = 0.91 ± 0.01 for monkey N). It also reduces the computation time for offline training (by 5-12%) and online prediction (by 16-18%). When the attention mechanism in TTS is visualized, the preparatory neural activity is consecutively highlighted during arm movement, and the most recent neural activity is highlighted during the resting state in nonhuman primates. Selecting only a few essential timesteps for an RNN-based neural decoder thus provides sufficient decoding performance at a short computation time.


Subjects
Brain-Computer Interfaces; Animals; Awareness; Learning; Movement; Neural Networks, Computer
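The record above describes scoring timesteps with temporal attention and keeping only the most salient ones as input to an RNN decoder. Below is a minimal sketch of that idea, assuming PyTorch, binned firing rates shaped (batch, timesteps, units), and illustrative names (TimestepAttention, k_select, RNNDecoder) that are not taken from the paper; the linear scoring layer, GRU size, and 2-D kinematic output are assumptions.

```python
import torch
import torch.nn as nn

class TimestepAttention(nn.Module):
    """Scores each timestep of binned firing rates and keeps the top-k salient ones."""
    def __init__(self, n_units: int, k_select: int = 28):
        super().__init__()
        self.score = nn.Linear(n_units, 1)   # one salience score per timestep
        self.k_select = k_select

    def forward(self, x: torch.Tensor):
        # x: (batch, timesteps, n_units)
        scores = self.score(x).squeeze(-1)              # (batch, timesteps)
        weights = torch.softmax(scores, dim=1)          # attention over time
        topk = torch.topk(weights, self.k_select, dim=1).indices
        topk, _ = torch.sort(topk, dim=1)               # keep temporal order
        idx = topk.unsqueeze(-1).expand(-1, -1, x.size(-1))
        w_sel = torch.gather(weights, 1, topk).unsqueeze(-1)
        # weight the kept timesteps so the selection stays differentiable
        return torch.gather(x, 1, idx) * w_sel, weights

class RNNDecoder(nn.Module):
    """GRU decoder mapping the selected timesteps to 2-D kinematics."""
    def __init__(self, n_units: int, hidden: int = 64, k_select: int = 28):
        super().__init__()
        self.tts = TimestepAttention(n_units, k_select)
        self.rnn = nn.GRU(n_units, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)                 # e.g., x/y velocity

    def forward(self, x: torch.Tensor):
        selected, attn = self.tts(x)
        h, _ = self.rnn(selected)
        return self.out(h[:, -1]), attn                 # predict from the last state
```

For example, with x = torch.randn(8, 50, 96) (8 trials, 50 timesteps of 96 sorted units), RNNDecoder(96)(x) returns an (8, 2) kinematic prediction together with the attention weights that could be visualized as in the paper. Weighting the kept timesteps by their attention scores lets the salience scores be learned jointly with the decoder.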
2.
Front Comput Neurosci. 2020;14:22.
Article in English | MEDLINE | ID: mdl-32296323

ABSTRACT

Objective: In brain machine interfaces (BMIs), the functional mapping between neural activities and kinematic parameters varies over time owing to changes in neural recording conditions. This variability can result in unstable long-term decoding performance. Previous studies trained decoders on several days of training data to make them inherently robust to changes in neural recording conditions. However, such decoders may not remain robust when only a few days of training data are available. In time-series prediction and feedback control systems, error feedback is commonly adopted to reduce the effects of model uncertainty. This motivated us to introduce error feedback into a neural decoder to deal with the variability in neural recording conditions.

Approach: We proposed an evolutionary constructive and pruning neural network with error feedback (ECPNN-EF) as a neural decoder. The ECPNN-EF with partially connected topology decoded the instantaneous firing rates of each sorted unit into the forelimb movement of a rat. Furthermore, an error feedback was adopted as an additional input to provide kinematic information and thus compensate for changes in the functional mapping. The proposed neural decoder was trained on data collected while a rat performed a water-reward lever-pressing task. The first 2 days of data were used to train the decoder, and the subsequent 10 days of data were used to test it.

Main Results: The ECPNN-EF was evaluated under different settings to better understand the impact of the error feedback and the partially connected topology. The experimental results demonstrated that the ECPNN-EF achieved significantly higher daily decoding performance with smaller daily variability when using both the error feedback and the partially connected topology.

Significance: These results suggest that the ECPNN-EF with partially connected topology can cope with both within- and across-day changes in neural recording conditions. The error feedback in the ECPNN-EF compensates for decreases in decoding performance when neural recording conditions change. This mechanism makes the ECPNN-EF robust against changes in the functional mapping and thus improves long-term decoding stability when only a few days of training data are available.
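The abstract above centers on feeding the previous prediction error back into the decoder as an additional input. The sketch below illustrates only that error-feedback loop, in PyTorch; the evolutionary constructive and pruning topology of ECPNN-EF is omitted, and all names, shapes, and the small fully connected body are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ErrorFeedbackDecoder(nn.Module):
    """Decoder whose input is [firing rates, previous prediction error]."""
    def __init__(self, n_units: int, hidden: int = 32, n_kin: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_units + n_kin, hidden),  # rates plus fed-back error
            nn.Tanh(),
            nn.Linear(hidden, n_kin),            # e.g., forelimb position
        )

    def forward(self, rates: torch.Tensor, prev_error: torch.Tensor):
        return self.net(torch.cat([rates, prev_error], dim=-1))

def decode_sequence(model: ErrorFeedbackDecoder,
                    rates_seq: torch.Tensor,     # (T, n_units) binned firing rates
                    kin_seq: torch.Tensor):      # (T, n_kin) measured kinematics
    """Roll out the decoder; the error at step t-1 becomes an input at step t."""
    prev_error = torch.zeros(1, kin_seq.size(-1))
    preds = []
    for t in range(rates_seq.size(0)):
        pred = model(rates_seq[t:t + 1], prev_error)
        preds.append(pred)
        # the measured kinematics provide the feedback signal for the next step
        prev_error = kin_seq[t:t + 1] - pred.detach()
    return torch.cat(preds, dim=0)
```

In this sketch the error is formed against recorded kinematics, which are available offline; how the feedback signal is obtained online depends on the task setup and is not specified here.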
