Neuronal-Plasticity and Reward-Propagation Improved Recurrent Spiking Neural Networks.
Front Neurosci; 15: 654786, 2021.
Article in En | MEDLINE | ID: mdl-33776644
Different types of dynamics and plasticity principles found in natural neural networks have been successfully applied to spiking neural networks (SNNs), which offer biologically plausible, efficient, and robust computation compared to their deep neural network (DNN) counterparts. Here, we further propose a Neuronal-plasticity and Reward-propagation improved Recurrent SNN (NRR-SNN). A history-dependent adaptive threshold with two channels is highlighted as an important form of neuronal plasticity that increases neuronal dynamics, and global labels, instead of errors, are used as the reward for parallel gradient propagation. In addition, a recurrent loop with appropriate sparseness is designed for robust computation. Higher accuracy and more robust computation are achieved on two sequential datasets (the TIDigits and TIMIT datasets), which, to some extent, demonstrates the power of the proposed NRR-SNN and its biologically plausible improvements.
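The abstract does not give the exact formulation of the adaptive threshold or the sparse recurrent loop, so the sketch below is only a rough illustration: a generic recurrent layer of leaky integrate-and-fire neurons whose firing threshold rises with recent spiking history, driven through a sparse recurrent weight matrix. The function name, parameters (tau_adapt, beta, the 10% connection density), and the whole update rule are illustrative assumptions, not the authors' NRR-SNN, and the reward-propagation training rule described in the paper is not modelled here.

```python
import numpy as np

def simulate_adaptive_lif(inputs, w_in, w_rec, tau_m=20.0, tau_adapt=200.0,
                          v_th0=1.0, beta=0.1, dt=1.0):
    """Hypothetical sketch: recurrent leaky integrate-and-fire neurons with a
    history-dependent (adaptive) firing threshold. Not the NRR-SNN itself."""
    T, _ = inputs.shape
    n = w_rec.shape[0]
    v = np.zeros(n)            # membrane potentials
    a = np.zeros(n)            # adaptation variable that raises the threshold
    spikes_prev = np.zeros(n)  # spikes from the previous step (recurrent input)
    spikes = np.zeros((T, n))

    alpha_m = np.exp(-dt / tau_m)      # membrane leak factor per step
    alpha_a = np.exp(-dt / tau_adapt)  # adaptation decay factor per step

    for t in range(T):
        # leaky integration of feed-forward and recurrent input currents
        i_t = inputs[t] @ w_in + spikes_prev @ w_rec
        v = alpha_m * v + i_t
        # threshold grows with recent spiking history, then decays back
        v_th = v_th0 + beta * a
        out = (v >= v_th).astype(float)
        # reset membrane where a spike occurred and bump the adaptation variable
        v = v * (1.0 - out)
        a = alpha_a * a + out
        spikes[t] = out
        spikes_prev = out
    return spikes

# Example usage with a sparse recurrent weight matrix (~10% connectivity).
rng = np.random.default_rng(0)
n_in, n_rec, T = 20, 100, 50
w_in = rng.normal(0.0, 0.3, size=(n_in, n_rec))
w_rec = rng.normal(0.0, 0.1, size=(n_rec, n_rec)) * (rng.random((n_rec, n_rec)) < 0.1)
spike_trains = simulate_adaptive_lif(rng.random((T, n_in)), w_in, w_rec)
```

The adaptation variable is one simple way to obtain the "historically-related" threshold dynamics the abstract refers to; the paper's two-channel variant and its label-driven reward signal would replace this generic update.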
Full text: 1
Database: MEDLINE
Language: En
Publication year: 2021
Document type: Article