High-performance deep spiking neural networks via at-most-two-spike exponential coding.
Chen, Yunhua; Feng, Ren; Xiong, Zhimin; Xiao, Jinsheng; Liu, Jian K.
Affiliation
  • Chen Y; School of Computer Science and Technology, Guangdong University of Technology, China. Electronic address: yhchen@gdut.edu.cn.
  • Feng R; School of Computer Science and Technology, Guangdong University of Technology, China. Electronic address: 2359091834@qq.com.
  • Xiong Z; School of Computer Science and Technology, Guangdong University of Technology, China. Electronic address: 2216384411@qq.com.
  • Xiao J; School of Electronic Information, Wuhan University, China. Electronic address: xiaojs@whu.edu.cn.
  • Liu JK; School of Computer Science, University of Birmingham, UK. Electronic address: j.liu.22@bham.ac.uk.
Neural Netw ; 176: 106346, 2024 Aug.
Article em En | MEDLINE | ID: mdl-38713970
ABSTRACT
Spiking neural networks (SNNs) provide necessary models and algorithms for neuromorphic computing. A popular way of building high-performance deep SNNs is to convert ANNs to SNNs, taking advantage of advanced and well-trained ANNs. Here we propose an ANN to SNN conversion methodology that uses a time-based coding scheme, named At-most-two-spike Exponential Coding (AEC), and a corresponding AEC spiking neuron model for ANN-SNN conversion. AEC neurons employ quantization-compensating spikes to improve coding accuracy and capacity, with each neuron generating up to two spikes within the time window. Two exponential decay functions with tunable parameters are proposed to represent the dynamic encoding thresholds, based on which pixel intensities are encoded into spike times and spike times are decoded into pixel intensities. The hyper-parameters of AEC neurons are fine-tuned based on the loss function of SNN-decoded values and ANN-activation values. In addition, we design two regularization terms for the number of spikes, providing the possibility to achieve the best trade-off between accuracy, latency and power consumption. The experimental results show that, compared to other similar methods, the proposed scheme not only obtains deep SNNs with higher accuracy, but also has more significant advantages in terms of energy efficiency and inference latency. More details can be found at https://github.com/RPDS2020/AEC.git.
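The core time-based coding idea can be sketched as follows. This is an illustrative toy, not the paper's exact formulation: the threshold function `theta0 * exp(-t / tau)`, its parameter values, and the single-spike simplification (omitting the quantization-compensating second spike) are all assumptions for demonstration.

```python
import numpy as np

# Illustrative time-to-first-spike coding with an exponentially
# decaying threshold (hypothetical parameters, not the paper's AEC).
def encode(x, theta0=1.0, tau=4.0, T=8):
    """Map an intensity x in (0, 1] to the first discrete time step t
    at which the decaying threshold theta0*exp(-t/tau) falls below x:
    brighter pixels cross the threshold earlier and spike sooner."""
    for t in range(T):
        if theta0 * np.exp(-t / tau) <= x:
            return t
    return T - 1  # dimmest intensities spike in the last time slot

def decode(t, theta0=1.0, tau=4.0):
    """Recover an approximate intensity by evaluating the threshold
    at the spike time; the gap to the true x is quantization error,
    which the paper's second (compensating) spike would reduce."""
    return theta0 * np.exp(-t / tau)

x = 0.6
t = encode(x)          # -> 3 with these parameters
x_hat = decode(t)      # ~0.472; |x - x_hat| is the coding error
```

In the full AEC scheme, a second spike within the same time window compensates for this residual quantization error, and the decay parameters are tuned against the ANN activation values.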
Subjects
Keywords

Full text: 1 Database: MEDLINE Main subject: Algorithms / Action Potentials / Neural Networks, Computer / Neurons Language: En Publication year: 2024 Document type: Article