MONETA: A Processing-In-Memory-Based Hardware Platform for the Hybrid Convolutional Spiking Neural Network With Online Learning.
Kim, Daehyun; Chakraborty, Biswadeep; She, Xueyuan; Lee, Edward; Kang, Beomseok; Mukhopadhyay, Saibal.
Affiliation
  • Kim D; Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States.
  • Chakraborty B; Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States.
  • She X; Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States.
  • Lee E; Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States.
  • Kang B; Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States.
  • Mukhopadhyay S; Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States.
Front Neurosci ; 16: 775457, 2022.
Article in English | MEDLINE | ID: mdl-35478844
ABSTRACT
We present a processing-in-memory (PIM)-based hardware platform, referred to as MONETA, for on-chip acceleration of inference and learning in hybrid convolutional spiking neural networks. MONETA uses 8T static random-access memory (SRAM)-based PIM cores for vector-matrix multiplication (VMM), augmented with spike-timing-dependent plasticity (STDP)-based weight updates. A spiking neural network (SNN)-focused data flow is presented to minimize data movement in MONETA while ensuring learning accuracy. MONETA supports online, on-chip training on the PIM architecture. The STDP-trained convolutional spiking neural network (ConvSNN) with the proposed data flow, 4-bit input precision, and 8-bit weight precision shows only 1.63% lower accuracy on CIFAR-10 compared to the STDP accuracy obtained in software. Further, the proposed architecture is used to accelerate a hybrid SNN architecture that couples off-chip supervised (backpropagation through time) and on-chip unsupervised (STDP) training. We also evaluate the hybrid network architecture with the proposed data flow. The accuracy of this hybrid network is 10.84% higher than the STDP-trained result and 1.4% higher than the backpropagation-trained ConvSNN result on the CIFAR-10 dataset. Physical design of MONETA in 65 nm complementary metal-oxide-semiconductor (CMOS) shows power efficiencies of 18.69 tera operations per second per watt (TOPS/W), 7.25 TOPS/W, and 10.41 TOPS/W for the inference, learning, and hybrid learning modes, respectively.
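The abstract refers to STDP-based weight updates performed during on-chip learning. As a rough illustration of what a pair-based STDP rule computes (a minimal software sketch only — the parameter names, time constants, and floating-point arithmetic here are assumptions for exposition, not the paper's low-precision PIM hardware implementation):

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP weight update (illustrative sketch).

    w      : current synaptic weight
    t_pre  : presynaptic spike time (ms)
    t_post : postsynaptic spike time (ms)
    All amplitudes and time constants are hypothetical defaults.
    """
    dt = t_post - t_pre
    if dt >= 0:
        # Pre fires before (or with) post -> potentiation,
        # decaying exponentially with the spike-time gap.
        dw = a_plus * np.exp(-dt / tau_plus)
    else:
        # Post fires before pre -> depression.
        dw = -a_minus * np.exp(dt / tau_minus)
    # Keep the weight inside its representable range
    # (e.g., an 8-bit weight as in the abstract).
    return float(np.clip(w + dw, w_min, w_max))
```

For example, a presynaptic spike at 10 ms followed by a postsynaptic spike at 15 ms strengthens the synapse slightly, while the reverse ordering weakens it; the clip step mirrors the bounded weight range a fixed-precision memory cell would impose.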
Full text: 1 Databases: MEDLINE Language: English Journal: Front Neurosci Publication year: 2022 Document type: Article Affiliation country: United States