GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.
Deng, Lei; Jiao, Peng; Pei, Jing; Wu, Zhenzhi; Li, Guoqi.
Affiliation
  • Deng L; Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, 100084, China; Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA. Electronic address: deng-l12@tsinghua.org.cn.
  • Jiao P; Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, 100084, China. Electronic address: jiaop15@mails.tsinghua.edu.cn.
  • Pei J; Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, 100084, China. Electronic address: peij@mail.tsinghua.edu.cn.
  • Wu Z; Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, 100084, China. Electronic address: wuzhenzhi@mail.tsinghua.edu.cn.
  • Li G; Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, 100084, China; Beijing Innovation Center for Future Chip, Tsinghua University, Beijing, 100084, China. Electronic address: liguoqi@mail.tsinghua.edu.cn.
Neural Netw; 100: 49-58, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29471195
ABSTRACT
Although deep neural networks (DNNs) have become a revolutionary force in opening up the AI era, their notoriously heavy hardware overhead challenges their deployment. Recently, several binary and ternary networks, in which costly multiply-accumulate operations can be replaced by accumulations or even binary logic operations, have made the on-chip training of DNNs quite promising. There is therefore a pressing need for an architecture that subsumes these networks under a unified framework achieving both higher performance and lower overhead. To this end, two fundamental issues are yet to be addressed. The first is how to implement back propagation when neuronal activations are discrete. The second is how to remove the full-precision hidden weights in the training phase to break the bottlenecks of memory/computation consumption. To address the first issue, we present a multi-step neuronal activation discretization method and a derivative approximation technique that enable the implementation of the back-propagation algorithm on discrete DNNs. For the second issue, we propose a discrete state transition (DST) methodology to constrain the weights in a discrete space without saving the hidden weights. In this way, we build a unified framework that subsumes binary and ternary networks as special cases, under which a heuristic algorithm is provided at https://github.com/AcrossV/Gated-XNOR. More particularly, we find that when both the weights and activations take ternary values, the DNNs can be reduced to sparse binary networks, termed gated XNOR networks (GXNOR-Nets), since only the event of a non-zero weight coinciding with a non-zero activation enables the control gate to trigger the XNOR logic operation of the original binary networks. This enables event-driven hardware design for efficient mobile intelligence. We achieve advanced performance compared with state-of-the-art algorithms.
Furthermore, the computational sparsity and the number of states in the discrete space can be flexibly modified to make it suitable for various hardware platforms.
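The gated-XNOR idea described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): for ternary values in {-1, 0, +1}, a dot product only needs to process positions where both the weight and the activation are non-zero, and at those positions the product reduces to an XNOR on the sign bits (+1 if signs agree, -1 otherwise). The function name `gxnor_dot` is illustrative, not from the paper.

```python
def gxnor_dot(weights, activations):
    """Event-driven ternary dot product via gated XNOR.

    weights, activations: sequences of ternary values in {-1, 0, +1}.
    The gate opens only when both operands are non-zero (the sparse
    "event"); the product of two non-zero ternary values is then an
    XNOR of their sign bits: equal signs -> +1, differing signs -> -1.
    """
    acc = 0
    for w, a in zip(weights, activations):
        if w != 0 and a != 0:                    # control gate: non-zero event
            acc += 1 if (w > 0) == (a > 0) else -1   # XNOR on sign bits
    return acc
```

Because zero-valued operands never open the gate, the cost scales with the number of non-zero coincidences rather than the vector length, which is the basis of the event-driven hardware design mentioned above.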
Subject(s)
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Algorithms / Neural Networks, Computer / Support Vector Machine / Memory Language: En Journal: Neural Netw Journal subject: NEUROLOGY Year: 2018 Document type: Article