Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators.
Rasch, Malte J; Mackin, Charles; Le Gallo, Manuel; Chen, An; Fasoli, Andrea; Odermatt, Frédéric; Li, Ning; Nandakumar, S R; Narayanan, Pritish; Tsai, Hsinyu; Burr, Geoffrey W; Sebastian, Abu; Narayanan, Vijay.
Affiliation
  • Rasch MJ; IBM Research, TJ Watson Research Center, Yorktown Heights, NY, USA. malte.rasch@ibm.com.
  • Mackin C; IBM Research Almaden, 650 Harry Road, San Jose, CA, USA.
  • Le Gallo M; IBM Research Europe, 8803, Rüschlikon, Switzerland.
  • Chen A; IBM Research Almaden, 650 Harry Road, San Jose, CA, USA.
  • Fasoli A; IBM Research Almaden, 650 Harry Road, San Jose, CA, USA.
  • Odermatt F; IBM Research Europe, 8803, Rüschlikon, Switzerland.
  • Li N; IBM Research, TJ Watson Research Center, Yorktown Heights, NY, USA.
  • Nandakumar SR; IBM Research Europe, 8803, Rüschlikon, Switzerland.
  • Narayanan P; IBM Research Almaden, 650 Harry Road, San Jose, CA, USA.
  • Tsai H; IBM Research Almaden, 650 Harry Road, San Jose, CA, USA.
  • Burr GW; IBM Research Almaden, 650 Harry Road, San Jose, CA, USA.
  • Sebastian A; IBM Research Europe, 8803, Rüschlikon, Switzerland.
  • Narayanan V; IBM Research, TJ Watson Research Center, Yorktown Heights, NY, USA.
Nat Commun; 14(1): 5282, 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37648721
ABSTRACT
Analog in-memory computing-a promising approach for energy-efficient acceleration of deep learning workloads-computes matrix-vector multiplications only approximately, owing to nonidealities that are often nondeterministic or nonlinear. This can adversely impact the achievable inference accuracy. Here, we develop a hardware-aware retraining approach to systematically examine the accuracy of analog in-memory computing across multiple network topologies, and investigate sensitivity and robustness to a broad set of nonidealities. By introducing a realistic crossbar model, we improve significantly on earlier retraining approaches. We show that many larger-scale deep neural networks-including convnets, recurrent networks, and transformers-can in fact be successfully retrained to show iso-accuracy with the floating-point implementation. Our results further suggest that nonidealities that add noise to the inputs or outputs, rather than to the weights, have the largest impact on accuracy, and that recurrent networks are particularly robust to all nonidealities.
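To illustrate the general idea of hardware-aware retraining described in the abstract, the minimal PyTorch sketch below injects Gaussian perturbations into the weights and outputs of a linear layer during training, so that the learned parameters remain accurate under such disturbances. This is only an assumption-laden illustration: the noise model, its placement, and the magnitudes (weight_noise, output_noise) are hypothetical choices for demonstration, not the authors' realistic crossbar model or their actual code.

```python
# Illustrative sketch (not the authors' implementation): hardware-aware retraining
# where analog crossbar nonidealities are emulated by simple Gaussian noise
# injected during the forward pass. All noise magnitudes are assumed values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyLinear(nn.Linear):
    """Linear layer emulating an approximate analog matrix-vector multiply.

    During training, perturbations are applied to the weights (e.g. programming
    or drift noise) and to the outputs (e.g. read/ADC noise), so the network
    learns parameters that tolerate these nonidealities.
    """

    def __init__(self, in_features, out_features,
                 weight_noise=0.02, output_noise=0.05):
        super().__init__(in_features, out_features)
        self.weight_noise = weight_noise    # relative std of weight perturbation (assumed)
        self.output_noise = output_noise    # std of additive output noise (assumed)

    def forward(self, x):
        if self.training:
            w = self.weight * (1 + self.weight_noise * torch.randn_like(self.weight))
            y = F.linear(x, w, self.bias)
            return y + self.output_noise * torch.randn_like(y)
        # At deployment the analog hardware itself supplies the nonidealities.
        return F.linear(x, self.weight, self.bias)


# Tiny demonstration: retrain a small classifier with noise injection enabled.
model = nn.Sequential(NoisyLinear(16, 32), nn.ReLU(), NoisyLinear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(64, 16)
target = torch.randint(0, 4, (64,))
for _ in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), target)
    loss.backward()
    optimizer.step()
```

In this toy setting, the noise placement mirrors the abstract's observation that input/output noise tends to matter more than weight noise; a faithful study would replace the Gaussian terms with a detailed crossbar model as the paper does.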

Full text: 1 Database: MEDLINE Language: English Year of publication: 2023 Document type: Article
