Quantization avoids saddle points in distributed optimization.
Bo, Yanan; Wang, Yongqiang.
Affiliation
  • Bo Y; Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634.
  • Wang Y; Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634.
Proc Natl Acad Sci U S A ; 121(17): e2319625121, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38640343
ABSTRACT
Distributed nonconvex optimization underpins key functionalities of numerous distributed systems, ranging from power systems, smart buildings, cooperative robots, and vehicle networks to sensor networks. Recently, it has also emerged as a promising solution to handle the enormous growth in data and model sizes in deep learning. A fundamental problem in distributed nonconvex optimization is avoiding convergence to saddle points, which significantly degrade optimization accuracy. We find that the process of quantization, which is necessary for all digital communications, can be exploited to enable saddle-point avoidance. More specifically, we propose a stochastic quantization scheme and prove that it can effectively escape saddle points and ensure convergence to a second-order stationary point in distributed nonconvex optimization. With an easily adjustable quantization granularity, the approach allows a user to control the number of bits sent per iteration and, hence, to aggressively reduce the communication overhead. Numerical experimental results using distributed optimization and learning problems on benchmark datasets confirm the effectiveness of the approach.
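To illustrate the kind of scheme the abstract describes, the sketch below implements unbiased stochastic (dithered) rounding to a grid of user-chosen spacing `delta`: each value is rounded to one of its two neighboring grid points, with the upper point chosen with probability equal to the fractional remainder, so the quantizer is unbiased in expectation. This is a generic stochastic quantizer for illustration only; the function name, the parameter `delta`, and the rounding rule are assumptions and not necessarily the exact scheme proved in the paper.

```python
import numpy as np

def stochastic_quantize(x, delta=0.1, rng=None):
    """Unbiased stochastic quantization to a grid of spacing `delta`.

    Each entry is mapped to one of its two neighboring grid points,
    choosing the upper one with probability equal to the fractional
    remainder, so that E[Q(x)] = x elementwise. Illustrative sketch
    only; not the paper's exact scheme.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    lower = np.floor(x / delta) * delta      # nearest grid point below
    frac = (x - lower) / delta               # fractional position in [0, 1)
    round_up = rng.random(x.shape) < frac    # round up w.p. `frac`
    return lower + round_up * delta

# Coarser `delta` means fewer grid points, hence fewer bits per
# transmitted entry, at the cost of larger per-iteration noise --
# the randomness that (per the abstract) helps escape saddle points.
q = stochastic_quantize([0.123, -0.47], delta=0.1, rng=0)
```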
Full text: 1 | Collections: 01-international | Database: MEDLINE | Language: English | Journal: Proc Natl Acad Sci U S A | Publication year: 2024 | Document type: Article