The Limiting Dynamics of SGD: Modified Loss, Phase-Space Oscillations, and Anomalous Diffusion.
Kunin, Daniel; Sagastuy-Brena, Javier; Gillespie, Lauren; Margalit, Eshed; Tanaka, Hidenori; Ganguli, Surya; Yamins, Daniel L K.
Affiliations
  • Kunin D; Stanford University, Stanford, CA 94305, U.S.A. kunin@stanford.edu.
  • Sagastuy-Brena J; Stanford University, Stanford, CA 94305, U.S.A. jvrsgsty@stanford.edu.
  • Gillespie L; Stanford University, Stanford, CA 94305, U.S.A. gillespl@stanford.edu.
  • Margalit E; Stanford University, Stanford, CA 94305, U.S.A. eshedm@stanford.edu.
  • Tanaka H; NTT Research, Sunnyvale, CA 94085, U.S.A. hidenori.tanaka@ntt-research.com.
  • Ganguli S; Stanford University, Stanford, CA 94305, U.S.A.; Facebook AI Research, Menlo Park, CA 94025, U.S.A. sganguli@stanford.edu.
  • Yamins DLK; Stanford University, Stanford, CA 94305, U.S.A.
Neural Comput ; 36(1): 151-174, 2023 Dec 12.
Article in En | MEDLINE | ID: mdl-38052080
ABSTRACT
In this work, we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance traveled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction among the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuous-time model for SGD with finite learning rates and batch sizes as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase-space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents that cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD. Understanding the limiting dynamics of SGD, and their dependence on important hyperparameters such as batch size, learning rate, and momentum, can serve as a basis for future work that turns these insights into algorithmic gains.
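
The power-law measurement described in the abstract can be illustrated with a small, self-contained simulation. The sketch below is not the authors' code; the problem size, hyperparameters, and variable names are assumptions chosen for the demonstration. It runs SGD with heavy-ball momentum on a toy linear-regression loss (the setting in which the paper derives exact phase-space dynamics), discards a burn-in period so that the training loss has effectively plateaued, and then fits the exponent c in distance-traveled ~ (number of updates)^c on a log-log scale; c = 0.5 would correspond to ordinary diffusion of the iterates.

    # Illustrative sketch (not the paper's code): SGD with momentum on a toy
    # linear-regression loss, measuring how far the parameters keep drifting
    # after the loss has converged. All constants below are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear-regression problem: y = X w* + noise.
    n, d = 2048, 50
    X = rng.normal(size=(n, d))
    w_star = rng.normal(size=d)
    y = X @ w_star + 0.5 * rng.normal(size=n)

    lr, momentum, batch = 0.05, 0.9, 32   # illustrative hyperparameters
    w = np.zeros(d)
    v = np.zeros(d)

    def minibatch_grad(w):
        # Stochastic gradient of the mean-squared-error loss on a minibatch.
        idx = rng.choice(n, size=batch, replace=False)
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ w - yb) / batch

    # Burn-in: run until the training loss has effectively converged.
    for _ in range(20000):
        v = momentum * v - lr * minibatch_grad(w)
        w = w + v

    # Limiting phase: track displacement from the burn-in endpoint at
    # logarithmically spaced numbers of additional gradient updates.
    w0 = w.copy()
    steps = np.unique(np.logspace(1, 4.5, 40).astype(int))
    dist = []
    t = 0
    for target in steps:
        while t < target:
            v = momentum * v - lr * minibatch_grad(w)
            w = w + v
            t += 1
        dist.append(np.linalg.norm(w - w0))

    # Fit distance ~ steps**c on a log-log scale; c = 0.5 is ordinary
    # diffusion, c != 0.5 indicates anomalous diffusion of the iterates.
    c, log_prefactor = np.polyfit(np.log(steps), np.log(dist), 1)
    print(f"estimated diffusion exponent c = {c:.2f}")

Log-spaced checkpoints are used so the log-log fit weights each decade of training time roughly equally; the same displacement-versus-updates fit could, in principle, be applied to the full parameter vector of a deep network such as the ResNet-18 experiment described in the abstract.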

Full text: 1 Database: MEDLINE Language: En Publication year: 2023 Document type: Article
