Scalable estimation strategies based on stochastic approximations: Classical results and new insights.
Airoldi, Edoardo M; Toulis, Panos.
Affiliation
  • Airoldi EM; Harvard University, Department of Statistics.
  • Toulis P; Harvard University, Department of Statistics.
Stat Comput ; 25(4): 781-795, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-26139959
ABSTRACT
Estimation with large amounts of data can be facilitated by stochastic gradient methods, in which model parameters are updated sequentially using small batches of data at each step. Here, we review early work and modern results that illustrate the statistical properties of these methods, including convergence rates, stability, and asymptotic bias and variance. We then overview modern applications where these methods are useful, ranging from an online version of the EM algorithm to deep learning. In light of these results, we argue that stochastic gradient methods are poised to become benchmark principled estimation procedures for large data sets, especially those in the family of stable proximal methods, such as implicit stochastic gradient descent.
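The abstract contrasts standard (explicit) stochastic gradient descent with the implicit, proximal variant it advocates. As an illustration only (not the authors' code), the sketch below runs both on a linear least-squares problem: the explicit update is `theta += lr * (y_t - x_t @ theta) * x_t`, while the implicit update evaluates the residual at the *new* iterate, which for this model admits a closed form. The simulated data, the `1/t` step-size schedule, and the constant `lr0` are all illustrative assumptions.

```python
# Sketch (illustrative, not the paper's code): explicit vs. implicit SGD
# for linear least squares, theta_t updated one observation at a time.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 5000
theta_true = rng.normal(size=d)                 # assumed ground truth
X = rng.normal(size=(n, d))
y = X @ theta_true + 0.1 * rng.normal(size=n)   # simulated observations

def run_sgd(implicit, lr0=1.0):
    theta = np.zeros(d)
    for t in range(n):
        x_t, y_t = X[t], y[t]
        lr = lr0 / (t + 1)                      # assumed 1/t schedule
        resid = y_t - x_t @ theta
        if implicit:
            # Implicit update: theta_t = theta_{t-1} + lr*(y_t - x_t@theta_t)*x_t,
            # which for least squares has this closed-form solution.
            theta = theta + (lr / (1.0 + lr * (x_t @ x_t))) * resid * x_t
        else:
            # Standard explicit SGD step.
            theta = theta + lr * resid * x_t
    return theta

theta_explicit = run_sgd(implicit=False)
theta_implicit = run_sgd(implicit=True)
```

The shrinkage factor `1 / (1 + lr * ||x_t||^2)` in the implicit branch is what damps large steps and gives the method the stability the abstract highlights; with well-scaled data and this step size, both variants converge, but the implicit one is far less sensitive to an overly large `lr0`.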
Full text: 1 | Collections: 01-international | Database: MEDLINE | Language: English | Journal: Stat Comput | Year of publication: 2015 | Document type: Article