Results 1 - 2 of 2
1.
Phys Rev E; 105(3-1): 034130, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35428124

ABSTRACT

Considerable progress has recently been made with geometrical approaches to understanding and controlling small out-of-equilibrium systems, but a mathematically rigorous foundation for these methods has been lacking. Towards this end, we develop a perturbative solution to the Fokker-Planck equation for one-dimensional driven Brownian motion in the overdamped limit enabled by the spectral properties of the corresponding single-particle Schrödinger operator. The perturbation theory is in powers of the inverse characteristic timescale of variation of the fastest varying control parameter, measured in units of the system timescale, which is set by the smallest eigenvalue of the corresponding Schrödinger operator. It applies to any Brownian system for which the Schrödinger operator has a confining potential. We use the theory to rigorously derive an exact formula for a Riemannian "thermodynamic" metric in the space of control parameters of the system. We show that up to second-order terms in the perturbation theory, optimal dissipation-minimizing driving protocols minimize the length defined by this metric. We also show that a previously proposed metric is calculable from our exact formula with corrections that are exponentially suppressed in a characteristic length scale. We illustrate our formula using the two-dimensional example of a harmonic oscillator with time-dependent spring constant in a time-dependent electric field. Lastly, we demonstrate that the Riemannian geometric structure of the optimal control problem is emergent; it derives from the form of the perturbative expansion for the probability density and persists to all orders of the expansion.
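To make the length-minimization statement concrete, here is a minimal numerical sketch in Python. It assumes a single control parameter (a trap stiffness k) and an illustrative metric g(k) = 1/k^3 standing in for the paper's exact formula, which is not reproduced here; the metric choice and all function names are hypothetical. To second order, the excess dissipation of a protocol k(t) of duration T scales as the integral of g(k) (dk/dt)^2 dt, which is minimized by driving at constant metric speed, i.e., spacing the protocol uniformly in arc length s(k) = integral of sqrt(g(k)) dk.

```python
import numpy as np

def metric(k):
    # Illustrative stand-in metric on the control parameter (trap stiffness k);
    # the paper derives an exact formula, which is not reproduced here.
    return 1.0 / k**3

def optimal_schedule(k0, k1, n=200):
    """Sample k at times uniform in metric arc length, so that
    sqrt(g(k)) * dk/dt is constant along the protocol (a 1D geodesic)."""
    ks = np.linspace(k0, k1, 10_000)
    speed = np.sqrt(metric(ks))
    # cumulative arc length s(k) = integral of sqrt(g) dk (trapezoid rule)
    arclen = np.concatenate(([0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(ks))))
    return np.interp(np.linspace(0.0, arclen[-1], n), arclen, ks)

def dissipation(ks, T=1.0):
    """Second-order estimate: sum of g(k) * (dk)^2 / dt over the discretized path."""
    dt = T / (len(ks) - 1)
    mid = 0.5 * (ks[1:] + ks[:-1])
    return float(np.sum(metric(mid) * np.diff(ks)**2 / dt))

naive = np.linspace(1.0, 4.0, 200)  # linear ramp of the stiffness
print(dissipation(naive), dissipation(optimal_schedule(1.0, 4.0)))
# ~1.41 vs ~1.00: the constant-metric-speed ramp dissipates less
```

With this toy metric the constant-metric-speed protocol dissipates (metric length)^2 / T, about 30% less than the naive linear ramp, illustrating the claim that dissipation-minimizing protocols minimize the Riemannian length.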

2.
Neural Comput; 33(6): 1469-1497, 2021 May 13.
Article in English | MEDLINE | ID: mdl-34496389

ABSTRACT

Despite the fact that the loss functions of deep neural networks are highly nonconvex, gradient-based optimization algorithms converge to approximately the same performance from many random initial points. One thread of work has focused on explaining this phenomenon by numerically characterizing the local curvature near critical points of the loss function, where the gradients are near zero. Such studies have reported that neural network losses enjoy a no-bad-local-minima property, in disagreement with more recent theoretical results. We report here that the methods used to find these putative critical points suffer from a bad local minima problem of their own: they often converge to or pass through regions where the gradient norm has a stationary point. We call these gradient-flat regions, since they arise when the gradient is approximately in the kernel of the Hessian, such that the loss is locally approximately linear, or flat, in the direction of the gradient. We describe how the presence of these regions necessitates care in both interpreting past results that claimed to find critical points of neural network losses and in designing second-order methods for optimizing neural networks.
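The failure mode is easy to demonstrate numerically. Critical-point-finding methods of the kind the abstract discusses typically minimize the squared gradient norm f(w) = ||grad L(w)||^2, whose stationary points satisfy H(w) grad L(w) = 0; a gradient-flat point satisfies this with grad L(w) far from zero. A minimal sketch, using jax for the Hessian-vector product and a hypothetical toy loss rather than the paper's networks:

```python
import jax
import jax.numpy as jnp

def loss(w):
    # Hypothetical toy loss: exactly linear along w[0], curved in the rest.
    # At w = (a, 0, 0) the gradient (1, 0, 0) lies in the Hessian's kernel.
    return w[0] + jnp.sum(w[1:] ** 2)

def flatness_diagnostic(w):
    """Return (||g||, ||H g||): near-zero ||H g|| alongside a large ||g||
    flags a gradient-flat region rather than a genuine critical point."""
    g = jax.grad(loss)(w)
    # Hessian-vector product H @ g via forward-over-reverse autodiff
    _, hg = jax.jvp(jax.grad(loss), (w,), (g,))
    return jnp.linalg.norm(g), jnp.linalg.norm(hg)

gnorm, hgnorm = flatness_diagnostic(jnp.array([5.0, 0.0, 0.0]))
print(f"||g|| = {gnorm:.2f}, ||H g|| = {hgnorm:.2e}")
# ||g|| = 1.00 with ||H g|| = 0: gradient-norm minimization stalls here,
# even though the loss still decreases along -g.
```

At such a point a gradient-norm-based method would report a "critical point" even though the loss is locally linear, and strictly decreasing, in the gradient direction, which is precisely the interpretive hazard the abstract raises.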


Subjects
Algorithms; Neural Networks, Computer