

Poster

SGD Can Converge to Local Maxima

Liu Ziyin · Botao Li · James Simon · Masahito Ueda

Keywords: [ amsgrad ] [ deep learning ] [ convergence ] [ stochastic gradient descent ]


Abstract:

Previous works on stochastic gradient descent (SGD) often focus on its success. In this work, we construct worst-case optimization problems illustrating that, when not in the regimes that the previous works often assume, SGD can exhibit many strange and potentially undesirable behaviors. Specifically, we construct landscapes and data distributions such that (1) SGD converges to local maxima, (2) SGD escapes saddle points arbitrarily slowly, (3) SGD prefers sharp minima over flat ones, and (4) AMSGrad converges to local maxima. We also realize these results in a minimal neural network-like example. Our results highlight the importance of simultaneously analyzing the minibatch sampling, discrete-time update rules, and realistic landscapes to understand the role of SGD in deep learning.
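To make the abstract's first claim concrete, here is a minimal sketch of how minibatch noise can pull discrete-time SGD into a local maximum. This is an illustrative toy, not necessarily the paper's construction: it assumes a per-sample loss l(w; x) = -x w^2 / 2 with E[x] = 1, so the expected loss L(w) = -w^2 / 2 has a local maximum at w = 0. Full-batch gradient descent escapes this point, but the chosen heavy-tailed distribution of x makes the SGD multiplier (1 + lr * x) contract on average in log scale, so |w_t| shrinks toward the maximum.

```python
# Toy sketch (author's own example, not the paper's construction):
# per-sample loss  l(w; x) = -x * w^2 / 2  with  E[x] = 1,
# so the expected loss  L(w) = -w^2 / 2  has a local maximum at w = 0.
# Full-batch GD moves away from 0, but for this x-distribution the SGD
# multiplier (1 + lr * x) satisfies E[log|1 + lr * x|] < 0, so |w_t| -> 0.
import numpy as np

rng = np.random.default_rng(0)
lr = 1.0       # constant learning rate (assumed for illustration)
steps = 100
# x = -0.99 with prob 0.9, x = 18.91 with prob 0.1  =>  E[x] = 1
xs = np.where(rng.random(steps) < 0.9, -0.99, 18.91)

w_sgd, w_gd = 1.0, 1.0
for x in xs:
    w_sgd = w_sgd - lr * (-x * w_sgd)   # SGD step on l(w; x): grad = -x * w
    w_gd = w_gd - lr * (-1.0 * w_gd)    # full-batch step: grad of L(w) = -w

print(f"E[x] = {0.9 * -0.99 + 0.1 * 18.91:.2f}")  # ~1.0: w = 0 is a maximum
print(f"E[log|1 + lr*x|] = {0.9 * np.log(0.01) + 0.1 * np.log(19.91):.2f}")
print(f"|w| after SGD: {abs(w_sgd):.3e}  (converges to the local maximum)")
print(f"|w| after GD : {abs(w_gd):.3e}  (escapes the local maximum)")
```

The mechanism is the one the abstract emphasizes: the expected gradient at w = 0 points away from the maximum, yet the discrete-time stochastic update is a random multiplicative contraction, so analyzing the sampling and the update rule together gives the opposite conclusion from the mean-field picture.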
