Poster

Stochastic Training is Not Necessary for Generalization

Jonas Geiping · Micah Goldblum · Phil Pope · Michael Moeller · Tom Goldstein

Keywords: [ implicit bias ] [ generalization ] [ implicit regularization ] [ optimization ] [ sgd ]


Abstract:

It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks. In this work, we demonstrate that non-stochastic full-batch training can achieve performance comparable to SGD on CIFAR-10 using modern architectures. To this end, we show that the implicit regularization of SGD can be completely replaced with explicit regularization. Our observations indicate that the perceived difficulty of full-batch training may be the result of its optimization properties and the disproportionate time and effort spent by the ML community tuning optimizers and hyperparameters for small-batch training.
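
The abstract's central claim, that SGD's implicit regularization can be swapped for an explicit term during full-batch training, is often operationalized as a penalty on the norm of the loss gradient. Below is a minimal PyTorch sketch of one full-batch update with such a penalty; the function name, penalty coefficient, and exact regularizer form are illustrative assumptions for exposition, not the authors' published recipe.

```python
import torch
import torch.nn as nn

def full_batch_step(model, inputs, targets, optimizer, penalty=0.01):
    """One full-batch gradient-descent step with an explicit
    gradient-norm penalty standing in for SGD's implicit regularization.

    Objective (illustrative form): L(w) + penalty * ||grad L(w)||^2.
    `inputs`/`targets` are assumed to hold the entire training set
    (or an accumulated full-batch gradient in practice).
    """
    criterion = nn.CrossEntropyLoss()
    loss = criterion(model(inputs), targets)

    # Gradients of the plain loss, with a graph retained so the
    # penalty term can itself be differentiated (double backward).
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    grad_norm_sq = sum(g.pow(2).sum() for g in grads)

    regularized_loss = loss + penalty * grad_norm_sq

    optimizer.zero_grad()
    regularized_loss.backward()
    optimizer.step()
    return loss.item(), grad_norm_sq.item()
```

In a sketch like this, the optimizer would typically be plain (momentum) gradient descent over the whole dataset, so any generalization benefit must come from the explicit penalty rather than from minibatch noise.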
