Poster

Stochastic Training is Not Necessary for Generalization

Jonas Geiping · Micah Goldblum · Phil Pope · Michael Moeller · Tom Goldstein

Keywords: [ sgd ] [ optimization ] [ implicit regularization ] [ generalization ] [ implicit bias ]

Poster Spot F2 in Virtual World · Mon 25 Apr 10:30 a.m. – 12:30 p.m. PDT

Abstract:

It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks. In this work, we demonstrate that non-stochastic full-batch training can achieve comparably strong performance to SGD on CIFAR-10 using modern architectures. To this end, we show that the implicit regularization of SGD can be completely replaced with explicit regularization. Our observations indicate that the perceived difficulty of full-batch training may be the result of its optimization properties and the disproportionate time and effort spent by the ML community tuning optimizers and hyperparameters for small-batch training.
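To make the idea concrete, here is a minimal sketch of what replacing SGD's implicit regularization with an explicit penalty during full-batch training can look like: deterministic gradient descent on a loss augmented with a squared gradient-norm penalty. The model, data, penalty form, and the reg_strength value are illustrative assumptions for this sketch, not the paper's exact recipe or hyperparameters.

# A minimal sketch (not the paper's exact recipe): full-batch gradient descent
# where an explicit squared gradient-norm penalty stands in for the implicit
# regularization of SGD. Model, data, and reg_strength are illustrative.
import torch
import torch.nn as nn

def regularized_full_batch_step(model, loss_fn, inputs, targets,
                                optimizer, reg_strength=0.1):
    """One full-batch step on: loss + reg_strength * ||grad_theta(loss)||^2."""
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    # Keep first-order gradients in the graph so the penalty is differentiable.
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    grad_norm_sq = sum(g.pow(2).sum() for g in grads)
    (loss + reg_strength * grad_norm_sq).backward()
    optimizer.step()
    return loss.item(), grad_norm_sq.item()

# Toy usage on random data standing in for CIFAR-10 (one "batch" = whole dataset).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(512, 3, 32, 32)
y = torch.randint(0, 10, (512,))
for epoch in range(5):
    loss, gnorm = regularized_full_batch_step(
        model, nn.CrossEntropyLoss(), x, y, optimizer)
    print(f"epoch {epoch}: loss={loss:.3f}  ||grad||^2={gnorm:.3f}")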
