

Poster

Gradient descent with generalized Newton’s method

Zhiqi Bu · Shiyun Xu

Hall 3 + Hall 2B #587
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

We propose the generalized Newton's method (GeN) --- a Hessian-informed approach that applies to any optimizer, such as SGD and Adam, and covers the Newton-Raphson method as a special case. Our method automatically and dynamically selects a learning rate that accelerates convergence, without intensive tuning of a learning rate scheduler. In practice, our method is easy to implement, since it only requires additional forward passes whose computational overhead (in training time and memory) is almost zero once amortized over many iterations. We present extensive experiments on language and vision tasks (e.g. GPT and ResNet) to show that GeN optimizers match the state-of-the-art performance achieved by carefully tuned learning rate schedulers.
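The abstract does not spell out the update rule, so the following is only a minimal sketch of how a Hessian-informed learning rate can be obtained from extra forward passes alone: the step size is chosen to minimize a local quadratic (Taylor) model of the loss along the optimizer's update direction, L(θ - ηd) ≈ L(θ) - η d⊤∇L + (η²/2) d⊤Hd, whose minimizer is η* = d⊤∇L / (d⊤Hd). The function name `taylor_learning_rate`, the finite-difference scheme, and the fallback behavior below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the authors' code): estimate a curvature-informed
# learning rate for an arbitrary optimizer direction using three forward
# passes and central finite differences along that direction.
import torch

def taylor_learning_rate(params, direction, loss_fn, eps=1e-3):
    """Estimate eta* = (d^T grad) / (d^T H d) along `direction`.

    params:    list of parameter tensors; perturbed in place and restored
    direction: list of tensors, the optimizer's proposed update (e.g. an Adam step)
    loss_fn:   closure that runs one forward pass and returns a scalar loss
    """
    with torch.no_grad():
        loss_0 = loss_fn().item()                       # L(theta)
        for p, d in zip(params, direction):             # theta + eps*d
            p.add_(d, alpha=eps)
        loss_plus = loss_fn().item()
        for p, d in zip(params, direction):             # theta - eps*d
            p.add_(d, alpha=-2 * eps)
        loss_minus = loss_fn().item()
        for p, d in zip(params, direction):             # restore theta
            p.add_(d, alpha=eps)

    first = (loss_plus - loss_minus) / (2 * eps)                # ~ d^T grad
    second = (loss_plus - 2 * loss_0 + loss_minus) / eps ** 2   # ~ d^T H d
    if second <= 0:        # local model is not convex; caller keeps its old rate
        return None
    return first / second
```

In such a scheme, the returned η would scale the optimizer's own update, and the estimate could be refreshed only every K iterations so that the extra forward passes are amortized, consistent with the near-zero overhead the abstract describes.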
