

Poster

Optimizing (L0,L1)-Smooth Functions by Gradient Methods

Daniil Vankov · Anton Rodomanov · Angelia Nedich · Lalitha Sankar · Sebastian Stich

Hall 3 + Hall 2B #427
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract: We study gradient methods for optimizing (L0,L1)-smooth functions, a class that generalizes Lipschitz-smooth functions and has gained attention for its relevance in machine learning. We provide new insights into the structure of this function class and develop a principled framework for analyzing optimization methods in this setting. While our convergence rate estimates recover existing results for minimizing the gradient norm in nonconvex problems, our approach significantly improves the best-known complexity bounds for convex objectives. Moreover, we show that the gradient method with Polyak stepsizes and the normalized gradient method achieve nearly the same complexity guarantees as methods that rely on explicit knowledge of (L0,L1). Finally, we demonstrate that a carefully designed accelerated gradient method can be applied to (L0,L1)-smooth functions, further improving all previous results.
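As context, (L0,L1)-smoothness is commonly defined in the literature by the Hessian bound ||∇²f(x)|| ≤ L0 + L1·||∇f(x)||, so the curvature may grow with the gradient norm. The sketch below is a minimal numerical illustration, not the authors' implementation, of two of the parameter-free schemes named in the abstract: the normalized gradient method and gradient descent with Polyak stepsizes. The toy objective f(x) = Σ exp(x_i), the stepsize eta, and the reference value f_star are illustrative assumptions, not values from the paper.

# Minimal sketch (assumed setup, not from the paper): two stepsize rules that
# do not require explicit knowledge of (L0, L1), applied to a toy objective
# f(x) = sum(exp(x_i)), whose Hessian norm grows with the gradient norm, so it
# is (L0, L1)-smooth but not globally Lipschitz-smooth.

import numpy as np

def f(x):
    return np.sum(np.exp(x))

def grad_f(x):
    return np.exp(x)

def normalized_gradient_method(x0, eta=0.5, iters=200):
    # x_{k+1} = x_k - eta * g_k / ||g_k||; eta is a tunable constant.
    x = x0.copy()
    for _ in range(iters):
        g = grad_f(x)
        x = x - eta * g / (np.linalg.norm(g) + 1e-12)
    return x

def polyak_gradient_method(x0, f_star, iters=200):
    # Polyak stepsize: eta_k = (f(x_k) - f*) / ||g_k||^2, assuming f* is known.
    x = x0.copy()
    for _ in range(iters):
        g = grad_f(x)
        eta = (f(x) - f_star) / (np.linalg.norm(g) ** 2 + 1e-12)
        x = x - eta * g
    return x

if __name__ == "__main__":
    x0 = np.full(5, 2.0)
    f_star = 0.0  # infimum of sum(exp); used only to illustrate the Polyak rule
    print("normalized GM:   f =", f(normalized_gradient_method(x0)))
    print("Polyak stepsize: f =", f(polyak_gradient_method(x0, f_star)))

Both rules adapt to the objective without using L0 or L1 explicitly: the normalized method caps every step's length at eta, while the Polyak rule scales the step by the remaining suboptimality, which is the sense in which the abstract says they match the guarantees of methods that know (L0,L1).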
