Poster
Complexity Lower Bounds of Adaptive Gradient Algorithms for Non-convex Stochastic Optimization under Relaxed Smoothness
Michael Crawshaw · Mingrui Liu
Hall 3 + Hall 2B #370
Thu 24 Apr, midnight – 2:30 a.m. PDT
Abstract:
Recent results in non-convex stochastic optimization demonstrate the convergence of popular adaptive algorithms (e.g., AdaGrad) under the $(L_0, L_1)$-smoothness condition, but the rate of convergence is a higher-order polynomial in problem parameters such as the smoothness constants. The complexity guaranteed by such algorithms to find an $\epsilon$-stationary point may be significantly larger than the optimal complexity of $\Theta(\Delta L \sigma^2 \epsilon^{-4})$ achieved by SGD in the $L$-smooth setting, where $\Delta$ is the initial optimality gap and $\sigma^2$ is the variance of the stochastic gradient. However, it is currently not known whether these higher-order dependencies can be tightened. To answer this question, we investigate complexity lower bounds for several adaptive optimization algorithms in the $(L_0, L_1)$-smooth setting, with a focus on the dependence on the problem parameters $\Delta, L_0, L_1$. We provide complexity bounds for three variations of AdaGrad, which show at least a quadratic dependence on the problem parameters $\Delta, L_0, L_1$. Notably, we show that the decorrelated variant of AdaGrad-Norm requires at least $\Omega(\Delta^2 L_1^2 \sigma^2 \epsilon^{-4})$ stochastic gradient queries to find an $\epsilon$-stationary point. We also provide a lower bound for SGD with a broad class of adaptive stepsizes. Our results show that, for certain adaptive algorithms, the $(L_0, L_1)$-smooth setting is fundamentally more difficult than the standard smooth setting, in terms of the initial optimality gap and the smoothness constants.
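For readers unfamiliar with the setting, the following is a minimal sketch of the standard definitions from the relaxed-smoothness literature, not necessarily the exact formulations used in this paper. A differentiable function $f$ is commonly called $(L_0, L_1)$-smooth if

\[
\|\nabla f(x) - \nabla f(y)\| \le \big(L_0 + L_1 \|\nabla f(x)\|\big)\,\|x - y\| \quad \text{whenever } \|x - y\| \le 1/L_1,
\]

which reduces to ordinary $L$-smoothness when $L_1 = 0$. AdaGrad-Norm is typically written as

\[
b_t^2 = b_{t-1}^2 + \|g_t\|^2, \qquad x_{t+1} = x_t - \frac{\eta}{b_t}\, g_t,
\]

where $g_t$ is a stochastic gradient at $x_t$ and $\eta$ is a base stepsize; in the decorrelated variant mentioned above, the stepsize at step $t$ is built from gradients independent of $g_t$.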