Poster
Nonconvex Stochastic Optimization under Heavy-Tailed Noises: Optimal Convergence without Gradient Clipping
Zijian Liu · Zhengyuan Zhou
Hall 3 + Hall 2B #339
Sat 26 Apr, midnight – 2:30 a.m. PDT
Abstract:
Recently, the study of heavy-tailed noise in first-order nonconvex stochastic optimization has received considerable attention, since it is recognized as a more realistic condition supported by many empirical observations. Specifically, the stochastic noise (the difference between the stochastic gradient and the true gradient) is assumed to have only a finite p-th moment for some p ∈ (1, 2], rather than satisfying the classical finite-variance assumption. To handle this more challenging setting, various algorithms have been proposed and shown to converge at the optimal O(T^((1−p)/(3p−2))) rate for smooth objectives after T iterations. Notably, all of these newly designed algorithms are based on the same technique: gradient clipping. Naturally, one may ask whether clipping is a necessary ingredient, and the only way, to guarantee convergence under heavy-tailed noise. In this work, by revisiting the existing Batched Normalized Stochastic Gradient Descent with Momentum (Batched NSGDM) algorithm, we provide the first convergence result under heavy-tailed noise without gradient clipping. Concretely, we prove that Batched NSGDM achieves the optimal O(T^((1−p)/(3p−2))) rate even under the relaxed smoothness condition. More interestingly, we also establish the first O(T^((1−p)/(2p))) convergence rate for the case where the tail index p is unknown in advance, which is arguably the common scenario in practice.
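The core update behind Batched NSGDM is normalized SGD with momentum over batch-averaged gradients, with no clipping step. Below is a minimal, self-contained Python sketch of that update on a toy quadratic objective corrupted by symmetric Pareto noise (which has a finite p-th moment only for p below its tail index); the hyperparameters, test objective, and noise model are illustrative assumptions, not taken from the paper.

```python
import math
import random

def heavy_tailed_quadratic_grad(x, tail_index=1.5):
    """Stochastic gradient of f(x) = 0.5 * ||x||^2 corrupted by symmetric
    Pareto noise. The noise has a finite p-th moment only for p < tail_index,
    so its variance is infinite: a stand-in for the heavy-tailed setting."""
    return [xi + random.choice([-1.0, 1.0]) * random.paretovariate(tail_index)
            for xi in x]

def batched_nsgdm(grad_fn, x0, lr=0.1, beta=0.9, batch_size=4, steps=200):
    """Sketch of Batched Normalized SGD with Momentum (no gradient clipping).

    Update rule:
        g_t     = batch-averaged stochastic gradient at x_t
        m_t     = beta * m_{t-1} + (1 - beta) * g_t
        x_{t+1} = x_t - lr * m_t / ||m_t||
    The normalization bounds every step by lr, which is what replaces
    clipping as the mechanism for taming heavy-tailed noise.
    """
    x = list(x0)
    m = [0.0] * len(x)
    for _ in range(steps):
        # average batch_size stochastic gradients
        g = [0.0] * len(x)
        for _ in range(batch_size):
            for i, gi in enumerate(grad_fn(x)):
                g[i] += gi / batch_size
        # momentum accumulation
        m = [beta * mi + (1.0 - beta) * gi for mi, gi in zip(m, g)]
        # normalized step (guard against a zero-norm momentum)
        norm = math.sqrt(sum(mi * mi for mi in m)) or 1.0
        x = [xi - lr * mi / norm for xi, mi in zip(x, m)]
    return x

random.seed(0)
x_final = batched_nsgdm(heavy_tailed_quadratic_grad, [5.0, -5.0])
```

Note that the step length is always exactly lr regardless of the gradient's magnitude, so a single extreme noise draw can tilt the direction of one step but cannot blow up the iterate, unlike vanilla SGD under infinite-variance noise.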