Poster in Workshop: SCOPE: SCALABLE OPTIMIZATION FOR EFFICIENT AND ADAPTIVE FOUNDATION MODELS
Grams: Gradient Descent with Adaptive Momentum Scaling
Yang Cao · Xiaoyu Li · Zhao Song
Keywords: [ large-scale machine learning ] [ adaptive optimization algorithm ] [ LLM training ] [ gradient descent ]
Abstract:
We introduce $\textbf{Gr}$adient Descent with $\textbf{A}$daptive $\textbf{M}$omentum $\textbf{S}$caling ($\textbf{Grams}$), a novel optimization algorithm that decouples the direction and magnitude of parameter updates in deep learning. Unlike traditional optimizers that directly integrate momentum into updates, Grams separates the update direction, derived from current gradients, from momentum, which is used solely for adaptive magnitude scaling. This approach enables Grams to achieve improved loss descent compared to state-of-the-art cautious and momentum-based optimizers. We theoretically demonstrate that Grams descends faster than other state-of-the-art optimizers and establish a global convergence guarantee for Grams. We also validate its effectiveness through extensive empirical evaluations. The results demonstrate Grams' superior performance, including faster convergence and better generalization, compared to widely used optimizers such as Adam, Lion, and their cautious variants. Our results highlight Grams' potential as a transformative approach for efficient optimization in large-scale machine learning.
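The abstract does not spell out the update rule, but the decoupling it describes can be sketched in PyTorch: Adam-style moment estimates supply only the update magnitude, while the current gradient supplies the direction. This is a minimal, hypothetical sketch, not the authors' implementation; the function name `grams_step`, the Adam-style moments `m` and `v`, and all hyperparameter defaults are assumptions for illustration.

```python
import torch

def grams_step(param, grad, m, v, step, lr=1e-3,
               beta1=0.9, beta2=0.999, eps=1e-8):
    """One Grams-style update (sketch): direction from the current
    gradient, magnitude from Adam-style momentum statistics."""
    # Adam-style first and second moment estimates (assumed form).
    m.mul_(beta1).add_(grad, alpha=1 - beta1)
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    # Standard bias correction.
    m_hat = m / (1 - beta1 ** step)
    v_hat = v / (1 - beta2 ** step)
    adam_update = m_hat / (v_hat.sqrt() + eps)
    # Decoupling: keep the momentum-derived magnitude, but take the
    # sign of the update from the current gradient rather than from
    # the momentum term.
    update = torch.sign(grad) * adam_update.abs()
    param.add_(update, alpha=-lr)
```

Under this reading, the sketch differs from a cautious optimizer, which masks out coordinates where momentum and gradient disagree: here every coordinate moves in the current gradient's direction, rescaled per coordinate by the momentum-derived magnitude.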