Poster
From Promise to Practice: Realizing High-performance Decentralized Training
Zesen Wang · Jiaojiao Zhang · Xuyang Wu · Mikael Johansson
Hall 3 + Hall 2B #380
Decentralized training of deep neural networks has attracted significant attention for its theoretically superior scalability compared to synchronous data-parallel methods like All-Reduce. However, realizing this potential in multi-node training is challenging due to the complex design space that involves communication topologies, computation patterns, and optimization algorithms. This paper identifies three key factors that can lead to speedups over All-Reduce training and constructs a runtime model to determine when and how decentralization can shorten the per-iteration runtime. To support the decentralized training of transformer-based models, we introduce a decentralized Adam algorithm that overlaps communication with computation, prove its convergence, and propose an accumulation technique to mitigate the high variance caused by small local batch sizes. We deploy our solution in clusters with up to 64 GPUs, demonstrating its practical advantages in both runtime and generalization performance under a fixed iteration budget. The experiment code is open-source at https://github.com/WangZesen/Decentralized-Training-Exp, and the extension code is open-source at https://github.com/WangZesen/Decent-DP.
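To make the abstract's ideas concrete, below is a minimal sketch of one decentralized training step on a ring topology: each worker accumulates gradients over several micro-batches to emulate a larger local batch, applies a local Adam update, and then averages its parameters with its two ring neighbors using non-blocking point-to-point communication so the exchange can overlap with subsequent computation. This is an illustrative assumption-laden example written against standard PyTorch distributed primitives, not the authors' Decent-DP implementation; all function and variable names (e.g., `decentralized_adam_step`, `micro_batches`) are hypothetical.

```python
# Illustrative sketch only, NOT the paper's Decent-DP API.
import torch
import torch.distributed as dist


def decentralized_adam_step(model, optimizer, micro_batches, loss_fn):
    rank, world = dist.get_rank(), dist.get_world_size()
    left, right = (rank - 1) % world, (rank + 1) % world

    # Gradient accumulation over micro-batches: reduces the variance that
    # comes from the small per-worker batch size (in the spirit of the
    # paper's accumulation technique).
    optimizer.zero_grad()
    for x, y in micro_batches:
        loss = loss_fn(model(x), y) / len(micro_batches)
        loss.backward()
    optimizer.step()  # local Adam update

    # Flatten parameters and exchange them with the two ring neighbors
    # using non-blocking sends/receives, so communication can overlap
    # with the next iteration's computation.
    flat = torch.cat([p.data.view(-1) for p in model.parameters()])
    recv_left = torch.empty_like(flat)
    recv_right = torch.empty_like(flat)
    ops = [
        dist.P2POp(dist.isend, flat, right),
        dist.P2POp(dist.irecv, recv_left, left),
        dist.P2POp(dist.isend, flat, left),
        dist.P2POp(dist.irecv, recv_right, right),
    ]
    works = dist.batch_isend_irecv(ops)

    # ... the next forward pass could be launched here to overlap with
    # the in-flight parameter exchange ...

    for w in works:
        w.wait()

    # Mixing step: equal-weight average over self and the two neighbors.
    mixed = (flat + recv_left + recv_right) / 3.0
    offset = 0
    for p in model.parameters():
        n = p.numel()
        p.data.copy_(mixed[offset:offset + n].view_as(p))
        offset += n
```

In this sketch the ring topology and the uniform 1/3 mixing weights are arbitrary choices for illustration; the paper's runtime model is what determines which topology and mixing scheme actually beat All-Reduce for a given cluster.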