

Poster in Workshop: 5th Workshop on practical ML for limited/low resource settings (PML4LRS) @ ICLR 2024

Sparsity for Communication-Efficient LoRA

Kevin Kuo · Arian Raje · Kousik Rajesh · Virginia Smith


Abstract: Recently, several works have used unstructured pruning to augment adapter methods. However, these "sparse adapter" methods offer limited communication benefits in federated learning. In this work, we propose a simple baseline that combines LoRA with constant sparsity applied only during communication. On three FL image and text tasks, our method reduces communication costs by up to $10\times$ over vanilla (dense) LoRA and up to $5\times$ over more complex sparse LoRA baselines. Our work highlights the importance of considering system-specific constraints when developing efficient fine-tuning approaches, and serves as a competitive baseline for future work in federated fine-tuning.
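The abstract does not describe implementation details. As a rough illustration only, the sketch below shows one way constant-sparsity communication could be layered on top of LoRA updates in a federated round: each client zeroes out all but a fixed fraction of its LoRA update entries before sending them to the server, while keeping its local copies dense. The top-k magnitude rule, the 90% sparsity level, and the per-factor treatment are assumptions for illustration, not necessarily the authors' method.

```python
import numpy as np

def sparsify_update(delta, sparsity=0.9):
    """Zero all but the largest-magnitude (1 - sparsity) fraction of entries.
    Hypothetical top-k scheme applied only to what is communicated."""
    flat = delta.ravel()
    k = max(1, int(round((1.0 - sparsity) * flat.size)))
    # indices of the k largest-magnitude entries
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(delta.shape)

# Example: a client's LoRA factor updates (rank r) for one layer
rng = np.random.default_rng(0)
d, r = 768, 8
A_update = rng.normal(size=(d, r))   # change in LoRA "A" since the last round
B_update = rng.normal(size=(r, d))   # change in LoRA "B" since the last round

# Sparsify only the payload sent over the network; local state stays dense
to_send = [sparsify_update(A_update), sparsify_update(B_update)]
nonzero = sum(int(np.count_nonzero(m)) for m in to_send)
total = sum(m.size for m in to_send)
print(f"nonzero fraction communicated: {nonzero / total:.2f}")
```

With a fixed sparsity level, the communicated payload per round shrinks proportionally (plus index overhead for the nonzero entries), which is consistent with the constant-sparsity-during-communication idea stated in the abstract.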
