## FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning

### Nam Hyeon-Woo · Moon Ye-Bin · Tae-Hyun Oh

Keywords: [ communication efficiency ] [ federated learning ]

Thu 28 Apr 2:30 a.m. PDT — 4:30 a.m. PDT

Abstract: In this work, we propose a communication-efficient parameterization, $\texttt{FedPara}$, for federated learning (FL) to overcome the burden of frequent model uploads and downloads. Our method re-parameterizes the weight parameters of layers using low-rank weights followed by the Hadamard product. Compared to conventional low-rank parameterization, our $\texttt{FedPara}$ method is not restricted by low-rank constraints and therefore has a far larger capacity. This property enables our method to achieve comparable performance while requiring 3 to 10 times lower communication costs than the model with the original layers, which is not achievable with traditional low-rank methods. The efficiency of our method can be further improved by combining it with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, $\texttt{pFedPara}$, which separates parameters into global and local ones. We show that $\texttt{pFedPara}$ outperforms competing personalized FL methods with more than three times fewer parameters.
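To make the parameterization concrete, below is a minimal PyTorch sketch of a fully connected layer whose weight is formed as the Hadamard product of two rank-$r$ factorizations, in the spirit described in the abstract. The class name, initialization, and hyperparameters are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HadamardLowRankLinear(nn.Module):
    """Illustrative sketch (not the authors' code): the weight matrix is
    W = (X1 @ Y1.T) * (X2 @ Y2.T), i.e. the Hadamard product of two rank-r
    factorizations. Only 2 * r * (out_features + in_features) parameters are
    stored and communicated, yet the product can reach rank up to r^2, so the
    layer is not limited to rank r as a plain low-rank split would be."""

    def __init__(self, in_features: int, out_features: int, rank: int, bias: bool = True):
        super().__init__()
        self.x1 = nn.Parameter(torch.empty(out_features, rank))
        self.y1 = nn.Parameter(torch.empty(in_features, rank))
        self.x2 = nn.Parameter(torch.empty(out_features, rank))
        self.y2 = nn.Parameter(torch.empty(in_features, rank))
        self.bias = nn.Parameter(torch.zeros(out_features)) if bias else None
        for p in (self.x1, self.y1, self.x2, self.y2):
            nn.init.kaiming_uniform_(p, a=5 ** 0.5)  # init choice is an assumption

    def weight(self) -> torch.Tensor:
        # Hadamard (element-wise) product of two low-rank matrices
        return (self.x1 @ self.y1.t()) * (self.x2 @ self.y2.t())

    def forward(self, inp: torch.Tensor) -> torch.Tensor:
        return F.linear(inp, self.weight(), self.bias)

# Example: a 512 -> 512 layer with rank 16 transmits 2 * 16 * (512 + 512) = 32,768
# weight parameters per communication round instead of 512 * 512 = 262,144.
layer = HadamardLowRankLinear(512, 512, rank=16)
out = layer(torch.randn(8, 512))
```

For a personalized variant in the spirit of $\texttt{pFedPara}$, one factorization could be kept local to the device while the other is shared with the server, matching the global/local split mentioned in the abstract; exactly how the factors are partitioned and combined is the paper's design choice and is not reproduced here.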
