Lightning Talk
Workshop: Machine Learning for IoT: Datasets, Perception, and Understanding

FedEBA+: Towards Fair and Effective Federated Learning via Entropy-based Model

Lin Wang · Zhichao Wang · Xiaoying Tang


Ensuring fairness is a crucial aspect of Federated Learning (FL): the model should perform consistently across all clients. However, designing an FL algorithm that simultaneously improves global model performance and promotes fairness remains a formidable challenge, as achieving the latter often necessitates a trade-off with the former. To address this challenge, we propose a new FL algorithm, FedEBA+, which enhances fairness while simultaneously improving global model performance. Our approach incorporates a fair aggregation scheme that assigns higher weights to underperforming clients, together with a novel model update method for FL. In addition, we provide a theoretical convergence analysis and demonstrate the fairness of our algorithm. Experimental results reveal that FedEBA+ outperforms other state-of-the-art (SOTA) fair FL methods in terms of both fairness and global model performance.
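The abstract's fair aggregation idea, giving underperforming clients larger aggregation weights, can be sketched as a softmax over client losses. This is a minimal illustration, not the paper's exact entropy-based rule: the `temperature` parameter and both function names are hypothetical, introduced here only to show the weighting-then-averaging pattern.

```python
import numpy as np

def fair_aggregation_weights(client_losses, temperature=1.0):
    """Softmax over client losses: clients with higher loss
    (underperforming) receive larger aggregation weights.
    `temperature` is a hypothetical knob controlling how sharply
    weights concentrate on the worst-performing clients."""
    losses = np.asarray(client_losses, dtype=float)
    scaled = losses / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    w = np.exp(scaled)
    return w / w.sum()              # weights sum to 1

def aggregate(client_models, weights):
    """Weighted average of client model parameters
    (each model given as one flat parameter vector)."""
    models = np.stack([np.asarray(m, dtype=float) for m in client_models])
    return np.average(models, axis=0, weights=weights)

# Example: the client with the highest loss gets the largest weight.
losses = [0.2, 0.5, 1.3]
w = fair_aggregation_weights(losses, temperature=0.5)
global_model = aggregate([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], w)
```

A lower temperature pushes more weight onto the worst-off clients (more fairness pressure); a very high temperature recovers a near-uniform average, close to standard FedAvg-style aggregation.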
