

Poster

On the Importance of Language-driven Representation Learning for Heterogeneous Federated Learning

Yunlu Yan · Chun-Mei Feng · Wangmeng Zuo · Salman Khan · Lei Zhu · Yong Liu

Hall 3 + Hall 2B #535
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Non-Independent and Identically Distributed (non-IID) training data significantly challenge federated learning (FL), impairing the performance of the global model in distributed frameworks. Inspired by the superior performance and generalizability of language-driven representation learning in centralized settings, we explore its potential to enhance FL for handling non-IID data. Specifically, this paper introduces FedGLCL, a novel language-driven FL framework for image-text learning that uniquely integrates global language and local image features through contrastive learning, offering a new approach to tackling non-IID data in FL. FedGLCL redefines FL by avoiding separately trained local models for each client: instead, it uses contrastive learning to harmonize local image features with global textual data, enabling uniform feature learning across different local models. The pre-trained text encoder in FedGLCL serves a dual purpose: it reduces the variance of local feature representations in FL by providing a stable and rich language context, and it mitigates overfitting, particularly to majority classes, by leveraging broad linguistic knowledge. Extensive experiments show that FedGLCL significantly outperforms state-of-the-art FL algorithms across different non-IID scenarios.
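For illustration, below is a minimal PyTorch sketch of the kind of language-driven contrastive objective the abstract describes: class-name prompts are encoded once by a frozen pre-trained text encoder into global anchors shared by all clients, and each client pulls its local image features toward the anchor of the correct class. The names (`text_encoder`, `image_encoder`, `tokenizer`, the prompt template, and the temperature value) are hypothetical assumptions, not taken from the paper, whose exact formulation may differ.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_text_anchors(text_encoder, tokenizer, class_prompts):
    """Encode one prompt per class (e.g. "a photo of a {class}") with the
    frozen pre-trained text encoder. These global anchors are computed once
    and shared by every client, so all clients align to the same targets."""
    tokens = tokenizer(class_prompts)
    anchors = text_encoder(tokens)            # (num_classes, dim)
    return F.normalize(anchors, dim=-1)

def local_contrastive_loss(image_encoder, images, labels, text_anchors,
                           temperature=0.07):
    """Local client objective (CLIP-style): pull each image feature toward
    its class's text anchor and push it away from the other classes' anchors."""
    feats = F.normalize(image_encoder(images), dim=-1)   # (batch, dim)
    logits = feats @ text_anchors.t() / temperature      # (batch, num_classes)
    return F.cross_entropy(logits, labels)
```

Because the text anchors are fixed and identical across clients, each local image encoder is trained against a common, stable target space rather than a drifting local classifier, which is the intuition behind the reduced variance of local representations under non-IID data.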
