Poster
in
Workshop: Pitfalls of limited data and computation for Trustworthy ML

Federated Training of Dual Encoding Models on Small Non-IID Client Datasets

Raviteja Vemulapalli · Warren Morningstar · Philip Mansfield · Hubert Eichner · Karan Singhal · Arash Afkanpour · Bradley Green


Abstract:

Dual encoding models that encode a pair of inputs are widely used for representation learning. Many approaches train dual encoding models by maximizing agreement between pairs of encodings on centralized training data. However, in many scenarios, datasets are inherently decentralized across many clients, motivating federated learning. In this work, we focus on federated training of dual encoding models on decentralized data composed of many small, non-IID (not independent and identically distributed) client datasets. Existing approaches require large and diverse training batches to work well and perform poorly when naively adapted to the setting of small, non-IID client datasets using federated averaging. We observe that large-batch loss computation can be simulated on small individual clients for loss functions that are based on encoding statistics. Based on this insight, we propose a novel federated training approach, Distributed Cross Correlation Optimization (DCCO), which trains dual encoding models using encoding statistics aggregated across clients, without sharing individual samples or encodings. Our experimental results on two datasets demonstrate that the proposed approach outperforms federated variants of existing approaches by a large margin.
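To make the key observation concrete, the following is a minimal NumPy sketch of how per-client encoding statistics could be aggregated into a single cross-correlation matrix, on which a Barlow Twins-style objective is then evaluated as if it had been computed over one large batch. All function names, the statistics layout, and the toy data are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_clients = 8, 20

def client_statistics(z_a, z_b):
    """Per-client encoding statistics (hypothetical format): sums,
    squared sums, cross-products, and counts.

    z_a, z_b: (n_i, dim) encodings of the two inputs on client i.
    Only these aggregates leave the client; no raw samples or
    individual encodings are shared.
    """
    return {
        "n": z_a.shape[0],
        "sum_a": z_a.sum(axis=0), "sum_b": z_b.sum(axis=0),
        "sq_a": (z_a ** 2).sum(axis=0), "sq_b": (z_b ** 2).sum(axis=0),
        "cross": z_a.T @ z_b,  # (dim, dim) cross-products
    }

def aggregate_cross_correlation(stats_list):
    """Server-side aggregation into a normalized cross-correlation
    matrix, equivalent to computing it over the pooled large batch."""
    n = sum(s["n"] for s in stats_list)
    mu_a = sum(s["sum_a"] for s in stats_list) / n
    mu_b = sum(s["sum_b"] for s in stats_list) / n
    var_a = sum(s["sq_a"] for s in stats_list) / n - mu_a ** 2
    var_b = sum(s["sq_b"] for s in stats_list) / n - mu_b ** 2
    cross = sum(s["cross"] for s in stats_list) / n - np.outer(mu_a, mu_b)
    return cross / np.sqrt(np.outer(var_a, var_b) + 1e-12)

# Each client holds only a handful of (possibly non-IID) examples.
clients = []
for _ in range(num_clients):
    z_a = rng.normal(size=(4, dim))
    z_b = z_a + 0.1 * rng.normal(size=(4, dim))  # correlated second view
    clients.append(client_statistics(z_a, z_b))

corr = aggregate_cross_correlation(clients)
# Barlow Twins-style objective: pull the diagonal of the
# cross-correlation matrix toward 1, push off-diagonals toward 0.
off_diag = corr - np.diag(np.diag(corr))
loss = ((np.diag(corr) - 1.0) ** 2).sum() + 0.005 * (off_diag ** 2).sum()
print(f"diag mean: {np.diag(corr).mean():.3f}, loss: {loss:.3f}")
```

Because the cross-correlation matrix decomposes into sums over examples, each client's contribution is a fixed-size statistic regardless of its dataset size, which is what allows small clients to jointly simulate a large, diverse batch.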
