Virtual oral in Affinity Workshop: Tiny Papers Showcase Day (a DEI initiative)

Large Sparse Kernels for Federated Learning

Feilong Zhang


Abstract:

Existing approaches to addressing non-IID data in federated learning are often tailored to a specific type of heterogeneity and may not generalize to other scenarios. In this paper, we present empirical evidence that employing large sparse convolution kernels improves robustness to distribution shift in federated learning across a range of non-IID problems, including imbalanced data volumes, differing feature spaces, and differing label distributions. Our experiments demonstrate that substituting standard convolutional kernels with large sparse kernels yields substantial gains in robustness to non-IID data across multiple federated learning methods.
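This page does not include implementation details, but the core idea, replacing a dense small-kernel convolution with a large kernel whose weights are mostly zeroed out, can be sketched as a drop-in PyTorch module. The kernel size (31), sparsity ratio (0.9), and fixed random binary mask below are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LargeSparseConv2d(nn.Module):
    """A 2D convolution with a large kernel whose weights are pruned by a
    fixed binary mask, leaving only a sparse subset of taps active."""

    def __init__(self, in_ch, out_ch, kernel_size=31, sparsity=0.9, stride=1):
        super().__init__()
        padding = kernel_size // 2  # preserve spatial size at stride 1
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=padding, bias=False)
        # Fixed random mask: roughly (1 - sparsity) of the kernel taps stay nonzero.
        mask = (torch.rand_like(self.conv.weight) > sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Re-apply the mask on every forward pass so pruned taps remain zero
        # even as the dense weight tensor receives gradient updates.
        return F.conv2d(x, self.conv.weight * self.mask,
                        stride=self.conv.stride, padding=self.conv.padding)

# Example: drop-in replacement for a standard 3x3 convolution in a client model.
layer = LargeSparseConv2d(in_ch=64, out_ch=64, kernel_size=31, sparsity=0.9)
y = layer(torch.randn(1, 64, 32, 32))  # -> shape (1, 64, 32, 32)
```

Keeping the mask fixed and reusing it across clients is one plausible design choice here; how the paper initializes or shares the sparsity pattern is not specified on this page.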
