
Workshop: The 4th Workshop on Practical ML for Developing Countries: Learning Under Limited/Low-Resource Settings

Domain Generalization in Robust Invariant Representation

Gauri Gupta · Ritvik Kapila · Keshav Gupta · Ramesh Raskar


Unsupervised approaches for learning representations invariant to common transformations are widely used in object recognition. Learning invariances makes models more robust and practical for real-world use. Since data transformations that do not change an object's intrinsic properties account for most of the complexity in recognition tasks, models invariant to these transformations require less training data, which improves efficiency and simplifies training. In this paper, we investigate how invariant representations generalize to out-of-distribution data and ask: do model representations that are invariant to some transformations in a particular seen domain remain invariant in previously unseen domains? Through extensive experiments, we demonstrate that the invariant model learns unstructured latent representations that are robust to distribution shifts, making invariance a desirable property for training in resource-constrained settings.
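The question the abstract poses can be illustrated with a toy sketch (not the authors' method or model): measure how invariant an encoder's representations are to a transformation by comparing embeddings of original and transformed inputs, both on a "seen" domain and on a distribution-shifted "unseen" domain. Here the encoder, transformation, and synthetic domains are all illustrative assumptions; the magnitude spectrum is used because it is exactly invariant to circular shifts.

```python
import numpy as np

def encoder(x):
    # Toy encoder: the magnitude spectrum of a 1-D signal is
    # invariant to circular shifts (a stand-in for a learned
    # transformation-invariant representation).
    return np.abs(np.fft.fft(x))

def invariance_score(xs, transform):
    # Mean cosine similarity between embeddings of each input
    # and its transformed version; 1.0 means perfect invariance.
    sims = []
    for x in xs:
        a, b = encoder(x), encoder(transform(x))
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))

rng = np.random.default_rng(0)
shift = lambda x: np.roll(x, 5)  # the transformation of interest

# Hypothetical domains: Gaussian signals as the "seen" domain,
# uniform signals as a distribution-shifted "unseen" domain.
seen = rng.normal(size=(100, 64))
unseen = rng.uniform(-1.0, 1.0, size=(100, 64))

print(invariance_score(seen, shift))    # ~1.0 on the seen domain
print(invariance_score(unseen, shift))  # ~1.0 here too: invariance transfers
```

For this analytically invariant encoder the score stays at 1.0 under the shift; the paper's experiments ask the analogous question for *learned* invariances, where transfer to unseen domains is not guaranteed.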
