Poster in Workshop: Pitfalls of limited data and computation for Trustworthy ML

Learning Unforeseen Robustness from Out-of-distribution Data Using Equivariant Domain Translator

Sicheng Zhu · Bang An · Furong Huang · Sanghyun Hong


Abstract:

Existing approaches to training robust models are typically tailored to scenarios where data variations are available in the training set. While effective at achieving robustness to these foreseen variations, such approaches fail to learn unforeseen robustness, i.e., robustness to data variations whose characterization is unknown or for which no training examples are available. In this work, we learn such unforeseen robustness by harnessing the variations present in abundant out-of-distribution data. We attribute the main challenge of using these data to the domain gap and bridge it with a domain translator, which lets us bound the otherwise intractable robustness on the target distribution. Guided by this analysis, we propose a two-step algorithm that first trains an equivariant domain translator to map out-of-distribution data to the target distribution while preserving the variation, and then regularizes a model's output consistency on the domain-translated data to improve its robustness. We empirically demonstrate the effectiveness of our method in improving both unforeseen and foreseen robustness in comparison to existing baselines. We also show that training the equivariant domain translator serves as an effective criterion for source data selection.
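To make the second step of the described procedure concrete, below is a minimal PyTorch sketch of what regularizing output consistency on domain-translated data could look like. It assumes the equivariant translator from step one is already trained and frozen; all names here (translator, classifier, apply_variation, lambda_reg) and the KL-based consistency penalty are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of step 2: output-consistency regularization on translated data.
# Placeholder networks and a horizontal flip stand in for the real translator,
# classifier, and data variation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Frozen, pre-trained "equivariant domain translator" (placeholder network).
# Step 1 trains it so that applying the variation before or after translation
# gives approximately the same result.
translator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
translator.eval()
for p in translator.parameters():
    p.requires_grad_(False)

# The model being made robust (placeholder architecture).
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def apply_variation(x: torch.Tensor) -> torch.Tensor:
    """Placeholder for the (possibly unforeseen) data variation found in the
    out-of-distribution source data."""
    return torch.flip(x, dims=[-1])  # horizontal flip as a stand-in

def training_step(x_target, y_target, x_ood, lambda_reg=1.0):
    """Supervised loss on target-domain data plus a consistency penalty on
    domain-translated out-of-distribution data."""
    # Standard supervised loss on the target training set.
    sup_loss = F.cross_entropy(classifier(x_target), y_target)

    # Translate the OOD sample and its varied counterpart into the target domain.
    with torch.no_grad():
        translated = translator(x_ood)
        translated_varied = translator(apply_variation(x_ood))

    # Penalize disagreement between predictions on the two translated versions.
    p = F.log_softmax(classifier(translated), dim=-1)
    q = F.softmax(classifier(translated_varied), dim=-1)
    consistency = F.kl_div(p, q, reduction="batchmean")

    return sup_loss + lambda_reg * consistency

# Example usage with random tensors standing in for real batches.
x_t = torch.randn(8, 3, 32, 32)
y_t = torch.randint(0, 10, (8,))
x_o = torch.randn(8, 3, 32, 32)
loss = training_step(x_t, y_t, x_o)
loss.backward()
```

Because the translator is equivariant, the two translated inputs differ only by the variation itself, so penalizing the classifier's disagreement on them is a tractable surrogate for robustness to that variation on the target distribution.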
