

Workshop

What do we need for successful domain generalization?

Aniket Anand Deshmukh · Henry Gouk · Da Li · Cuiling Lan · Kaiyang Zhou · Timothy Hospedales

Virtual

Thu 4 May, 1 a.m. PDT

The real challenge for any machine learning system is to remain reliable and robust even when conditions differ from those seen at training time. Existing general-purpose approaches to domain generalization (DG), a problem setting that challenges a model to generalize well to data outside the distribution sampled at training time, have failed to consistently outperform standard empirical risk minimization baselines. In this workshop, we aim to work towards answering a single question: what do we need for successful domain generalization? We conjecture that additional information of some form is required for general-purpose learning methods to be successful in the DG setting. The purpose of this workshop is to identify possible sources of such information and to demonstrate how these extra sources of data can be leveraged to construct models that are robust to distribution shift. Example areas of interest include using meta-data associated with each domain, examining how multimodal learning can enable robustness to distribution shift, and developing flexible frameworks for exploiting properties of the data that are known to be invariant to distribution shift.
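For readers unfamiliar with the setup the abstract refers to, the sketch below (not part of the workshop page) illustrates the empirical risk minimization baseline in the DG setting: training data from several source domains is simply pooled and the average loss minimized, then the model is evaluated on a held-out domain whose distribution was never seen during training. The datasets, model, and hyperparameters here are hypothetical placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader


def erm_baseline(train_domains, test_domain, model, epochs=10, lr=1e-3):
    """train_domains: list of Datasets, one per source domain.
    test_domain: a Dataset drawn from a distribution unseen at training time."""
    # ERM ignores domain labels and pools all source domains into one training set.
    pooled = ConcatDataset(train_domains)
    loader = DataLoader(pooled, batch_size=64, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    # Measure out-of-distribution accuracy on the shifted target domain.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in DataLoader(test_domain, batch_size=256):
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total
```

The workshop's premise is that this baseline, despite its simplicity, is hard to beat without extra information such as domain meta-data, multimodal signals, or known invariances.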


Schedule