Spotlight Poster

Towards Robust Out-of-Distribution Generalization Bounds via Sharpness

Yingtian Zou · Kenji Kawaguchi · Yingnan Liu · Jiashuo Liu · Mong-Li Lee · Wynne Hsu

Halle B #292
Wed 8 May 7:30 a.m. PDT — 9:30 a.m. PDT


Generalizing to out-of-distribution (OOD) data, or unseen domains, termed OOD generalization, still lacks appropriate theoretical guarantees. Canonical OOD bounds focus on different distance measures between the source and target domains but fail to consider the optimization properties of the learned model. As empirically shown in recent work, the sharpness of the learned minimum influences OOD generalization. To bridge this gap between optimization and OOD generalization, we study the effect of sharpness on how well a model tolerates data changes under domain shift, which is usually captured by "robustness" in generalization. In this paper, we establish a rigorous connection between sharpness and robustness, which yields better OOD guarantees for robust algorithms. It also provides theoretical backing for the claim that "flat minima lead to better OOD generalization". Overall, we propose a sharpness-based OOD generalization bound that takes robustness into consideration, resulting in a tighter bound than non-robust guarantees. Our findings are supported by experiments on a ridge regression model, as well as experiments on deep-learning classification tasks.
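As a minimal illustration of the kind of quantity the abstract refers to, the sketch below fits a ridge regression model in closed form and estimates a common sharpness proxy: the worst-case loss increase over random parameter perturbations of a fixed norm around the minimum. The perturbation radius `rho`, the trial count, and the toy data are illustrative assumptions, not values or definitions taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ridge regression problem: minimize ||Xw - y||^2 / n + lam * ||w||^2
n, d, lam = 200, 5, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def loss(w):
    """Regularized mean-squared-error objective."""
    return np.mean((X @ w - y) ** 2) + lam * np.dot(w, w)

# Closed-form ridge solution: (X^T X / n + lam I)^{-1} X^T y / n
w_star = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

def sharpness(w, rho=0.05, trials=1000):
    """Estimate sharpness as the largest loss increase over random
    perturbations of norm rho around w (a Monte Carlo proxy; the
    paper's formal definition may differ)."""
    base = loss(w)
    worst = 0.0
    for _ in range(trials):
        u = rng.normal(size=d)
        u *= rho / np.linalg.norm(u)  # project onto the sphere of radius rho
        worst = max(worst, loss(w + u) - base)
    return worst

print(f"loss at minimum: {loss(w_star):.4f}")
print(f"sharpness proxy (rho=0.05): {sharpness(w_star):.4f}")
```

A flatter minimum yields a smaller value of this proxy; increasing `lam` typically flattens the ridge objective's curvature along directions ill-constrained by the data.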
