

Poster

Leveraging Flatness to Improve Information-Theoretic Generalization Bounds for SGD

Ze Peng · Jian Zhang · Yisen Wang · Lei Qi · Yinghuan Shi · Yang Gao

Hall 3 + Hall 2B #343
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract: Information-theoretic (IT) generalization bounds have been used to study the generalization of learning algorithms. These bounds are intrinsically data- and algorithm-dependent, so one can exploit the properties of the data and the algorithm to derive tighter bounds. However, we observe that although the flatness bias is crucial for SGD's generalization, these bounds fail to capture the improved generalization under better flatness and are also numerically loose. This is caused by existing IT bounds' inadequate leverage of SGD's flatness bias. This paper derives a more flatness-leveraging IT bound for flatness-favoring SGD. The bound indicates that the learned models generalize better if the large-variance directions of the final weight covariance have small local curvatures in the loss landscape. Experiments on deep neural networks show that our bound not only correctly reflects the better generalization when flatness is improved but is also numerically much tighter. This is achieved by a flexible technique called the "omniscient trajectory". When applied to Gradient Descent's minimax excess risk on convex-Lipschitz-bounded problems, it improves representative IT bounds' Ω(1) rates to O(1/n). It also implies a bypass of the memorization-generalization trade-off. Code is available at [https://github.com/peng-ze/omniscient-bounds](https://github.com/peng-ze/omniscient-bounds).
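The abstract's key claim is that generalization improves when the large-variance directions of the final weight covariance align with flat (low-curvature) directions of the loss landscape. The toy sketch below is not the paper's bound; it uses a hypothetical proxy, the curvature-weighted variance tr(HΣ) for a local Hessian H and weight covariance Σ, to show that shifting the same total variance from a sharp direction to flat directions shrinks the proxy.

```python
# Minimal sketch (assumed proxy, not the paper's bound): compare the
# curvature-weighted variance tr(H @ Sigma) when the weight covariance
# concentrates on a sharp versus a flat direction of a toy loss landscape.
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Toy loss landscape: a fixed positive-definite Hessian with one sharp
# direction (large eigenvalue) and several flat directions.
eigvecs, _ = np.linalg.qr(rng.standard_normal((d, d)))
curvatures = np.array([50.0, 1.0, 0.5, 0.1, 0.05])
H = eigvecs @ np.diag(curvatures) @ eigvecs.T

def weight_covariance(variances):
    """Covariance whose principal axes coincide with the Hessian's
    eigenvectors, with the given per-direction variances."""
    return eigvecs @ np.diag(variances) @ eigvecs.T

# Case A: most weight variance lies along the sharp direction.
sigma_sharp = weight_covariance([1.0, 0.01, 0.01, 0.01, 0.01])
# Case B: same total variance, concentrated on the flat directions.
sigma_flat = weight_covariance([0.01, 0.01, 0.01, 0.50, 0.51])

for name, sigma in [("variance on sharp direction", sigma_sharp),
                    ("variance on flat directions", sigma_flat)]:
    proxy = np.trace(H @ sigma)  # curvature-weighted variance tr(H Sigma)
    print(f"{name}: tr(H Sigma) = {proxy:.3f}")
```

Running this prints a proxy around 50 for Case A and well below 1 for Case B, mirroring the qualitative statement in the abstract: variance placed along flat directions contributes far less curvature-weighted variance than the same amount of variance along a sharp direction.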
