
Poster
in
Workshop: Bridging the Gap Between Practice and Theory in Deep Learning

Generalization Bounds for Magnitude Based Pruning

Etash Guha · Prasanjit Dubey · Xiaoming Huo


Abstract:

Magnitude-based pruning is a popular technique for improving the efficiency of neural networks given its simplicity and ease of use, yet it also surprisingly maintains strong generalization behavior. Explaining this generalization is difficult, and existing analyses connecting sparsity to generalization rely on more structured and less practical compression than simple magnitude-based weight dropping. We circumvent the need for structured compression by using empirical observations on the distribution of weights and recent random matrix theory to more tightly tie the connection between pruning-based sparsity and generalization, and we provide bounds on how Magnitude-Based Pruning and Iterative Magnitude Pruning affect generalization. We empirically verify that our bounds capture the connection between pruning-based sparsity and generalization more accurately than existing bounds.
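
For readers unfamiliar with the operation the abstract refers to, below is a minimal sketch (not the authors' code) of one-shot magnitude-based pruning, assuming a NumPy weight matrix and a target sparsity level; Iterative Magnitude Pruning applies the same step repeatedly, with retraining between rounds.

    # Minimal sketch of magnitude-based pruning: zero out the smallest-magnitude weights.
    import numpy as np

    def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Zero the `sparsity` fraction of entries with the smallest |w|."""
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        if k == 0:
            return weights.copy()
        # k-th smallest magnitude serves as the pruning threshold
        threshold = np.partition(flat, k - 1)[k - 1]
        mask = np.abs(weights) > threshold
        return weights * mask

    # Example: prune 90% of a random weight matrix
    W = np.random.randn(256, 256)
    W_pruned = magnitude_prune(W, sparsity=0.9)
    print(f"Remaining nonzeros: {np.count_nonzero(W_pruned) / W.size:.2%}")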
