Poster

Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients

Milad Alizadeh · Shyam Tailor · Luisa Zintgraf · Joost van Amersfoort · Sebastian Farquhar · Nicholas Lane · Yarin Gal

Keywords: [ Pruning ] [ Lottery Ticket Hypothesis ] [ Pruning at Initialization ]


Abstract:

Pruning neural networks at initialization would enable us to find sparse models that retain the accuracy of the original network while consuming fewer computational resources for training and inference. However, current methods are insufficient to enable this optimization and lead to a large degradation in model performance. In this paper, we identify a fundamental limitation in the formulation of current methods, namely that their saliency criteria look at a single step at the start of training without taking into account the trainability of the network. While pruning iteratively and gradually has been shown to improve pruning performance, explicit consideration of the training stage that will immediately follow pruning has so far been absent from the computation of the saliency criterion. To overcome the short-sightedness of existing methods, we propose Prospect Pruning (ProsPr), which uses meta-gradients through the first few steps of optimization to determine which weights to prune. ProsPr combines an estimate of the higher-order effects of pruning on the loss and the optimization trajectory to identify the trainable sub-network. Our method achieves state-of-the-art pruning performance on a variety of vision classification tasks, with less data and in a single shot compared to existing pruning-at-initialization methods.
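To make the idea concrete, the following is a minimal sketch (not the authors' released code) of meta-gradient saliency for pruning at initialization: a multiplicative mask is attached to each weight tensor, a few SGD steps are unrolled functionally so the computation graph is preserved, and the gradient of the post-update loss with respect to the masks is used as the pruning score. The toy MLP, random data, learning rate, unroll depth, and sparsity level are all illustrative assumptions.

```python
# Minimal sketch of meta-gradient pruning saliency (assumed toy setup, not ProsPr's exact code).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy 2-layer MLP weights; random data stands in for a real classification task.
w1 = torch.randn(784, 256, requires_grad=True)
w2 = torch.randn(256, 10, requires_grad=True)
params = [w1, w2]

# Multiplicative masks initialised to 1; saliency is the meta-gradient w.r.t. these.
masks = [torch.ones_like(p, requires_grad=True) for p in params]

def forward(ps, ms, x):
    h = F.relu(x @ (ps[0] * ms[0]))
    return h @ (ps[1] * ms[1])

def loss_fn(ps, ms, x, y):
    return F.cross_entropy(forward(ps, ms, x), y)

lr, unroll_steps = 0.1, 3          # assumed hyper-parameters for illustration
x = torch.randn(64, 784)
y = torch.randint(0, 10, (64,))

# Unroll a few SGD steps functionally, keeping the graph so gradients flow back to the masks.
cur = params
for _ in range(unroll_steps):
    grads = torch.autograd.grad(loss_fn(cur, masks, x, y), cur, create_graph=True)
    cur = [p - lr * g for p, g in zip(cur, grads)]

# Meta-gradient of the loss *after* the unrolled updates, taken w.r.t. the masks.
meta_grads = torch.autograd.grad(loss_fn(cur, masks, x, y), masks)
saliency = torch.cat([g.abs().flatten() for g in meta_grads])

# Prune (zero out) the 90% of weights with the lowest saliency, in a single shot.
k = int(0.9 * saliency.numel())
threshold = saliency.kthvalue(k).values
keep = [(g.abs() > threshold).float() for g in meta_grads]
pruned = [p.detach() * m for p, m in zip(params, keep)]
print(f"kept {sum(m.sum().item() for m in keep):.0f} of {saliency.numel()} weights")
```

Scoring the masks against the loss after a few unrolled steps, rather than at the initial point alone, is what folds the upcoming training trajectory into the saliency estimate; the remaining weights would then be trained as usual with the mask held fixed.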