Robust Pruning at Initialization

Soufiane Hayou · Jean-Francois Ton · Arnaud Doucet · Yee Whye Teh


Keywords: [ pruning ] [ compression ] [ initialization ]

[ Abstract ]
[ Slides ] [ Paper ]
Tue 4 May 9 a.m. PDT — 11 a.m. PDT


Overparameterized Neural Networks (NN) display state-of-the-art performance. However, there is a growing need for smaller, energy-efficient neural networks to enable machine learning applications on devices with limited computational resources. A popular approach consists of using pruning techniques. While these techniques have traditionally focused on pruning pre-trained NN (LeCun et al., 1990; Hassibi et al., 1993), recent work by Lee et al. (2018) has shown promising results when pruning at initialization. However, for deep NNs, such procedures remain unsatisfactory, as the resulting pruned networks can be difficult to train and, for instance, they do not prevent one layer from being fully pruned. In this paper, we provide a comprehensive theoretical analysis of magnitude- and gradient-based pruning at initialization and of training sparse architectures. This allows us to propose novel principled approaches, which we validate experimentally on a variety of NN architectures.
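For intuition, the magnitude-based criterion discussed in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name, the `sparsity` parameter, and the global thresholding strategy are all assumptions made for the example. It also shows why pruning at initialization can fully prune a layer: the mask is computed from a single magnitude threshold, so a layer whose weights are all small may lose every entry.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Binary mask keeping the largest-magnitude weights.

    `sparsity` is the fraction of weights to remove. Hypothetical
    helper for illustration; not the method proposed in the paper.
    """
    flat = np.abs(weights).ravel()
    k = int(np.floor(sparsity * flat.size))
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest magnitude serves as the pruning threshold;
    # everything at or below it is removed.
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Prune half of a randomly initialized 4x4 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
mask = magnitude_prune_mask(w, sparsity=0.5)
pruned_w = w * mask  # surviving weights are trained; zeros stay zero
```

Gradient-based criteria such as SNIP (Lee et al., 2018) follow the same masking pattern but rank entries by a gradient-derived saliency score instead of raw magnitude.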
