

In-Person Poster Presentation / Poster Accept

LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification

Sharath Girish · Kamal Gupta · Saurabh Singh · Abhinav Shrivastava

MH1-2-3-4 #49

Keywords: [ Deep Learning and representational learning ] [ pruning ] [ sparsity ] [ model compression ] [ quantization ]


Abstract:

We introduce LilNetX, an end-to-end trainable technique for neural networks that enables learning models with a specified accuracy-rate-computation trade-off. Prior works approach these problems one at a time and often require post-processing or multi-stage training, which becomes impractical and scales poorly to large datasets or architectures. Our method constructs a joint training objective that penalizes the self-information of network parameters in a latent representation space to encourage a small model size, while also introducing priors that increase structured sparsity in the parameter space to reduce computation. Compared with existing state-of-the-art model compression methods, we achieve up to 50% smaller model size and 98% model sparsity on ResNet-20 on the CIFAR-10 dataset, as well as 37% smaller model size and 71% structured sparsity on ResNet-50 trained on ImageNet, while retaining the same accuracy as those methods. We show that the resulting sparsity can improve inference time by almost 1.8× relative to the dense ResNet-50 baseline. Code is available at https://github.com/Sharath-girish/LilNetX.
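The abstract combines three objectives in a single loss: a task loss for accuracy, a rate (self-information) penalty on latent weight representations for model size, and a structured-sparsity prior for computation. The sketch below illustrates the general shape of such a joint objective in PyTorch. All names here (FactorizedGaussianRate, group_sparsity_penalty, joint_loss, the lambda weights) are hypothetical stand-ins, not the paper's API; the actual probability model and priors used by LilNetX are in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorizedGaussianRate(nn.Module):
    """Estimates the self-information (in bits) of latent weight
    representations under a simple factorized Gaussian prior.
    A stand-in for the learned probability model in the paper."""

    def __init__(self):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(1))

    def forward(self, latents):
        scale = self.log_scale.exp()
        dist = torch.distributions.Normal(0.0, scale)
        # Probability mass of each latent in a unit-width quantization bin;
        # the paper quantizes latents (e.g., rounding with a straight-through
        # estimator), which this surrogate assumes.
        p = dist.cdf(latents + 0.5) - dist.cdf(latents - 0.5)
        bits = -torch.log2(p.clamp_min(1e-9))
        return bits.sum()


def group_sparsity_penalty(weight):
    """Structured-sparsity prior: L1 norm over the L2 norms of
    output-channel groups, pushing entire channels toward zero."""
    return weight.flatten(1).norm(dim=1).sum()


def joint_loss(logits, targets, latents, rate_model, conv_layers,
               lambda_rate=1e-4, lambda_group=1e-5):
    """Sketch of a joint objective: task loss (accuracy) + rate penalty
    (model size) + structured-sparsity penalty (computation)."""
    ce = F.cross_entropy(logits, targets)
    rate = rate_model(latents)
    group = sum(group_sparsity_penalty(c.weight) for c in conv_layers)
    return ce + lambda_rate * rate + lambda_group * group
```

Under this reading, the rate term is a differentiable surrogate for the entropy-coded size of the stored parameters, while the group penalty produces the channel-level (structured) zeros that translate into the reported inference speedups.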
