

Poster

Dynamic Sparse Graph for Efficient Deep Learning

Liu Liu · Lei Deng · Xing Hu · Maohua Zhu · Guoqi Li · Yufei Ding · Yuan Xie

Great Hall BC #67

Keywords: [ compression ] [ sparsity ] [ training ] [ acceleration ]


Abstract:

We propose to execute deep neural networks (DNNs) with a dynamic and sparse graph (DSG) structure to compress memory and accelerate execution during both training and inference. The great success of DNNs motivates the pursuit of lightweight models for deployment on embedded devices. However, most previous studies optimize for inference while neglecting training, or even complicating it. Training is far more intractable, since (i) the neurons dominate the memory cost during training, rather than the weights as in inference; (ii) the dynamic activations invalidate previous sparse-acceleration schemes that rely on one-off optimization of fixed weights; (iii) batch normalization (BN) is critical for maintaining accuracy, yet its activation reorganization damages the sparsity. To address these issues, DSG activates only a small fraction of neurons with high selectivity at each iteration via a dimension-reduction search, and it obtains BN compatibility via a double-mask selection. Experiments show significant memory saving (1.7-4.5x) and operation reduction (2.3-4.4x) with little accuracy loss on various benchmarks.
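To make the two key ideas in the abstract concrete, the sketch below illustrates (a) a dimension-reduction search that uses a random projection to cheaply estimate pre-activation magnitudes and keep only the top-k output neurons, and (b) a double-mask pattern that applies the same sparsity mask before and after BN so that BN's re-centering does not destroy the zeros. This is a minimal illustrative sketch, not the authors' implementation; the function names (`dimension_reduction_search`, `double_mask_bn`), the projection size `proj_dim`, and the assumption of a single shared mask per batch are all hypothetical choices made for clarity.

```python
import numpy as np

def dimension_reduction_search(x, W, k, proj_dim=32, rng=None):
    """Estimate |x @ W| via a random projection and compute only the
    top-k output neurons exactly (illustrative sketch)."""
    rng = np.random.default_rng(0) if rng is None else rng
    d_in, d_out = W.shape
    # Random projection compresses the input dimension so the estimate is cheap.
    R = rng.standard_normal((proj_dim, d_in)) / np.sqrt(proj_dim)
    x_low = R @ x            # shape (proj_dim,)
    W_low = R @ W            # shape (proj_dim, d_out)
    est = x_low @ W_low      # rough estimate of the full pre-activation x @ W
    # Keep the k neurons with the largest estimated magnitude.
    topk = np.argsort(-np.abs(est))[:k]
    mask = np.zeros(d_out, dtype=bool)
    mask[topk] = True
    # Exact computation only for the selected neurons; the rest stay zero.
    y = np.zeros(d_out)
    y[mask] = x @ W[:, mask]
    return y, mask

def batch_norm(y, eps=1e-5):
    """Per-feature batch normalization over the batch dimension."""
    mu = y.mean(axis=0)
    var = y.var(axis=0)
    return (y - mu) / np.sqrt(var + eps)

def double_mask_bn(y_batch, mask, eps=1e-5):
    """Apply the sparsity mask both before and after BN (hypothetical sketch):
    BN's mean subtraction turns zeros into nonzeros, so the second mask
    restores the intended sparsity pattern."""
    y_sparse = y_batch * mask           # first mask: sparse pre-BN activations
    y_bn = batch_norm(y_sparse, eps)    # BN densifies via re-centering
    return y_bn * mask                  # second mask: sparsity recovered

# Example usage with random data (assumed shapes only).
rng = np.random.default_rng(1)
x = rng.standard_normal(256)
W = rng.standard_normal((256, 512))
y, mask = dimension_reduction_search(x, W, k=64, rng=rng)
y_batch = rng.standard_normal((8, 512)) * mask
y_out = double_mask_bn(y_batch, mask)
```

The design point the sketch tries to capture is that the expensive dense matrix product is replaced by a low-dimensional estimate used only for neuron selection, while exact arithmetic is spent on the selected neurons; the second mask after BN is what the abstract refers to as keeping BN compatible with activation sparsity.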
