Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression

Baeseong Park · Se Jung Kwon · Daehwan Oh · Byeonguk Kim · Dongsoo Lee

Mon 25 Apr 2:30 a.m. PDT — 4:30 a.m. PDT

Abstract:

Although fine-grained pruning techniques achieve high compression ratios, the conventional sparsity representations (such as CSR) required for irregular sparsity degrade parallelism significantly. Practical pruning methods therefore usually sacrifice pruning rate (via structured pruning) to preserve parallelism. In this paper, we study a fixed-to-fixed (lossless) encoding architecture and algorithm that supports fine-grained pruning methods, so that sparse neural networks can be stored in a highly regular structure. We first estimate the maximum compression ratio of encoding-based compression using entropy. Then, in an effort to push the compression ratio toward this theoretical maximum, we propose a sequential fixed-to-fixed encoding scheme. We demonstrate that the proposed scheme achieves nearly the maximum compression ratio on the Transformer and ResNet-50 pruned by various fine-grained pruning methods.
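As a rough illustration of the entropy bound mentioned in the abstract, the sketch below models the sparsity pattern as an i.i.d. Bernoulli mask (zero with probability equal to the pruning rate) and computes the resulting Shannon entropy and the corresponding upper bound on lossless compression of the mask. This modeling assumption and the function names are ours for illustration only, not the paper's exact formulation.

```python
import math

def mask_entropy_bits(pruning_rate: float) -> float:
    """Shannon entropy (bits per weight) of an i.i.d. Bernoulli mask
    where each weight is zero with probability `pruning_rate`.
    This i.i.d. model is an assumption of this sketch."""
    p = pruning_rate
    if p in (0.0, 1.0):
        return 0.0  # fully dense or fully pruned mask carries no information
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def max_mask_compression_ratio(pruning_rate: float) -> float:
    """Upper bound on lossless compression of the binary mask alone:
    1 raw bit per weight divided by the entropy per weight."""
    return 1.0 / mask_entropy_bits(pruning_rate)

for rate in (0.5, 0.7, 0.9, 0.95):
    print(f"pruning rate {rate:.2f}: "
          f"H = {mask_entropy_bits(rate):.3f} bits/weight, "
          f"mask compression bound = {max_mask_compression_ratio(rate):.2f}x")
```

Under this simple model, a 90% pruning rate gives an entropy of about 0.47 bits per weight, so no lossless encoding of the mask can exceed roughly 2.1x compression; a fixed-to-fixed scheme aims to approach such a bound while keeping the encoded layout regular.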
