

Poster

Simple Augmentation Goes a Long Way: ADRL for DNN Quantization

Lin Ning · Guoyang Chen · Weifeng Zhang · Xipeng Shen

Virtual

Keywords: [ DNN ] [ augmented deep reinforcement learning ] [ mixed precision ] [ quantization ] [ reinforcement learning ]


Abstract:

Mixed precision quantization improves DNN performance by assigning different bit-widths to different layers. Searching for the optimal bit-width for each layer, however, remains a challenge. Deep Reinforcement Learning (DRL) has shown some recent promise, but it suffers from instability due to function approximation errors, which causes large variances in the early training stages, slow convergence, and suboptimal policies for the mixed-precision quantization problem. This paper proposes augmented DRL (ADRL) as a way to alleviate these issues. The new strategy augments the neural networks in DRL with a complementary scheme to boost learning performance. The paper examines the effectiveness of ADRL both analytically and empirically, showing that it produces more accurate quantized models than state-of-the-art DRL-based quantization while improving the learning speed by 4.5-64 times.
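For background, the following is a minimal sketch of what mixed-precision quantization means in practice: each layer is quantized to its own bit-width, and the search problem addressed by the paper is choosing those per-layer bit-widths. The layer names and bit-width values below are hypothetical, and the symmetric uniform quantizer is only an illustrative assumption, not the authors' method.

```python
import numpy as np

def quantize_tensor(w, bits):
    """Simulate symmetric uniform quantization of a weight tensor to a given bit-width."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8-bit signed values
    scale = np.max(np.abs(w)) / qmax if np.any(w) else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax)   # integer grid
    return q * scale                                # de-quantized values for simulation

# Hypothetical per-layer bit-width assignment -- the kind of policy a DRL agent searches for.
layer_weights = {
    "conv1": np.random.randn(64, 3, 3, 3),
    "conv2": np.random.randn(128, 64, 3, 3),
    "fc":    np.random.randn(10, 512),
}
bit_widths = {"conv1": 8, "conv2": 4, "fc": 6}      # mixed precision: one bit-width per layer

quantized = {name: quantize_tensor(w, bit_widths[name])
             for name, w in layer_weights.items()}
```

In a DRL-based search such as the one the paper builds on, the agent proposes a `bit_widths` assignment, the quantized model is evaluated, and the resulting accuracy (under a resource constraint) serves as the reward.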
