In-Person Poster presentation / poster accept

FIT: A Metric for Model Sensitivity

Ben Zandonati · Adrian Pol · Maurizio Pierini · Olya Sirkin · Tal Kopetz

MH1-2-3-4 #69

Keywords: [ quantization ] [ Fisher information ] [ general machine learning ]


Model compression is vital to the deployment of deep learning on edge devices. Low precision representations, achieved via quantization of weights and activations, can reduce inference time and memory requirements. However, quantifying and predicting the response of a model to the changes associated with this procedure remains challenging. This response is non-linear and heterogeneous throughout the network. Understanding which groups of parameters and activations are more sensitive to quantization than others is a critical stage in maximizing efficiency. For this purpose, we propose FIT. Motivated by an information geometric perspective, FIT combines the Fisher information with a model of quantization. We find that FIT can estimate the final performance of a network without retraining. FIT effectively fuses contributions from both parameter and activation quantization into a single metric. Additionally, FIT is fast to compute when compared to existing methods, demonstrating favourable convergence properties. These properties are validated experimentally across hundreds of quantization configurations, with a focus on layer-wise mixed-precision quantization.
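The abstract describes FIT as a fusion of the Fisher information with a model of quantization, used to predict post-quantization performance without retraining. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of the general idea: weight the squared quantization perturbation of each parameter by a diagonal empirical Fisher estimate (mean squared per-sample gradient), for a toy least-squares model. The function names, the uniform quantizer, and the least-squares setting are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantization to the given bit width (illustrative
    # stand-in for the paper's quantization model).
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def empirical_fisher_diag(X, y, w):
    # Diagonal empirical Fisher for a least-squares loss: the mean of the
    # squared per-sample gradients of the loss w.r.t. the weights.
    residuals = X @ w - y                      # shape (n,)
    per_sample_grads = residuals[:, None] * X  # shape (n, d)
    return np.mean(per_sample_grads ** 2, axis=0)

def fit_score(X, y, w, bits):
    # FIT-style sensitivity: Fisher-weighted squared quantization
    # perturbation, summed over parameters. Lower = less sensitive.
    delta = w - quantize(w, bits)
    return float(np.sum(empirical_fisher_diag(X, y, w) * delta ** 2))
```

Under this sketch, aggressive (low-bit) quantization produces larger perturbations and hence a larger score, while parameters with small Fisher values contribute little even when perturbed, which is the intuition behind using such a metric to pick layer-wise mixed-precision configurations.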
