
Contributed Talk in Workshop: Neural Compression: From Information Theory to Applications

Oral 1: Yann Dubois et al., Lossy Compression for Lossless Prediction

Taco Cohen


Abstract:

Most data is "seen" only by algorithms. Yet, data compressors are designed for perceptual fidelity rather than for storing information needed by algorithms performing downstream tasks. So, we are likely storing vast amounts of unneeded information. In this paper, we characterize the minimum bit-rates required to ensure high performance on all predictive tasks that are invariant under a set of transformations. Based on our theory, we design unsupervised objectives for training neural compressors that are closely related to self-supervised learning and generative modeling. Using these objectives, we achieve rate savings of around 60% on standard datasets, like MNIST, without decreasing classification performance.
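
To make the general recipe concrete, here is a minimal sketch (not the authors' code) of the kind of objective the abstract describes: an encoder is trained so that (i) its codes are cheap to store (a rate term) and (ii) transformed versions of the same input map to matching codes (an InfoNCE-style invariance term, as in self-supervised learning). All module names, hyperparameters, and the choice of "transformation" below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch import nn

class Encoder(nn.Module):
    """Tiny MLP encoder for flattened 28x28 images (e.g. MNIST)."""
    def __init__(self, dim_z: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, dim_z)
        )

    def forward(self, x):
        return self.net(x)

def rate_proxy(z: torch.Tensor) -> torch.Tensor:
    # Crude differentiable stand-in for the bit-rate of the code: the
    # negative log-density of z under a unit Gaussian prior (up to a
    # constant). A real neural compressor would use quantization plus a
    # learned entropy model instead.
    return 0.5 * (z ** 2).sum(dim=1).mean()

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # Contrastive "distortion": codes of two transformed versions of the
    # same image should match each other rather than codes of other images
    # in the batch.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
beta = 1e-3  # rate/invariance trade-off (illustrative value)

x = torch.rand(32, 1, 28, 28)  # stand-in batch of images
# Stand-in "task-irrelevant" transformation: small additive noise.
x1 = x + 0.05 * torch.randn_like(x)
x2 = x + 0.05 * torch.randn_like(x)
z1, z2 = encoder(x1), encoder(x2)

loss = info_nce(z1, z2) + beta * (rate_proxy(z1) + rate_proxy(z2))
opt.zero_grad()
loss.backward()
opt.step()
```

The weight `beta` trades off stored bits against invariance: larger values push the encoder to discard more of the transformation-specific information that, per the abstract, downstream predictive tasks never need.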