

Workshop

Neural Compression: From Information Theory to Applications

Stephan Mandt · Robert Bamler · Yingzhen Li · Christopher Schroers · Yang Yang · Max Welling · Taco Cohen

Data compression is a problem of great practical importance, and a new frontier for machine learning research that combines empirical findings (from the deep probabilistic modeling literature) with fundamental theoretical insights (from information theory, source coding, and minimum description length theory). Recent work building on deep generative models such as variational autoencoders, GANs, and normalizing flows has shown that machine-learning-based compression methods can significantly outperform state-of-the-art classical codecs for image and video data. At the same time, these neural compression methods provide new metrics for evaluating model and inference performance along the rate/distortion trade-off. This workshop aims to draw more attention to the young and highly impactful field of neural compression. In contrast to other workshops that focus on practical compression performance, our goal is to bring together researchers from deep learning, information theory, and probabilistic modeling to learn from each other and to encourage exchange on fundamentally novel issues, such as the role of stochasticity in compression algorithms or the ethical risks of semantic compression artifacts.
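To make the rate/distortion trade-off mentioned above concrete, below is a minimal, purely illustrative sketch of the objective L = R + λ·D that neural compression methods typically optimize: R is the expected code length (in bits) of quantized latents under a prior, and D is the reconstruction error. This is not code from the workshop or any particular paper; the linear "transforms", the Gaussian prior, and all constants are toy assumptions chosen only to show how the two terms are combined.

```python
import numpy as np
from scipy.stats import norm

# Toy rate/distortion objective L = R + lambda * D for transform coding.
# Everything here (weights, prior scale, lambda) is illustrative, not a
# recommendation of actual neural-compression hyperparameters.

rng = np.random.default_rng(0)

x = rng.normal(size=(4, 16))                 # a batch of toy signals
W_enc = rng.normal(scale=0.1, size=(16, 8))  # stand-in analysis transform
W_dec = rng.normal(scale=0.1, size=(8, 16))  # stand-in synthesis transform

y = x @ W_enc          # encode to latents
y_hat = np.round(y)    # quantize to integers (what would be entropy-coded)
x_hat = y_hat @ W_dec  # decode / reconstruct

# Rate R: bits needed to entropy-code y_hat under a factorized Gaussian prior,
# using the probability mass on the unit-width quantization bin.
sigma = 2.0
p = norm.cdf(y_hat + 0.5, scale=sigma) - norm.cdf(y_hat - 0.5, scale=sigma)
rate_bits = -np.log2(p + 1e-12).sum(axis=1).mean()

# Distortion D: mean squared reconstruction error.
distortion = ((x - x_hat) ** 2).mean()

lam = 0.01  # trade-off weight lambda; sweeping it traces the R-D curve
loss = rate_bits + lam * distortion
print(f"rate = {rate_bits:.2f} bits, distortion = {distortion:.4f}, loss = {loss:.2f}")
```

In a learned codec the transforms would be neural networks and the prior would itself be learned, with the quantization relaxed (e.g., by additive noise) so the whole objective can be trained by gradient descent; sweeping λ traces out the rate/distortion curve used to compare methods.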

Timezone: America/Los_Angeles

Schedule