

Poster

Knowledge Distillation with Multi-granularity Mixture of Priors for Image Super-Resolution

Simiao Li · Yun Zhang · Wei Li · Hanting Chen · Wenjia Wang · Bingyi Jing · Shaohui Lin · Jie Hu

Hall 3 + Hall 2B #116
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Knowledge distillation (KD) is a promising yet challenging model-compression approach that transfers rich learned representations from powerful but resource-demanding teacher models to efficient student models. Previous KD methods for image super-resolution (SR) are often tailored to specific teacher-student architectures, which limits their potential for improvement and hinders broader application. This work presents a novel KD framework for SR models, the multi-granularity Mixture of Priors Knowledge Distillation (MiPKD), which can be applied universally across a wide range of architectures at both the feature and block levels. The teacher's knowledge is integrated with the student's features via the Feature Prior Mixer, and the reconstructed feature propagates dynamically during training with the Block Prior Mixer. Extensive experiments demonstrate the effectiveness of the proposed MiPKD technique.
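The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of what a feature-level prior-mixing distillation step could look like: a random binary mask swaps teacher features into the student's feature map, and the mixed (reconstructed) feature is pulled toward the teacher prior. The class name FeaturePriorMixer, the mix_ratio parameter, and the L1 objective are illustrative assumptions, not the authors' released MiPKD implementation.

```python
# Hypothetical sketch of feature-level prior mixing for KD in super-resolution.
# Names (FeaturePriorMixer, mix_ratio) are illustrative and NOT taken from the
# MiPKD paper or its released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeaturePriorMixer(nn.Module):
    """Mixes teacher and student feature maps with a random binary spatial mask,
    then measures how far the mixed feature is from the teacher's feature.
    One plausible way to realize a feature-level prior in KD."""

    def __init__(self, mix_ratio: float = 0.5):
        super().__init__()
        self.mix_ratio = mix_ratio

    def forward(self, feat_student: torch.Tensor, feat_teacher: torch.Tensor) -> torch.Tensor:
        # Assumes both features share the same shape (B, C, H, W); in practice a
        # 1x1 conv adapter may be needed to align channel dimensions.
        mask = (torch.rand_like(feat_student[:, :1]) < self.mix_ratio).float()
        mixed = mask * feat_teacher + (1.0 - mask) * feat_student
        # Distillation loss: pull the mixed feature toward the (frozen) teacher prior.
        return F.l1_loss(mixed, feat_teacher.detach())


if __name__ == "__main__":
    mixer = FeaturePriorMixer(mix_ratio=0.5)
    fs = torch.randn(2, 64, 32, 32)   # student feature map
    ft = torch.randn(2, 64, 32, 32)   # teacher feature map
    loss = mixer(fs, ft)
    print(f"feature-mixing KD loss: {loss.item():.4f}")
```

In an actual SR training loop, a loss of this kind would typically be added to the standard reconstruction loss between the student's output and the high-resolution target.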
