Poster
ADAPT: Attentive Self-Distillation and Dual-Decoder Prediction Fusion for Continual Panoptic Segmentation
Ze Yang · Shichao Dong · Ruibo Li · Nan Song · Guosheng Lin
Hall 3 + Hall 2B #490
Panoptic segmentation, which unifies semantic and instance segmentation into a single task, has witnessed considerable success on predefined tasks. However, traditional methods tend to struggle with catastrophic forgetting and poor generalization when learning from a continuous stream of new tasks. While continual learning aims to mitigate these challenges, our study reveals that existing continual panoptic segmentation (CPS) methods often suffer from efficiency or scalability issues. To address these limitations, we propose an efficient adaptation framework that incorporates attentive self-distillation and dual-decoder prediction fusion to preserve prior knowledge while facilitating model generalization. Specifically, we freeze the majority of model weights, enabling a shared forward pass between the teacher and student models during distillation. Attentive self-distillation then adaptively distills useful knowledge from the old classes without being distracted by non-object regions, which effectively enhances knowledge retention. Additionally, query-level fusion (QLF) is devised to seamlessly integrate the outputs of the dual decoders without incurring scale inconsistency. Our method achieves state-of-the-art performance on the ADE20K and COCO benchmarks. Code is available at https://github.com/Ze-Yang/ADAPT.
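To make the attentive self-distillation idea concrete, the sketch below shows one plausible form of the per-query weighting: a PyTorch-style distillation loss that down-weights queries the frozen teacher assigns to the no-object class, so non-object regions do not dominate knowledge retention. The tensor shapes, the no_object_index argument, the temperature, and the objectness weighting are all illustrative assumptions, not the authors' released implementation; see the linked repository for that.

import torch
import torch.nn.functional as F

def attentive_self_distillation_loss(teacher_logits, student_logits,
                                     no_object_index, temperature=2.0):
    """Distill old-class knowledge from teacher to student, weighting each
    query by the teacher's objectness (assumed heuristic, not the paper's code).

    teacher_logits, student_logits: (batch, num_queries, num_classes + 1)
        per-query classification logits; the last index is "no object".
    """
    with torch.no_grad():
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        # Attention weight per query: the teacher's confidence that the query
        # covers a real object, i.e. 1 - p(no-object).
        object_weight = 1.0 - teacher_probs[..., no_object_index]  # (B, Q)

    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Per-query KL divergence between teacher and student distributions.
    kl = F.kl_div(log_student, teacher_probs, reduction='none').sum(-1)  # (B, Q)

    # Objectness-weighted average, so background-dominated queries contribute
    # little to the distillation signal.
    loss = (object_weight * kl).sum() / object_weight.sum().clamp(min=1e-6)
    return loss * temperature ** 2

# Usage with random tensors standing in for the dual decoders' outputs:
B, Q, C = 2, 100, 81  # 80 old classes + 1 no-object slot (assumed sizes)
teacher = torch.randn(B, Q, C)
student = torch.randn(B, Q, C, requires_grad=True)
loss = attentive_self_distillation_loss(teacher, student, no_object_index=C - 1)
loss.backward()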