## Language-driven Semantic Segmentation

### Boyi Li · Kilian Weinberger · Serge Belongie · Vladlen Koltun · Rene Ranftl

Keywords: [ transformer ] [ semantic segmentation ]

Mon 25 Apr 10:30 a.m. PDT — 12:30 p.m. PDT

Abstract:

We present LSeg, a novel model for language-driven semantic image segmentation. LSeg uses a text encoder to compute embeddings of descriptive input labels (e.g., "grass" or "building") together with a transformer-based image encoder that computes dense per-pixel embeddings of the input image. The image encoder is trained with a contrastive objective to align pixel embeddings to the text embedding of the corresponding semantic class. The text embeddings provide a flexible label representation in which semantically similar labels map to similar regions in the embedding space (e.g., "cat" and "furry"). This allows LSeg to generalize to previously unseen categories at test time, without retraining or even requiring a single additional training sample. We demonstrate that our approach achieves highly competitive zero-shot performance compared to existing zero- and few-shot semantic segmentation methods, and even matches the accuracy of traditional segmentation algorithms when a fixed label set is provided. Code and demo are available at https://github.com/isl-org/lang-seg.
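The zero-shot assignment step the abstract describes can be sketched as follows: each pixel receives the label whose text embedding is most similar to that pixel's embedding. This is a minimal illustration only, with random NumPy arrays standing in for the actual text and image encoders; the function name and array shapes are assumptions, not the paper's API.

```python
import numpy as np

def segment_with_labels(pixel_embeds, text_embeds):
    """Assign each pixel the label whose text embedding is most similar.

    pixel_embeds: (H, W, D) dense per-pixel image embeddings
    text_embeds:  (K, D) one embedding per candidate label
    returns:      (H, W) integer label map with values in [0, K)
    """
    # L2-normalize so the dot product below equals cosine similarity
    p = pixel_embeds / np.linalg.norm(pixel_embeds, axis=-1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    # (H, W, K): similarity between every pixel and every candidate label
    sims = p @ t.T
    return sims.argmax(axis=-1)

# Toy example: random embeddings in place of the real encoders
rng = np.random.default_rng(0)
pixel_embeds = rng.normal(size=(4, 4, 8))
text_embeds = rng.normal(size=(3, 8))  # e.g. "grass", "building", "sky"
label_map = segment_with_labels(pixel_embeds, text_embeds)
print(label_map.shape)  # (4, 4)
```

Because the label set enters only through `text_embeds`, swapping in embeddings of new category names at test time requires no retraining, which is the flexibility the paper highlights.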
