

Poster in Workshop: Multimodal Representation Learning (MRL): Perks and Pitfalls

CHiLS: Zero-shot Image Classification with Hierarchical Label Sets

Zachary Novack · Saurabh Garg · Julian McAuley · Zachary Lipton

Keywords: [ zero-shot image classification ] [ open vocabulary models ] [ zero-shot learning ] [ CLIP ]


Abstract:

Open vocabulary models (e.g., CLIP) have shown strong performance on zero-shot classification through their ability to generate embeddings for each class based on its (natural language) name. Prior work has focused on improving the accuracy of these models through prompt engineering or by fine-tuning with a small amount of labeled downstream data. However, there has been little focus on improving the richness of the class names themselves, which can pose issues when class labels are coarsely defined and uninformative. We propose Classification with Hierarchical Label Sets (or CHiLS), an alternative strategy for zero-shot classification specially designed for datasets with implicit semantic hierarchies. CHiLS proceeds in three steps: (i) for each class, produce a set of subclasses, using either existing hierarchies or by querying GPT-3; (ii) perform the standard zero-shot CLIP procedure as though these subclasses were the labels of interest; (iii) map the predicted subclass back to its parent to produce the final prediction. Across numerous datasets with underlying hierarchical structure, CHiLS improves accuracy in situations both with and without ground-truth hierarchical information.
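The three-step procedure maps directly onto standard zero-shot CLIP inference. Below is a minimal sketch of CHiLS at inference time, assuming OpenAI's `clip` package and a hypothetical, precomputed `hierarchy` mapping from each coarse class to its subclasses (in the paper, these come from existing label hierarchies or from GPT-3 queries); the class names and prompt template are illustrative only, not the paper's exact setup.

```python
# Sketch of CHiLS inference, assuming OpenAI's `clip` package and a
# hypothetical precomputed {superclass: [subclasses]} mapping.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Illustrative hierarchy: each coarse label maps to a set of subclasses.
hierarchy = {
    "dog": ["golden retriever", "beagle", "poodle"],
    "cat": ["siamese cat", "tabby cat", "persian cat"],
}

# Step (i): flatten the subclasses, remembering each one's parent class.
subclasses = [(sub, parent) for parent, subs in hierarchy.items() for sub in subs]

# Step (ii): standard zero-shot CLIP over the subclass names.
prompts = clip.tokenize([f"a photo of a {sub}" for sub, _ in subclasses]).to(device)
with torch.no_grad():
    text_feats = model.encode_text(prompts)
    text_feats /= text_feats.norm(dim=-1, keepdim=True)

def chils_predict(image: Image.Image) -> str:
    """Return the parent class of the best-matching subclass for one image."""
    image_input = preprocess(image).unsqueeze(0).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(image_input)
        img_feat /= img_feat.norm(dim=-1, keepdim=True)
    sims = (img_feat @ text_feats.T).squeeze(0)
    best = sims.argmax().item()
    # Step (iii): map the predicted subclass back to its parent label.
    return subclasses[best][1]
```

Note that step (iii) adds no extra model calls: the final coarse prediction is simply the parent of the highest-scoring subclass, so CHiLS costs one zero-shot pass over an enlarged label set.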
