In-Person Poster presentation / poster accept

Hard-Meta-Dataset++: Towards Understanding Few-Shot Performance on Difficult Tasks

Samyadeep Basu · Megan Stanley · John Bronskill · Soheil Feizi · Daniela Massiceti

MH1-2-3-4 #39

Keywords: [ evaluation ] [ Meta-Dataset ] [ benchmarks ] [ few-shot learning ] [ Deep Learning and representational learning ]


Abstract:

Few-shot classification is the ability to adapt to a new classification task from only a few training examples. The performance of current top-performing few-shot classifiers varies widely across tasks: they often fail on a subset of 'difficult' tasks. This phenomenon has real-world consequences for deployed few-shot systems where safety and reliability are paramount, yet little has been done to understand these failure cases. In this paper, we study these difficult tasks to gain a more nuanced understanding of the limitations of current methods. To this end, we develop a general and computationally efficient algorithm called FastDiffSel to extract difficult tasks from any large-scale vision dataset. Notably, our algorithm extracts tasks at least 20x faster than existing methods, enabling its use on large-scale datasets. We use FastDiffSel to extract difficult tasks from Meta-Dataset, a widely used few-shot classification benchmark, and from other challenging large-scale vision datasets including ORBIT, CURE-OR and ObjectNet. These tasks are curated into Hard-MD++, a new few-shot testing benchmark intended to promote the development of methods that are robust to even the most difficult tasks. We use Hard-MD++ to stress-test an extensive suite of few-shot classification methods and show that state-of-the-art approaches fail catastrophically on difficult tasks. We believe that our extraction algorithm FastDiffSel and the Hard-MD++ benchmark will aid researchers in further understanding the failure modes of few-shot classification models.
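The abstract does not describe how FastDiffSel itself works, so the sketch below only illustrates the general notion of "extracting difficult tasks": sample candidate few-shot tasks, score each one with a fixed feature extractor and a simple nearest-centroid classifier, and keep the highest-error tasks. Every name, the scoring scheme, and the toy data here are assumptions for illustration, not the paper's algorithm.

    # Illustrative sketch only: FastDiffSel's mechanics are not given in this
    # abstract. This shows a generic brute-force "hard task mining" loop that
    # scores candidate few-shot tasks and keeps the hardest ones. All function
    # names and the scoring scheme are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def task_difficulty(support_x, support_y, query_x, query_y):
        """Score one few-shot task: nearest-centroid error rate on the query set.

        Features are assumed to come from a frozen, pretrained embedding
        network; random vectors stand in for them in this toy example.
        """
        classes = np.unique(support_y)
        # Class centroids from the support set (prototypical-network style).
        centroids = np.stack([support_x[support_y == c].mean(axis=0)
                              for c in classes])
        # Assign each query example to its nearest centroid.
        dists = ((query_x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        preds = classes[dists.argmin(axis=1)]
        return (preds != query_y).mean()  # higher error = harder task

    def mine_hard_tasks(sample_task, n_candidates=200, keep=10):
        """Sample candidate tasks, score them all, keep the `keep` hardest."""
        tasks = [sample_task() for _ in range(n_candidates)]
        scores = [task_difficulty(*t) for t in tasks]
        order = np.argsort(scores)[::-1]  # descending difficulty
        return [tasks[i] for i in order[:keep]]

    def sample_task(n_way=5, k_shot=5, n_query=15, dim=64):
        """Toy sampler: 5-way 5-shot tasks over random 'features'."""
        xs, ys, qx, qy = [], [], [], []
        for c in range(n_way):
            mean = rng.normal(size=dim)
            xs.append(mean + rng.normal(scale=2.0, size=(k_shot, dim)))
            ys.append(np.full(k_shot, c))
            qx.append(mean + rng.normal(scale=2.0, size=(n_query, dim)))
            qy.append(np.full(n_query, c))
        return (np.concatenate(xs), np.concatenate(ys),
                np.concatenate(qx), np.concatenate(qy))

    hard = mine_hard_tasks(sample_task)
    print(f"kept {len(hard)} hardest tasks")

Note that this exhaustive score-and-rank loop is exactly the kind of expensive extraction the abstract contrasts against; FastDiffSel is claimed to be at least 20x faster, though the speedup mechanism is not described here.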
