Oral

Self-training For Few-shot Transfer Across Extreme Task Differences

Cheng Perng Phoo · Bharath Hariharan

Thu 6 May 1:15 p.m. — 1:30 p.m. PDT

Most few-shot learning techniques are pre-trained on a large, labeled “base dataset”. In problem domains where such large labeled datasets are not available for pre-training (e.g., X-ray, satellite images), one must resort to pre-training in a different “source” problem domain (e.g., ImageNet), which can be very different from the desired target task. Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks. In this paper, we present a simple and effective solution to tackle this extreme domain gap: self-training a source domain representation on unlabeled data from the target domain. We show that this improves one-shot performance on the target domain by 2.9 points on average on the challenging BSCD-FSL benchmark consisting of datasets from multiple domains.
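The core idea described in the abstract, self-training a source-domain representation with pseudo-labels on unlabeled target data, can be illustrated with a short sketch. The code below is a minimal, hedged PyTorch illustration of generic pseudo-label self-training, not the paper's exact training objective; the names `teacher`, `student`, `source_batch`, and `target_batch` are hypothetical placeholders for a source-pretrained model, the model being trained, a labeled source-domain batch, and an unlabeled target-domain batch.

```python
# Minimal sketch of self-training across a domain gap (illustrative only).
# A frozen teacher, pre-trained on the labeled source domain, produces soft
# pseudo-labels for unlabeled target-domain images; the student fits both
# the source labels and the teacher's predictions on the target data.
import torch
import torch.nn.functional as F

def self_train_step(teacher, student, source_batch, target_batch, optimizer):
    """One update combining supervised source loss with a
    pseudo-label distillation loss on unlabeled target images."""
    src_x, src_y = source_batch   # labeled source-domain images and labels
    tgt_x = target_batch          # unlabeled target-domain images

    # Teacher is frozen: its soft predictions serve as pseudo-labels.
    with torch.no_grad():
        pseudo = F.softmax(teacher(tgt_x), dim=1)

    # Supervised cross-entropy on the source domain.
    loss_src = F.cross_entropy(student(src_x), src_y)
    # Distillation: student matches the teacher's soft pseudo-labels
    # on the target domain, adapting the representation to it.
    loss_tgt = F.kl_div(F.log_softmax(student(tgt_x), dim=1),
                        pseudo, reduction="batchmean")

    loss = loss_src + loss_tgt
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the unlabeled target data never needs ground-truth annotations; the representation is adapted purely through the teacher's predictions, which is what allows the approach to work in label-scarce domains such as X-ray or satellite imagery.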
