

Poster

Meta-Learning with Latent Embedding Optimization

Andrei A. Rusu · Dushyant Rao · Jakub Sygnowski · Oriol Vinyals · Razvan Pascanu · Simon Osindero · Raia Hadsell

Great Hall BC #28

Keywords: [ optimization ] [ meta-learning ] [ few-shot ] [ miniImageNet ] [ tieredImageNet ] [ hypernetworks ] [ generative ] [ latent embedding ]


Abstract:

Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.
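To make the core mechanism concrete, below is a minimal JAX sketch of latent-space adaptation. Everything here is an illustrative assumption rather than the paper's actual model: the encoder and decoder are plain linear maps, the latent code is deterministic, and all dimensions, names, and hyperparameters are hypothetical (LEO itself uses a relation network in the encoder and samples latent codes and parameters from learned distributions). The point the sketch shows is the decoupling described above: classifier weights are generated from a low-dimensional code z, and inner-loop gradient steps are taken on z instead of on the high-dimensional weights.

```python
import jax
import jax.numpy as jnp

# Illustrative dimensions for a 5-way, 1-shot task (hypothetical values).
N_WAY, FEAT_DIM, LATENT_DIM = 5, 64, 16
INNER_STEPS, INNER_LR = 5, 1.0

key = jax.random.PRNGKey(0)
k_enc, k_dec, k_data = jax.random.split(key, 3)

# Meta-learned parameters; random placeholders stand in for trained values.
enc_W = jax.random.normal(k_enc, (FEAT_DIM, LATENT_DIM)) * 0.1  # encoder
dec_W = jax.random.normal(k_dec, (LATENT_DIM, FEAT_DIM)) * 0.1  # decoder

def encode(x_support):
    # Map per-class mean features to one latent code per class.
    class_means = x_support.mean(axis=1)   # (N_WAY, FEAT_DIM)
    return class_means @ enc_W             # (N_WAY, LATENT_DIM)

def decode(z):
    # Generate classifier weights from the latent code.
    return z @ dec_W                        # (N_WAY, FEAT_DIM)

def inner_loss(z, x, y):
    # Cross-entropy of the decoded classifier on the support set.
    logits = x @ decode(z).T
    logp = jax.nn.log_softmax(logits)
    return -jnp.mean(logp[jnp.arange(y.shape[0]), y])

# Toy support set: N_WAY classes, 1 example each.
x_support = jax.random.normal(k_data, (N_WAY, 1, FEAT_DIM))
x_flat = x_support.reshape(-1, FEAT_DIM)
y_flat = jnp.arange(N_WAY)

# LEO-style adaptation: gradient steps in latent space, not parameter space.
z = encode(x_support)
grad_fn = jax.grad(inner_loss)
for _ in range(INNER_STEPS):
    z = z - INNER_LR * grad_fn(z, x_flat, y_flat)

adapted_weights = decode(z)  # classifier weights for the query set
```

In a full meta-learning loop, the query-set loss computed with `adapted_weights` would be backpropagated through these inner steps to update the encoder and decoder; optimizing only the 16-dimensional codes per class, rather than all classifier weights, is what keeps adaptation tractable in the extreme low-data regime.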
