Poster

Unraveling Model-Agnostic Meta-Learning via The Adaptation Learning Rate

Yingtian Zou · Fusheng Liu · Qianxiao Li

Keywords: [ learning rate ] [ optimization ] [ meta-learning ]


Abstract:

Model-Agnostic Meta-Learning (MAML) aims to find initial weights that allow fast adaptation to new tasks. The adaptation (inner-loop) learning rate in MAML plays a central role in enabling such fast adaptation, yet how to choose this value in practice, and how the choice affects the adaptation error, remain underexplored. In this paper, we study the effect of the adaptation learning rate in meta-learning with mixed linear regression. First, we present a principled way to estimate optimal adaptation learning rates that minimize the population risk of MAML. Second, we characterize how the optimal adaptation learning rate depends on the input data. Finally, we prove that, compared with empirical risk minimization (ERM), MAML produces an initialization with a smaller average distance to the task optima, consistent with previous empirical findings. These results are corroborated by numerical experiments.
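To make the role of the adaptation learning rate concrete, below is a minimal sketch of one-step MAML-style inner-loop adaptation on a toy mixed linear regression problem. The task distribution, sample sizes, fixed initialization, and candidate learning rates are illustrative assumptions, not the exact setup analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tasks, n_support, n_query = 5, 200, 20, 100

# Each task t is a linear regression with its own optimum w_t (mixed linear regression).
task_optima = rng.normal(size=(n_tasks, d))

def adapt(w0, X, y, alpha):
    """MAML's inner loop: one gradient step on the squared loss from the shared init w0."""
    grad = 2.0 * X.T @ (X @ w0 - y) / len(y)
    return w0 - alpha * grad

def adaptation_risk(w0, alpha):
    """Average post-adaptation query loss over tasks, as a function of alpha."""
    total = 0.0
    for w_t in task_optima:
        X_s = rng.normal(size=(n_support, d))  # support set, used for adaptation
        y_s = X_s @ w_t
        X_q = rng.normal(size=(n_query, d))    # query set, used for evaluation
        y_q = X_q @ w_t
        w = adapt(w0, X_s, y_s, alpha)
        total += np.mean((X_q @ w - y_q) ** 2)
    return total / n_tasks

w0 = np.zeros(d)  # a fixed initialization; MAML would meta-learn this in the outer loop
for alpha in (0.01, 0.1, 0.3, 0.5):
    print(f"alpha={alpha:>5}: post-adaptation risk = {adaptation_risk(w0, alpha):.3f}")
```

Sweeping alpha this way mimics, at toy scale, the question the paper studies analytically: which inner-loop step size minimizes the population risk after adaptation.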
