

Poster

Transformers Can Learn Temporal Difference Methods for In-Context Reinforcement Learning

Jiuqi Wang · Ethan Blaser · Hadi Daneshmand · Shangtong Zhang

Hall 3 + Hall 2B #403
Wed 23 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

Traditionally, reinforcement learning (RL) agents learn to solve new tasks by updating their neural network parameters through interactions with the task environment. However, recent works demonstrate that some RL agents, after certain pretraining procedures, can learn to solve unseen tasks without any parameter updates, a phenomenon known as in-context reinforcement learning (ICRL). The empirical success of ICRL is widely attributed to the hypothesis that the forward pass of the pretrained agent's neural network implements an RL algorithm. In this paper, we support this hypothesis by showing, both empirically and theoretically, that when a transformer is trained for policy evaluation tasks, it can discover and learn to implement temporal difference learning in its forward pass.
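For reference, temporal difference learning for policy evaluation, the algorithm the abstract says the transformer learns to implement, updates a value estimate toward a bootstrapped target after each transition. The sketch below is a minimal standalone tabular TD(0) loop for illustration only, not the paper's transformer construction; the Gym-style environment interface, the `policy` callable, and the hyperparameters (`alpha`, `gamma`, `num_episodes`) are all assumptions.

```python
import numpy as np

def td0_policy_evaluation(env, policy, num_episodes=500, alpha=0.1, gamma=0.99):
    """Tabular TD(0): estimate the value function V of a fixed policy.

    Assumes a Gym-style env with reset()/step() and a discrete state space.
    """
    V = np.zeros(env.observation_space.n)
    for _ in range(num_episodes):
        s, _ = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated
            # TD(0) update: move V(s) toward the bootstrapped
            # target r + gamma * V(s'), with V(s') = 0 at termination.
            target = r + gamma * V[s_next] * (not terminated)
            V[s] += alpha * (target - V[s])
            s = s_next
    return V
```

The paper's claim, in these terms, is that a pretrained transformer can carry out updates of this kind within its forward pass on in-context transitions, without changing its own weights.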
