

Poster

Temporal Difference Variational Auto-Encoder

Karol Gregor · George Papamakarios · Frederic Besse · Lars Buesing · Theophane Weber

Great Hall BC #31

Keywords: [ generative models ] [ temporal difference learning ] [ variational auto-encoders ] [ state space models ]


Abstract:

To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics: (a) it should build an abstract state representing the condition of the world; (b) it should form a belief that represents uncertainty about the world; (c) it should go beyond simple step-by-step simulation and exhibit temporal abstraction. Motivated by the absence of a model satisfying all these requirements, we propose TD-VAE, a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an analogue of the temporal difference learning employed in reinforcement learning.
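As a rough illustration of the pairwise training idea described in the abstract, the sketch below (in PyTorch) wires together a belief RNN over observations, a smoothing posterior over the earlier latent, a "jumpy" latent transition from t1 directly to t2, and a reconstruction of the later observation. All module names, the diagonal-Gaussian parameterisation, and the exact loss decomposition are illustrative assumptions rather than the paper's objective; consult the TD-VAE paper for the precise formulation.

```python
# Hypothetical sketch of training on temporally separated pairs (t1 < t2).
# The architecture and loss terms below are assumptions for illustration,
# not a faithful reproduction of the TD-VAE objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaussian_params(h, head):
    """Map a hidden vector to the mean and log-variance of a diagonal Gaussian."""
    mu, logvar = head(h).chunk(2, dim=-1)
    return mu, logvar


def sample(mu, logvar):
    """Reparameterised sample from N(mu, exp(logvar))."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)


def kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL divergence between two diagonal Gaussians, summed over the latent dimension."""
    return 0.5 * (
        logvar_p - logvar_q
        + (torch.exp(logvar_q) + (mu_q - mu_p) ** 2) / torch.exp(logvar_p)
        - 1.0
    ).sum(-1)


class PairwiseLatentModel(nn.Module):
    """Belief network plus a direct t1 -> t2 latent transition, trained on pairs."""

    def __init__(self, x_dim=32, b_dim=64, z_dim=16):
        super().__init__()
        self.belief_rnn = nn.LSTM(x_dim, b_dim, batch_first=True)     # belief state b_t
        self.belief_to_z = nn.Linear(b_dim, 2 * z_dim)                # p(z_t | b_t)
        self.smoothing = nn.Linear(2 * b_dim + z_dim, 2 * z_dim)      # q(z_t1 | b_t1, b_t2, z_t2)
        self.transition = nn.Linear(z_dim, 2 * z_dim)                 # p(z_t2 | z_t1), jumpy step
        self.decoder = nn.Linear(z_dim, x_dim)                        # p(x_t2 | z_t2)

    def loss(self, x, t1, t2):
        # Beliefs at every step from the observation sequence x: (batch, time, x_dim).
        b, _ = self.belief_rnn(x)
        b1, b2 = b[:, t1], b[:, t2]

        # Sample the future latent z_t2 from the belief at t2.
        mu2, lv2 = gaussian_params(b2, self.belief_to_z)
        z2 = sample(mu2, lv2)

        # Infer the past latent z_t1 given both beliefs and the sampled future.
        mu1_q, lv1_q = gaussian_params(torch.cat([b1, b2, z2], -1), self.smoothing)
        z1 = sample(mu1_q, lv1_q)

        # Prior over z_t1 from the belief at t1, and a direct t1 -> t2 transition.
        mu1_p, lv1_p = gaussian_params(b1, self.belief_to_z)
        mu2_p, lv2_p = gaussian_params(z1, self.transition)

        recon = F.mse_loss(self.decoder(z2), x[:, t2], reduction="none").sum(-1)
        kl_z1 = kl(mu1_q, lv1_q, mu1_p, lv1_p)
        kl_z2 = kl(mu2, lv2, mu2_p, lv2_p)
        return (recon + kl_z1 + kl_z2).mean()


# Minimal usage on random data (shapes and hyperparameters are arbitrary):
model = PairwiseLatentModel()
x = torch.randn(8, 20, 32)            # 8 sequences, 20 steps, 32-dim observations
print(model.loss(x, t1=5, t2=12))     # train on a temporally separated pair (t1, t2)
```

The key design point the sketch tries to convey is that no step-by-step rollout between t1 and t2 is ever computed: the transition network maps z_t1 to z_t2 in a single jump, and the belief states summarise everything the model needs at each time point.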
