

Poster

Revisiting a Design Choice in Gradient Temporal Difference Learning

Xiaochi Qian · Shangtong Zhang

Hall 3 + Hall 2B #459
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract: Off-policy learning enables a reinforcement learning (RL) agent to reason counterfactually about policies that are not executed and is one of the most important ideas in RL. However, it can lead to instability when combined with function approximation and bootstrapping, two arguably indispensable ingredients for large-scale reinforcement learning. This is the notorious deadly triad. The seminal work of Sutton et al. (2008) pioneered Gradient Temporal Difference learning (GTD) as the first solution to the deadly triad, and GTD has enjoyed massive success thereafter. During the derivation of GTD, an intermediate algorithm, called $A^\top$TD, was invented but soon deemed inferior. In this paper, we revisit $A^\top$TD and prove that a variant of it, called $A_t^\top$TD, is also an effective solution to the deadly triad. Furthermore, $A_t^\top$TD needs only one set of parameters and one learning rate. By contrast, GTD has two sets of parameters and two learning rates, making it hard to tune in practice. We provide asymptotic analysis for $A_t^\top$TD and finite sample analysis for a variant of $A_t^\top$TD that additionally involves a projection operator. The convergence rate of this variant is on par with that of canonical on-policy temporal difference learning.
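To make the contrast concrete, the sketch below shows the classical GTD2 update of Sutton et al. (2008), which maintains two weight vectors (theta, w) with two step sizes (alpha, beta), next to an illustrative single-parameter update built from a sampled $A_t$ and $b_t$ in the spirit of $A^\top$TD. The function names, the feature/importance-sampling variables (phi, rho), and the single-parameter update are assumptions for illustration only; the exact $A_t^\top$TD variant analyzed in the paper may differ.

```python
import numpy as np

def gtd2_step(theta, w, phi, phi_next, r, rho, gamma, alpha, beta):
    """One GTD2 update (two weight vectors theta, w; two step sizes alpha, beta).

    phi, phi_next: feature vectors of the current and next state
    rho: importance sampling ratio, r: reward, gamma: discount factor
    """
    delta = r + gamma * phi_next @ theta - phi @ theta          # TD error
    theta = theta + alpha * rho * (phi - gamma * phi_next) * (phi @ w)
    w = w + beta * rho * (delta - phi @ w) * phi                # secondary weights track E[delta * phi]
    return theta, w

def a_top_td_step(theta, phi, phi_next, r, rho, gamma, alpha):
    """Illustrative single-parameter update: theta += alpha * A_t^T (b_t - A_t theta).

    Only a sketch of the general 'sample A transposed' idea; not necessarily
    the exact update studied in the paper.
    """
    A_t = rho * np.outer(phi, phi - gamma * phi_next)           # sampled A matrix
    b_t = rho * r * phi                                         # sampled b vector
    theta = theta + alpha * A_t.T @ (b_t - A_t @ theta)
    return theta
```

The practical point highlighted by the abstract is visible in the signatures: the GTD2 step requires tuning two step sizes and carrying a second weight vector, whereas the single-parameter step has one weight vector and one step size.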
