Poster
Revisiting a Design Choice in Gradient Temporal Difference Learning
Xiaochi Qian · Shangtong Zhang
Hall 3 + Hall 2B #459
Fri 25 Apr, midnight to 2:30 a.m. PDT
Abstract:
Off-policy learning enables a reinforcement learning (RL) agent to reason counterfactually about policies that are not executed and is one of the most important ideas in RL. It can, however, lead to instability when combined with function approximation and bootstrapping, two arguably indispensable ingredients for large-scale reinforcement learning. This is the notorious deadly triad. The seminal work of Sutton et al. (2008) pioneered Gradient Temporal Difference learning (GTD) as the first solution to the deadly triad, and GTD has enjoyed massive success thereafter. During the derivation of GTD, an intermediate algorithm, called A⊤TD, was invented but soon deemed inferior. In this paper, we revisit A⊤TD and prove that a variant of it, called Aₜ⊤TD, is also an effective solution to the deadly triad. Furthermore, Aₜ⊤TD needs only one set of parameters and one learning rate. By contrast, GTD has two sets of parameters and two learning rates, making it hard to tune in practice. We provide asymptotic analysis for Aₜ⊤TD and finite sample analysis for a variant of Aₜ⊤TD that additionally involves a projection operator. The convergence rate of this variant is on par with that of canonical on-policy temporal difference learning.
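To make the parameter-count contrast concrete, the sketch below compares a GTD-style update (two weight vectors w and v, two learning rates alpha and beta) with an Aₜ⊤TD-style update that keeps a single weight vector and a single learning rate, maintaining only a running sample average of the matrix A as a transposed preconditioner. This is an illustrative reading of the abstract, not the paper's code: the linear features phi, importance ratio rho, the step sizes, and the exact placement of rho are assumptions on my part.

import numpy as np

def gtd_step(w, v, phi, phi_next, reward, rho, gamma, alpha, beta):
    # GTD keeps two parameter vectors (w, v) and two learning rates (alpha, beta).
    delta = reward + gamma * phi_next @ w - phi @ w             # TD error
    v = v + beta * (rho * delta * phi - v)                      # secondary weights track E[rho * delta * phi]
    w = w + alpha * rho * (phi - gamma * phi_next) * (phi @ v)  # corrected main update
    return w, v

def a_t_top_td_step(w, A_hat, t, phi, phi_next, reward, rho, gamma, alpha):
    # Aₜ⊤TD-style update: one weight vector, one learning rate. A_hat is a
    # running sample average of A = E[rho * phi (phi - gamma * phi')^T]; a plain
    # incremental mean, so it carries no learning rate of its own.
    delta = reward + gamma * phi_next @ w - phi @ w
    A_sample = rho * np.outer(phi, phi - gamma * phi_next)
    A_hat = A_hat + (A_sample - A_hat) / (t + 1)                # incremental mean of A samples
    w = w + alpha * A_hat.T @ (rho * delta * phi)               # preconditioned TD step
    return w, A_hat

The design point the abstract highlights shows up in the signatures: gtd_step must coordinate alpha and beta across two coupled iterates, whereas a_t_top_td_step tunes only alpha.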