Virtual presentation / top 5% paper

The Role of Coverage in Online Reinforcement Learning

Tengyang Xie · Dylan Foster · Yu Bai · Nan Jiang · Sham Kakade

Keywords: [ Reinforcement learning theory ] [ offline RL ] [ learnability ] [ online RL ] [ general function approximation ] [ Theory ]


Abstract:

Coverage conditions, which assert that the data logging distribution adequately covers the state space, play a fundamental role in determining the sample complexity of offline reinforcement learning. While such conditions might seem irrelevant to online reinforcement learning at first glance, we establish a new connection by showing, somewhat surprisingly, that the mere existence of a data distribution with good coverage can enable sample-efficient online RL. Concretely, we show that coverability (the existence of a data distribution that satisfies a ubiquitous coverage condition called concentrability) can be viewed as a structural property of the underlying MDP, and can be exploited by standard algorithms for sample-efficient exploration, even when the agent does not know this distribution. We complement this result by proving that several weaker notions of coverage, despite being sufficient for offline RL, are insufficient for online RL. We also show that existing complexity measures for online RL, including Bellman rank and Bellman-Eluder dimension, fail to optimally capture coverability, and we propose a new complexity measure, the self-normalized coefficient, to provide a unification.
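For orientation, the abstract's central objects admit a standard formalization. The display below is an illustrative gloss rather than the paper's exact definitions: it assumes a finite state-action space $\mathcal{S}\times\mathcal{A}$, writes $d^{\pi}$ for the state-action occupancy measure of a policy $\pi$, and writes $\mu$ for a candidate data distribution.

\[
C_{\mathrm{conc}}(\mu) \;=\; \sup_{\pi}\, \max_{s,a}\, \frac{d^{\pi}(s,a)}{\mu(s,a)},
\qquad
C_{\mathrm{cov}} \;=\; \inf_{\mu \in \Delta(\mathcal{S}\times\mathcal{A})} C_{\mathrm{conc}}(\mu).
\]

Under this reading, concentrability with coefficient $C_{\mathrm{conc}}(\mu)$ is a property of a particular logging distribution $\mu$, whereas coverability $C_{\mathrm{cov}}$ asks only that some well-covering $\mu$ exists. This is why it can serve as a structural property of the MDP itself, even though an online agent never observes such a distribution.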
