Poster

Computationally Efficient RL under Linear Bellman Completeness for Deterministic Dynamics

Runzhe Wu · Ayush Sekhari · Akshay Krishnamurthy · Wen Sun

Hall 3 + Hall 2B #393
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT
 
Oral presentation: Oral Session 6D
Sat 26 Apr 12:30 a.m. PDT — 2 a.m. PDT

Abstract:

We study computationally and statistically efficient reinforcement learning algorithms for the linear Bellman complete setting. This setting uses linear function approximation to capture value functions and unifies existing models such as linear Markov Decision Processes (MDPs) and Linear Quadratic Regulators (LQRs). While prior work has shown that this setting is statistically tractable, it remained open whether a computationally efficient algorithm exists. Our work provides a computationally efficient algorithm for the linear Bellman complete setting that handles MDPs with large action spaces, random initial states, and random rewards, but relies on the underlying dynamics being deterministic. Our approach is based on randomization: we inject random noise into least-squares regression problems to perform optimistic value iteration. Our key technical contribution is to carefully design the noise so that it acts only in the null space of the training data, ensuring optimism while circumventing a subtle error-amplification issue.
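The null-space noise idea can be illustrated with a toy sketch. This is not the paper's algorithm, only a minimal demonstration of the key mechanism: noise sampled from the null space of the feature matrix perturbs the regression coefficients without changing any prediction on the training data. All variable names and the rank threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rank-deficient training features: 20 samples in R^5
# that span only a 3-dimensional subspace.
X = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 5))
y = rng.normal(size=20)

# Ordinary least-squares fit.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Orthonormal basis of the null space of X via SVD:
# keep right singular vectors with (numerically) zero singular values.
_, s, Vt = np.linalg.svd(X)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]            # shape (d - rank, d)

# Sample Gaussian noise restricted to the null space.
noise = null_basis.T @ rng.normal(size=null_basis.shape[0])
theta_noisy = theta + noise

# Training predictions are unchanged, since X @ noise == 0.
print(np.allclose(X @ theta, X @ theta_noisy))  # True
```

Because the perturbation is invisible on observed data but nonzero off it, it can inflate value estimates in unexplored directions, which is what drives optimism in the paper's value-iteration scheme.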
