Provable Rich Observation Reinforcement Learning with Combinatorial Latent States

Dipendra Kumar Misra · Qinghua Liu · Chi Jin · John Langford

Keywords: [ Factored MDP ] [ State abstraction ] [ Noise-contrastive learning ] [ Rich observation ] [ Reinforcement learning theory ]

Tue 4 May 9 a.m. PDT — 11 a.m. PDT


We propose a novel setting for reinforcement learning that combines two common real-world difficulties: the presence of rich observations (such as camera images) and factored states (such as the locations of objects). In our setting, the agent receives observations generated stochastically from a "latent" factored state. These observations are "rich enough" to enable decoding of the latent state, which removes partial observability concerns. Since the latent state is combinatorial, the size of the state space is exponential in the number of latent factors. We create a learning algorithm FactoRL (Fact-o-Rel) for this setting, which uses noise-contrastive learning to identify latent structures in the emission process and discover a factorized state space. We derive sample complexity guarantees for FactoRL that depend polynomially on the number of factors and only very weakly on the size of the observation space. We also provide a guarantee of polynomial time complexity when given access to an efficient planning algorithm.
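The core noise-contrastive step can be illustrated with a toy sketch. This is not the paper's FactoRL algorithm; it is a minimal, assumption-laden illustration of the underlying idea: a classifier trained to distinguish real observation pairs (emitted from the same latent factor) from mismatched "noise" pairs learns to attend to the informative coordinate of the observation, which can then be thresholded to decode the latent factor. All names (`emit`, the two-coordinate observation, the binary factor) are hypothetical choices for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
noise = 0.5

# Latent binary factor in {-1, +1}; for simplicity it persists for one step.
s = rng.choice([-1.0, 1.0], size=n)

def emit(s_vec):
    # Toy emission process: one observation coordinate carries the factor
    # plus Gaussian noise, the other is a pure distractor.
    informative = s_vec + noise * rng.standard_normal(s_vec.shape)
    distractor = rng.standard_normal(s_vec.shape)
    return np.stack([informative, distractor], axis=1)

x = emit(s)            # observation at time t
x_next = emit(s)       # observation at t+1, same latent factor: "real" pair
s_fake = rng.choice([-1.0, 1.0], size=n)
x_fake = emit(s_fake)  # observation from an independent factor: "noise" pair

# Contrastive features: coordinate-wise products of each pair. A real pair
# has correlated informative coordinates, so its product is biased positive.
feats = np.concatenate([x * x_next, x * x_fake])
labels = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression trained by gradient descent to tell real from noise pairs.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= lr * feats.T @ (p - labels) / len(labels)
    b -= lr * np.mean(p - labels)

acc = np.mean((feats @ w + b > 0) == labels)
# The classifier's weight concentrates on the informative product coordinate,
# and thresholding that coordinate decodes the latent factor.
decode_acc = np.mean(np.sign(x[:, 0]) == s)
```

Note that the contrastive classifier's accuracy is capped below 1 here, since a noise pair whose independently drawn factor happens to match is statistically identical to a real pair; what matters is that its learned weights isolate the factor-bearing coordinate, yielding an accurate decoder.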
