

Poster

Observational Overfitting in Reinforcement Learning

Yilun Du · Behnam Neyshabur · Stephen Tu · YiDing Jiang · Xingyou Song


Abstract:

A major component of overfitting in model-free reinforcement learning (RL) involves the case where the agent mistakenly correlates reward with spurious features of the observations generated by the Markov Decision Process (MDP). We provide a general framework for analyzing this scenario, which we use to design multiple synthetic benchmarks by modifying only the observation space of an MDP. When an agent overfits to different observation spaces even though the underlying MDP dynamics are fixed, we term this observational overfitting. Our experiments expose intriguing properties, especially with regard to implicit regularization, and also corroborate results from previous works on RL generalization and supervised learning (SL).
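To make the setting concrete, the following is a minimal illustrative sketch (not the paper's actual benchmark suite) of how spurious features can be injected while leaving the MDP dynamics untouched: an observation wrapper appends level-specific features that correlate with the true state during training on one level but change across levels. The names used here (e.g. SpuriousFeatureWrapper, num_spurious, level_seed) are hypothetical and assume the classic gym API.

```python
# Illustrative sketch only: an observation wrapper that keeps the underlying
# MDP fixed but appends spurious, level-dependent features. An agent whose
# policy relies on these features will transfer poorly to levels generated
# with a different seed, even though the task itself is unchanged.
import gym
import numpy as np


class SpuriousFeatureWrapper(gym.ObservationWrapper):
    def __init__(self, env, num_spurious=8, level_seed=0):
        super().__init__(env)
        rng = np.random.RandomState(level_seed)
        # Fixed random projection of the true state: constant within a
        # "level", different across levels, hence spuriously predictive.
        self.proj = rng.randn(num_spurious, env.observation_space.shape[0])
        low = np.concatenate(
            [env.observation_space.low, -np.inf * np.ones(num_spurious)]
        ).astype(np.float32)
        high = np.concatenate(
            [env.observation_space.high, np.inf * np.ones(num_spurious)]
        ).astype(np.float32)
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)

    def observation(self, obs):
        spurious = self.proj @ obs
        return np.concatenate([obs, spurious]).astype(np.float32)


# Example: two "levels" that share dynamics and reward but differ only in
# the spurious part of the observation.
train_env = SpuriousFeatureWrapper(gym.make("CartPole-v1"), level_seed=0)
test_env = SpuriousFeatureWrapper(gym.make("CartPole-v1"), level_seed=1)
```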
