
Poster

Variance Reduction for Reinforcement Learning in Input-Driven Environments

Hongzi Mao · Shaileshh Bojja Venkatakrishnan · Malte Schwarzkopf · Mohammad Alizadeh

Great Hall BC #34

Keywords: [ reinforcement learning ] [ policy gradient ] [ variance reduction ] [ baseline ] [ input-driven environments ]


Abstract:

We consider reinforcement learning in input-driven environments, where an exogenous, stochastic input process affects the dynamics of the system. Input processes arise in many applications, including queuing systems, robotics control with disturbances, and object tracking. Since the state dynamics and rewards depend on the input process, the state alone provides limited information for the expected future returns. Therefore, policy gradient methods with standard state-dependent baselines suffer high variance during training. We derive a bias-free, input-dependent baseline to reduce this variance, and analytically show its benefits over state-dependent baselines. We then propose a meta-learning approach to overcome the complexity of learning a baseline that depends on a long sequence of inputs. Our experimental results show that across environments from queuing systems, computer networks, and MuJoCo robotic locomotion, input-dependent baselines consistently improve training stability and result in better eventual policies.
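As a rough illustration of the core idea (not the authors' implementation), the sketch below contrasts a standard state-dependent baseline with a baseline that also conditions on the realization of the exogenous input process. All names (`returns`, `state_baseline`, `input_baseline`, the toy return model) are hypothetical placeholders chosen for the example; the point is only that conditioning the baseline on the input removes the return variance the input causes, while the gradient estimate remains unbiased.

```python
# Illustrative sketch only: how an input-dependent baseline can shrink
# the variance of policy-gradient advantages compared with a baseline
# that depends on the state alone. Toy quantities, not the paper's code.
import numpy as np

def advantages(returns, baseline):
    """Advantage estimate A_t = G_t - b; subtracting any action-independent
    baseline leaves the policy gradient unbiased."""
    return returns - baseline

rng = np.random.default_rng(0)

# Exogenous input realizations (e.g. job arrivals, disturbances).
inputs = rng.normal(size=10_000)

# Toy returns: most of the fluctuation comes from the input process,
# only a small residual depends on anything else.
returns = 1.0 + 5.0 * inputs + 0.1 * rng.normal(size=10_000)

# State-dependent baseline: cannot see the input, so at best it predicts
# the mean return over input realizations.
state_baseline = np.full_like(returns, returns.mean())

# Input-dependent baseline: conditions on the input realization as well,
# so it tracks the input-driven component of the return.
input_baseline = 1.0 + 5.0 * inputs

print("state-dependent baseline, advantage variance:",
      np.var(advantages(returns, state_baseline)))   # large
print("input-dependent baseline, advantage variance:",
      np.var(advantages(returns, input_baseline)))   # small
```

In this toy setting the input-dependent baseline leaves only the residual noise in the advantages, which is the variance-reduction effect the abstract describes; the paper's meta-learning approach addresses the harder problem of learning such a baseline when it must depend on a long input sequence.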
