Infinite-horizon Off-Policy Policy Evaluation with Multiple Behavior Policies

Xinyun Chen, Lu Wang, Yizhe Hang, Heng Ge, Hongyuan Zha

Keywords: importance sampling, off-policy policy evaluation, variance reduction

Abstract: We consider off-policy policy evaluation when the trajectory data are generated by multiple behavior policies. Recent work has shown the key role played by the state or state-action stationary distribution corrections in the infinite-horizon setting for off-policy policy evaluation. We propose estimated mixture policy (EMP), a novel class of partially policy-agnostic methods to accurately estimate these quantities. Through careful analysis, we show that EMP yields reduced-variance estimates of the state stationary distribution correction and also provides a useful inductive bias for estimating the state-action stationary distribution correction. In extensive experiments with both continuous and discrete environments, we demonstrate that our algorithm offers significantly improved accuracy compared to state-of-the-art methods.
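To make the core ingredient concrete, here is a minimal sketch (not the authors' algorithm or code) of how data pooled from several behavior policies can be used to form an estimated mixture behavior policy, whose per-step ratios against the target policy are the basic input to stationary-distribution-correction estimators. The helper names `estimate_mixture_policy` and `step_ratios`, the tabular setting, and the toy data are all illustrative assumptions.

```python
import numpy as np

def estimate_mixture_policy(transitions, n_states, n_actions):
    """Empirical mixture behavior policy mu_hat(a|s) from pooled (s, a, r, s') tuples."""
    counts = np.zeros((n_states, n_actions))
    for s, a, _, _ in transitions:
        counts[s, a] += 1.0
    totals = counts.sum(axis=1, keepdims=True)
    # Fall back to a uniform policy in states never visited to avoid division by zero.
    mu_hat = np.where(totals > 0, counts / np.maximum(totals, 1.0), 1.0 / n_actions)
    return mu_hat

def step_ratios(transitions, pi, mu_hat):
    """Per-transition ratios pi(a|s) / mu_hat(a|s) consumed by correction estimators."""
    return np.array([pi[s, a] / mu_hat[s, a] for s, a, _, _ in transitions])

# Usage on toy pooled data from a 3-state, 2-action MDP (purely illustrative).
rng = np.random.default_rng(0)
data = [(rng.integers(3), rng.integers(2), rng.random(), rng.integers(3))
        for _ in range(500)]
pi = np.full((3, 2), 0.5)               # target policy, uniform here for simplicity
mu_hat = estimate_mixture_policy(data, n_states=3, n_actions=2)
ratios = step_ratios(data, pi, mu_hat)
print(mu_hat)
print(ratios.mean())
```

In this sketch the mixture policy is estimated from pooled counts rather than requiring knowledge of each individual behavior policy, which is the sense in which such an approach is partially policy-agnostic.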
