Estimating Gradients for Discrete Random Variables by Sampling without Replacement

Wouter Kool, Herke van Hoof, Max Welling

Keywords: policy gradient, variational inference

Tuesday: RL and Estimation

Abstract: We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance by avoiding duplicate samples. We show that our estimator can be derived as the Rao-Blackwellization of three different estimators. Combining our estimator with REINFORCE, we obtain a policy gradient estimator and reduce its variance using a built-in control variate that requires no additional model evaluations. The resulting estimator is closely related to other gradient estimators. Experiments on a toy problem, a categorical Variational Auto-Encoder and a structured prediction problem show that our estimator is the only one that is consistently among the best in both high and low entropy settings.
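
The abstract describes the method only at a high level. As a rough illustration of the ingredients it names, the NumPy sketch below combines sampling without replacement (via the standard Gumbel-top-k trick) with a REINFORCE gradient, using a leave-one-out baseline over the drawn samples as a built-in control variate. All names here (`gumbel_top_k`, `reinforce_without_replacement`, the toy reward `f`) are hypothetical, and the sketch omits the paper's Rao-Blackwellized importance weights, so unlike the authors' estimator it is not unbiased as written.

```python
import numpy as np

def gumbel_top_k(log_probs, k, rng):
    """Draw k distinct categories without replacement via the
    Gumbel-top-k trick: perturb log-probabilities with Gumbel
    noise and keep the k largest."""
    gumbels = rng.gumbel(size=log_probs.shape)
    return np.argsort(log_probs + gumbels)[::-1][:k]

def reinforce_without_replacement(logits, reward_fn, k, rng):
    """Illustrative REINFORCE-style gradient w.r.t. `logits` using
    k samples drawn without replacement and a leave-one-out baseline
    as the control variate. Simplified sketch: the paper's estimator
    additionally applies importance weights to remain unbiased under
    sampling without replacement."""
    z = logits - logits.max()                  # for numerical stability
    log_probs = z - np.log(np.exp(z).sum())    # log softmax
    probs = np.exp(log_probs)
    samples = gumbel_top_k(log_probs, k, rng)
    rewards = np.array([reward_fn(s) for s in samples])

    grad = np.zeros_like(logits)
    for i, s in enumerate(samples):
        # Leave-one-out baseline: mean reward of the other k-1 samples,
        # obtained without any additional model evaluations.
        baseline = (rewards.sum() - rewards[i]) / (k - 1)
        # d log p(s) / d logits = one_hot(s) - softmax(logits)
        dlogp = -probs.copy()
        dlogp[s] += 1.0
        grad += (rewards[i] - baseline) * dlogp
    return grad / k

# Hypothetical usage: maximize E[f(x)] over a 10-way categorical.
rng = np.random.default_rng(0)
logits = np.zeros(10)
f = lambda x: float(x == 3)  # toy reward: 1 iff category 3 is drawn
for _ in range(200):
    logits += 0.5 * reinforce_without_replacement(logits, f, k=4, rng=rng)
```

In practice the toy reward would be replaced by model evaluations, and, following the paper, the per-sample terms would be reweighted so the estimator stays unbiased despite the samples being drawn without replacement.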

Similar Papers

SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models
Yucen Luo, Alex Beatson, Mohammad Norouzi, Jun Zhu, David Duvenaud, Ryan P. Adams, Ricky T. Q. Chen

Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation
Xinjie Fan, Yizhe Zhang, Zhendong Wang, Mingyuan Zhou