
Workshop: Generalizable Policy Learning in the Physical World

ShiftNorm: On Data Efficiency in Reinforcement Learning with Shift Normalization

Sicong Liu · Xi Zhang · Yushuo Li · Yifan Zhang · Jian Cheng


We propose ShiftNorm, a simple yet promising data augmentation that can be applied to standard model-free algorithms to improve sample efficiency in high-dimensional, image-based reinforcement learning (RL). Concretely, the differentiable ShiftNorm combines original samples with reparameterized virtual samples, encouraging the image encoder to produce invariant representations. Our approach demonstrates substantial gains, outperforming the prior state-of-the-art on 8 of 9 tasks in the DeepMind Control Suite at 500k steps.
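The abstract does not spell out the augmentation itself, but shift-style augmentations in image-based RL commonly work by padding an observation and cropping a randomly offset window of the original size. The sketch below illustrates that generic random-shift idea only; the function name, padding scheme, and parameters are assumptions, and ShiftNorm's reparameterized virtual samples and normalization are not reproduced here.

```python
import numpy as np

def random_shift(imgs, pad=4, rng=None):
    """Randomly shift a batch of images by up to `pad` pixels per axis.

    imgs: array of shape (N, C, H, W).
    Each image is padded by replicating its edge pixels, then an
    H x W window is cropped at a random offset -- a common shift
    augmentation in image-based RL (illustrative only, not the
    paper's exact method).
    """
    rng = np.random.default_rng() if rng is None else rng
    n, c, h, w = imgs.shape
    padded = np.pad(imgs, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.empty_like(imgs)
    for i in range(n):
        top = rng.integers(0, 2 * pad + 1)
        left = rng.integers(0, 2 * pad + 1)
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

# Example: augment a batch of two 3-channel 8x8 observations.
batch = np.arange(2 * 3 * 8 * 8, dtype=np.float32).reshape(2, 3, 8, 8)
shifted = random_shift(batch, pad=2)
```

In practice such an augmentation is applied to both the anchor and target observations in the RL update, so the encoder is trained to map shifted views of the same frame to similar representations.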
