Poster

CoBERL: Contrastive BERT for Reinforcement Learning

Andrea Banino · Adria Puigdomenech Badia · Jacob C Walker · Tim Scholtes · Jovana Mitrovic · Charles Blundell

Keywords: [ contrastive learning ] [ deep reinforcement learning ] [ transformer ] [ representation learning ] [ reinforcement learning ]


Abstract:

Many reinforcement learning (RL) agents require a large amount of experience to solve tasks. We propose Contrastive BERT for RL (CoBERL), an agent that combines a new contrastive loss with a hybrid LSTM-transformer architecture to tackle the challenge of improving data efficiency. CoBERL enables efficient and robust learning from pixels across a wide variety of domains. We use bidirectional masked prediction in combination with a generalization of a recent contrastive method to learn better representations for RL, without the need for hand-engineered data augmentations. We find that CoBERL consistently improves data efficiency across the full Atari suite, a set of control tasks, and a challenging 3D environment, and often also increases final performance.
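To make the ideas in the abstract concrete, below is a minimal PyTorch sketch of (a) a hybrid transformer-LSTM encoder and (b) a simplified masked contrastive loss that aligns predictions at masked timesteps with the original embeddings at those timesteps. Every name, layer size, the gating scheme, and the loss formulation here are illustrative assumptions, not the authors' implementation; the paper describes CoBERL's actual architecture and its RELIC-based contrastive objective in full.

# Minimal sketch (assumptions throughout); not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridTransformerLSTM(nn.Module):
    """Encode per-timestep embeddings with a transformer, then summarise
    the result recurrently with an LSTM (a simplified stand-in for the
    hybrid LSTM-transformer architecture named in the abstract)."""

    def __init__(self, dim: int = 256, n_heads: int = 8, n_layers: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        # Learned gate mixing transformer output with the raw input
        # (the gating choice here is an assumption for illustration).
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, x, state=None):
        # x: (batch, time, dim) per-timestep observation embeddings.
        z = self.transformer(x)
        g = torch.sigmoid(self.gate(torch.cat([x, z], dim=-1)))
        mixed = g * z + (1.0 - g) * x
        out, state = self.lstm(mixed, state)
        return out, state


def masked_contrastive_loss(inputs, predictions, mask, temperature=0.1):
    """InfoNCE-style loss: predictions at masked timesteps should match the
    original embedding at the same timestep, with all other timesteps in the
    batch acting as negatives. A simplified illustration of combining
    bidirectional masked prediction with a contrastive objective."""
    b, t, d = inputs.shape
    x = F.normalize(inputs.reshape(b * t, d), dim=-1)
    p = F.normalize(predictions.reshape(b * t, d), dim=-1)
    logits = p @ x.t() / temperature                # (b*t, b*t) similarities
    targets = torch.arange(b * t, device=inputs.device)
    per_step = F.cross_entropy(logits, targets, reduction="none")
    m = mask.reshape(b * t).float()                 # only masked steps count
    return (per_step * m).sum() / m.sum().clamp(min=1.0)


if __name__ == "__main__":
    enc = HybridTransformerLSTM()
    obs = torch.randn(2, 16, 256)                   # batch of 2, 16 timesteps
    mask = torch.rand(2, 16) < 0.15                 # mask ~15% of timesteps
    masked_obs = obs.masked_fill(mask.unsqueeze(-1), 0.0)
    features, _ = enc(masked_obs)
    loss = masked_contrastive_loss(obs, features, mask)
    print(features.shape, loss.item())

In an agent, features from the encoder would feed the RL losses while the contrastive term is added as an auxiliary objective; note this sketch needs no hand-engineered data augmentations, which is the property the abstract emphasises.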
