
Workshop: Generalizable Policy Learning in the Physical World

Prompts and Pre-Trained Language Models for Offline Reinforcement Learning

Denis Tarasov · Vladislav Kurenkov · Sergey Kolesnikov


In this preliminary study, we introduce a simple way to leverage pre-trained language models in deep offline RL settings that are not naturally suited to textual representation. We propose transforming the state into human-readable text and minimally fine-tuning the pre-trained language model while training with deep offline RL algorithms. This approach yields consistent performance gains on the NeoRL MuJoCo datasets. Our experiments suggest that fine-tuning the LM is crucial for good performance on robotics tasks, but it is not necessary in finance environments, where significant improvement in final performance is retained without it.
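The state-to-text transformation described above might be sketched as follows. This is an illustrative assumption only: the feature names, formatting, and separator are hypothetical choices, not the authors' exact scheme.

```python
# Hypothetical sketch: rendering a numeric RL state vector as
# human-readable text that a pre-trained language model can consume.
# Feature names and the "name: value" format are illustrative
# assumptions, not the method described in the paper.

def state_to_text(state, feature_names):
    """Render each state dimension as a 'name: value' phrase."""
    parts = [f"{name}: {value:.2f}" for name, value in zip(feature_names, state)]
    return "; ".join(parts)

# Example with made-up MuJoCo-like feature names.
state = [0.12, -1.5, 3.0]
names = ["position", "velocity", "torque"]
prompt = state_to_text(state, names)
print(prompt)  # position: 0.12; velocity: -1.50; torque: 3.00
```

The resulting string can then be tokenized and fed to the language model, whose representation serves as the state encoding for the offline RL algorithm.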
