Recent work has demonstrated that current reinforcement learning methods can master complex tasks given sufficient resources. However, these successes have largely been confined to single, unchanging environments. By contrast, the real world is both complex and dynamic, making it impossible to anticipate every new scenario, and many standard learning approaches demand tremendous amounts of data and compute to re-train. Yet learning also offers the potential to develop versatile agents that adapt and continue to learn across changing environments, shifting goals, and evolving feedback. To achieve this, agents must be able to apply knowledge gained from past experience to the situation at hand. We aim to bring together areas of research that provide different perspectives on how to extract and apply this knowledge.
The BeTR-RL workshop aims to bring together researchers from different backgrounds with a common interest in extending current reinforcement learning algorithms to operate across changing environments and tasks. Specifically, we are interested in the following lines of work: leveraging previous experience to learn representations or learning algorithms that transfer to new tasks (transfer and meta-learning); generalizing to new scenarios without any explicit adaptation (multi-task and goal-conditioned RL); and acquiring new capabilities while retaining previously learned skills (continual learning). The workshop aims to further develop these research directions while identifying their similarities and trade-offs.