Poster
in
Workshop: Workshop on Agent Learning in Open-Endedness

Dojo: A Large Scale Benchmark for Multi-Task Reinforcement Learning

Dominik Schmidt


Abstract:

We introduce Dojo, a reinforcement learning environment intended as a benchmark for evaluating RL agents' capabilities in multi-task learning, generalization, transfer learning, and curriculum learning. In this work, we motivate the benchmark, compare it to existing alternatives, and empirically demonstrate its suitability for studying cross-task generalization. We establish a multi-task baseline across the whole benchmark as a reference for future research and discuss the achieved results and encountered issues. Finally, we provide experimental protocols and evaluation procedures to ensure that results are comparable across experiments. We also supply tools that allow researchers to easily understand their agents' performance across a wide variety of metrics.