Poster in Workshop: Self-Improving Foundation Models Without Human Supervision
Yes, Q-learning Helps Offline In-Context RL
Denis Tarasov · Alexander Nikulin · Ilya Zisman · Albina Klepach · Andrei Polubarov · Lyubaykin Nikita · Alexander Derevyagin · Igor Kiselev · Vladislav Kurenkov
Keywords: [ offline reinforcement learning ] [ in-context reinforcement learning ] [ reinforcement learning ]
Existing scalable offline In-Context Reinforcement Learning (ICRL) methods have predominantly relied on supervised training objectives, which are known to have limitations in offline RL settings. In this work, we investigate the integration of reinforcement learning (RL) objectives into a scalable offline ICRL framework. Through experiments across more than 150 datasets derived from GridWorld and MuJoCo environments, we demonstrate that optimizing RL objectives improves performance by approximately 30% on average over the widely established Algorithm Distillation (AD) baseline across various dataset coverages, structures, expertise levels, and environmental complexities. Our results also reveal that offline RL-based methods outperform online approaches, which are not specifically designed for offline scenarios. These findings underscore the importance of aligning learning objectives with RL’s reward-maximization goal and demonstrate that offline RL is a promising direction for ICRL settings.
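To make the core idea concrete, the following is a minimal, hypothetical sketch of adding a Q-learning objective on top of an AD-style in-context sequence model. It is not the authors' implementation: the `ContextEncoder` architecture, the batch layout, the discount `gamma`, and the loss weights are illustrative assumptions, and termination handling is omitted for brevity.

```python
# Hypothetical sketch: AD-style supervised loss combined with a one-step
# Q-learning (TD) loss, both computed over an in-context trajectory.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextEncoder(nn.Module):
    """Causal transformer over (state, action, reward) context tokens (assumed design)."""
    def __init__(self, state_dim, action_dim, hidden=128, layers=2, heads=4):
        super().__init__()
        self.embed = nn.Linear(state_dim + action_dim + 1, hidden)
        layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.policy_head = nn.Linear(hidden, action_dim)  # AD-style next-action prediction
        self.q_head = nn.Linear(hidden, action_dim)       # Q-values per (discrete) action

    def forward(self, states, actions_onehot, rewards):
        tokens = self.embed(torch.cat([states, actions_onehot, rewards], dim=-1))
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.encoder(tokens, mask=mask)
        return self.policy_head(h), self.q_head(h)

def combined_loss(model, batch, gamma=0.99, bc_weight=1.0, q_weight=1.0):
    """Supervised (AD) cross-entropy plus a TD error from consecutive context steps."""
    logits, q = model(batch["states"], batch["actions_onehot"], batch["rewards"])
    # Supervised objective: predict the action taken at the next context step.
    bc_loss = F.cross_entropy(logits[:, :-1].reshape(-1, logits.size(-1)),
                              batch["action_ids"][:, 1:].reshape(-1))
    # Q-learning objective: bootstrap a one-step TD target from the next step.
    q_taken = q[:, :-1].gather(-1, batch["action_ids"][:, :-1].unsqueeze(-1)).squeeze(-1)
    with torch.no_grad():
        target = batch["rewards"][:, 1:, 0] + gamma * q[:, 1:].max(-1).values
    td_loss = F.mse_loss(q_taken, target)
    return bc_weight * bc_loss + q_weight * td_loss
```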