

Poster in Workshop: 7th Robot Learning Workshop: Towards Robots with Human-Level Abilities

Navigation with QPHIL: Quantizing Planner for Hierarchical Implicit Q-Learning

Alexi Canesse · Mathieu Petitbois · Ludovic Denoyer · Sylvain Lamprier · Rémy Portelas


Abstract:

Offline Reinforcement Learning (RL) has emerged as a powerful alternative to imitation learning for behavior modeling in various domains, particularly in complex long-range navigation tasks. A persistent challenge in offline RL is the signal-to-noise ratio, i.e., how to mitigate incorrect policy updates caused by errors in value estimates. To address this, multiple works have demonstrated the advantage of hierarchical offline RL methods, which decouple high-level path planning from low-level path following. In this work, we present a novel hierarchical transformer-based approach that leverages a learned quantizer of the state space to tackle long-horizon navigation tasks. This quantization enables the training of a simpler zone-conditioned low-level policy and simplifies planning, which reduces to discrete autoregressive prediction. Among other benefits, zone-level reasoning in planning enables explicit trajectory stitching, rather than implicit stitching based on noisy value-function estimates. By combining this transformer-based planner with recent advances in offline RL, our approach achieves state-of-the-art results in complex long-distance navigation environments.
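The paper's code is not shown on this page; as a rough illustration of the idea, the sketch below outlines the two learned components the abstract describes: a codebook quantizer that maps continuous states to discrete zone tokens, and a causal transformer that plans by predicting the next zone autoregressively. All names, dimensions, and architectural details here (StateQuantizer, ZonePlanner, the VQ-style nearest-neighbor lookup, PyTorch) are illustrative assumptions, not the published QPHIL implementation.

```python
import torch
import torch.nn as nn


class StateQuantizer(nn.Module):
    """Maps continuous states to discrete zone indices via a learned codebook
    (a VQ-style nearest-neighbor lookup; details are illustrative)."""

    def __init__(self, state_dim: int, num_zones: int, embed_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim)
        )
        self.codebook = nn.Embedding(num_zones, embed_dim)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        z = self.encoder(state)                           # (B, D) state embedding
        d = torch.cdist(z, self.codebook.weight)          # (B, K) distances to codes
        return d.argmin(dim=-1)                           # zone token per state


class ZonePlanner(nn.Module):
    """Causal transformer over zone tokens: given the zones seen so far,
    it predicts the next zone in the plan, token by token."""

    def __init__(self, num_zones: int, embed_dim: int = 64, num_layers: int = 2):
        super().__init__()
        self.token_emb = nn.Embedding(num_zones, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(embed_dim, num_zones)

    def forward(self, zone_seq: torch.Tensor) -> torch.Tensor:
        x = self.token_emb(zone_seq)                      # (B, T, D)
        mask = nn.Transformer.generate_square_subsequent_mask(zone_seq.size(1))
        h = self.transformer(x, mask=mask)                # causal self-attention
        return self.head(h)                               # (B, T, K) next-zone logits


# Toy rollout: quantize the current state, then extend the zone plan greedily.
# A real planner would condition on the goal zone and stop when it is reached;
# here we just use a fixed budget to keep the sketch short.
quantizer = StateQuantizer(state_dim=29, num_zones=128)
planner = ZonePlanner(num_zones=128)
state = torch.randn(1, 29)
plan = quantizer(state).unsqueeze(0)                      # (1, 1) start token
for _ in range(10):
    logits = planner(plan)[:, -1]                         # logits for the next zone
    plan = torch.cat([plan, logits.argmax(-1, keepdim=True)], dim=1)
```

At execution time, each predicted zone token would condition a low-level policy pi(a | s, zone), so the planner reasons only over discrete zones rather than raw continuous states, which is what makes explicit trajectory stitching possible.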
