

Oral
in
Workshop: 7th Robot Learning Workshop: Towards Robots with Human-Level Abilities

Environment as Policy: Generative Curriculum Learning for Autonomous Racing

Jiaxu Xing · Hongze Wang · Nico Messikommer · Davide Scaramuzza

 
presentation: 7th Robot Learning Workshop: Towards Robots with Human-Level Abilities
Sat 26 Apr 5:55 p.m. PDT — 3 a.m. PDT

Abstract:

Reinforcement learning (RL) has achieved outstanding success in complex robot control tasks such as drone racing, where RL agents have outperformed human champions on a known racing track. However, these agents fail on unseen track configurations and require complete retraining when presented with new track layouts. This work aims to develop RL agents that generalize effectively to novel track configurations without retraining. The naïve solution of training directly on a diverse set of track layouts can overburden the agent, resulting in suboptimal policy learning, as the increased complexity of the environment impairs the agent's ability to learn to fly. To enhance the generalizability of the RL agent, we propose an adaptive environment-shaping framework that dynamically adjusts the training environment based on the agent's performance. We achieve this by leveraging a secondary RL policy to design environments that strike a balance between being challenging and achievable, allowing the agent to adapt and improve progressively. With our adaptive environment shaping, a single racing policy efficiently learns to race on diverse, challenging tracks. Experimental results, validated in both simulation and the real world, show that our method enables drones to successfully fly complex and unseen race tracks, outperforming existing environment-shaping techniques.
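The core idea of the abstract, a teacher that keeps training tracks challenging but achievable, can be sketched in a few lines. This is a minimal illustration only: the paper uses a learned secondary RL policy as the teacher, whereas the sketch below substitutes a simple success-rate heuristic, and all names (`propose_track`, `shape_environment`, the target band) are hypothetical, not from the paper.

```python
import random

# Target band for the student's success rate: above it, tracks get
# harder; below it, easier. Values are illustrative assumptions.
TARGET_LOW, TARGET_HIGH = 0.4, 0.7
STEP = 0.05  # difficulty adjustment per teacher update


def propose_track(difficulty, rng):
    """Stand-in for a track generator: maps a scalar difficulty to
    hypothetical layout knobs (tighter gates, sharper curvature)."""
    return {
        "gate_spacing": 10.0 * (1.0 - 0.5 * difficulty),
        "curvature": difficulty * rng.random(),
    }


def shape_environment(difficulty, success_rate):
    """Heuristic teacher update (stand-in for the paper's RL teacher):
    raise difficulty when the student succeeds often, lower it when
    the student struggles, keeping training in the achievable zone."""
    if success_rate > TARGET_HIGH:
        difficulty = min(1.0, difficulty + STEP)
    elif success_rate < TARGET_LOW:
        difficulty = max(0.0, difficulty - STEP)
    return difficulty


if __name__ == "__main__":
    rng = random.Random(0)
    difficulty = 0.1
    for epoch in range(20):
        track = propose_track(difficulty, rng)
        # Placeholder for rolling out the racing policy on `track`
        # and measuring its success rate; here a toy proxy is used.
        success_rate = max(0.0, 1.0 - difficulty + 0.1 * (rng.random() - 0.5))
        difficulty = shape_environment(difficulty, success_rate)
    print(f"final difficulty: {difficulty:.2f}")
```

In the actual framework, the teacher's update rule would itself be an RL policy rewarded for producing tracks in the student's zone of achievable challenge; the heuristic here only conveys the feedback loop between student performance and environment generation.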
