Poster in Workshop: How Far Are We From AGI
LEAGUE++: EMPOWERING CONTINUAL ROBOT LEARNING THROUGH GUIDED SKILL ACQUISITION WITH LARGE LANGUAGE MODELS
Zhaoyi Li · Kelin Yu · Shuo Cheng · Danfei Xu
Keywords: [ TAMP ] [ Robotic Learning ] [ LLM ] [ RL ] [ Curriculum Learning ] [ Lifelong Learning ] [ Continual Learning ]
To support daily human tasks, robots need to tackle intricate, long-horizon tasks and continuously acquire new skills to handle new problems. Deep reinforcement learning (DRL) offers potential for learning fine-grained skills but relies heavily on human-defined rewards and struggles with long-horizon tasks. Task and Motion Planning (TAMP) is adept at handling long-horizon tasks but often requires tailored domain-specific skills, resulting in practical limitations and inefficiencies. To address these challenges, we developed LEAGUE++, a framework that leverages Large Language Models (LLMs) to harmoniously integrate TAMP and DRL for continuous skill learning in long-horizon tasks. Our framework achieves automatic task decomposition, operator creation, and dense reward generation for efficiently acquiring the desired skills. To facilitate new skill learning, LEAGUE++ maintains a symbolic skill library and uses the existing model from a semantically related skill to warm-start training. Our method, LEAGUE++, demonstrates superior performance compared to baselines across four challenging simulated task domains. Furthermore, we demonstrate the ability to reuse learned skills to expedite learning in new task domains.
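The skill-library and warm-start idea from the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: the class names (`Skill`, `SkillLibrary`, `learn_skill`) and the toy word-overlap relatedness measure are illustrative stand-ins, not the authors' implementation, which would use LLM-driven semantic similarity and actual DRL policy weights.

```python
# Hypothetical sketch of skill acquisition with warm-starting from a
# semantically related skill, as described in the abstract.
from dataclasses import dataclass


@dataclass
class Skill:
    name: str
    params: dict  # stand-in for learned policy weights


class SkillLibrary:
    def __init__(self):
        self.skills = {}

    def add(self, skill):
        self.skills[skill.name] = skill

    def most_related(self, name):
        # Toy relatedness: count of shared words in the skill names.
        # A real system would query an LLM or embedding similarity.
        def overlap(other):
            return len(set(name.split("_")) & set(other.split("_")))

        if not self.skills:
            return None
        best = max(self.skills, key=overlap)
        return self.skills[best] if overlap(best) > 0 else None


def learn_skill(name, library):
    related = library.most_related(name)
    # Warm start: initialize from a related skill's parameters if one exists,
    # otherwise start from scratch.
    params = dict(related.params) if related else {"init": "random"}
    skill = Skill(name, params)
    library.add(skill)
    return skill, related is not None


lib = SkillLibrary()
learn_skill("pick_block", lib)                # learned from scratch
_, warmed = learn_skill("place_block", lib)   # warm-started from pick_block
print(warmed)  # True
```

The sketch only captures the reuse mechanism: each newly acquired skill is registered in the library, and later tasks that share semantics inherit its parameters as an initialization instead of training from scratch.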