Poster in Workshop: Generative Models for Robot Learning

Generative Quality Diversity Imitation Learning for Robot Skill Acquisition

Zhenglin Wan · Xingrui Yu · David Bossens · Yueming Lyu · Qing Guo · Flint Xiaofeng Fan · Ivor Tsang


Abstract:

Imitation learning (IL) has demonstrated significant potential in robot learning, enabling agents to acquire skills from expert demonstrations. However, traditional IL methods are typically limited to learning a single behavior, since demonstrations often reflect only one expert's strategy. In this work, we introduce Generative Quality Diversity Imitation Learning (G-QDIL), a novel framework that leverages generative model formalisms to enable robots to learn a diverse repertoire of skills from limited demonstrations. By integrating quality diversity optimization with adversarial imitation learning (AIL), our framework enables the construction of a large archive of diverse, high-performing control policies. G-QDIL is compatible with any inverse reinforcement learning (IRL) method and significantly improves the performance of generative adversarial IL algorithms (GAIL and VAIL) on challenging continuous control tasks in MuJoCo environments. Notably, our method achieves 2x expert performance in the Humanoid environment, demonstrating its potential for real-world robot applications. This work thus bridges the gap between generative models and robot learning, offering a scalable and data-efficient approach to synthesizing diverse behaviors in complex, multimodal environments.
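The abstract does not spell out the algorithm, but the general recipe it describes — a quality-diversity archive whose fitness signal comes from an adversarially learned imitation reward rather than a hand-designed one — can be sketched as below. This is a minimal, illustrative toy only: `learned_reward`, `behavior_descriptor`, `rollout`, and the MAP-Elites-style archive loop are all assumptions standing in for the paper's actual components, not the authors' implementation.

```python
# Minimal sketch of a quality-diversity loop driven by an imitation-learned
# reward. All components here are illustrative placeholders, not G-QDIL's API.
import numpy as np

rng = np.random.default_rng(0)

def learned_reward(traj):
    # Stand-in for a GAIL/VAIL-style discriminator reward: in the real
    # framework this would score how expert-like a trajectory is; here a
    # fixed quadratic proxy keeps the sketch self-contained and runnable.
    return -float(np.sum((traj.mean(axis=0) - 1.0) ** 2))

def behavior_descriptor(traj, bins=10):
    # Map a trajectory to a discrete archive cell, e.g. by final 2-D position.
    cell = np.floor((traj[-1, :2] + 2.0) / 4.0 * bins).astype(int)
    return tuple(np.clip(cell, 0, bins - 1))

def rollout(params, horizon=20):
    # Toy "environment": a trajectory biased by the policy parameters.
    steps = params[None, :] + 0.1 * rng.standard_normal((horizon, params.size))
    return np.cumsum(steps, axis=0) / horizon

archive = {}  # descriptor cell -> (fitness, policy parameters)
for _ in range(5000):
    if archive and rng.random() < 0.9:
        # Mutate a random elite from the archive (MAP-Elites-style variation).
        _, parent = list(archive.values())[rng.integers(len(archive))]
        params = parent + 0.05 * rng.standard_normal(parent.shape)
    else:
        params = rng.standard_normal(2)  # fresh random policy
    traj = rollout(params)
    fit, cell = learned_reward(traj), behavior_descriptor(traj)
    # Keep the policy if its behavior cell is empty or it beats the incumbent,
    # so the archive accumulates diverse *and* high-performing policies.
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, params)

print(f"{len(archive)} diverse elites; best fitness "
      f"{max(f for f, _ in archive.values()):.3f}")
```

In the actual framework, the reward would be updated adversarially against expert demonstrations and the elites would be neural-network control policies; the sketch only shows how a QD archive can replace single-policy optimization on top of any IRL-derived reward.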
