Workshop
Generative Models for Robot Learning
Ziwei Wang · Congyue Deng · Changliu Liu · Zhenyu Jiang · Haoran Geng · Huazhe Xu · Yansong Tang · Philip Torr · Ziwei Liu · Angelique Taylor · Yuke Zhu
Topaz 220-225
Sun 27 Apr, 6 p.m. PDT
The next generation of robots should combine ideas from computer vision, natural language processing, machine learning, and other fields, because closed-loop systems are required to handle complex tasks from multimodal input in complicated real-world environments. This workshop focuses on generative models for robot learning, an important and fundamental area at the intersection of AI and robotics. Learning-based methods in robotics have achieved high success rates and strong generalization across a wide variety of tasks, including manipulation, navigation, SLAM, scene reconstruction, proprioception, and physics modeling. However, robot learning still faces several challenges, including the high cost of data collection and weak transferability across tasks and scenarios. Inspired by the significant progress in computer vision and natural language processing, efforts have been made to combine generative models with robot learning to address these challenges, for example by synthesizing high-quality training data and incorporating generative frameworks into representation and policy learning. In addition, pre-trained large language models (LLMs), vision-language models (VLMs), and vision-language-action (VLA) models are being adapted to diverse downstream tasks to fully leverage their rich commonsense knowledge. This progress enables robot learning frameworks to be applied to complex and diverse real-world tasks. The workshop aims to foster interdisciplinary communication among researchers in the broader community and draw more attention to this field. Participants will discuss the state of the art and promising future directions, inspiring new ideas and applications in related areas.