

Poster

GROOT-2: Weakly Supervised Multimodal Instruction Following Agents

Shaofei Cai · Bowei Zhang · Zihao Wang · Haowei Lin · Xiaojian Ma · Anji Liu · Yitao Liang

Hall 3 + Hall 2B #572
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Developing agents that can follow multimodal instructions remains a fundamental challenge in robotics and AI. Although large-scale pre-training on unlabeled datasets has enabled agents to learn diverse behaviors, these agents often struggle to follow instructions. While augmenting the dataset with instruction labels can mitigate this issue, acquiring such high-quality annotations at scale is impractical. To address this, we frame the problem as a semi-supervised learning task and introduce GROOT-2, a multimodal instructable agent trained using a novel approach that combines weak supervision with latent variable models. Our method consists of two key components: constrained self-imitating, which utilizes large amounts of unlabeled demonstrations to enable the policy to learn diverse behaviors, and human intention alignment, which uses a smaller set of labeled demonstrations to ensure the latent space reflects human intentions. GROOT-2's effectiveness is validated across four diverse environments, ranging from video games to robotic manipulation, demonstrating its robust multimodal instruction-following capabilities.
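
A minimal sketch of the two-term objective described above, assuming a latent-variable policy trained with a self-imitation loss on unlabeled demonstrations and an alignment loss on the smaller labeled set. The module names, loss forms, and weighting below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): weakly supervised training of a
# latent-variable instruction-following policy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentInstructionPolicy(nn.Module):
    """Encode a demonstration (or an instruction) into a latent goal z,
    then decode actions conditioned on observations and z."""
    def __init__(self, obs_dim=64, instr_dim=64, latent_dim=32, act_dim=8):
        super().__init__()
        self.demo_encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.instr_encoder = nn.Sequential(
            nn.Linear(instr_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 128), nn.ReLU(), nn.Linear(128, act_dim))

    def forward(self, obs, z):
        return self.policy(torch.cat([obs, z], dim=-1))


def training_step(model, unlabeled_batch, labeled_batch, alpha=1.0):
    # Constrained self-imitation on unlabeled demos: infer z from the demo
    # itself and reconstruct its actions (behavior cloning through the latent).
    obs_u, act_u = unlabeled_batch
    z_u = model.demo_encoder(obs_u)
    self_imitation_loss = F.mse_loss(model(obs_u, z_u), act_u)

    # Human intention alignment on the labeled set: pull the latent inferred
    # from the demo toward the latent of its instruction label, and imitate
    # the demo's actions when conditioned on that instruction latent.
    obs_l, act_l, instr_l = labeled_batch
    z_demo = model.demo_encoder(obs_l)
    z_instr = model.instr_encoder(instr_l)
    alignment_loss = (F.mse_loss(z_demo, z_instr)
                      + F.mse_loss(model(obs_l, z_instr), act_l))

    # The weight alpha balancing the two terms is an assumed hyperparameter.
    return self_imitation_loss + alpha * alignment_loss
```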
