Vision-Based Manipulators Need to Also See from Their Hands

Kyle Hsu · Moo Jin Kim · Rafael Rafailov · Jiajun Wu · Chelsea Finn

Keywords: [ reinforcement learning ] [ manipulation ] [ out-of-distribution generalization ] [ robotics ]

Poster: Spot D2, Mon 25 Apr 10:30 a.m. PDT — 12:30 p.m. PDT
Oral presentation: Oral 1: AI Applications, Mon 25 Apr 5 p.m. PDT — 6:30 p.m. PDT


We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations. Compared with the more commonly used global third-person perspective, a hand-centric (eye-in-hand) perspective affords reduced observability, but we find that it consistently improves training efficiency and out-of-distribution generalization. These benefits hold across a variety of learning algorithms, experimental settings, and distribution shifts, and for both simulated and real robot apparatuses. However, this is only the case when hand-centric observability is sufficient; otherwise, including a third-person perspective is necessary for learning, but also harms out-of-distribution generalization. To mitigate this, we propose to regularize the third-person information stream via a variational information bottleneck. On six representative manipulation tasks with varying hand-centric observability adapted from the Meta-World benchmark, this enables a state-of-the-art reinforcement learning agent operating from both perspectives to improve its out-of-distribution generalization on every task. While some practitioners have long put cameras in the hands of robots, our work systematically analyzes the benefits of doing so and provides simple and broadly applicable insights for improving end-to-end learned vision-based robotic manipulation.
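The core of the proposed mitigation is standard variational-information-bottleneck machinery: the third-person image is encoded into a Gaussian latent, a sample of which is drawn via the reparameterization trick, and a KL penalty toward a standard normal prior compresses that stream while the hand-centric stream is left unregularized. A minimal NumPy sketch, assuming a generic Gaussian encoder output; the function names, shapes, and the `beta` coefficient are illustrative, not the paper's actual architecture or hyperparameters:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    summed over latent dimensions, averaged over the batch."""
    kl_per_dim = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var)
    return kl_per_dim.sum(axis=-1).mean()

def sample_bottlenecked_latent(mu, log_var, rng):
    """Reparameterized sample z = mu + sigma * eps from the
    third-person encoder's Gaussian output."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Example: a batch of 4 third-person latents of dimension 8.
rng = np.random.default_rng(0)
mu = rng.standard_normal((4, 8))
log_var = np.zeros((4, 8))  # unit variance
z = sample_bottlenecked_latent(mu, log_var, rng)
kl = kl_to_standard_normal(mu, log_var)
# The agent's objective would then add beta * kl (beta is a
# hypothetical bottleneck strength) to the usual RL loss computed
# from the hand-centric features together with z.
```

The KL term drives the third-person latent toward an uninformative prior unless the information it carries pays for itself in task performance, which is one common intuition for why such a bottleneck can reduce overfitting to spurious third-person visual cues.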
