
Poster

REGENT: A Retrieval-Augmented Generalist Agent That Can Act In-Context in New Environments

Kaustubh Sridhar · Souradeep Dutta · Dinesh Jayaraman · Insup Lee

Hall 3 + Hall 2B #425
[ Project Page ]
Fri 25 Apr midnight PDT — 2:30 a.m. PDT
Oral presentation: Oral Session 3A
Thu 24 Apr 7:30 p.m. PDT — 9 p.m. PDT

Abstract:

Building generalist agents that can rapidly adapt to new environments is a key challenge for deploying AI in the digital and real worlds. Is scaling current agent architectures the most effective way to build generalist agents? We propose a novel approach to pre-train relatively small policies on relatively small datasets and adapt them to unseen environments via in-context learning, without any finetuning. Our key idea is that retrieval offers a powerful bias for fast adaptation. Indeed, we demonstrate that even a simple retrieval-based 1-nearest-neighbor agent offers a surprisingly strong baseline for today's state-of-the-art generalist agents. From this starting point, we construct a semi-parametric agent, REGENT, that trains a transformer-based policy on sequences of queries and retrieved neighbors. REGENT can generalize to unseen robotics and game-playing environments via retrieval augmentation and in-context learning, achieving this with up to 3x fewer parameters and up to an order of magnitude fewer pre-training datapoints than today's state-of-the-art generalist agents, which it significantly outperforms.
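The retrieval-based 1-nearest-neighbor baseline mentioned in the abstract can be sketched as follows. This is a minimal illustration, not REGENT's actual implementation: the vector state representation, Euclidean distance metric, and the demonstration buffer format here are all illustrative assumptions.

```python
import math

class OneNearestNeighborAgent:
    """Acts by copying the demonstrated action of the closest stored state.

    A hedged sketch of a retrieval-based 1-NN policy: given a small buffer
    of (state, action) pairs from demonstrations in a new environment, the
    agent retrieves the nearest stored state to the current query state and
    replays its action, with no parameter updates.
    """

    def __init__(self, demo_states, demo_actions):
        # demo_states: list of state vectors (same dimensionality)
        # demo_actions: corresponding actions from the demonstrations
        self.demos = list(zip(demo_states, demo_actions))

    def act(self, query_state):
        # Retrieve the stored state with minimum Euclidean distance
        # to the query, then return its demonstrated action.
        _, action = min(
            self.demos,
            key=lambda sa: math.dist(sa[0], query_state),
        )
        return action

# Illustrative usage with a toy 2D state space:
agent = OneNearestNeighborAgent(
    demo_states=[[0.0, 0.0], [1.0, 1.0]],
    demo_actions=["left", "right"],
)
```

REGENT goes beyond this baseline by feeding the query together with its retrieved neighbors into a transformer policy, but the sketch shows why retrieval alone already provides a strong adaptation bias: nearby states in a demonstration buffer tend to call for similar actions.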