Poster
Workshop on Large Language Models for Agents
Preference-Conditioned Language-Guided Abstraction
Andi Peng · Andreea Bobu · Belinda Li · Theodore Sumers · Ilia Sucholutsky · Nishanth Kumar · Thomas L. Griffiths · Julie Shah
Learning from demonstrations is a common way for users to teach robots, but it is prone to spurious feature correlations. Recent work constructs state abstractions, i.e., visual representations containing only task-relevant features, from language as a way to perform more generalizable learning. However, these abstractions also depend on a user's preference for what matters in a task, which may be hard to describe or infeasible to exhaustively specify using language alone. How do we construct abstractions that capture these latent preferences? We observe that how humans behave reveals how they see the world. Our key insight is that changes in human behavior signal differences in how humans see the world, i.e., differences in their state abstractions. In this work, we propose using language models (LMs) to query for those preferences directly, given knowledge that a change in behavior has occurred. Our framework uses the LM in two ways: first, given a text description of the task and knowledge of a behavioral change between states, we query the LM for possible hidden preferences; second, given the most likely preference, we query the LM to construct the state abstraction. In this framework, the LM can also ask the human directly when it is uncertain about its own estimate. We demonstrate our framework's ability to construct effective preference-conditioned abstractions in simulated experiments, in a user study, and on a real Spot robot performing mobile manipulation tasks.
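To make the two-stage pipeline concrete, here is a minimal sketch of how the two LM queries and the defer-to-human fallback could be wired together. Everything below is an assumption for illustration: the function names, prompt formats, confidence threshold, and the `LM` callable interface are hypothetical and are not the paper's published implementation.

```python
"""Hypothetical sketch of the two-stage LM pipeline described above.

Stage 1: given a task description and an observed behavior change,
query the LM for candidate hidden preferences with confidences.
Stage 2: given the most likely preference, query the LM for the
state abstraction (the set of task-relevant features to keep).
If the top candidate's confidence is low, ask the human directly.
"""

from typing import Callable

# An LM client is modeled as a simple prompt -> completion function.
LM = Callable[[str], str]


def infer_preferences(lm: LM, task: str, behavior_change: str) -> list[tuple[str, float]]:
    """Stage 1: ask the LM for hidden preferences that could explain
    the observed change in behavior, parsed as (preference, confidence)."""
    prompt = (
        f"Task: {task}\n"
        f"Observed behavior change: {behavior_change}\n"
        "List hidden user preferences that could explain this change, "
        "one per line as '<preference> | <confidence between 0 and 1>'."
    )
    candidates = []
    for line in lm(prompt).splitlines():
        if "|" in line:
            pref, conf = line.rsplit("|", 1)
            try:
                candidates.append((pref.strip(), float(conf)))
            except ValueError:
                continue  # skip lines the LM formatted badly
    return sorted(candidates, key=lambda pc: pc[1], reverse=True)


def build_abstraction(lm: LM, task: str, preference: str) -> list[str]:
    """Stage 2: ask the LM which state features are task-relevant
    under the inferred preference; these form the state abstraction."""
    prompt = (
        f"Task: {task}\nUser preference: {preference}\n"
        "List the state features relevant to this task under this "
        "preference, one feature name per line."
    )
    return [f.strip() for f in lm(prompt).splitlines() if f.strip()]


def preference_conditioned_abstraction(
    lm: LM,
    task: str,
    behavior_change: str,
    ask_human: Callable[[list[tuple[str, float]]], str],
    confidence_threshold: float = 0.6,
) -> list[str]:
    """Full pipeline: infer preferences, defer to the human when the
    LM is uncertain about its own estimate, then build the abstraction."""
    candidates = infer_preferences(lm, task, behavior_change)
    if not candidates or candidates[0][1] < confidence_threshold:
        preference = ask_human(candidates)  # query the user directly
    else:
        preference = candidates[0][0]
    return build_abstraction(lm, task, preference)
```

A dummy `lm` callable returning canned strings is enough to exercise the control flow; the (assumed) `confidence_threshold` parameter governs when the system stops trusting its own estimate and queries the human instead.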