In-Person Poster presentation / poster accept

Preference Transformer: Modeling Human Preferences using Transformers for RL

Changyeon Kim · Jongjin Park · Jinwoo Shin · Honglak Lee · Pieter Abbeel · Kimin Lee

MH1-2-3-4 #126

Keywords: [ human-in-the-loop reinforcement learning ] [ preference-based reinforcement learning ] [ deep reinforcement learning ] [ Reinforcement Learning ]


Abstract:

Preference-based reinforcement learning (RL) provides a framework for training agents using human preferences between two behaviors. However, preference-based RL has been challenging to scale because it requires a large amount of human feedback to learn a reward function aligned with human intent. In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers. Unlike prior approaches, which assume human judgment is based on Markovian rewards that contribute equally to the decision, we introduce a new preference model based on a weighted sum of non-Markovian rewards. We then design the proposed preference model using a transformer architecture that stacks causal and bidirectional self-attention layers. We demonstrate that Preference Transformer can solve a variety of control tasks using real human preferences, whereas prior approaches fail to do so. We also show that Preference Transformer can induce a well-specified reward and attend to critical events in a trajectory by automatically capturing the temporal dependencies in human decision-making. Code is available on the project website: https://sites.google.com/view/preference-transformer.
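To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of a preference model of this shape: a causal transformer encodes a trajectory segment, a bidirectional self-attention layer produces per-timestep non-Markovian rewards r_t and importance weights w_t, and two segments are compared via a Bradley-Terry model over the weighted reward sums. This is an illustrative sketch, not the authors' implementation (which is available on the project website); all names here (PreferenceTransformerSketch, segment_score, the head layers, and the hyperparameters) are hypothetical.

```python
import torch
import torch.nn as nn

class PreferenceTransformerSketch(nn.Module):
    """Hypothetical sketch: causal + bidirectional self-attention layers
    produce non-Markovian rewards r_t and importance weights w_t; a segment's
    score is the weighted sum sum_t w_t * r_t (as in the abstract)."""

    def __init__(self, obs_act_dim: int, d_model: int = 64, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(obs_act_dim, d_model)
        causal_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.causal = nn.TransformerEncoder(causal_layer, num_layers=n_layers)
        self.bidir = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
        self.reward_head = nn.Linear(d_model, 1)  # non-Markovian reward r_t
        self.weight_head = nn.Linear(d_model, 1)  # importance weight w_t

    def segment_score(self, segment: torch.Tensor) -> torch.Tensor:
        # segment: (batch, T, obs_act_dim) of concatenated state-action features
        T = segment.size(1)
        h = self.embed(segment)
        mask = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.causal(h, mask=mask)            # causal self-attention stack
        z, _ = self.bidir(h, h, h)               # bidirectional self-attention
        r = self.reward_head(z).squeeze(-1)      # (batch, T) rewards
        w = torch.softmax(self.weight_head(z).squeeze(-1), dim=-1)  # weights sum to 1
        return (w * r).sum(dim=-1)               # weighted sum of rewards

    def forward(self, seg0: torch.Tensor, seg1: torch.Tensor) -> torch.Tensor:
        # Bradley-Terry: P[seg1 preferred over seg0] from the two segment scores
        s0, s1 = self.segment_score(seg0), self.segment_score(seg1)
        return torch.sigmoid(s1 - s0)

# Usage sketch: fit the model to binary human preference labels with BCE loss.
model = PreferenceTransformerSketch(obs_act_dim=10)
seg0, seg1 = torch.randn(8, 50, 10), torch.randn(8, 50, 10)
labels = torch.randint(0, 2, (8,)).float()  # 1 means seg1 was preferred
loss = nn.functional.binary_cross_entropy(model(seg0, seg1), labels)
loss.backward()
```

Under this framing, the learned per-timestep rewards r_t (rather than the preference probabilities) would serve as the reward signal for downstream RL, and the weights w_t indicate which timesteps the model treats as critical events.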