

Poster

Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

Javier Rando · Tony Wang · Stewart Slocum · Dmitrii Krasheninnikov · Usman Anwar · Micah Carroll · Xander Davies · Claudia Shi · Thomas Gilbert · Rachel Freedman · Charbel-Raphael Segerie · Phillip Christoffersen · Jacob Pfau · Tomek Korbak · Xin Chen · Lauro Langosco · Samuel Marks · Erdem Bıyık · Dorsa Sadigh · David Krueger · Pedro Freire · Mehul Damani · Jérémy Scheurer · David Lindner · Anca Dragan · Anand Siththaranjan · Dylan Hadfield-Menell · Max Nadeau · Stephen Casper · Peter Hase · Andi Peng · Eric Michaud

Hall 3 + Hall 2B #496
[ Project Page ]
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state-of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-layered approach to the development of safer AI systems.
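To make the pipeline the abstract refers to concrete, below is a minimal toy sketch of the two standard RLHF stages: fitting a reward model to pairwise human preferences, then fine-tuning a policy against that reward with a KL penalty toward a frozen reference model. This is an illustrative assumption about the generic RLHF recipe, not the paper's method; all model sizes, data shapes, and hyperparameters are made up for the example.

```python
# Toy RLHF sketch in PyTorch. Everything here (sizes, data, beta) is illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, dim, seq_len = 100, 32, 8

# --- Stage 1: reward model from pairwise preferences (Bradley-Terry loss) ---
reward_model = torch.nn.Sequential(
    torch.nn.Embedding(vocab, dim),       # token embeddings
    torch.nn.Flatten(start_dim=1),        # toy: flatten a fixed-length sequence
    torch.nn.Linear(dim * seq_len, 1),    # scalar reward per sequence
)
opt_rm = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

chosen = torch.randint(0, vocab, (16, seq_len))    # preferred completions (random toy data)
rejected = torch.randint(0, vocab, (16, seq_len))  # dispreferred completions

# Maximize the probability that the chosen completion outranks the rejected one.
rm_loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
rm_loss.backward()
opt_rm.step()

# --- Stage 2: KL-regularized policy optimization against the learned reward ---
policy = torch.nn.Sequential(torch.nn.Embedding(vocab, dim), torch.nn.Linear(dim, vocab))
ref_policy = torch.nn.Sequential(torch.nn.Embedding(vocab, dim), torch.nn.Linear(dim, vocab))
ref_policy.load_state_dict(policy.state_dict())  # frozen reference model
opt_pi = torch.optim.Adam(policy.parameters(), lr=1e-4)
beta = 0.1  # KL penalty strength (assumed value)

prompts = torch.randint(0, vocab, (16, seq_len))
logits = policy(prompts)                          # (batch, seq, vocab)
ref_logits = ref_policy(prompts).detach()
logp, ref_lp = F.log_softmax(logits, -1), F.log_softmax(ref_logits, -1)

# Sample completions, score them with the reward model, and penalize drift from the reference.
actions = torch.distributions.Categorical(logits=logits).sample()   # (batch, seq)
rewards = reward_model(actions).squeeze(-1).detach()                 # learned reward per sample
kl = (logp.exp() * (logp - ref_lp)).sum(-1).mean(-1)                 # per-example KL estimate
chosen_logp = logp.gather(-1, actions.unsqueeze(-1)).squeeze(-1).sum(-1)
pg_loss = -((rewards - beta * kl.detach()) * chosen_logp).mean()     # REINFORCE-style update
pg_loss.backward()
opt_pi.step()
```

The KL term is what keeps the fine-tuned policy close to the reference model; several of the open problems the paper surveys (e.g., reward hacking and misspecified preferences) concern what happens when the learned reward in Stage 1 does not faithfully capture human goals.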
