Poster in Workshop: ICLR 2025 Workshop on Bidirectional Human-AI Alignment

Active Human Feedback Collection via Neural Contextual Dueling Bandits

Arun Verma · Xiaoqiang Lin · Zhongxiang Dai · Daniela Rus · Bryan Kian Hsiang Low


Abstract:

Collecting human preference feedback is often expensive, leading recent works to develop principled algorithms for selecting preference queries more efficiently. However, these works assume that the underlying reward function is linear, an assumption that does not hold in many real-life applications, such as online recommendation and LLM alignment. To address this limitation, we propose Neural-ADB, an algorithm based on the neural contextual dueling bandit framework that provides a principled and practical method for collecting human preference feedback when the underlying latent reward function is non-linear. We theoretically show that, when preference feedback follows the Bradley-Terry-Luce model, the worst suboptimality gap of the policy learned by Neural-ADB decreases at a sub-linear rate as the preference dataset grows. Experimental results on preference datasets further corroborate the effectiveness of Neural-ADB.
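The guarantee above hinges on the Bradley-Terry-Luce (BTL) model, under which the probability of preferring one option over another is a logistic function of the difference in their latent rewards. The minimal sketch below illustrates this, using a small feed-forward network as a stand-in for the non-linear latent reward function; the `RewardNet` architecture, the `btl_preference_prob` helper, and the input dimension are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Illustrative stand-in for the non-linear latent reward function r(x);
    the architecture is an assumption, not the paper's model."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def btl_preference_prob(reward: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry-Luce model: P(x preferred over y) = sigmoid(r(x) - r(y))."""
    return torch.sigmoid(reward(x) - reward(y))

# Usage: probability that one 8-dimensional arm is preferred over another.
reward = RewardNet(dim=8)
x, y = torch.randn(8), torch.randn(8)
print(btl_preference_prob(reward, x, y))
```

In this setting, preference feedback gives only pairwise comparisons rather than scalar rewards, which is why the BTL likelihood, rather than a regression loss, governs how the reward network would be fit to the collected data.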
