

Poster

Neural Dueling Bandits: Preference-Based Optimization with Human Feedback

Arun Verma · Zhongxiang Dai · Xiaoqiang Lin · Patrick Jaillet · Bryan Kian Hsiang Low

Hall 3 + Hall 2B #423
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Contextual dueling bandits model problems in which a learner's goal is to find the best arm for a given context using noisy human preference feedback observed over the arms selected for past contexts. However, existing algorithms assume the reward function is linear, whereas in many real-life applications, such as online recommendations or ranking web search results, it can be complex and non-linear. To overcome this challenge, we use a neural network to estimate the reward function from the preference feedback on previously selected arms. We propose upper confidence bound- and Thompson sampling-based algorithms with sub-linear regret guarantees that efficiently select arms in each round. We also extend our theoretical results to contextual bandit problems with binary feedback, which is itself a non-trivial contribution. Experimental results on problem instances derived from synthetic datasets corroborate our theoretical results.
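To make the setting concrete, here is a minimal NumPy sketch of the core idea the abstract describes: a small neural network estimates the latent reward from pairwise preference feedback via a Bradley-Terry logistic loss, and exploration is done by perturbing the output layer in a Thompson-sampling spirit. This is an illustrative assumption, not the paper's exact algorithm (the paper's methods use principled confidence bounds with regret guarantees); all names and the perturbation scale `nu` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyRewardNet:
    """One-hidden-layer MLP reward estimator f(x), trained on pairwise
    preference feedback with the Bradley-Terry loss -log sigma(f(x_win) - f(x_lose))."""

    def __init__(self, dim, hidden=32, lr=0.05):
        self.W1 = rng.normal(0.0, 1.0 / np.sqrt(dim), (hidden, dim))
        self.w2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), hidden)
        self.lr = lr

    def f(self, x):
        # Scalar reward estimate for context-arm feature vector x.
        return self.w2 @ np.maximum(self.W1 @ x, 0.0)

    def update(self, x_win, x_lose):
        # One gradient step on -log sigma(f(x_win) - f(x_lose)).
        p = sigmoid(self.f(x_win) - self.f(x_lose))
        coef = -(1.0 - p)  # d(loss)/d(f(x_win) - f(x_lose))
        for x, sign in ((x_win, 1.0), (x_lose, -1.0)):
            h = self.W1 @ x
            a = np.maximum(h, 0.0)
            g_w2 = coef * sign * a                                   # grad w.r.t. w2
            g_W1 = np.outer(coef * sign * self.w2 * (h > 0), x)      # grad w.r.t. W1
            self.w2 -= self.lr * g_w2
            self.W1 -= self.lr * g_W1

def select_pair(net, arms, nu=0.1):
    """Pick a dueling pair: first arm greedily by estimated reward, second by
    a randomly perturbed output layer (a crude Thompson-style exploration stand-in)."""
    scores = np.array([net.f(x) for x in arms])
    first = int(np.argmax(scores))
    w2_pert = net.w2 + nu * rng.normal(size=net.w2.shape)
    pert_scores = np.array(
        [w2_pert @ np.maximum(net.W1 @ x, 0.0) for x in arms]
    )
    second = int(np.argmax(pert_scores))
    return first, second
```

In each round the learner would call `select_pair`, show the two arms, observe which one the human prefers (a Bernoulli draw with probability `sigmoid(f*(x_i) - f*(x_j))` under the Bradley-Terry model), and call `update` with the winner and loser.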
