

Achieving the Pareto Frontier of Regret Minimization and Best Arm Identification in Multi-Armed Bandits

Wang Chi Cheung · Vincent Tan · Zixin Zhong

Halle B #186
Tue 7 May 1:45 a.m. PDT — 3:45 a.m. PDT


We study the Pareto frontier of two archetypal objectives in multi-armed bandits, namely, regret minimization (RM) and best arm identification (BAI) with a fixed horizon. It is folklore that the balance between exploitation and exploration is crucial for both RM and BAI, but exploration is more critical in achieving the optimal performance for the latter objective. To this end, we design and analyze the BoBW-lil'UCB(γ) algorithm. Complementarily, by establishing lower bounds on the regret achievable by any algorithm with a given BAI failure probability, we show that (i) no algorithm can simultaneously perform optimally for both the RM and BAI objectives, and (ii) BoBW-lil'UCB(γ) achieves order-wise optimal performance for RM or BAI under different values of γ. Our work elucidates the trade-off more precisely by showing how the constants in previous works depend on certain hardness parameters. Finally, we show that BoBW-lil'UCB outperforms a close competitor UCB_α (Degenne et al., 2019) in terms of the time complexity and the regret on diverse datasets such as MovieLens and Published Kinase Inhibitor Set.
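To make the RM/BAI trade-off concrete, the sketch below simulates a UCB-style policy whose confidence bonus is scaled by a tunable parameter γ: larger γ widens the bonus and forces more exploration (favoring identification of the best arm), while smaller γ exploits more aggressively (favoring low regret). This is only an illustrative toy with a lil'UCB-flavored index, not the paper's exact BoBW-lil'UCB(γ) algorithm; the function names and the final most-pulled-arm recommendation rule are assumptions for the example.

```python
import math
import random

def ucb_index(mean, pulls, t, gamma):
    # Illustrative UCB-style index: the gamma factor scales the
    # confidence bonus, trading exploitation (small gamma) against
    # exploration (large gamma). Not the paper's exact index.
    return mean + gamma * math.sqrt(2.0 * math.log(t) / pulls)

def run_bandit(true_means, horizon, gamma, seed=0):
    """Run Bernoulli bandits for a fixed horizon; return
    (cumulative pseudo-regret, recommended arm)."""
    rng = random.Random(seed)
    k = len(true_means)
    pulls = [0] * k
    sums = [0.0] * k
    regret = 0.0
    best_mean = max(true_means)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialization: pull each arm once
        else:
            arm = max(range(k),
                      key=lambda i: ucb_index(sums[i] / pulls[i],
                                              pulls[i], t, gamma))
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        pulls[arm] += 1
        sums[arm] += reward
        regret += best_mean - true_means[arm]
    # Assumed recommendation rule for BAI: the most-pulled arm.
    recommended = max(range(k), key=lambda i: pulls[i])
    return regret, recommended
```

Sweeping γ in such a simulation traces an empirical version of the trade-off the paper analyzes: as γ grows, the BAI failure probability tends to drop while cumulative regret tends to rise.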
