

Poster

Geometry-Aware Approaches for Balancing Performance and Theoretical Guarantees in Linear Bandits

Yuwei Luo · Mohsen Bayati

Hall 3 + Hall 2B #5
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract: This paper is motivated by recent research in the $d$-dimensional stochastic linear bandit literature, which has revealed an unsettling discrepancy: algorithms like Thompson sampling and Greedy demonstrate promising empirical performance, yet this contrasts with their pessimistic theoretical regret bounds. The challenge arises from the fact that while these algorithms may perform poorly in certain problem instances, they generally excel in typical instances. To address this, we propose a new data-driven technique that tracks the geometric properties of the uncertainty ellipsoid around the main problem parameter. This methodology enables us to formulate a data-driven frequentist regret bound, which incorporates the geometric information, for a broad class of base algorithms, including Greedy, OFUL, and Thompson sampling. This result allows us to identify and "course-correct" problem instances in which the base algorithms perform poorly. The course-corrected algorithms achieve the minimax optimal regret of order $\tilde{O}(d\sqrt{T})$ for a $T$-period decision-making scenario, effectively maintaining the desirable attributes of the base algorithms, including their empirical efficacy. We present simulation results to validate our findings using synthetic and real data.
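To make the geometric idea concrete, the sketch below shows a standard linear bandit setup in which the uncertainty ellipsoid around the parameter estimate is defined by the regularized Gram matrix $V_t = \lambda I + \sum_s x_s x_s^\top$, and a Greedy base algorithm is "course-corrected" whenever the ellipsoid's geometry (here, the minimum eigenvalue of $V_t$) signals insufficient exploration. This is a minimal illustration under assumed choices (the `min_eig_threshold` trigger and the random-exploration fallback are hypothetical), not the paper's actual data-driven bound or correction rule.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): a Greedy linear bandit that
# monitors the geometry of the uncertainty ellipsoid
#   {theta : ||theta - theta_hat||_{V_t} <= beta}
# via the minimum eigenvalue of the regularized Gram matrix V_t, and falls
# back to random exploration when the ellipsoid is poorly conditioned.
# The threshold and fallback rule are illustrative assumptions.

rng = np.random.default_rng(0)
d, T, lam, noise_sd = 5, 2000, 1.0, 0.1
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

V = lam * np.eye(d)          # regularized Gram matrix: V_t = lam*I + sum_s x_s x_s^T
b = np.zeros(d)              # sum_s r_s x_s
regret = 0.0
min_eig_threshold = 0.05     # hypothetical geometry trigger

for t in range(1, T + 1):
    # a fresh set of candidate arms on the unit sphere each round
    arms = rng.normal(size=(20, d))
    arms /= np.linalg.norm(arms, axis=1, keepdims=True)

    theta_hat = np.linalg.solve(V, b)  # ridge estimate (ellipsoid center)
    if np.linalg.eigvalsh(V).min() < min_eig_threshold * np.sqrt(t):
        # course-correct: ellipsoid is too elongated, explore at random
        x = arms[rng.integers(len(arms))]
    else:
        # greedy base algorithm: exploit the current point estimate
        x = arms[np.argmax(arms @ theta_hat)]

    r = x @ theta_star + noise_sd * rng.normal()
    V += np.outer(x, x)
    b += r * x
    regret += (arms @ theta_star).max() - x @ theta_star

print(f"cumulative regret after {T} rounds: {regret:.2f}")
```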
