## $\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization

### Carl Hvarfner · Danny Stoll · Artur Souza · Marius Lindauer · Frank Hutter · Luigi Nardi

Keywords: Bayesian optimization · hyperparameter optimization · meta-learning


Abstract: Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample efficiency, vanilla BO cannot utilize readily available prior beliefs the practitioner has about the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose $\pi$BO, a generalization of acquisition functions that incorporates prior beliefs about the location of the optimum in the form of a probability distribution provided by the user. In contrast to previous approaches, $\pi$BO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when $\pi$BO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independent of the prior. Further, our experiments show that $\pi$BO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that $\pi$BO improves state-of-the-art performance on a popular deep learning task, with a $12.5\times$ time-to-accuracy speedup over prominent BO approaches.
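For intuition, below is a minimal sketch (not the authors' implementation) of the $\pi$BO idea applied to Expected Improvement: the acquisition value is multiplied by the user prior $\pi(x)$ raised to an exponent that decays with the iteration count $n$, so the prior guides early search and fades as data accumulates. The decay hyperparameter `beta`, the `prior_pdf` name, and the placeholder GP posterior are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard Expected Improvement (minimization), given the GP
    posterior mean mu and standard deviation sigma at candidate points."""
    sigma = np.maximum(sigma, 1e-12)  # avoid division by zero
    z = (f_best - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

def pi_bo_acquisition(mu, sigma, f_best, prior_pdf, x, n, beta=10.0):
    """piBO-style acquisition: EI weighted by the user prior pi(x)^(beta/n),
    so the prior's influence decays as the iteration count n grows.
    beta (assumed hyperparameter) controls how quickly the prior fades."""
    ei = expected_improvement(mu, sigma, f_best)
    return ei * prior_pdf(x) ** (beta / max(n, 1))

# Example: a Gaussian user belief that the optimum lies near x = 0.2.
prior_pdf = lambda x: norm.pdf(x, loc=0.2, scale=0.1)
x = np.linspace(0.0, 1.0, 5)
mu, sigma = np.zeros_like(x), np.ones_like(x)  # placeholder GP posterior
print(pi_bo_acquisition(mu, sigma, f_best=0.5, prior_pdf=prior_pdf, x=x, n=3))
```

Because the weighting is a simple multiplicative factor on top of any acquisition function, it can be bolted onto an existing BO library without modifying the surrogate model or the optimization loop.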
