

Contributed Talk in Workshop: 5th Workshop on Practical ML for Limited/Low Resource Settings (PML4LRS) @ ICLR 2024

Towards Bandit-based Optimization for Automated Machine Learning

Amir Rezaei Balef

Sat 11 May 1:45 a.m. PDT — 1:55 a.m. PDT

Abstract:

Machine learning (ML) is essential for modern data-driven technology; however, its performance depends on choosing the optimal modelling approach, which can be prohibitively expensive in low-resource settings. Efficient hyperparameter optimization (HPO) methods exist, but in practice we often also need to decide between different model alternatives, e.g., deep learning or tree-based methods, giving rise to the combined algorithm selection and hyperparameter optimization (CASH) problem. Automated Machine Learning (AutoML) systems typically address this problem by using HPO to search the joint hierarchical space of models and their hyperparameters. To increase efficiency, we study an alternative approach: conducting HPO for each model independently and allocating more budget to the most promising HPO run. We use Multi-armed Bandits (MABs) as an efficient framework to balance exploration and exploitation in this setting and identify promising directions for future research. Concretely, we study how to leverage Extreme Bandits and propose two quantile-based algorithms to efficiently explore extreme values in this practical setting. We empirically evaluate the performance of state-of-the-art MAB methods on two AutoML benchmarks, showing that even basic MAB methods with our adjusted regret term yield resource-efficient alternatives to searching the whole space jointly.
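To make the setup concrete, below is a minimal illustrative sketch of the general idea the abstract describes: each bandit arm corresponds to one model's independent HPO run, a pull performs one HPO iteration and returns a validation score, and budget is allocated to the arm with the best empirical upper quantile of observed scores (an extreme-bandit-style rule, since CASH cares about the single best configuration rather than the average). The paper's two specific algorithms are not given in the abstract, so all names, distributions, and the selection rule here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for per-model HPO runs: each "pull" of an arm
# performs one HPO iteration for that model and returns a validation score.
# For illustration, each model's scores are drawn from a fixed distribution.
def make_hpo_run(loc, scale):
    return lambda: rng.normal(loc, scale)

hpo_runs = [
    make_hpo_run(0.80, 0.05),  # e.g. a tree-based model
    make_hpo_run(0.75, 0.12),  # e.g. a deep model: lower mean, heavier upper tail
    make_hpo_run(0.78, 0.03),  # e.g. a linear baseline
]

def quantile_bandit(arms, budget, q=0.9, warmup=5):
    """Allocate `budget` HPO iterations across arms, always pulling the arm
    whose empirical q-quantile of observed scores is highest. Ranking arms
    by an upper quantile rather than the mean favours models whose score
    distribution has a promising upper tail, which is what matters when the
    goal is the best single configuration (an extreme value)."""
    scores = [[] for _ in arms]
    # Warm-up: pull every arm a few times to get initial quantile estimates.
    for i, arm in enumerate(arms):
        scores[i].extend(arm() for _ in range(warmup))
    for _ in range(budget - warmup * len(arms)):
        i = max(range(len(arms)), key=lambda j: np.quantile(scores[j], q))
        scores[i].append(arms[i]())
    best = max(max(s) for s in scores)
    pulls = [len(s) for s in scores]
    return best, pulls

best, pulls = quantile_bandit(hpo_runs, budget=100)
print(f"best validation score: {best:.3f}, pulls per model: {pulls}")
```

In this sketch the high-variance arm tends to receive more budget than its mean alone would justify, illustrating why quantile-based exploration of extreme values differs from standard mean-regret MAB allocation.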
