

Poster

Standard Gaussian Process is All You Need for High-Dimensional Bayesian Optimization

Zhitong Xu · Haitao Wang · Jeff Phillips · Shandian Zhe

Hall 3 + Hall 2B #399
Sat 26 Apr midnight PDT — 2:30 a.m. PDT
 
Oral presentation: Oral Session 5D
Fri 25 Apr 7:30 p.m. PDT — 9 p.m. PDT

Abstract:

A long-standing belief holds that Bayesian Optimization (BO) with standard Gaussian processes (GPs), referred to as standard BO, underperforms in high-dimensional optimization problems. While this belief seems plausible, it lacks both robust empirical evidence and theoretical justification. To address this gap, we present a systematic investigation. First, through a comprehensive evaluation across twelve benchmarks, we find that while the popular Squared Exponential (SE) kernel often leads to poor performance, using Matérn kernels enables standard BO to consistently achieve top-tier results, frequently surpassing methods specifically designed for high-dimensional optimization. Second, our theoretical analysis reveals that the SE kernel's failure primarily stems from improper initialization of the length-scale parameters: the initializations commonly used in practice can cause vanishing gradients during training. We provide a probabilistic bound characterizing this issue, showing that Matérn kernels are less susceptible and can robustly handle much higher dimensions. Third, we propose a simple, robust initialization strategy that dramatically improves the performance of the SE kernel, bringing it close to state-of-the-art methods without requiring additional priors or regularization. We prove a further probabilistic bound demonstrating that our method effectively mitigates the vanishing-gradient issue. Our findings advocate for a re-evaluation of standard BO's potential in high-dimensional settings.
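
For readers who want a concrete picture of the baseline the abstract defends, the sketch below runs standard BO with a Matérn-5/2 ARD kernel. The use of BoTorch/GPyTorch, the toy objective, the dimensionality, and the evaluation budget are all illustrative assumptions; the paper does not prescribe this particular library or setup.

    # Minimal sketch: standard BO with a Matern-5/2 GP (assumed BoTorch setup;
    # the objective, dimensionality, and budget below are placeholders).
    import torch
    from botorch.models import SingleTaskGP
    from botorch.fit import fit_gpytorch_mll
    from botorch.acquisition import ExpectedImprovement
    from botorch.optim import optimize_acqf
    from gpytorch.kernels import MaternKernel, ScaleKernel
    from gpytorch.mlls import ExactMarginalLogLikelihood

    def objective(X):  # toy high-dimensional target (BoTorch maximizes)
        return -((X - 0.5) ** 2).sum(dim=-1)

    d = 100
    bounds = torch.stack([torch.zeros(d), torch.ones(d)]).double()
    train_X = torch.rand(20, d, dtype=torch.double)
    train_Y = objective(train_X).unsqueeze(-1)

    for _ in range(30):  # BO loop: fit GP, maximize EI, evaluate, append
        covar = ScaleKernel(MaternKernel(nu=2.5, ard_num_dims=d))
        gp = SingleTaskGP(train_X, train_Y, covar_module=covar)
        fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
        acq = ExpectedImprovement(gp, best_f=train_Y.max())
        cand, _ = optimize_acqf(acq, bounds=bounds, q=1,
                                num_restarts=10, raw_samples=256)
        train_X = torch.cat([train_X, cand])
        train_Y = torch.cat([train_Y, objective(cand).unsqueeze(-1)])

    print(f"best value found: {train_Y.max().item():.4f}")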
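
The vanishing-gradient mechanism the abstract describes can also be seen numerically: two uniform random points in [0,1]^d are roughly sqrt(d/6) apart, so an SE kernel with an O(1) length-scale collapses toward zero as d grows, leaving the marginal likelihood nearly flat in the length-scale. The sqrt(d) rescaling below is an illustrative choice to expose this mechanism, not necessarily the paper's exact initialization rule.

    # Illustration (assumed numbers, not from the paper): SE kernel values
    # between random points in [0,1]^d collapse as d grows when the
    # length-scale ell is a fixed O(1) initialization, starving the gradient
    # signal; scaling the init with sqrt(d) keeps k(x, y) non-degenerate.
    import numpy as np

    rng = np.random.default_rng(0)
    for d in [2, 20, 200]:
        x, y = rng.random(d), rng.random(d)
        sq_dist = np.sum((x - y) ** 2)             # ~ d/6 in expectation
        for ell in [1.0, np.sqrt(d)]:
            k = np.exp(-sq_dist / (2 * ell ** 2))  # SE kernel value
            print(f"d={d:4d}  ell={ell:7.3f}  k(x,y)={k:.3e}")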
