ICLR 2018


Workshop

Gradient-based Optimization of Neural Network Architecture

Will Grathwohl · Elliot Creager · Seyed Ghasemipour · Richard Zemel

East Meeting Level 8 + 15 #3

Neural networks can learn relevant features from data, but their predictive accuracy and propensity to overfit are sensitive to the values of the discrete hyperparameters that specify the network architecture (number of hidden layers, number of units per layer, etc.). Previous work optimized these hyperparameters via grid search, random search, and black-box optimization techniques such as Bayesian optimization. Bolstered by recent advances in gradient-based optimization of discrete stochastic objectives, we instead propose to directly model a distribution over possible architectures and use variational optimization to jointly optimize the network architecture and weights in one training pass. We discuss an implementation of this approach that estimates gradients via the Concrete relaxation, and show that it finds compact and accurate architectures for convolutional neural networks applied to the CIFAR10 and CIFAR100 datasets.
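To make the idea of jointly optimizing architecture and weights concrete, here is a minimal illustrative sketch, not the authors' implementation: it places a learnable distribution over the width of a single hidden layer, relaxes samples from it with the Concrete (Gumbel-Softmax) distribution, and backpropagates one loss through both the architecture logits and the weights. The names ConcreteLayerWidth and MaskedMLP, the cumulative-sum masking scheme, and the toy MLP setup are all assumptions for illustration; the paper targets convolutional architectures on CIFAR10/CIFAR100.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch (not the paper's code): a trainable distribution over the
# number of active hidden units, relaxed with the Concrete distribution so its
# parameters receive gradients from the task loss.

class ConcreteLayerWidth(nn.Module):
    def __init__(self, max_units, temperature=0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(max_units))  # variational parameters over width
        self.temperature = temperature

    def sample_mask(self):
        # Soft one-hot over possible widths via the Concrete (Gumbel-Softmax) relaxation.
        width_probs = F.gumbel_softmax(self.logits, tau=self.temperature, hard=False)
        # Turn "width = k" into a soft mask over units: unit j stays active with
        # (relaxed) probability that the sampled width exceeds j.
        keep = 1.0 - torch.cumsum(width_probs, dim=0) + width_probs
        return keep  # shape: (max_units,)

class MaskedMLP(nn.Module):
    def __init__(self, in_dim, max_units, out_dim):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, max_units)
        self.fc2 = nn.Linear(max_units, out_dim)
        self.width = ConcreteLayerWidth(max_units)

    def forward(self, x):
        mask = self.width.sample_mask()
        h = F.relu(self.fc1(x)) * mask  # inactive units are (softly) zeroed out
        return self.fc2(h)

# Weights and architecture logits are updated by the same optimizer in one pass.
model = MaskedMLP(in_dim=784, max_units=256, out_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = F.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```

In this toy setup, lowering the temperature over training sharpens the relaxed width distribution toward a discrete choice, so the learned mask approaches a hard architecture selection.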
