arXiv Analytics

arXiv:1802.07191 [cs.LG]

Neural Architecture Search with Bayesian Optimisation and Optimal Transport

Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, Eric Xing

Published 2018-02-11, Version 1

Bayesian Optimisation (BO) refers to a class of methods for global optimisation of a function $f$ which is accessible only via point evaluations. It is typically used in settings where $f$ is expensive to evaluate. A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model. Conventional BO methods have focused on Euclidean and categorical domains, which, in the context of model selection, only permit tuning scalar hyper-parameters of machine learning algorithms. However, with the surge of interest in deep learning, there is an increasing demand to tune neural network architectures. In this work, we develop NASBOT, a Gaussian process based BO framework for neural architecture search. To accomplish this, we develop a distance metric in the space of neural network architectures which can be computed efficiently via an optimal transport program. This distance might be of independent interest to the deep learning community, as it may find applications outside of BO. We demonstrate that NASBOT outperforms other alternatives for architecture search in several cross-validation based model selection tasks on multi-layer perceptrons and convolutional neural networks.
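The core idea of the abstract, an optimal-transport-style distance between architectures that feeds a Gaussian process kernel, can be illustrated with a toy sketch. This is not the paper's actual distance (NASBOT's metric handles layer masses and network topology); here each network is simplified to a hypothetical list of `(layer_type, num_units)` pairs, and the transport problem reduces to a minimum-cost assignment between layers, with the `layer_cost`, padding penalty, and kernel bandwidth all invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def layer_cost(a, b):
    """Mismatch cost between two layers: a type penalty plus a
    normalised gap in unit counts (both choices are illustrative)."""
    type_pen = 0.0 if a[0] == b[0] else 1.0
    unit_pen = abs(a[1] - b[1]) / max(a[1], b[1])
    return type_pen + unit_pen

def arch_distance(net_a, net_b, missing_pen=1.0):
    """Pad both networks to equal length, then solve the assignment
    problem (a special case of optimal transport with unit masses)
    to match layers at minimal total cost."""
    n = max(len(net_a), len(net_b))
    # Cost of matching a layer to a 'missing' slot in the shorter net.
    C = np.full((n, n), missing_pen)
    for i, la in enumerate(net_a):
        for j, lb in enumerate(net_b):
            C[i, j] = layer_cost(la, lb)
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum()

def arch_kernel(net_a, net_b, beta=1.0):
    """A GP covariance between architectures: distances enter through
    an exponentiated negative distance, as in distance-based kernels."""
    return np.exp(-beta * arch_distance(net_a, net_b))

mlp1 = [("relu", 64), ("relu", 64), ("softmax", 10)]
mlp2 = [("relu", 128), ("relu", 64), ("softmax", 10)]
print(arch_distance(mlp1, mlp1))  # 0.0 for identical architectures
print(arch_distance(mlp1, mlp2))  # positive for differing architectures
```

With such a kernel in hand, a GP surrogate over architectures can be fit to observed validation accuracies and an acquisition function optimised to pick the next architecture to train, exactly the BO loop the abstract describes for Euclidean domains.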

Related articles:
arXiv:2006.07556 [cs.LG] (Published 2020-06-13)
Neural Architecture Search using Bayesian Optimisation with Weisfeiler-Lehman Kernel
arXiv:1807.01332 [cs.LG] (Published 2018-07-03)
Multi-Level Feature Abstraction from Convolutional Neural Networks for Multimodal Biometric Identification
arXiv:1707.09641 [cs.LG] (Published 2017-07-30)
Visual Explanations for Convolutional Neural Networks via Input Resampling