Bayesian optimization is known to be difficult to scale to high dimensions, because the acquisition step requires solving a non-convex optimization problem in the same search space. In order to scale the method and keep its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems. We show that our algorithm converges globally and obtains a fast local rate when the function is strongly convex. Further, if the objective has an invariant subspace, our method automatically adapts to the effective dimension without changing the algorithm. Our method scales well to high dimensions and makes use of a global Gaussian process model. When combined with the SafeOpt algorithm...
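The core idea above — replacing the high-dimensional acquisition problem with iteratively chosen one-dimensional sub-problems — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy quadratic `objective`, the random choice of line direction, the box bounds, and the UCB coefficient are all assumptions made for the example, using scikit-learn's Gaussian process as the global surrogate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    # Toy quadratic stand-in; the real target is an expensive black box.
    return -np.sum((x - 0.3) ** 2)

rng = np.random.default_rng(0)
dim, n_iters, n_line = 5, 20, 101

# Small initial design and a global GP model over the full space.
X = rng.uniform(-1.0, 1.0, size=(3, dim))
y = np.array([objective(x) for x in X])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)

for _ in range(n_iters):
    gp.fit(X, y)
    x_best = X[np.argmax(y)]

    # Choose a random unit direction; the acquisition problem is then
    # restricted to the 1-D line through the incumbent along this direction.
    d = rng.standard_normal(dim)
    d /= np.linalg.norm(d)
    ts = np.linspace(-1.0, 1.0, n_line)
    cand = np.clip(x_best + ts[:, None] * d, -1.0, 1.0)

    # Maximize a UCB acquisition on the 1-D slice only.
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(mu + 2.0 * sd)]

    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
```

Because each inner problem is one-dimensional, the acquisition step stays cheap regardless of the ambient dimension, while the GP model itself remains global.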
Bayesian Optimization (BO) is a popular approach to the gl...
Bayesian Optimization (BO) is a surrogate-based global optimization strategy t...
Most machine learning methods require careful selection of hyper-parameters in order to train a high...
Bayesian optimization (BO) is one of the most powerful strategies to solve expensive black-box optim...
Scaling Bayesian optimisation (BO) to high-dimensional search spaces is an active and open research p...
Recent advances have extended the scope of Bayesian optimization (BO) to expensive-to-evaluate black...
Linear particle accelerators require outstanding synchronization of the subsystems to provide high q...
Bayesian optimization is known to be a method of choice when it comes to solvi...
Bayesian optimization (BO) is a powerful approach for seeking the global optimum of expensive black-...
Bayesian optimization forms a set of powerful tools that allows efficient black-box optimization and...
The increasing availability of structured but high dimensional data has opened new opportunities for...
We are concerned primarily with improving the practical applicability of Bayesian optimization. We m...
Bayesian optimization is a powerful technique for the optimization of expensive black-box functions....
Parameter tuning is a notoriously time-consuming task in accelerator facilities. As a tool for global ...
Bayesian Optimization (BO) is an effective method for optimizing expensive-to-evaluate black-box fun...