Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as its step size. Recent work has shown how the step size can itself be optimized alongside the model parameters by manually deriving expressions for "hypergradients" ahead of time. We show how to automatically compute hypergradients with a simple and elegant modification to backpropagation. This allows us to easily apply the method to other optimizers and hyperparameters (e.g. momentum coefficients). We can even recursively apply the method to its own hyper-hyperparameters, and so on ad infinitum. As these towers of optimizers grow taller, they become less sensitive to the initial choice of hyperparameters....
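As a rough illustration of the idea in the abstract above (treating the optimizer's step size as something that can itself be updated by gradient descent), here is a minimal NumPy sketch of hypergradient descent for plain SGD. The quadratic loss, the variable names (`alpha`, `beta`), and the constants are illustrative assumptions, not the paper's code; the paper's contribution is that the hypergradient is obtained automatically by a modification to backpropagation rather than from the hand-derived expression used here.

```python
# Minimal sketch (not the paper's implementation) of hypergradient descent.
# For plain SGD, w_t = w_{t-1} - alpha * g_{t-1}, so by the chain rule
# dL(w_t)/dalpha = grad L(w_t) . (-g_{t-1}); we use that hand-derived
# expression here, whereas the paper computes it automatically via backprop.
import numpy as np

def loss(w):
    return 0.5 * np.sum(w ** 2)   # toy quadratic objective (illustrative)

def grad(w):
    return w                      # its gradient

w = np.array([5.0, -3.0])
alpha = 0.01                      # step size (the hyperparameter being learned)
beta = 0.001                      # step size for the step size ("hyper-hyperparameter")
prev_g = np.zeros_like(w)

for t in range(100):
    g = grad(w)
    hypergrad = g @ (-prev_g)     # dL(w_t)/dalpha through the previous update
    alpha -= beta * hypergrad     # descend on the step size itself
    w -= alpha * g                # ordinary SGD step with the adapted alpha
    prev_g = g

print(loss(w), alpha)
```

Stacking another level (a "hyper-hyperparameter" update for `beta`) follows the same pattern, which is what the abstract means by towers of optimizers that grow less sensitive to the initial hyperparameter choice.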
Gradient-based methods are often used for optimization. They form the basis of several neural networ...
In this paper, we propose a simple, fast and easy to implement algorithm LOSSGRAD (locally optimal s...
The performance of optimizers, particularly in deep learning, depends considerably on their chosen h...
In recent years, there have been significant developments in the field of machine learning, with...
Machine learning research focuses on the development of methods capable of extracting useful infor...
We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validati...
Estimating hyperparameters has been a long-standing problem in machine learning. We consider the cas...
Machine learning algorithms have been used widely in various applications and areas. To fit a machin...
Hyperparameter optimization in machine learning is a critical task that aims to find the hyper-param...
Most models in machine learning contain at least one hyperparameter to control for model co...
Part I: Theory - Basics of Hyperparameter Optimization - Exhaustive Searches - Surrogate-based Op...
Neural networks have emerged as a powerful and versatile class of machine learning models, revolutio...
Hyperparameters in machine learning (ML) have received a fair amount of attention, and hyperparamete...
Hyperparameter tuning is a fundamental aspect of machine learning research. Setting up the infrastru...
Most machine learning algorithms are configured by a set of hyperparameters whose values must be car...