In this paper we study a family of gradient descent algorithms for approximating the regression function in reproducing kernel Hilbert spaces (RKHSs), the family being characterized by polynomially decreasing step sizes (learning rates). By solving a bias-variance trade-off we obtain an early stopping rule and probabilistic upper bounds on the convergence of the algorithms. We also discuss the implications of these results for classification, where fast convergence rates can be achieved for plug-in classifiers. Finally, we address connections with boosting, Landweber iterations, and online learning algorithms viewed as stochastic approximations of gradient descent.
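The scheme described in the abstract can be illustrated with a minimal sketch: kernel (functional) gradient descent for least-squares regression with step sizes eta_t = eta0 / (t+1)^theta, stopped early at a data-driven iteration. The Gaussian kernel, the specific parameter values, and the use of hold-out validation error to pick the stopping time are illustrative assumptions; the paper's actual stopping rule comes from a bias-variance trade-off, not a hold-out set.

```python
import numpy as np

def kernel_gd_early_stopping(X, y, X_val, y_val, sigma=1.0,
                             eta0=0.5, theta=0.5, max_iter=200):
    """Kernel gradient descent for least-squares regression with
    polynomially decaying step sizes eta_t = eta0 / (t+1)**theta.
    Early stopping: return the iterate minimizing hold-out error
    (a stand-in for the paper's bias-variance-based stopping rule)."""
    def gram(A, B):
        # Gaussian kernel matrix (kernel choice is an assumption here).
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    K = gram(X, X)            # n x n training Gram matrix
    K_val = gram(X_val, X)    # validation-by-training kernel values
    n = len(y)
    alpha = np.zeros(n)       # coefficients of f_t = sum_i alpha_i K(x_i, .)
    best = (np.inf, alpha.copy(), 0)
    for t in range(max_iter):
        eta = eta0 / (t + 1) ** theta
        residual = K @ alpha - y
        # Functional gradient step in the RKHS for the empirical risk.
        alpha = alpha - (eta / n) * residual
        val_err = np.mean((K_val @ alpha - y_val) ** 2)
        if val_err < best[0]:
            best = (val_err, alpha.copy(), t + 1)
    return best  # (validation MSE, coefficients, stopping iteration)
```

Because the step sizes decay polynomially, the iterates move quickly at first and then settle, so the stopping time trades off bias (stopping too early) against variance (running too long), which is the trade-off the paper analyzes.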
We investigate the construction of early stopping rules in the non-parametric ...
This paper considers the least-square online gradient descent algorithm in a reproducing kernel Hilb...
We propose an early stopping algorithm for learning gradients. The motivation is to choose “...
We propose a stochastic gradient descent algorithm for learning the gradient of a regression...
Early stopping is a form of regularization based on choosing when to stop running an iterative algor...
Learning gradients is one approach for variable selection and feature covariation estimation...
In this paper, an online learning algorithm is proposed as sequential stochastic approximation of a ...
The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKH...
We prove rates of convergence in the statistical sense for kernel-based least squares regression usi...
This paper studies an intriguing phenomenon related to the good generalization...