We study generalization properties of distributed algorithms in the setting of nonparametric regression over a reproducing kernel Hilbert space (RKHS). We first investigate distributed stochastic gradient methods (SGM), with mini-batches and multi-passes over the data. We show that optimal generalization error bounds (up to a logarithmic factor) can be retained for distributed SGM provided that the partition level is not too large. We then extend our results to spectral algorithms (SA), including kernel ridge regression (KRR), kernel principal component analysis, and gradient methods. Our results improve on the state-of-the-art theory; in particular, they show that distributed SGM has a smaller theoretical computational complexit...
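A minimal sketch of the setup the abstract describes, assuming the standard divide-and-conquer scheme: the sample is split across several local machines, each machine runs multi-pass mini-batch SGM for kernel least-squares on its own subsample, and the local estimators are averaged. This is not the authors' implementation; the Gaussian kernel, step size, batch size, and all function names below are illustrative assumptions.

```python
# Sketch of divide-and-conquer distributed SGM for kernel least-squares regression.
# Each "machine" runs multi-pass mini-batch SGM in the RKHS on its local subsample;
# the local estimators are averaged. All parameters here are illustrative.
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def local_kernel_sgm(X, y, passes=5, batch_size=8, step=0.5, rng=None):
    # Multi-pass mini-batch SGM in the RKHS: the local estimator is
    # f(x) = sum_i alpha_i * K(x_i, x), updated by functional gradient steps.
    rng = np.random.default_rng(0) if rng is None else rng
    n = X.shape[0]
    K = gaussian_kernel(X, X)                     # local Gram matrix
    alpha = np.zeros(n)
    n_batches = max(1, n // batch_size)
    for _ in range(passes):                       # multiple passes over the local data
        for idx in np.array_split(rng.permutation(n), n_batches):
            residual = K[idx] @ alpha - y[idx]    # f_t(x_i) - y_i on the mini-batch
            alpha[idx] -= step * residual / len(idx)
    return alpha

def distributed_sgm_predict(X, y, X_test, n_machines=4, **sgm_kwargs):
    # Divide-and-conquer: run local SGM on each data block, average the predictions.
    blocks = np.array_split(np.arange(X.shape[0]), n_machines)
    preds = []
    for k, idx in enumerate(blocks):
        alpha = local_kernel_sgm(X[idx], y[idx], rng=np.random.default_rng(k), **sgm_kwargs)
        preds.append(gaussian_kernel(X_test, X[idx]) @ alpha)
    return np.mean(preds, axis=0)                 # uniform averaging of local estimators

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-1.0, 1.0, size=(400, 1))
    y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(400)
    X_test = np.linspace(-1.0, 1.0, 50)[:, None]
    y_hat = distributed_sgm_predict(X, y, X_test, n_machines=4)
    print("test MSE:", np.mean((y_hat - np.sin(3.0 * X_test[:, 0])) ** 2))
```

Uniform averaging of the local predictors is why the partition level matters in the abstract's statement: if the data are split across too many machines, each local estimator is trained on too small a subsample for the average to retain the optimal rate.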
We prove rates of convergence in the statistical sense for kernel-based least squares regression usi...
In this paper, we use tools from rate-distortion theory to establish new upper bounds on the general...
In the context of statistical supervised learning, the noiseless linear model ...
We propose a stochastic gradient descent algorithm for learning the gradient of a regression...
In this paper, an online learning algorithm is proposed as sequential stochastic approximation of a ...
In this paper, we study regression problems over a separable Hilbert space with the square loss, cov...
Stochastic and data-distributed optimization algorithms have received lots of attention from the mac...
The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKH...
We consider stochastic optimization problems defined over reproducing kernel H...
In recent studies, the generalization properties for distributed learning and random features assume...
The first part of this dissertation considers distributed learning problems over networked agents. T...
Sketching and stochastic gradient methods are arguably the most commo...
In this paper, we investigate the impact of compression on stochastic gradient...