We study the block-coordinate forward–backward algorithm in which the blocks are updated in a random and possibly parallel manner, according to arbitrary probabilities. The algorithm allows different stepsizes along the block-coordinates to fully exploit the smoothness properties of the objective function. In the convex case and in an infinite-dimensional setting, we establish almost sure weak convergence of the iterates and the asymptotic rate o(1/n) for the mean of the function values. We derive linear rates under strong convexity and error bound conditions. Our analysis is based on an abstract convergence principle for stochastic descent algorithms which allows us to extend and simplify existing results.
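To make the scheme described above concrete, here is a minimal NumPy sketch of one randomized block-coordinate forward–backward iteration applied to a toy l1-regularized least-squares problem. The partition into two blocks, the activation probabilities, the 1/L_i stepsizes, and the helper names (block_fb_step, grad_f, prox_g) are illustrative assumptions for this sketch, not details taken from the paper.

import numpy as np

def block_fb_step(x, blocks, probs, stepsizes, grad_f, prox_g):
    # One randomized block-coordinate forward-backward iteration: each block i
    # is activated with probability probs[i] (possibly several blocks at once,
    # i.e. parallel updates), takes a gradient step with its own stepsize, and
    # then applies the proximal operator of its non-smooth term.
    g = grad_f(x)
    x_new = x.copy()
    for i, idx in enumerate(blocks):
        if np.random.rand() < probs[i]:
            x_new[idx] = prox_g(i, x[idx] - stepsizes[i] * g[idx], stepsizes[i])
    return x_new

# Toy problem (assumed for illustration): min_x 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.1
blocks = [np.arange(0, 10), np.arange(10, 20)]             # partition of the coordinates
L = [np.linalg.norm(A[:, idx], 2) ** 2 for idx in blocks]  # block Lipschitz constants
stepsizes = [1.0 / Li for Li in L]                         # block-wise stepsizes gamma_i = 1/L_i
probs = [0.7, 0.3]                                         # arbitrary activation probabilities

grad_f = lambda x: A.T @ (A @ x - b)
# Soft-thresholding: proximal operator of gamma*lam*||.||_1 on a block.
prox_g = lambda i, v, gamma: np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)

x = np.zeros(20)
for _ in range(500):
    x = block_fb_step(x, blocks, probs, stepsizes, grad_f, prox_g)

The per-block stepsizes follow the usual choice of the inverse block Lipschitz constants, which is what allows each block to exploit its own smoothness rather than a single global constant.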
Two types of low cost-per-iteration gradient descent methods have been extensively studied in parallel...
We consider the problem of minimizing block-separable (non-smooth) convex functions subject to linea...
We consider learning problems over training sets in which both the number of training ex...
We propose and analyze a new parallel coordinate descent method, 'NSync, in which at each iteration a...
Recent random block-coordinate fixed point algorithms are particularly well su...
We study and develop (stochastic) primal-dual block-coordinate descent methods for convex problems ba...
In this paper we analyze the randomized block-coordinate descent (RBCD) methods proposed in [11, 15]...
Reference [11] investigated the almost sure weak convergence of block-coordina...
In this paper we develop random block coordinate descent methods for minimizing large-scale linearl...
We analyze the convergence rate of the randomized Newton-like method introduced by Qu et al. (2016)...
This work proposes block-coordinate fixed point algorithms with applications t...
We consider convex optimization problems with structures that are suitable for sequential treatment ...