Coordinate descent with random coordinate selection is the current state of the art for many large-scale optimization problems. However, greedy selection of the steepest coordinate on smooth problems can yield convergence rates independent of the dimension n, requiring n times fewer iterations. In this paper, we consider greedy updates that are based on subgradients for a class of non-smooth composite problems, including L1-regularized problems, SVMs and related applications. For these problems we provide (i) the first linear rates of convergence independent of n, and show that our greedy update rule provides speedups similar to those obtained in the smooth case. This was previously conjectured to be true for a stronger greedy coordinate se...
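As a rough illustration of the kind of update this abstract describes, here is a minimal sketch of greedy coordinate descent on an ℓ1-regularized least-squares (Lasso) objective in plain NumPy. The selection rule below (largest minimum-norm subgradient) is one common greedy rule and not necessarily the exact rule analyzed in the paper; all function names and constants are illustrative assumptions.

```python
import numpy as np

def greedy_cd_lasso(A, b, lam, iters=200):
    """Greedy coordinate descent for  min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Each iteration picks the coordinate whose minimum-norm subgradient is
    largest in magnitude, then performs an exact coordinate minimization
    (soft-thresholding).  Illustrative sketch only.
    """
    n = A.shape[1]
    x = np.zeros(n)
    col_norms = (A ** 2).sum(axis=0)            # per-coordinate curvature ||A_i||^2
    residual = A @ x - b
    for _ in range(iters):
        grad = A.T @ residual                   # gradient of the smooth part
        # Minimum-norm subgradient of the composite objective, coordinate-wise.
        sub = np.where(x != 0,
                       grad + lam * np.sign(x),
                       np.sign(grad) * np.maximum(np.abs(grad) - lam, 0.0))
        i = int(np.argmax(np.abs(sub)))         # greedy coordinate selection
        # Exact minimization over coordinate i (soft-thresholding).
        z = x[i] - grad[i] / col_norms[i]
        x_new_i = np.sign(z) * max(abs(z) - lam / col_norms[i], 0.0)
        residual += A[:, i] * (x_new_i - x[i])  # keep the residual up to date
        x[i] = x_new_i
    return x

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(greedy_cd_lasso(A, b, lam=0.5)[:5])
```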
In this paper we develop random block coordinate descent methods for minimizing large-scale linearl...
For composite nonsmooth optimization problems, which are "regular enough", pro...
Coordinate descent methods usually minimize a cost function by updating a random decision variable (...
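For contrast with the greedy rule sketched above, the following is a minimal sketch of the random-coordinate update that such methods use, on a simple smooth quadratic; the function name, uniform sampling scheme, and test problem are assumptions for illustration.

```python
import numpy as np

def random_cd_quadratic(Q, c, iters=1000, seed=0):
    """Randomized coordinate descent for  min_x 0.5*x^T Q x - c^T x  (Q positive definite).

    At each step one coordinate is drawn uniformly at random and updated by
    exact coordinate-wise minimization (step 1/Q[i, i]).  Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.integers(n)          # uniform random coordinate
        g_i = Q[i] @ x - c[i]        # partial derivative at the current iterate
        x[i] -= g_i / Q[i, i]        # exact minimization along coordinate i
    return x

# Example: the iterates approach the solution of Q x = c.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])
print(random_cd_quadratic(Q, c), np.linalg.solve(Q, c))
```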
This work looks at large-scale machine learning, with a particular focus on greedy methods. A recent...
In this paper we propose new methods for solving huge-scale optimization problems. For problems of t...
Large-scale optimization problems appear quite frequently in data science and machine learning appli...
Large-scale ℓ1-regularized loss minimization problems arise in high-dimensional applications such as...
Stochastic coordinate descent, due to its practicality and efficiency, is increasingly popular in ma...
In this paper we present a novel randomized block coordinate descent method for the minimi...
Coordinate descent algorithms solve optimization problems by successively performing appro...
We propose a new randomized coordinate descent method for minimizing the s...
In this work we show that randomized (block) coordinate descent methods can be accelerated by parall...
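The following is a minimal sketch of the randomized block coordinate update such methods build on; in the parallel variants this snippet alludes to, several such block updates are carried out simultaneously on different workers. The block size, sampling scheme, and test problem are illustrative choices, not taken from the paper.

```python
import numpy as np

def random_block_cd(Q, c, block_size=4, iters=500, seed=0):
    """Randomized block coordinate descent for  min_x 0.5*x^T Q x - c^T x.

    At each step a random block S of coordinates is drawn and the objective is
    minimized exactly over that block while the remaining coordinates stay
    fixed.  Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.choice(n, size=block_size, replace=False)   # random block
        g_S = Q[S] @ x - c[S]                                # block gradient
        # Exact minimization over the block: solve Q[S, S] * delta = -g_S.
        x[S] -= np.linalg.solve(Q[np.ix_(S, S)], g_S)
    return x

# Example: a well-conditioned positive definite problem.
Q = np.diag(np.arange(1.0, 11.0)) + 0.1
c = np.ones(10)
print(np.max(np.abs(random_block_cd(Q, c) - np.linalg.solve(Q, c))))
```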
We study the problem of minimizing the sum of a smooth convex function and a convex block-separable ...
For composite nonsmooth optimization problems, the Forward-Backward algorithm achieves model identificat...
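To make "model identification" concrete, here is a minimal sketch of the Forward-Backward (proximal gradient, a.k.a. ISTA) iteration on an ℓ1-regularized least-squares problem: after finitely many iterations the support of the iterates typically stops changing, which is the identification behaviour referred to. The step size, function names, and problem data below are illustrative assumptions.

```python
import numpy as np

def forward_backward_l1(A, b, lam, iters=500):
    """Forward-Backward splitting (ISTA) for  min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Forward step: gradient descent on the smooth term.
    Backward step: proximal operator of lam*||.||_1 (soft-thresholding).
    Also records the support of each iterate so its stabilization can be observed.
    Illustrative sketch only.
    """
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    supports = []
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                  # forward (explicit) step
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # backward (prox) step
        supports.append(tuple(np.flatnonzero(x)))
    return x, supports

# Example: the support of the iterates freezes after some iteration.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 15))
x_true = np.zeros(15); x_true[[0, 4]] = [1.0, -2.0]
b = A @ x_true
x_hat, supports = forward_backward_l1(A, b, lam=0.1)
stable_from = next(k for k in range(len(supports))
                   if all(s == supports[-1] for s in supports[k:]))
print("final support:", supports[-1], "stable from iteration", stable_from)
```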
First-order methods have gained substantial interest over the past two decades because of their superi...