In this paper we show how to accelerate randomized coordinate descent methods and achieve faster convergence rates without paying per-iteration costs in asymptotic running time. In particular, we show how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box. In addition to providing a proof of convergence for this new general method, we show that it is numerically stable, efficiently implementable, and in certain regimes, asymptotically optimal. To highlight the computational power of this algorithm, we show how it can be used to create faster linear system solvers in several regimes: • We show how this method ach...
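For concreteness, here is a minimal sketch of one accelerated randomized coordinate descent scheme of this general flavor (an APPROX/Nesterov-style three-sequence update with uniform coordinate sampling), applied to a convex quadratic f(x) = 0.5*x'Ax - b'x. It is an illustrative variant only, not the specific efficient implementation developed in the paper; the function names, step rules as coded, and the test problem are assumptions, and each step is written naively in O(n) time.

```python
import numpy as np

def accelerated_cd_quadratic(A, b, num_iters=20000, seed=0):
    """Accelerated randomized coordinate descent on f(x) = 0.5*x'Ax - b'x,
    with A symmetric positive semidefinite and positive diagonal.
    APPROX/Nesterov-style update, one uniformly sampled coordinate per step."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    L = np.diag(A).copy()                 # coordinate Lipschitz constants L_i = A_ii
    x = np.zeros(n)
    z = x.copy()
    theta = 1.0 / n
    for _ in range(num_iters):
        y = (1.0 - theta) * x + theta * z
        i = rng.integers(n)
        g_i = A[i] @ y - b[i]             # i-th partial derivative of f at y
        dz = -g_i / (n * theta * L[i])    # coordinate step on the z-sequence
        z[i] += dz
        x = y
        x[i] += n * theta * dz            # x_{k+1} = y_k + n*theta_k*(z_{k+1} - z_k)
        theta = 0.5 * (np.sqrt(theta**4 + 4.0 * theta**2) - theta**2)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((80, 40))
    A = M.T @ M / 80 + 0.01 * np.eye(40)  # symmetric positive definite test matrix (assumed)
    b = rng.standard_normal(40)
    x = accelerated_cd_quadratic(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
```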
The coordinate descent (CD) method is a classical optimization algorithm that has seen a revival of ...
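For comparison, the unaccelerated method these entries build on can be sketched in a few lines, again on a convex quadratic and purely for illustration (names and the exact-minimization step rule are assumptions, not taken from any cited work):

```python
import numpy as np

def randomized_cd_quadratic(A, b, num_iters=20000, seed=0):
    """Plain randomized coordinate descent on f(x) = 0.5*x'Ax - b'x:
    sample a coordinate uniformly and minimize f exactly along it."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = np.zeros(n)
    for _ in range(num_iters):
        i = rng.integers(n)
        g_i = A[i] @ x - b[i]     # partial derivative of f along coordinate i
        x[i] -= g_i / A[i, i]     # exact one-dimensional minimization (step size 1/L_i)
    return x
```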
In this paper we develop random block coordinate descent methods for minimizing large-scale linearl...
In this work we show that randomized (block) coordinate descent methods can be accelerated by parall...
In this thesis we study iterative algorithms with simple sublinear time update steps, and we show ho...
We propose a new randomized coordinate descent method for minimizing the s...
In this paper we prove a new complexity bound for a variant of the Accelerated Coordinate Descent Method...
The most common class of methods for solving linear systems is the class of gradient algorithms, th...
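As a baseline for the gradient algorithms mentioned in this entry, here is a minimal steepest-descent sketch for a symmetric positive definite system Ax = b (illustrative only and not tied to any particular paper above; the tolerance and iteration cap are arbitrary assumptions):

```python
import numpy as np

def steepest_descent(A, b, tol=1e-8, max_iters=10000):
    """Steepest descent for Ax = b with A symmetric positive definite:
    step along the residual r = b - Ax with the exact line-search step size."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iters):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))   # exact minimizer along the residual direction
        x = x + alpha * r
    return x
```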
In this paper we prove a new complexity bound for a variant of the accelerated coordinate descent me...
We develop a novel, fundamental and surprisingly simple randomized iterative method for solving cons...
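One well-known special case of such randomized row-action methods is the randomized Kaczmarz update, sketched below for a consistent system Ax = b with rows sampled proportionally to their squared norms. This is only an illustration of the simplest instance, not the general method of the entry above.

```python
import numpy as np

def randomized_kaczmarz(A, b, num_iters=20000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b: at each step,
    project the current iterate onto the hyperplane <a_i, x> = b_i of one
    randomly chosen equation."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.sum(A * A, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()   # sample rows ~ squared row norms
    x = np.zeros(n)
    for _ in range(num_iters):
        i = rng.choice(m, p=probs)
        residual_i = b[i] - A[i] @ x
        x = x + (residual_i / row_norms_sq[i]) * A[i]
    return x
```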
The standard randomized sparse Kaczmarz (RSK) method is an algorithm to compute sparse solutions of ...
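A minimal sketch of a soft-thresholded (sparse) Kaczmarz iteration of the kind this entry refers to is given below, for a consistent system Ax = b with a sparse solution. The shrinkage parameter lam, the update as coded, and all names are assumptions for illustration; consult the cited work for the exact method and its analysis.

```python
import numpy as np

def soft_threshold(z, lam):
    """Component-wise shrinkage S_lam(z) = sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def randomized_sparse_kaczmarz(A, b, lam=0.1, num_iters=50000, seed=0):
    """Sparse Kaczmarz sketch: Kaczmarz-style updates on an auxiliary
    variable z, with the primal iterate obtained by soft thresholding."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.sum(A * A, axis=1)
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(num_iters):
        i = rng.integers(m)
        z = z + ((b[i] - A[i] @ x) / row_norms_sq[i]) * A[i]
        x = soft_threshold(z, lam)
    return x
```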
Accelerated coordinate descent is widely used in optimization due to its cheap per-iteratio...
When solving massive optimization problems in areas such as machine learning, it is a common practic...
Asynchronous methods for solving systems of linear equations have been researched since Chazan and M...
The Kaczmarz method for solving linear systems of equations is an iterative algorithm that has found...
The Kaczmarz method for solving linear systems of equations is an iterative algorithm that ...