This work addresses the problem of regularized linear least squares (RLS) with non-quadratic separable regularization. Despite being frequently deployed in many applications, the RLS problem is often hard to solve using standard iterative methods. In a recent work [10], a new iterative method called Parallel Coordinate Descent (PCD) was devised. We provide herein a convergence analysis of the PCD algorithm, and also introduce a form of the regularization function, which permits analytical solution to the coordinate optimization. Several other recent works [6, 12, 13], which considered the deblurring problem in a Bayesian methodology, also obtained element-wise optimization algorithms. We show that these three methods are essentially equiva...
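The coordinate optimization described above admits a closed-form update for common separable regularizers. As a minimal sketch (not the PCD algorithm of [10] itself), the following shows cyclic coordinate descent for the special case of an l1 regularizer, where each scalar subproblem is solved analytically by soft-thresholding; the function names and the choice of regularizer are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form scalar minimizer of 0.5*(x - v)**2 + t*|x|."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def coordinate_descent_rls(A, b, lam, n_iter=100):
    """Cyclic coordinate descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Each coordinate update is exact: with all other coordinates fixed,
    the 1-D problem in x_j is quadratic plus |x_j|, solved by
    soft-thresholding. A sketch for dense A; not tuned for large scale.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                      # running residual b - Ax
    col_sq = (A ** 2).sum(axis=0)      # squared column norms
    for _ in range(n_iter):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            # gradient-free exact update: minimize over x_j alone
            rho = A[:, j] @ r + col_sq[j] * x[j]
            x_new = soft_threshold(rho, lam) / col_sq[j]
            r += A[:, j] * (x[j] - x_new)   # keep residual consistent
            x[j] = x_new
    return x
```

For orthogonal A the method converges in one sweep, since the coordinates decouple; e.g. with A the identity, each x_j is simply the soft-thresholded entry of b.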
Large-scale optimization problems appear quite frequently in data science and machine learning appli...
We propose a novel algorithm for greedy forward feature selection for regularized least-squares (RL...
This paper proposes a method for parallel block coordinate-wise minimization of convex functions. Ea...
In this paper, we analyze the convergence of two general classes of optimization algorithms for regu...
We propose a new sparse model construction method aimed at maximizing a model's generalisation capab...
The problem of finding sparse solutions to underdetermined systems of linear equations arises in sev...
Many computer vision problems are formulated as an objective function consisting of a sum of functio...
We study the performance of a family of randomized parallel coordinate descent methods for minimizin...
We present a generic framework for parallel coordinate descent (CD) algorithms that includes, as sp...
We propose a new randomized coordinate descent method for minimizing the s...
The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the no...
ABSTRACT: Multiple undersampled images of a scene are often obtained by using a charge-coupled devic...