Over the past decades, Linear Programming (LP) has been widely used in different areas and is considered one of the mature technologies in numerical optimization. However, the complexity offered by state-of-the-art algorithms (i.e., the interior-point method and the primal and dual simplex methods) is still unsatisfactory for problems in machine learning with a huge number of variables and constraints. In this paper, we investigate a general LP algorithm based on the combination of Augmented Lagrangian and Coordinate Descent (AL-CD), giving an iteration complexity of O((log(1/ε))²) with O(nnz(A)) cost per iteration, where nnz(A) is the number of non-zeros in the m×n constraint matrix A; in practice, one can further reduce the cost per iteration to the or...
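Since the abstract above is the core technical statement in this listing, here is a minimal sketch of the general augmented-Lagrangian-plus-coordinate-descent template it describes, applied to a standard-form LP min cᵀx s.t. Ax = b, x ≥ 0. This is an illustration under stated assumptions, not the authors' AL-CD implementation; the function name al_cd_lp and all parameters (rho, the sweep counts, the tolerance) are hypothetical choices.

```python
import numpy as np
import scipy.sparse as sp

def al_cd_lp(A, b, c, rho=1.0, outer_iters=100, inner_sweeps=10, tol=1e-6):
    """Augmented Lagrangian with cyclic coordinate descent for
    min c^T x  s.t.  Ax = b, x >= 0  (illustrative sketch)."""
    A = sp.csc_matrix(A)                  # CSC gives O(nnz(a_j)) column access
    m, n = A.shape
    x = np.zeros(n)
    y = np.zeros(m)                       # multipliers for Ax = b
    r = A @ x - b                         # maintained residual r = Ax - b
    col_sqnorm = np.asarray(A.multiply(A).sum(axis=0)).ravel()
    for _ in range(outer_iters):
        # Inner subproblem: min_{x >= 0} c^T x + y^T r + (rho/2)||r||^2,
        # solved approximately by sweeps of coordinate descent.
        for _ in range(inner_sweeps):
            for j in range(n):
                if col_sqnorm[j] == 0.0:  # skip empty columns
                    continue
                lo, hi = A.indptr[j], A.indptr[j + 1]
                rows, vals = A.indices[lo:hi], A.data[lo:hi]
                # partial derivative of the augmented Lagrangian w.r.t. x_j
                g = c[j] + vals @ (y[rows] + rho * r[rows])
                # exact coordinate minimizer over x_j >= 0 (the AL is
                # quadratic in x_j with curvature rho * ||a_j||^2)
                x_new = max(0.0, x[j] - g / (rho * col_sqnorm[j]))
                d = x_new - x[j]
                if d != 0.0:
                    x[j] = x_new
                    r[rows] += d * vals   # update residual in O(nnz(a_j))
        y += rho * r                      # dual (multiplier) update
        if np.linalg.norm(r) <= tol:
            break
    return x, y
```

Each coordinate update touches only the non-zeros of column a_j, so one full sweep costs O(nnz(A)), matching the per-iteration cost quoted above; the fixed penalty rho and the inner/outer iteration budgets are simplifications relative to what a practical solver would tune.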
We propose a new sparse model construction method aimed at maximizing a model’s generalisation capab...
Generalized Linear Models (GLM) are a wide class of regression and classificati...
We present a primal-dual augmented Lagrangian algorithm for NLP. The algorithm...
Linear classification has achieved complexity linear in the data size. However, in many applications...
The augmented Lagrangian and Newton methods are used to simultaneously solve the primal and dual lin...
Large-scale optimization problems appear quite frequently in data science and machine learning appli...
The Vu-Condat algorithm is a standard method for finding a saddle point of a L...
In many applications, data appear with a huge number of instances as well as features. Linear Suppor...
The sparse nonlinear programming (SNP) problem has wide applications in signal and image processing,...
In solving a linear system with iterative methods, one is usually confronted with the dilemma of hav...
Elastic Net Regularizers have shown much promise in designing sparse classifiers for linear classifi...
The problem of finding sparse solutions to underdetermined systems of linear equations arises in sev...
In the framework of linear programming, we propose a theoretical justification for regularizing the ...