We propose a general regularized empirical risk minimization framework for sparse learning that accommodates popular regularizers such as the lasso, the group lasso, and the trace norm. Within this framework, we develop two optimization algorithms. The first adds squared penalties to the empirical risk and is solved with a subgradient-based L-BFGS quasi-Newton method. The second imposes constraints on sparsity-inducing norms and is solved with a gradient projection method. A notable advantage of both approaches is that the dual objective value is easy to evaluate, which is useful for tracking the progress of optimization and deciding when to terminate.
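To make the stopping rule concrete, here is a minimal sketch of the constrained route for the squared-loss lasso case: gradient projection onto an l1-ball, with a Fenchel duality gap used as the termination test. Everything below is an illustrative assumption rather than the paper's code: the names project_l1_ball and projected_gradient, the problem min 0.5*||Ax - b||^2 subject to ||x||_1 <= tau, and the fixed step size 1/L. For this problem the gap at a feasible x reduces to x.g + tau*||g||_inf with g = A^T(Ax - b); it is nonnegative, bounds the suboptimality, and vanishes at the optimum.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto the l1-ball of radius tau (Duchi et al., 2008)."""
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                       # magnitudes, descending
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (cssv - tau))[0][-1]
    theta = (cssv[rho] - tau) / (rho + 1.0)            # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, b, tau, max_iter=5000, tol=1e-6):
    """Minimize 0.5*||Ax - b||^2 s.t. ||x||_1 <= tau by gradient projection,
    stopping once the Fenchel duality gap falls below tol."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2             # 1/L, L = Lipschitz const of grad
    for _ in range(max_iter):
        g = A.T @ (A @ x - b)                          # gradient of the smooth loss
        gap = x @ g + tau * np.linalg.norm(g, np.inf)  # duality gap: >= 0, 0 at optimum
        if gap <= tol:
            break
        x = project_l1_ball(x - step * g, tau)
    return x, gap

# Hypothetical usage on synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat, gap = projected_gradient(A, b, tau=5.0)
```

The same recipe extends to the other regularizers the framework covers by swapping in the corresponding projection and dual norm: the l_inf norm in the gap becomes the largest group norm for the group lasso and the spectral norm for the trace norm.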