Feature selection in learning to rank has recently emerged as a crucial issue. Whereas several preprocessing approaches have been proposed, only a few works have focused on integrating feature selection into the learning process. In this work, we propose a general framework for feature selection in learning to rank using SVM with a sparse regularization term. We investigate both classical convex regularizations, such as $\ell_1$ or weighted $\ell_1$, and non-convex regularization terms, such as the log penalty, the Minimax Concave Penalty (MCP), or the $\ell_p$ pseudo-norm with $p<1$. Two algorithms are proposed: first, an accelerated proximal approach for solving the convex problems; second, a reweighted $\ell_1$ scheme to add...
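The two building blocks named in this abstract can be sketched briefly. The convex $\ell_1$ case admits a closed-form proximal step (soft-thresholding), and non-convex penalties such as the log penalty are commonly handled by reweighting the $\ell_1$ term at each iteration. A minimal illustration, assuming standard definitions of these operators (the function names here are illustrative, not taken from the paper):

```python
import numpy as np

def prox_l1(w, t):
    """Proximal operator of t * ||w||_1 (soft-thresholding).

    Shrinks every coefficient toward zero by t and sets small
    ones exactly to zero, which is what induces sparsity.
    """
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def log_penalty_weights(w, eps=1e-3):
    """Per-feature weights for one reweighted-l1 iteration.

    Derived from the log penalty sum_i log(|w_i| + eps): features
    with small current coefficients receive large weights and are
    pushed harder toward zero on the next l1 solve.
    """
    return 1.0 / (np.abs(w) + eps)

w = np.array([1.5, -0.2, 0.05, -3.0])
print(prox_l1(w, 0.5))  # coefficients below 0.5 in magnitude are zeroed
```

In an accelerated proximal scheme, `prox_l1` is applied after each gradient step on the smooth loss; in the reweighted scheme, each outer iteration re-solves a weighted $\ell_1$ problem using the weights above.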
Learning sparse models from data is an important task in all those frameworks where relevant informa...
Many supervised learning problems are considered difficult to solve either because of the redundant ...
We develop an exact penalty approach for feature selection in machine learning...
To select the most useful and the least redundant features to be used in ranki...
To select the most useful and the least redundant features to be used in ranking fu...
Sparse estimation methods are aimed at using or obtaining parsimonious represe...
In supervised classification, data representation is usually considered at the...
Learning to rank algorithms deal with a very large number of features to aut...
This paper introduces a new support vector machine (SVM) formulation to obtain sparse solutions in...
We present a feature selection method for solving a sparse regularization problem, which has a composit...
Regularization, or penalization, is a simple yet effective method to promote some desired solution s...
Sparse learning problems, known as feature selection problems or variable selection problems, are a ...