Abstract. A general model is proposed for studying ranking problems. We investigate learning methods based on empirical minimization of the natural estimates of the ranking risk. The empirical estimates take the form of a U-statistic. Inequalities from the theory of U-statistics and U-processes are used to obtain performance bounds for the empirical risk minimizers. Convex risk minimization methods are also studied to give a theoretical framework for ranking algorithms based on boosting and support vector machines. Just as in binary classification, fast rates of convergence are achieved under certain noise assumptions. General sufficient conditions guaranteeing fast rates of convergence are proposed in several special cases.
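The natural empirical estimate described above can be sketched concretely. The following is a minimal illustration, not the paper's own code: it assumes the common special case of a real-valued scoring function `score`, and estimates the ranking risk by the U-statistic that averages the pairwise mis-ranking indicator 1{(s(x_i) − s(x_j))(y_i − y_j) < 0} over all pairs of the sample (pairs with equal labels contribute zero).

```python
from itertools import combinations

def empirical_ranking_risk(score, sample):
    """U-statistic estimate of the ranking risk: the fraction of pairs
    whose ordering by `score` disagrees with the ordering of the labels.

    `sample` is a list of (x, y) observations; `score` maps x to a real.
    """
    pairs = list(combinations(sample, 2))
    # A pair is mis-ranked when scores and labels point in opposite directions;
    # ties in the labels (y_i == y_j) make the product zero and count no error.
    mis_ranked = sum(
        1
        for (x_i, y_i), (x_j, y_j) in pairs
        if (score(x_i) - score(x_j)) * (y_i - y_j) < 0
    )
    return mis_ranked / len(pairs)
```

For example, on the sample `[(1, 0), (2, 1), (3, 1)]` the identity score ranks every mixed-label pair correctly (risk 0), while the reversed score `lambda x: -x` mis-ranks both mixed pairs out of three total pairs (risk 2/3).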
We formulate a local form of the bipartite ranking problem where the goal is t...
Multipartite ranking is a statistical learning problem that consists in ordering the obse...
Let $\cF$ be a set of $M$ classification procedures with values in $[-1,1]$. Given a loss fu...
Proceedings of the 18th Annual Conference on Learning Theory, COLT 2005, Bertinoro, Italy, 2005. Lec...
Abstract. Algorithms for learning to rank can be inefficient when they employ risk functions that use...
Statistical Learning Theory has been growing rapidly over the last ten years. The introduction of efficie...
In a wide range of statistical learning problems such as ranking, clustering o...
The problem of ranking arises ubiquitously in almost every aspect of life, and in particular in Mach...
The problem of ranking/ordering instances, instead of simply classifying them, has recently ...
Abstract—The ranking problem has become increasingly important in modern applications of statistica...
The purpose of these lecture notes is to provide an introduction to the general theory of empirical ...
New classification algorithms based on the notion of 'margin' (e.g. Support Vector Machines, Boostin...
This paper investigates the theoretical relation between loss criteria and the optimal ranking funct...
Abstract. We are interested in supervised ranking with the following twist: our goal is to design al...
In this thesis, we discuss two issues in the learning to rank area, choosing effective objective lo...