We provide novel theoretical insights on structured prediction in the context of efficient convex surrogate loss minimization with consistency guarantees. For any task loss, we construct a convex surrogate that can be optimized via stochastic gradient descent and we prove tight bounds on the so-called “calibration function” relating the excess surrogate risk to the actual risk. In contrast to prior related work, we carefully monitor the effect of the exponential number of classes in the learning guarantees as well as on the optimization complexity. As an interesting consequence, we formalize the intuition that some task losses make learning harder than others, and that the classical 0-1 loss is ill-suited for structure...
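The abstract above describes minimizing a convex surrogate of a task loss by stochastic gradient descent and decoding with the task loss in mind. As a minimal illustrative sketch (not the paper's actual construction — the setup, names, and quadratic surrogate choice here are assumptions for a small multiclass toy problem):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): d features, k classes,
# and a task-loss matrix L where L[i, j] is the cost of predicting
# class i when the true class is j.
d, k, n = 5, 4, 200
L = 1.0 - np.eye(k)                       # 0-1 task loss, for simplicity
W_true = rng.normal(size=(k, d))
X = rng.normal(size=(n, d))
y = np.argmax(X @ W_true.T, axis=1)       # synthetic labels

# One common calibrated choice is a quadratic surrogate
#   Phi(f, y) = 0.5 * ||f + L[:, y]||^2,
# whose population minimizer is f*(x) = -E[L[:, Y] | X = x], so
# predicting argmax_i f_i(x) minimizes the expected task loss.
W = np.zeros((k, d))                      # linear scores f(x) = W x
lr = 0.1
for _ in range(50):                       # SGD over shuffled samples
    for i in rng.permutation(n):
        f = W @ X[i]
        grad_f = f + L[:, y[i]]           # gradient of Phi w.r.t. f
        W -= lr * np.outer(grad_f, X[i])

pred = np.argmax(X @ W.T, axis=1)
train_err = np.mean(pred != y)            # well below chance (0.75 for k=4)
```

Because the surrogate minimizer estimates minus the conditional expected task loss, the final argmax is a loss-aware decoding step: swapping in a different matrix `L` makes the same code trade off prediction errors asymmetrically.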
The design of convex, calibrated surrogate losses, whose minimization entails consistency with respe...
Large-margin structured estimation methods minimize a convex upper bound of loss functions. While th...
Learning with non-modular losses is an important problem when sets of predictions are made simultane...
Many of the classification algorithms developed in the machine learning literature, including the s...
A commonly used approach to multiclass classification is to replace the 0-1 loss with a convex sur...
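To make the line above concrete: the hinge loss and the base-2 logistic loss are the classic convex surrogates, each a pointwise upper bound on the 0-1 loss as a function of the margin, which is the starting point for calibration analyses. A quick numerical check (illustrative code, not from any of the papers listed):

```python
import numpy as np

# Margins m = y * f(x); the 0-1 loss is 1[m <= 0], which is non-convex
# and discontinuous, so one minimizes a convex upper bound instead.
m = np.linspace(-3.0, 3.0, 601)
zero_one = (m <= 0).astype(float)
hinge = np.maximum(0.0, 1.0 - m)          # SVM surrogate
logistic = np.log2(1.0 + np.exp(-m))      # base 2, so it upper-bounds 0-1

# Both surrogates dominate the 0-1 loss everywhere on this grid.
assert np.all(hinge >= zero_one)
assert np.all(logistic >= zero_one)
```

At zero margin (index 300 of the grid) both surrogates equal exactly 1, matching the 0-1 loss at its discontinuity.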
We consider large margin estimation in a broad range of prediction models where inference involves s...
We consider the broad framework of supervised learning, where one gets examples of objects together ...