In many applications, the training data, from which one needs to learn a classifier, is corrupted with label noise. Many standard algorithms such as SVM perform poorly in the presence of label noise. In this paper, we investigate the robustness of risk minimization to label noise. We prove a sufficient condition on a loss function for the risk minimization under that loss to be tolerant to uniform label noise. We show that the 0-1 loss, sigmoid loss, ramp loss and probit loss satisfy this condition, though none of the standard convex loss functions satisfy it. We also prove that, by choosing a sufficiently large value of a parameter in the loss function, the sigmoid loss, ramp loss and probit loss can be made tolerant to nonuniform label noise...
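The sufficient condition referred to here is, in this line of work, the symmetry property ℓ(f(x), +1) + ℓ(f(x), −1) = K for a constant K and all scores f(x). A minimal numerical sketch, assuming that is the condition in question (the function names below are illustrative, not the paper's code):

```python
import numpy as np

# Sketch: a loss passes the symmetry condition when
# loss(f, +1) + loss(f, -1) equals the same constant for every score f.

def sigmoid_loss(f, y, beta=1.0):
    return 1.0 / (1.0 + np.exp(beta * y * f))

def ramp_loss(f, y):
    return np.clip((1.0 - y * f) / 2.0, 0.0, 1.0)

def hinge_loss(f, y):
    return np.maximum(0.0, 1.0 - y * f)

f = np.linspace(-3.0, 3.0, 7)
for name, loss in [("sigmoid", sigmoid_loss),
                   ("ramp", ramp_loss),
                   ("hinge", hinge_loss)]:
    print(name, np.round(loss(f, +1.0) + loss(f, -1.0), 4))
# sigmoid and ramp print a constant vector of ones; hinge does not,
# consistent with the claim that standard convex losses fail the condition.
```

The 0-1 and probit losses satisfy the same identity (their sums are constant at 1), which is why they appear in the list above alongside the sigmoid and ramp losses.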
The paper brings together methods from two disciplines: machine learning theory and robust statistics...
In this paper, we study a classification problem in which sample labels are random...
This letter addresses the robustness problem when learning a large margin classifier in the presence...
In this paper, we explore noise-tolerant learning of classifiers. We formulate the problem as follows...
Robust learning in the presence of label noise is an important problem of current interest. Training data...
In this paper, we theoretically study the problem of binary classification in the presence of random...
In many applications of classifier learning, training data suffers from label noise. Deep networks a...
Convex potential minimisation is the de facto approach to binary classification. However, Long and Servedio...
Labelling of data for supervised learning can be costly and time-consuming, and the risk of incorporating...
Matrix concentration inequalities have attracted much attention in diverse applications such as line...
The loss function plays an important role in data classification. Many loss functions have been proposed ...
Obtaining a sufficient number of accurate labels to form a training set for learning a classifier ca...
In many real-world classification problems, the labels of training examples are randomly corrupted. ...
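If this entry is the line of work that handles class-conditional noise rates via the method of unbiased loss estimators, the correction takes a standard closed form. The sketch below uses that formula with illustrative names (corrected_loss, rho_pos, rho_neg); attributing it to this particular abstract is an assumption:

```python
import numpy as np

# Assumed setting: the observed label y_tilde flips from +1 with probability
# rho_pos and from -1 with probability rho_neg (rho_pos + rho_neg < 1).
# The corrected loss is unbiased: its expectation over label flips equals
# the loss on the clean label, so minimizing it on noisy data targets the
# clean risk.

def hinge(f, y):
    return np.maximum(0.0, 1.0 - y * f)

def corrected_loss(f, y_tilde, rho_pos, rho_neg, loss=hinge):
    rho_y = np.where(y_tilde == 1, rho_pos, rho_neg)      # flip rate of y_tilde
    rho_not_y = np.where(y_tilde == 1, rho_neg, rho_pos)  # flip rate of -y_tilde
    num = (1.0 - rho_not_y) * loss(f, y_tilde) - rho_y * loss(f, -y_tilde)
    return num / (1.0 - rho_pos - rho_neg)
```

When rho_pos = rho_neg = 0, the correction reduces to the plain loss, which is a quick sanity check on the formula.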
We study the effect of imperfect training data labels on the performance of classification methods. ...