Noisy labels often occur in vision datasets, especially when they are obtained from crowdsourcing or Web scraping. We propose a new regularization method that enables learning robust classifiers in the presence of noisy data. To achieve this goal, we introduce a new adversarial regularization scheme based on the Wasserstein distance. Using this distance allows us to take into account specific relations between classes by leveraging the geometric properties of the label space. Our Wasserstein Adversarial Regularization (WAR) encodes a selective regularization, which promotes smoothness of the classifier between some classes, while preserving sufficient complexity of the decision boundary between others. We first discuss how and why adversarial regularization ...
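As a rough illustration of the idea, the sketch below computes an entropy-regularized (Sinkhorn) approximation of the Wasserstein distance between two classifier outputs under a hand-crafted class-to-class ground cost. The cost matrix, the eps and iteration values, and the sinkhorn_wasserstein helper are illustrative assumptions, not the implementation from the paper.

```python
import numpy as np

def sinkhorn_wasserstein(p, q, cost, eps=0.1, n_iters=100):
    """Entropy-regularized Wasserstein distance between two discrete
    distributions p and q over the classes; cost[i, j] is the price of
    moving probability mass from class i to class j."""
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iters):                 # Sinkhorn fixed-point updates
        v = q / (K.T @ u)
        u = p / (K @ v)
    transport = np.diag(u) @ K @ np.diag(v)  # approximate transport plan
    return np.sum(transport * cost)

# Hypothetical 3-class example: classes 0 and 1 are semantically close
# (cheap to confuse), class 2 is far from both.
cost = np.array([[0.0, 0.1, 1.0],
                 [0.1, 0.0, 1.0],
                 [1.0, 1.0, 0.0]])

p_clean = np.array([0.7, 0.2, 0.1])   # classifier output on a clean input
p_adv   = np.array([0.2, 0.7, 0.1])   # output on a perturbed version of it

# Small penalty: the prediction only moved between two "close" classes.
print(sinkhorn_wasserstein(p_clean, p_adv, cost))
```

A small ground cost between two classes makes it cheap for predictions to move between them, which is how such a regularizer can selectively smooth the decision boundary between similar classes while keeping dissimilar classes sharply separated.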