To ensure stable classification, a robust supervised minimum distance classifier based on the minimax (in the Huber sense) estimate of location is designed for the class of generalized Gaussian pattern distributions with bounded variance. This classifier has the following low-complexity form: with relatively small variances it is the nearest mean rule (NMean), and with relatively large variances it is the nearest median rule (NMed). The proposed classifier performs well under both heavy- and short-tailed pattern distributions.
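The NMean/NMed rule above can be sketched as a minimum distance classifier that switches its class prototype from the sample mean to the sample median as the observed variance grows. This is a minimal illustrative sketch, not the paper's exact estimator: the variance threshold `var_threshold` and the pooled-variance criterion are assumptions chosen for demonstration.

```python
import numpy as np

def fit_prototypes(X, y, var_threshold=1.0):
    """Compute one prototype per class: the sample mean when the average
    per-feature variance is small (nearest mean rule, NMean), and the
    coordinate-wise median otherwise (nearest median rule, NMed).
    The threshold value is illustrative, not taken from the paper."""
    prototypes = {}
    for label in np.unique(y):
        Xc = X[y == label]
        if Xc.var(axis=0).mean() < var_threshold:
            prototypes[label] = Xc.mean(axis=0)        # NMean branch
        else:
            prototypes[label] = np.median(Xc, axis=0)  # NMed branch
    return prototypes

def classify(x, prototypes):
    """Assign x to the class whose prototype is nearest in Euclidean distance."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))
```

The median prototype is what makes the large-variance branch robust: unlike the mean, it is insensitive to heavy-tailed outliers in the training sample.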
We analyze the connection between minimizers with good generalizing properties and high local entrop...
Many datasets are collected automatically, and are thus easily contaminated by outliers. In order to...
Some recent contributions to robust inference are presented. Firstly, the clas...
In the framework of Huber's minimax variance approach to designing robust estimates of localizat...
Supervised classification techniques use training samples to learn a classification rule with small ...
Supervised classification techniques use training samples to learn a classification rule with small...
We propose a robust probability classifier model to address classification problems with data uncert...
When constructing a classifier, the probability of correct classification of future data points shou...
We study the problem of estimating an unknown parameter $\theta$ from an observation of a random var...
A novel method to improve the generalization performance of the Minimum Classification Error (MCE) /...
The maximum entropy principle advocates evaluating events' probabilities using a distribution that...
Conventional distance-based classifiers use standard Euclidean distance, and so can suffer from exce...
In this paper, we propose a method, called the nearest feature midpoint (NFM), for pattern classific...
We introduce new estimators for robust machine learning bas...
We present an extension of Vapnik's classical empirical risk minimizer (ERM) where the empirical ris...