Evidential calibration methods for binary classifiers improve upon probabilistic calibration methods by explicitly representing the calibration uncertainty due to the amount of training (labelled) data. This justified yet undesirable uncertainty can be reduced by adding training data, which are in general costly to obtain. Hence the need for strategies that, given a pool of unlabelled data, point to interesting data to be labelled, i.e., data inducing a greater drop in uncertainty than a random selection would. Two such strategies are considered in this paper and applied to an ensemble of binary SVM classifiers on some classical binary classification datasets. Experimental results show the interest of the approach.
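The two selection strategies themselves are not detailed in the abstract. As an illustration of the general setting it describes, the following is a minimal sketch of pool-based selection with an ensemble of binary SVMs, assuming scikit-learn; the disagreement score, the synthetic dataset and the ensemble size are all assumptions standing in for the paper's actual strategies and evidential uncertainty measure.

# Sketch: pool-based selection of a point to label, compared with random selection.
# Ensemble disagreement is used here as a stand-in uncertainty score (assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.utils import resample

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_lab, y_lab = X[:100], y[:100]      # small labelled training set
X_pool = X[100:500]                  # pool of unlabelled candidates

# Train an ensemble of binary SVMs on bootstrap replicates of the labelled data.
ensemble = []
for b in range(10):
    Xb, yb = resample(X_lab, y_lab, random_state=b)
    ensemble.append(SVC(probability=True, random_state=b).fit(Xb, yb))

# Variance of the predicted positive-class probability across the ensemble
# serves as the uncertainty score for each candidate in the pool.
probs = np.stack([clf.predict_proba(X_pool)[:, 1] for clf in ensemble])
uncertainty = probs.var(axis=0)

query_idx = int(np.argmax(uncertainty))      # informed selection
random_idx = int(rng.integers(len(X_pool)))  # random baseline

print(f"queried point {query_idx} (score {uncertainty[query_idx]:.3f}) "
      f"vs random point {random_idx} (score {uncertainty[random_idx]:.3f})")

In this sketch the queried point is the one on which the ensemble disagrees most, i.e., the candidate whose label is expected to reduce uncertainty more than a randomly drawn one.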