We show that the Confusion Entropy, a measure of performance in multiclass problems, has a strong (monotone) relation with the multiclass generalization of a classical metric, the Matthews Correlation Coefficient. Analytical results are provided for the limit cases of general no-information (n-face dice rolling) and of binary classification. Computational evidence supports the claim in the general case.
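The two quantities the abstract relates can be computed directly from a confusion matrix. The sketch below is illustrative only, not the paper's code: it implements Gorodkin's multiclass generalization of the Matthews Correlation Coefficient and the Confusion Entropy in the form given by Wei et al. (per-class misclassification probabilities, entropy in base 2(N-1), class weights from row-plus-column mass); the function names and the plain-list matrix representation are choices made here, not taken from the paper.

```python
import math

def multiclass_mcc(C):
    """Gorodkin's multiclass MCC (R_K). Rows of C are true classes, columns predictions."""
    N = len(C)
    s = sum(sum(row) for row in C)                           # total number of samples
    c = sum(C[k][k] for k in range(N))                       # correctly classified samples
    t = [sum(C[k]) for k in range(N)]                        # true occurrences per class (row sums)
    p = [sum(C[i][k] for i in range(N)) for k in range(N)]   # predictions per class (column sums)
    num = c * s - sum(t[k] * p[k] for k in range(N))
    den = math.sqrt(s * s - sum(x * x for x in p)) * \
          math.sqrt(s * s - sum(x * x for x in t))
    return num / den if den else 0.0

def confusion_entropy(C):
    """Confusion Entropy (CEN) as defined by Wei et al., with logarithms in base 2(N-1)."""
    N = len(C)
    total = sum(sum(row) for row in C)
    base = 2.0 * (N - 1)
    cen = 0.0
    for j in range(N):
        denom = sum(C[j][k] + C[k][j] for k in range(N))     # row j plus column j mass
        if denom == 0:
            continue
        w = denom / (2.0 * total)                            # weight of class j
        h = 0.0
        for k in range(N):
            if k == j:
                continue
            for x in (C[j][k], C[k][j]):                     # both misclassification directions
                if x > 0:
                    pr = x / denom
                    h -= pr * math.log(pr, base)             # 0*log(0) taken as 0
        cen += w * h
    return cen

perfect = [[10, 0, 0], [0, 10, 0], [0, 0, 10]]
noisy = [[8, 1, 1], [1, 8, 1], [1, 1, 8]]
print(multiclass_mcc(perfect), confusion_entropy(perfect))   # 1.0 and 0.0
print(multiclass_mcc(noisy), confusion_entropy(noisy))       # MCC drops, CEN rises
```

On such examples the claimed monotone relation is visible in miniature: moving mass off the diagonal lowers MCC and raises CEN together.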
Even if measuring the outcome of binary classifications is a pivotal task in machine learning and st...
The paper presents a new proposal for a single overall measure, the diagonal modified confusion entr...
Different performance measures are used to assess the behaviour, and to carry out the comparison, of...
Evaluating binary classifications is a pivotal task in statistics and machine learning, because it c...
For evaluating the classification model of an information system, a proper measure is usually needed...
Categorical classifier performance is typically evaluated with respect to error rate, expressed as a...
To evaluate binary classifications and their confusion matrices, scientific researchers can employ s...
To assess the quality of a binary classification, researchers often take advantage of a four-entry c...
We develop two tools to analyze the behavior of multiple-class, or multi-class, classifiers by means...
The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models w...
The generalization error, or probability of misclassification, of ensemble classifiers has been show...