Different performance measures are used to assess the behaviour of classifiers in Machine Learning and to carry out comparisons between them. Many measures have been defined in the literature; among them is the Confusion Entropy (CEN), a measure inspired by Shannon's entropy. In this work we introduce a new measure, MCEN, obtained by modifying CEN to avoid its unwanted behaviour in the binary case, which disqualifies it as a suitable performance measure in classification. We compare MCEN with CEN and other performance measures, presenting analytical results in some particularly interesting cases, as well as some heuristic computational experimentation.
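To make the measure under discussion concrete, the following is a minimal sketch of Confusion Entropy as we recall its definition from Wei et al. (2010): per-class misclassification probabilities normalized by each class's row-plus-column mass, per-class entropies in base 2(N-1), and a mass-weighted overall sum. The function name `cen` and the plain list-of-lists matrix representation are our own choices, not from the abstract above.

```python
import math

def cen(C):
    """Confusion Entropy (CEN) of an N x N confusion matrix C
    (rows = true class, columns = predicted class), as recalled
    from Wei et al. (2010)."""
    N = len(C)
    total = sum(sum(row) for row in C)
    base = 2 * (N - 1)  # logarithm base used in the per-class entropies

    def plog(p):
        # entropy term with the usual convention 0 * log 0 = 0
        return p * math.log(p, base) if p > 0 else 0.0

    overall = 0.0
    for j in range(N):
        # mass associated with class j: its row sum plus its column sum
        mass_j = sum(C[j][k] + C[k][j] for k in range(N))
        if mass_j == 0:
            continue
        # per-class confusion entropy over the off-diagonal entries
        cen_j = 0.0
        for k in range(N):
            if k == j:
                continue
            cen_j -= plog(C[j][k] / mass_j)  # true j predicted as k
            cen_j -= plog(C[k][j] / mass_j)  # true k predicted as j
        P_j = mass_j / (2 * total)  # weight of class j
        overall += P_j * cen_j
    return overall
```

A perfect classifier (all mass on the diagonal) gives CEN = 0, while a binary classifier that inverts every label gives CEN = 1 under this formulation, illustrating the binary-case behaviour the abstract refers to.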
Many algorithms of machine learning use an entropy measure as optimization criterion. Among the wide...
Categorical classifier performance is typically evaluated with respect to error rate, expressed as a...
In this work, we analyze the cross-entropy function, widely used in classifiers both as a performanc...
In 2010, a new performance measure to evaluate the results obtained by algorithms of data classifica...
We show that the Confusion Entropy, a measure of performance in multiclass problems has a strong (mo...
For evaluating the classification model of an information system, a proper measure is usually needed...
We develop two tools to analyze the behavior of multiple-class, or multi-class, classifiers by means...
An MLP classifier outputs a posterior probability for each class. With noisy data, classification be...
The paper presents a new proposal for a single overall measure, the diagonal modified confusion entr...
In the machine learning literature we can find numerous methods to solve classification problems. We...
The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models w...
While neural network binary classifiers are often evaluated on metrics such as Accuracy and $F_1$-Sc...