<p>A series of classifiers can be constructed using different numbers of top features from the mRMR tables during the IFS process. The plot shows the performances of the different classifiers, with <i>MCC</i> as the main measurement on the y-axis. Because the classifiers used different numbers of features, each classifier is represented on the x-axis by the number of features it used. In Dataset 1, the highest <i>MCC</i> (0.7046) was achieved with 118 features. This finding demonstrated that the classifier adopting the top 118 features in the mRMR table for Dataset 1 performed the best, and these 118 features were regarded as the optimal feature set for Dataset 1. Similarly, <i>MCC</i> peaks of 0.7322 and 0.7267 were obtained at 35 a...
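<p>To make the IFS procedure described in this caption concrete, the following is a minimal sketch rather than the authors' original pipeline: it assumes a feature matrix whose columns are already ordered by the mRMR ranking, a generic scikit-learn classifier, and cross-validated <i>MCC</i> as the selection criterion. The names <code>X_ranked</code>, <code>y</code>, and <code>step</code> are placeholders introduced here for illustration.</p>

```python
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import matthews_corrcoef
from sklearn.svm import SVC

def ifs_curve(X_ranked, y, step=1, cv=10):
    """Incremental feature selection over an mRMR-ranked feature matrix.

    X_ranked : array of shape (n_samples, n_features) whose columns are
               ordered from the highest- to the lowest-ranked feature.
    Returns the IFS curve as a list of (number_of_features, MCC) points.
    """
    curve = []
    for k in range(step, X_ranked.shape[1] + 1, step):
        clf = SVC(kernel="rbf")  # any base classifier can be plugged in here
        y_pred = cross_val_predict(clf, X_ranked[:, :k], y, cv=cv)
        curve.append((k, matthews_corrcoef(y, y_pred)))
    return curve

def optimal_feature_number(curve):
    """The prefix length of the ranking at which MCC peaks."""
    return max(curve, key=lambda point: point[1])[0]
```

<p>Plotting the returned pairs reproduces an IFS curve of the kind shown in these figures, and the peak of that curve identifies the optimal feature set.</p>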
<p>Three classifiers, Gaussian Naive Bayes (GNB) in panel (a), SVM in panel (b) and sparse MRF in pa...
<p>(A) ROC curves obtained on IRMA category 14. (B) ROC curves obtained on IRMA category 16. (C) ROC...
<p>Bar charts: (a) comparison of dataset-wise average accuracies, and (b) comparison of dataset-wise ...
<p>When the first 220 features in the ranked feature list were used, <i>MCC</i> reached the maximum ...
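<p>Because <i>MCC</i> is the quantity plotted on these IFS curves, it may help to recall how it is computed from a binary confusion matrix. The snippet below is a plain illustration of the standard definition, not code from the study; the counts in the example are made up.</p>

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (chance level) to +1 (perfect).
    """
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Illustrative counts only: 40 TP, 45 TN, 5 FP, 10 FN.
print(f"{mcc(40, 45, 5, 10):.2f}")   # 0.70
```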
<p>We used an IFS curve to determine the number of features finally used in mRMR selection. Predicti...
<p>Graph showing the change of the MCC values versus the number of features used in each trained model for e...
<p>When 65 features were used, a peak of MCC was obtained. These 65 features were considered as ...
<p>In detail, (A) shows the IFS-curve for the dataset <i>S</i><sub>1</sub>; (B) shows the IFS-curve ...
<p>The IFS curve using the MCC as its y-axis and the number of features participating in classificat...
<p>In the IFS curve, the x-axis is the number of features used for classification, and the y-axis is...
<p>By adding features one by one from higher to lower rank, 315 different feature subsets are obtain...
<p>Classification accuracy is more sensitive to the number of meta-samples than to the regula...
<p>The IFS curves were drawn based on the data in <a href="http://www.plosone.org/article/info:doi/1...
Many multi-label classifiers provide a real-valued score for each class. A well-known design approac...
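A common design choice with such score-based multi-label classifiers is to apply a decision threshold to each class score; the sketch below illustrates only that generic idea (the function name, array shapes, and threshold value are assumptions made for this example, not taken from the text).

```python
import numpy as np

def multilabel_predict(scores, threshold=0.0):
    """Turn per-class real-valued scores into a binary label matrix.

    scores : array of shape (n_samples, n_classes); larger means more confident.
    A class is assigned to a sample whenever its score exceeds the threshold,
    so each sample may receive zero, one, or several labels.
    """
    return (scores > threshold).astype(int)

# Example: two samples scored against three classes.
scores = np.array([[ 1.3, -0.2, 0.4],
                   [-0.7,  0.9, 0.1]])
print(multilabel_predict(scores))
# [[1 0 1]
#  [0 1 1]]
```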
<p>The classifier is SVM. (A) ROC curves obtained on IRMA category 14. (B) ROC curves obtained on IR...