<p>(A) is updated from 2 to 10 with step 1. (B) is updated from 1.1 to 3 with step 0.1.</p>
<p>Average precision of top-ranked results for different methods after the ninth feedback.</p>
<p>The average accuracy, AUPRC, and MCC of the supervised learning models for the external dataset.</p>
<p>Summary of mean accuracy and standard error for the significant main effects.</p>
<p>Mean average precision values of the proposed MLLE evaluated with different </p>
<p>Comparison of the mean average precision of the MLLE, LLE, MSE, PCA, and LE methods.</p>
<p>Micro-average classification accuracy of ML classifiers vs. baseline for comparisons A to F.</p>
<p>Accuracy values at different m values for participants K3, K6, and L1. The mean accuracy value pea...</p>
<p>Mean accuracy for each level of transparency in three conditions (n = 32).</p>
<p>The accuracy of different m values on PDB1075 (five-fold cross-validation).</p>
<p>The average accuracy, AUPRC, and MCC of the supervised learning models for the main dataset.</p>
<p>The average precision for active learning versus most confident (MC) prediction selection.</p>
<p>Comparisons of average classifier precision with existing FS methods [29, 37, 39, 48].</p>
<p>(A) The algorithms are evaluated with (B) The algorithms are evaluated with (C) The algorithms ...</p>
<p>Classification accuracy of MLPD and SLPD with respect to different predefined thresholds.</p>
<p>Results on the average precision, recall, and F-measure with varying in the best case using featu...</p>