<p>For each combination and metric, the mean and standard deviation are shown. The best performance for each metric is shown in bold. Identical capital letters in the superscripts indicate no statistically significant difference between feature extractors (rows), while identical lowercase letters indicate no significant difference between supervised learning techniques (columns).</p>
<p><b>Bold</b> indicates the best performance.</p><p>Average performance and ROUGE scores (average ±...
<p>The best F-measure for each dataset is marked in bold. Each algorithm is implemented on 30 indepe...
Performance metrics (r2, AUC) and their standard deviations are computed on an independent test set....
<p>The figure shows the development of the mean AUC on the test set depending on the amount of avail...
<p>(In the following five result tables, the two best results for each metric are shown in bold.)</p>
<p>Predictive performance of RF and TT for different values of , tuned value for and . Best AUC val...
<p>Result on the average precision, recall, and F-measure with varying in the best case using featu...
<p>The bar charts show the average AUCs for different classification methods. Five pathway-based met...
<p>For the MLR, ANN and RF methods, 95% confidence intervals of the difference between the indicator...
<p>The sensitivity, specificity and accuracy of each of the three classifiers (Linear SVM, RBF SVM, NN) ...
Abstract: We have comparatively assessed five regression performance metrics namely, Mean Absolute E...
<p>Each value is averaged over 100 independent runs with random divisions of training set and probe...
The right panel shows the four performance metrics (ACC, MCC, SN and SP) for each method.</...
Mean (standard deviation) is reported for the performance metrics. Previous results using 51 pairs [...
<p>Comparison of the average precision rates, recall rates and F1 values for the different classific...