<p><sup>a</sup>HumVar-trained PolyPhen-2. This version performed better than the HumDiv-trained PolyPhen-2 (data not shown).</p><p><sup>b</sup>Performance scores are computed using the variants predicted at the 0.95 confidence level.</p><p><sup>c</sup>Performance scores in parentheses are for the predictor when unreliable cases are included.</p><p><sup>d</sup>Sens, sensitivity; Spec, specificity; Acc, accuracy; OPM, overall performance measure.</p><p>Performance scores of different prediction methods.</p>
<p>Prediction measures for the classifier at k = 5 built by SSVM and LLR: true positive rate (TPR), ...
<p>For each method, the accuracy, the sensitivity, the specificity and the Matthews correlation coef...
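The accuracy, sensitivity, specificity, and Matthews correlation coefficient reported in these tables all derive from the four confusion-matrix counts. As a minimal sketch (the function name and example counts are ours, not from the paper):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity and MCC from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    mcc_denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / mcc_denom if mcc_denom else 0.0
    return accuracy, sensitivity, specificity, mcc

acc, sens, spec, mcc = confusion_metrics(tp=40, fp=10, tn=45, fn=5)
print(round(acc, 2), round(sens, 2), round(spec, 3), round(mcc, 2))
# → 0.85 0.89 0.818 0.7
```

MCC is preferred over raw accuracy for the class-imbalanced datasets common in variant prediction, since it stays near zero for a classifier that only exploits the majority class.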
<p>This table reports average <i>F</i><sub>1</sub>, Precision, Recall scores of five methods in thre...
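Average <i>F</i><sub>1</sub>, Precision, and Recall over several methods are typically macro-averages: per-class scores computed one-vs-rest, then averaged with equal weight per class. A minimal sketch (our own helper, not the paper's code):

```python
def macro_prf(y_true, y_pred):
    """Macro-averaged precision, recall and F1 over the classes in y_true."""
    classes = sorted(set(y_true))
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

p, r, f1 = macro_prf(["A", "A", "B", "B", "C"], ["A", "B", "B", "B", "C"])
```

Macro-averaging gives rare classes the same weight as frequent ones; micro-averaging (pooling counts across classes) would instead weight by class frequency.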
Performance metrics (r2, AUC) and their standard deviations are computed on an independent test set....
<p>Best and worst performance are selected based on MCC.</p>
<p>Prediction performance of PredSAV classifiers in comparison with six other prediction tools on th...
<p>Prediction performance of 10-fold cross-validation based on different encoding methods.</p>
<p>AR: Accuracy rate, SE: Sensitivity, SP: Specificity, PPV: Positive predictive value, NPV: Negativ...
Prediction performance does not always reflect the estimation behaviour of a method. High error in e...
<p>The experiment was conducted 10 times using 10-fold cross-validation performed on the training se...
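Running 10-fold cross-validation ten times with reshuffled folds, as described here, averages out the variance due to any single random partition. A minimal self-contained sketch (the helper names, the toy `fit`/`score` callables, and the fold scheme are our assumptions, not the paper's protocol):

```python
import random

def kfold_indices(n, k, rng):
    """Shuffle indices 0..n-1 and partition them into k near-equal folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def repeated_cv(X, y, fit, score, k=10, repeats=10, seed=0):
    """Mean held-out score over `repeats` runs of k-fold cross-validation."""
    rng = random.Random(seed)
    scores = []
    for _ in range(repeats):
        for fold in kfold_indices(len(X), k, rng):
            test = set(fold)
            train = [i for i in range(len(X)) if i not in test]
            model = fit([X[i] for i in train], [y[i] for i in train])
            scores.append(score(model, [X[i] for i in fold],
                                 [y[i] for i in fold]))
    return sum(scores) / len(scores)
```

Any classifier can be plugged in via `fit` (returns a fitted model) and `score` (evaluates it on the held-out fold); each repeat reshuffles before splitting, so the 100 fold scores are not all drawn from one partition.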
<p>Comparison of prediction accuracy on four multiclass classification datasets by varying the numbe...
<p>Comparison of prediction performance of classifiers in terms of F2 score, at different levels hie...
<p>Comparison of prediction performance of classifiers in terms of AUC score, at different levels hi...
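The AUC scores compared here have a useful probabilistic reading: AUC equals the probability that a randomly chosen positive example is ranked above a randomly chosen negative one. A minimal sketch of that pairwise-rank definition (function name and example scores are ours):

```python
def auc(scores_pos, scores_neg):
    """AUC as P(random positive outranks random negative); ties count half."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

print(auc([0.9, 0.8, 0.4], [0.5, 0.3]))  # 5 of 6 pairs correctly ordered
```

This O(n·m) form is fine for illustration; production implementations use a sort-based rank statistic instead, which is equivalent but runs in O((n+m) log(n+m)).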