<p>(A) ROC curves for Multifind, Multifind trained without ensemble defect Z score, RNAz, LocARNATE+RNAz and Dynalign/SVM on the first testing set. (B) The high-specificity range of the ROC curves for Multifind, Multifind trained without ensemble defect Z score, RNAz, LocARNATE+RNAz and Dynalign/SVM on the first testing set.</p>
<p>Each curve represents a changing antigen band intensity limit from <i>L</i> = 1 (top right) to <i...
(A) Performance of the model in the training set, which showed an AUC value of 0.768, an optimal cut...
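<p>As a hedged illustration of how an AUC and an optimal cutoff of the kind this caption reports can be computed, the sketch below uses scikit-learn on synthetic data; the variables y_true and y_score and the Youden's J criterion are assumptions of this example, not details taken from the study.</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)             # binary labels (toy data)
y_score = y_true * 0.5 + rng.normal(0, 0.5, 200)  # noisy scores (toy data)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)

# One common choice of "optimal cutoff": the threshold maximizing
# Youden's J statistic (sensitivity + specificity - 1).
j = tpr - fpr
best = int(np.argmax(j))
print(f"AUC = {roc_auc:.3f}, cutoff = {thresholds[best]:.3f}, "
      f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")
</code></pre>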
<p>ROC curves for the three experimental settings (Multi-instance Learning, Multi-instance Learning ...
<p>(A) ROC curves for Multifind, Multifind trained without ensemble defect Z score, RNAz, LocARNATE+...
<p>(A) ROC curves for Multifind, Multifind trained without ensemble defect Z score, RNAz, LocARNATE+...
<p>(A) ROC curves for Multifind, Multifind trained without ensemble defect Z score, RNAz, LocARNATE+...
<p>(A) ROC curves for Multifind, Multifind trained without ensemble defect Z score, RNAz, LocARNATE+...
<p>(A) ROC curves for Multifind, Multifind trained without ensemble defect Z score, RNAz, LocARNATE+...
<p>The F1, F2, and F3 scores were computed for every point on the ROC curve from the training datase...
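<p>A minimal sketch of the procedure this caption describes: sweeping the ROC thresholds and computing F-beta scores (beta = 1, 2, 3) at every point on the curve. The synthetic data and the threshold handling are illustrative assumptions, not the study's actual pipeline.</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_curve, fbeta_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=300)
y_score = y_true * 0.6 + rng.normal(0, 0.5, 300)

# roc_curve returns one candidate threshold per ROC point.
_, _, thresholds = roc_curve(y_true, y_score)
for beta in (1, 2, 3):
    scores = [fbeta_score(y_true, (y_score >= t).astype(int),
                          beta=beta, zero_division=0)
              for t in thresholds]
    best = int(np.argmax(scores))
    print(f"F{beta}: best = {scores[best]:.3f} "
          f"at threshold = {thresholds[best]:.3f}")
</code></pre>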
<p>The ROC curves for evaluating the quality of the algorithms over the entire test datasets.</p>
<p>The ROC curves of the RF and SVM in internal five-fold cross validation for (a) Model I, (b) Mode...
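<p>For orientation, a minimal sketch of ROC evaluation of a random forest and an SVM under five-fold cross validation, using out-of-fold probabilities on a synthetic dataset; the models, features, and pooled-AUC summary are assumptions of this example, not the paper's setup.</p>
<pre><code>from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the study's feature matrix and labels.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC(probability=True, random_state=0))]:
    # Out-of-fold class probabilities from five-fold cross validation,
    # summarized here as a single pooled AUC.
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    print(f"{name}: five-fold CV AUC = {roc_auc_score(y, proba):.3f}")
</code></pre>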
<p>ROC curves showing true-positive rates (sensitivity) plotted against the false-positive rate for ...
<p>ROC curve and prediction parameters for optimal thresholds in all tested methods.</p>
<p>The ROC curves of the RF and SVM in four independent external validations for (a) Model I, (b) Mo...
<p>ROC curves of TPR versus FPR for optimal sets of biomarkers, averaged over SSVM models. T...
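<p>One common way to average ROC curves over an ensemble, as this caption describes for the SSVM models, is vertical averaging: interpolate each model's TPR onto a shared FPR grid and take the mean. The sketch below illustrates this on synthetic scores; the grid size and the stand-in models are assumptions of the example.</p>
<pre><code>import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=400)

fpr_grid = np.linspace(0.0, 1.0, 101)   # shared FPR grid
tprs = []
for _ in range(10):                      # ten stand-in "SSVM" models
    y_score = y_true * 0.5 + rng.normal(0, 0.6, 400)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    tprs.append(np.interp(fpr_grid, fpr, tpr))

mean_tpr = np.mean(tprs, axis=0)         # vertically averaged ROC curve
print(f"mean TPR at FPR = 0.10: {mean_tpr[10]:.3f}")
</code></pre>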