AUC scores of the PseDNC models with the variation of λ on the Human dataset.
(A) ROC curves of the human same-prediction result from ten-fold cross-validation. Solid lines repre...
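To make the per-fold ROC procedure referenced in this caption concrete, here is a minimal sketch of plotting one ROC curve per fold of a ten-fold cross-validation. The synthetic data and the logistic-regression classifier are illustrative assumptions, not the models behind these captions.

```python
# Minimal sketch (not the authors' pipeline): per-fold ROC curves from
# ten-fold cross-validation; dataset and classifier are illustrative stand-ins.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for fold, (tr, te) in enumerate(cv.split(X, y), start=1):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    fpr, tpr, _ = roc_curve(y[te], clf.predict_proba(X[te])[:, 1])
    plt.plot(fpr, tpr, alpha=0.5, label=f"fold {fold} (AUC = {auc(fpr, tpr):.2f})")

plt.plot([0, 1], [0, 1], "k--")  # chance diagonal for reference
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend(fontsize="small")
plt.show()
```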
AUC analysis for the top 15 multidimensional biomarkers in the training and testing set.
AUC and AUPRC performance graphs of each model in internal cross-validation are available in Fig C i...
AUC scores of the models with the variation of δ on the Human dataset.
The average AUC scores of individual feature-based models using different values for λ, evaluated...
AUC scores of predictive models fit with varying α, for α ∈ {0.01, 0.05, 0.1, 0.2, ...
AUC scores for three different classifiers with different types of markers for Metagene and CoMi ...
AUC values of different combinations of feature scores for the training and test datasets in a generic pre...
Determined from ten-fold cross-validation experiments. The AUC scores are normalized to 100.
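A brief sketch of what "AUC from ten-fold cross-validation, normalized to 100" can mean in practice: average the per-fold AUC values and rescale by 100. The dataset and model below are placeholder assumptions for illustration only.

```python
# Minimal sketch: mean AUC from ten-fold cross-validation, rescaled to 0-100;
# the data and classifier here are placeholders, not the caption's models.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                       cv=10, scoring="roc_auc")
print(f"AUC normalized to 100: {100 * aucs.mean():.1f}")
```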
The absolute values of correlation coefficients of AUC scores yielded by individual feature-based...
See http://www.plosone.org/article/info:doi/10.1371/journal.pone.0119721#sec002 ...
AUC on T1D discovery and validation datasets generated from logistic regression models.
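The discovery/validation setup referenced here can be sketched as fitting a logistic regression on a discovery split and reporting AUC on both splits. The synthetic data below stands in for the T1D cohorts, which are not available in these captions; the split sizes are arbitrary assumptions.

```python
# Minimal sketch: logistic regression fitted on a "discovery" split and
# scored by AUC on discovery and held-out "validation" data; synthetic data
# is a stand-in for the T1D cohorts.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=20, random_state=0)
X_disc, X_val, y_disc, y_val = train_test_split(X, y, test_size=0.3,
                                                stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_disc, y_disc)
for name, Xs, ys in [("discovery", X_disc, y_disc), ("validation", X_val, y_val)]:
    print(f"{name} AUC: {roc_auc_score(ys, model.predict_proba(Xs)[:, 1]):.3f}")
```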
Values between 0.7-0.8 are generally considered good, 0.8-0.9 is considered excellent, whilst 0.9-1 i...
An additional curve for our consensus predictions was added separately based on the performance of t...
AUC and accuracy values for the best model of each classifier when classifying apnea and baseline se...