<p>Each value is averaged over 100 independent runs with random divisions of the training set and probe set. Bold font indicates that MI outperforms the corresponding prediction index.</p><p>Comparison of the prediction accuracy measured by AUC on ten real-world networks.</p>
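<p>The protocol this caption describes (a random train/probe split per run, one AUC estimate per split, mean over 100 runs) can be sketched as follows. This is a hedged illustration, not the authors' code: the common-neighbours score, the 10% probe fraction, and the karate-club graph are placeholder assumptions chosen to make the example runnable; the estimator is the standard sampling form of link-prediction AUC, the probability that a probe link outscores a nonexistent link, with ties counted as 0.5.</p>
<pre><code>
import random
import networkx as nx

def split_edges(g, probe_frac, rng):
    """Randomly divide the edge set into training and probe edges."""
    edges = list(g.edges())
    rng.shuffle(edges)
    k = int(len(edges) * probe_frac)
    return edges[k:], edges[:k]

def auc(train_edges, probe_edges, nodes, score, rng, n_samples=10_000):
    """Sampling AUC: P(probe link scores above a nonexistent link); ties count 0.5."""
    existing = set(map(frozenset, train_edges)) | set(map(frozenset, probe_edges))
    nodes = list(nodes)
    hits = 0.0
    for _ in range(n_samples):
        u, v = rng.choice(probe_edges)
        while True:  # draw a nonexistent link uniformly at random
            x, y = rng.sample(nodes, 2)
            if frozenset((x, y)) not in existing:
                break
        sp, sn = score(u, v), score(x, y)
        hits += 1.0 if sp > sn else 0.5 if sp == sn else 0.0
    return hits / n_samples

g = nx.karate_club_graph()       # placeholder network (assumption)
runs = []
for seed in range(100):          # 100 independent random divisions
    rng = random.Random(seed)
    train, probe = split_edges(g, 0.1, rng)
    gt = nx.Graph()
    gt.add_nodes_from(g.nodes()) # keep isolated nodes so scoring never fails
    gt.add_edges_from(train)
    cn = lambda u, v: len(list(nx.common_neighbors(gt, u, v)))
    runs.append(auc(train, probe, g.nodes(), cn, rng))
print(sum(runs) / len(runs))     # the averaged value reported per table cell
</code></pre>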
<p>Predictive performance of RF and TT for different values of , with tuned values for and . Best AUC val...
<p><b>The AUC (ROC score) is the area under the ROC curve, normalized to 100 for a ...
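<p>Under the usual ROC convention (an assumption here, since the caption is truncated), normalizing the score to 100 reads as</p>
\[
\mathrm{AUC} \;=\; 100\cdot \Pr\!\big[s(x^{+}) > s(x^{-})\big] \;+\; 50\cdot \Pr\!\big[s(x^{+}) = s(x^{-})\big],
\]
<p>so a predictor that ranks every positive above every negative scores 100, while random guessing scores 50.</p>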
<p>Correlations between prediction performances of methods, measured as the average AUC over phenoty...
<p>Each value is averaged over 100 independent runs with random divisions of training set and probe...
<p>The AUC and precision results compared with baseline methods on 13 real networks.</p>
<p>The best performance for each network is highlighted in bold. Each number is obtained by averaging...
<p>The AUC results compared with the state-of-the-art methods on 13 real networks.</p>
<p>(training period: 0–30 (, ), testing period: 30–60 (, ), parameter for link prediction: , dataset...
<p>Average accuracy, recall, precision, MCC, and AUC measures over 10 folds for the three effector pred...
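<p>A minimal sketch of computing these five metrics over 10 folds, assuming a scikit-learn-style classifier. The random-forest model and the synthetic dataset are stand-ins, since the truncated caption does not specify the effector-prediction features or the predictors used.</p>
<pre><code>
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, matthews_corrcoef,
                             precision_score, recall_score, roc_auc_score)
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # placeholder data

scores = {m: [] for m in ("accuracy", "recall", "precision", "MCC", "AUC")}
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = RandomForestClassifier(random_state=0).fit(X[tr], y[tr])
    pred = clf.predict(X[te])              # hard labels for accuracy/recall/precision/MCC
    prob = clf.predict_proba(X[te])[:, 1]  # class-1 probabilities for AUC
    scores["accuracy"].append(accuracy_score(y[te], pred))
    scores["recall"].append(recall_score(y[te], pred))
    scores["precision"].append(precision_score(y[te], pred))
    scores["MCC"].append(matthews_corrcoef(y[te], pred))
    scores["AUC"].append(roc_auc_score(y[te], prob))

for metric, vals in scores.items():
    print(f"{metric}: {np.mean(vals):.3f}")  # per-metric averages, as reported in the table
</code></pre>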
<p>Comparison of various prediction methods in terms of the area under the ROC curve (AUC).</p>
<p>(training period: April 2000–March 2001, testing period: April 2001–March 2002, parameter for lin...
<p>The <i>testMSE</i> comparisons of prediction performance for four networks.</p>
<p>(<b>A</b>) ROCs of five different methods. The values in brackets are the average auROCs of e...
<p><b>(A)</b> AUC of the three algorithms. AUC measures the area under the ROC curves. <b>(B)</b> Pe...
<p>Comparison of prediction performance of classifiers in terms of AUC score, at different levels hi...