Recall is the percentage of true positives found by each of the three evaluated methods (srnadiff, derfinder, and ShortStack) when compared to a dataset of presumed bona fide elements. The first two columns give the recall of each method with respect to the results published in the original papers. The last columns compare the methods to the direct approach, which uses the annotation and performs the test.
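For reference, the recall values discussed here follow the standard definition (a general formula, not a number taken from the table):

\[ \mathrm{Recall} = \frac{TP}{TP + FN} \]

where TP is the number of bona fide elements a method recovers and FN is the number it misses.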
Mean percentage scores (SDs) and sample sizes for the 1-week (white rows) and 5-week (shaded rows...
A correlational analysis was performed to examine the relationship between recognition and recall te...
Scores represent the number of the information type stated over all the stimuli for both conditio...
Values are F1-score (recall/precision) (%).
Comparison of two methods in term recognition: r...
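As a reminder of how the reported value combines the two quantities given in parentheses (standard definition, not specific to this comparison):

\[ F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \]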
Comparison of the average precision rates, recall rates, and F1 values for the different classific...
Column A indicates the number of annotations found exclusively by TE-Learner^LTR...
In each set of boxes corresponding to different sample size (SS) values, Precision, NA_perc (percent...
Comparison of recall rates of multi-category results for different intrusion detection models.
A comparison between our method and PRINCE. We can see that our method gives a high precision as ...
Precision (positive predictive value) is the percentage of texts positively classified by the alg...
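Written as the usual formula (a general definition; the caption above is truncated, so this is not drawn from it):

\[ \mathrm{Precision} = \frac{TP}{TP + FP}, \]

i.e. the fraction of texts classified as positive by the algorithm that are truly positive.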
Recall rate comparison of our method and two other methods on synthetic multilayer networks.
Classification accuracy comparison of the proposed research with the state-of-the-art methods.
Recall value comparisons for different clustering algorithms using the overlapping technique.
Comparison of the recognition rates of the proposed method with some popular classifiers in the l...
(A) ROCs of five different methods. The values in the brackets are the average auROCs of e...
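For readers reproducing such ROC comparisons, the following is a minimal sketch of how a single auROC value can be computed; the y_true and y_score arrays are hypothetical examples and scikit-learn is an assumed dependency, neither of which comes from the caption above.

# Minimal sketch: computing an auROC value for one method (hypothetical data).
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                           # hypothetical ground-truth labels
y_score = [0.10, 0.40, 0.35, 0.80, 0.70, 0.20, 0.90, 0.30]  # hypothetical classifier scores

fpr, tpr, _ = roc_curve(y_true, y_score)   # points tracing the ROC curve
auroc = roc_auc_score(y_true, y_score)     # area under that curve
print(f"auROC = {auroc:.3f}")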