This paper presents a new technique for building a relevance judgment list (qrels) for information retrieval test collections without any human intervention. It is based on the number of occurrences of each document across the runs retrieved from several information retrieval systems, combined with a distance-based measure between the documents. The effectiveness of the technique is evaluated by computing the correlation between the ranking of the TREC systems produced with the original relevance judgment list (qrels) built by human assessors and the ranking obtained with the newly generated qrels.
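The abstract does not spell out the procedure, but the occurrence-count idea and the rank-correlation evaluation can be illustrated with a minimal sketch. The snippet below is a toy under stated assumptions: the run data, the vote threshold min_votes, and the official qrels are hypothetical placeholders, MAP is used as the effectiveness measure, and Kendall's tau is computed by hand. It covers only the occurrence-count component, not the distance-based measure, and is not the paper's actual method.

```python
from collections import defaultdict

# Hypothetical toy runs: run name -> topic id -> ranked list of document ids.
runs = {
    "sysA": {"t1": ["d1", "d2", "d3", "d5"], "t2": ["d8", "d9", "d1"]},
    "sysB": {"t1": ["d2", "d1", "d4", "d6"], "t2": ["d9", "d8", "d2"]},
    "sysC": {"t1": ["d2", "d3", "d1", "d7"], "t2": ["d9", "d3", "d8"]},
}

def build_pseudo_qrels(runs, min_votes=2):
    """Judge a document relevant to a topic if it occurs in at least
    min_votes runs (a simple occurrence-count proxy for relevance)."""
    votes = defaultdict(lambda: defaultdict(int))
    for run in runs.values():
        for topic, docs in run.items():
            for doc in set(docs):          # count each doc once per run
                votes[topic][doc] += 1
    return {t: {d for d, v in doc_votes.items() if v >= min_votes}
            for t, doc_votes in votes.items()}

def average_precision(ranked, relevant):
    """Average precision of one ranked list against a set of relevant docs."""
    hits, score = 0, 0.0
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / i
    return score / max(len(relevant), 1)

def mean_ap(run, qrels):
    """Mean average precision of a run over all of its topics."""
    return sum(average_precision(docs, qrels.get(t, set()))
               for t, docs in run.items()) / len(run)

def kendall_tau(xs, ys):
    """Kendall's tau-a between two score vectors over the same systems."""
    concordant = discordant = 0
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Official qrels would come from TREC assessors; this is a stand-in.
official_qrels = {"t1": {"d1", "d2"}, "t2": {"d8", "d9"}}
pseudo_qrels = build_pseudo_qrels(runs)

systems = sorted(runs)
official_scores = [mean_ap(runs[s], official_qrels) for s in systems]
pseudo_scores = [mean_ap(runs[s], pseudo_qrels) for s in systems]
print("Kendall's tau between system rankings:",
      kendall_tau(official_scores, pseudo_scores))
```

A high tau would indicate that the automatically generated qrels rank the systems much as the human-built qrels do, which is the agreement the paper's evaluation measures.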