We consider the problem of evaluating retrieval systems using a limited number of relevance judgments. Recent work has demonstrated that average precision can be estimated accurately from a judged pool corresponding to a relatively small random sample of documents. In this work, we demonstrate that, given values or estimates of average precision, one can accurately infer the relevances of unjudged documents. Combining the two, we show how a large judged pool can be inferred efficiently and accurately from a relatively small number of judged documents, permitting accurate and efficient retrieval evaluation on a large scale.
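To make the first ingredient concrete, the sketch below shows one way to estimate average precision when only a random sample of the pool has been judged. It is a minimal Horvitz-Thompson style estimator under the assumption that each document is judged independently with a known probability p; it is illustrative only and not necessarily the estimator used in this work.

```python
def estimate_ap(ranked_docs, judged, p):
    """
    Estimate average precision from a uniform random sample of judgments.

    ranked_docs : list of doc ids in ranked order (rank 1 first)
    judged      : dict mapping sampled doc id -> 0/1 relevance
                  (assumes each doc was judged independently with probability p)
    p           : sampling probability used to draw the judged pool

    A simple Horvitz-Thompson style sketch, not the paper's exact method.
    """
    # Ranks (1-based) of sampled documents that were judged relevant.
    sampled_rel_ranks = [k for k, d in enumerate(ranked_docs, start=1)
                         if judged.get(d) == 1]
    if not sampled_rel_ranks:
        return 0.0

    # Estimated total number of relevant documents in the ranking.
    R_hat = len(sampled_rel_ranks) / p

    # Estimate the AP numerator  sum_{k rel} (1/k) * #{j <= k : d_j rel}
    # by reweighting single documents by 1/p and pairs by 1/p^2.
    numerator = 0.0
    seen_rel_above = 0
    for k in sampled_rel_ranks:          # ranks are already in increasing order
        numerator += 1.0 / (k * p)                 # the relevant doc at rank k itself
        numerator += seen_rel_above / (k * p * p)  # sampled relevant docs ranked above k
        seen_rel_above += 1

    return numerator / R_hat


if __name__ == "__main__":
    # Toy usage: a 6-document ranking where half the docs were sampled (p = 0.5).
    ranking = ["d1", "d2", "d3", "d4", "d5", "d6"]
    sample_judgments = {"d1": 1, "d3": 0, "d5": 1}
    print(estimate_ap(ranking, sample_judgments, p=0.5))
```

With the full pool judged (p = 1), the estimator reduces to ordinary average precision; as p shrinks, the variance of the estimate grows, which is the trade-off the sampling approach accepts in exchange for far fewer judgments.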