Traditional retrieval evaluation uses explicit relevance judgments, which are expensive to collect. Relevance assessments inferred from implicit feedback such as click-through data can be collected inexpensively, but may be less reliable. We compare assessments derived from click-through data to another source of implicit feedback that we assume to be highly indicative of relevance: purchase decisions. Evaluating retrieval runs based on a log of an audio-visual archive, we find agreement between system rankings and purchase decisions to be surprisingly high.
Search sessions consist of a person presenting a query to a search engine, followed by that person e...
© 2019 Ziying Yang. Batch evaluation techniques are often used to measure and compare the performance ...
Evaluation of information retrieval (IR) systems has recently been exploring the use of preference j...
Queries and click-through data taken from search engine transaction logs is an attractive alternativ...
We evaluate the use of clickthrough information as implicit relevance feedback in sessions. We emplo...
The interactions of users with search engines can be seen as implicit relevance feedback by the user...
Corpora and topics are readily available for information retrieval research. Relevance judgments, wh...
Evaluation of search engine result relevance has traditionally been an expensive process done by hum...
The purpose of this article is to bring attention to the problem of variations in relevance assessm...
Various click models have been recently proposed as a principled approach to infer the relevance of ...
Web search tools are used on a daily basis by billions of people. The commercial providers of these ...
The Cranfield evaluation method has some disadvantages, including its high cost in labor and inadequ...
© 2011 Dr. Sri Devi Ravana. Comparative evaluations of information retrieval systems using test collec...
In Information Retrieval (IR) evaluation, preference judgments are collected by presenting to the as...
Although relevance is known to be a multidimensional concept, information retrieval measures mainly ...