Abstract: In the HARD track of TREC 2005, we classify the 50 queries into 7 categories and make use of 3 kinds of feedback sources in various tasks. We find that the different kinds of queries perform differently in the feedback tasks, and that the "CASE" and "EVENT" queries are more sensitive to the feedback source. We also explore the internal structure of the corpus and try to estimate the distribution of relevant documents within sub-collections. The experiments show that this technique is partly effective, and the main remaining problem is how to predict the distribution more precisely.
We used clarification forms to study passage term feedback. When compared against pseudo-relevance f...
Pseudo-relevance feedback (PRF) has evident potential for enriching the represen...
The availability of test collections in the Cranfield paradigm has significantly benefited the developme...
Relevance feedback is the retrieval task where the system is given not only an information need, but...
Abstract. The relevance feedback algorithm is proposed as an effective way to improve the precision o...
Relevance feedback is an effective approach to improve retrieval quality over the initial query. Ty...
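The preceding snippets describe relevance feedback as refining the initial query with judged documents. As a minimal illustration (not drawn from any of the cited papers), a classic Rocchio-style update in a vector-space model can be sketched as below; the weights alpha, beta, and gamma are conventional defaults, not values reported in these abstracts.

```python
import numpy as np

def rocchio_update(query_vec, relevant_docs, nonrelevant_docs,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio-style relevance feedback: move the query vector toward the
    centroid of judged-relevant documents and away from the centroid of
    judged-nonrelevant ones. Inputs are term-weight vectors of equal length."""
    new_query = alpha * np.asarray(query_vec, dtype=float)
    if len(relevant_docs) > 0:
        new_query += beta * np.mean(relevant_docs, axis=0)
    if len(nonrelevant_docs) > 0:
        new_query -= gamma * np.mean(nonrelevant_docs, axis=0)
    # Negative term weights are usually clipped to zero.
    return np.maximum(new_query, 0.0)
```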
track at TREC 2009. We used only the Category B subset of the ClueWeb collection; our preprocessing ...
Pseudo-relevance feedback finds useful expansion terms from a set of top-ranked documents. It is oft...
In document retrieval using pseudo relevance feedback, after initial ranking, a fixed number of top-...
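The two preceding snippets outline the standard pseudo-relevance feedback loop: run the original query, treat a fixed number of top-ranked documents as if they were relevant, and select expansion terms from them. A minimal sketch of that loop follows, assuming a generic `search(query, k)` function that returns document texts and simple term-frequency scoring; both are hypothetical stand-ins, not the method of any cited system.

```python
from collections import Counter
import re

def prf_expand(query, search, k=10, n_terms=5):
    """Pseudo-relevance feedback: take the top-k results of the initial query
    as feedback documents and append their most frequent unseen terms."""
    feedback_docs = search(query, k)
    counts = Counter()
    for doc in feedback_docs:
        counts.update(re.findall(r"[a-z]+", doc.lower()))
    # Drop terms already in the query, then keep the n most frequent.
    original = set(query.lower().split())
    expansion = [t for t, _ in counts.most_common()
                 if t not in original][:n_terms]
    return query + " " + " ".join(expansion)
```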
In this paper, we report our experiments in the TREC 2009 Million Query Track. Our first line of stu...
We investigate the topical structure of the set of documents used to expand a query in pseudo-releva...
In this paper we present five user experiments on incorporating behavioural information into the rel...
This document contains a description of experiments for the 2008 Relevance Feedback track. We experi...
User relevance feedback is usually utilized by Web systems to interpret user information needs and r...