This work is an initial study of the utility of automatically generated queries for evaluating known-item retrieval, and of how such queries compare to real queries. The main advantage of automatic query generation is that, for any given test collection, numerous queries can be produced at minimal cost. This has significant ramifications for evaluation: state-of-the-art algorithms can be tested against different types of generated queries that mimic particular querying styles a user may adopt. Our approach draws upon previous research in IR that has probabilistically generated simulated queries for other purposes [2, 3].
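As a minimal sketch of how such a probabilistic known-item query generator might operate (assuming, purely for illustration, a Poisson-distributed query length and a term-selection model that mixes the target document's term distribution with a collection background model; the function name, parameter names, and values below are illustrative and not taken from the paper):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(42)

def simulate_known_item_query(doc_tokens, background_counts,
                              lam=0.2, mean_len=3.0):
    """Generate one simulated known-item query for a target document.

    Each query term is drawn from the document's own term distribution
    with probability (1 - lam) and from the collection background with
    probability lam; query length is Poisson-distributed (at least 1).
    """
    doc_counts = Counter(doc_tokens)
    doc_terms = list(doc_counts)
    doc_probs = np.array([doc_counts[t] for t in doc_terms], dtype=float)
    doc_probs /= doc_probs.sum()

    bg_terms = list(background_counts)
    bg_probs = np.array([background_counts[t] for t in bg_terms],
                        dtype=float)
    bg_probs /= bg_probs.sum()

    k = max(1, rng.poisson(mean_len))  # sample the query length

    query = []
    for _ in range(k):
        if rng.random() < lam:
            # Background draw: any collection term, by frequency.
            query.append(str(rng.choice(bg_terms, p=bg_probs)))
        else:
            # Document draw: a term from the known item, by frequency.
            query.append(str(rng.choice(doc_terms, p=doc_probs)))
    return query

# Usage: pick a "known item" and generate a query that targets it.
collection = {
    "d1": "known item search finds one specific document".split(),
    "d2": "probabilistic models generate simulated queries cheaply".split(),
}
background = Counter(t for toks in collection.values() for t in toks)
target_id = str(rng.choice(list(collection)))
print(target_id, simulate_known_item_query(collection[target_id], background))
```

Varying the mixture weight and length distribution is one way a generator like this can mimic different querying styles, from short, discriminative queries to longer, noisier ones.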