This work is an initial study of the utility of automatically generated queries for evaluating known-item retrieval, and of how such queries compare to real queries. The main advantage of automatic query generation is that numerous queries can be produced for any given test collection at minimal cost. This has significant ramifications for evaluation, as state-of-the-art algorithms can be tested on different types of generated queries that mimic the particular querying styles a user may adopt. Our approach draws upon previous IR research that has probabilistically generated simulated queries for other purposes [2, 3].
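As an illustration of what probabilistic known-item query generation can look like, the sketch below samples query terms for a chosen target document from a mixture of that document's term distribution and a collection-wide background distribution. This is a minimal sketch under assumed modelling choices (the mixture weight `lam`, the query length, and the function name are illustrative, not the method of the cited works):

```python
import random
from collections import Counter

def generate_known_item_query(doc_terms, background_counts,
                              query_length=3, lam=0.8, seed=None):
    """Sample a simulated known-item query for one target document.

    Each query term is drawn from a mixture of the document's term
    distribution (weight `lam`) and a collection-wide background
    distribution (weight 1 - lam), loosely mimicking a user who
    remembers some of the document's terms imperfectly.
    """
    rng = random.Random(seed)
    doc_counts = Counter(doc_terms)
    doc_total = sum(doc_counts.values())
    bg_total = sum(background_counts.values())
    vocab = sorted(set(doc_counts) | set(background_counts))
    weights = [
        lam * doc_counts[t] / doc_total
        + (1 - lam) * background_counts.get(t, 0) / bg_total
        for t in vocab
    ]
    return rng.choices(vocab, weights=weights, k=query_length)

# Example: generate a query for one document against a toy background model.
doc = ["fast", "index", "retrieval", "retrieval", "evaluation"]
background = {"the": 100, "fast": 5, "data": 20, "retrieval": 8}
print(generate_known_item_query(doc, background, query_length=3, seed=7))
```

Varying `lam` (and the query length) yields different simulated querying styles: a high `lam` mimics a user with good recall of the document's vocabulary, while a low `lam` injects more generic collection terms.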