Abstract. This study addresses the lack of an adequate test collection for evaluating search systems that exploit annotations to increase retrieval effectiveness. In particular, a new approach is proposed that enables the automatic creation of multiple test collections without human effort. This approach takes advantage of the human relevance assessments contained in an already existing test collection and introduces content-level annotations into that collection.
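A minimal sketch of the idea described above, under assumed names and a simplified data model (none of these identifiers come from the paper): an existing test collection's human relevance judgments (qrels) are reused unchanged, while content-level annotations are attached to the judged documents automatically, yielding a derived, annotation-aware collection without new assessment effort. The `annotate` callable stands in for any automatic annotator.

```python
from dataclasses import dataclass, field


@dataclass
class Annotation:
    doc_id: str
    text: str


@dataclass
class TestCollection:
    documents: dict[str, str]              # doc_id -> document text
    qrels: dict[tuple[str, str], int]      # (topic_id, doc_id) -> relevance judgment
    annotations: dict[str, list[Annotation]] = field(default_factory=dict)


def derive_annotated_collection(base: TestCollection, annotate) -> TestCollection:
    """Build a derived collection: documents gain automatically produced
    content-level annotations, while the human relevance judgments of the
    base collection are carried over as-is."""
    annotations = {doc_id: annotate(doc_id, text)
                   for doc_id, text in base.documents.items()}
    return TestCollection(documents=dict(base.documents),
                          qrels=dict(base.qrels),
                          annotations=annotations)


if __name__ == "__main__":
    base = TestCollection(
        documents={"d1": "A study of digital library evaluation."},
        qrels={("t1", "d1"): 1},
    )
    # Toy annotator for illustration: tag the first three tokens of each document.
    derived = derive_annotated_collection(
        base,
        annotate=lambda doc_id, text: [Annotation(doc_id, w) for w in text.split()[:3]],
    )
    print(derived.annotations["d1"])
```

Different annotators plugged into `annotate` would produce different derived collections from the same base judgments, which is one way to read the claim of creating multiple test collections automatically.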
Search for multimedia is hampered by both the lack of quality annotations and a quantity of annotati...
We consider the problem of optimally allocating a limited budget to acquire relevance judgments when...
Abstract. This paper discusses how annotations can be exploited to develop information access and re...
Abstract. The increasing number of users and the diffusion of Digital Libraries (DLs) has increased...
The availability of test collections in the Cranfield paradigm has significantly benefited the developme...
The development of new search algorithms requires an evaluation framework in which A/B testing of ne...
Introduction. Evaluation is highly important for designing, developing and maintaining effective inf...
Test collections model use cases in ways that facilitate evaluation of information retrieval systems...
Abstract. We propose and evaluate a query expansion mechanism that supports searching and browsing i...
Accurate estimation of information retrieval evaluation metrics such as average precision requires l...
Many well-known information retrieval models rank documents using scores derived from the query and ...
This paper presents the result of a usability test of an annotation tool. The annotation tool is imp...
Use of test collections and evaluation measures to assess the effectiveness of information retrieval...
Search-based software testing (SBST) often uses objective-based approaches to solve testing problems....
Anyone offering content in a digital library is naturally interested in assessing its performance: h...