Evaluation in information retrieval (IR) is crucial. Since the seventies, researchers have used the Cranfield or TREC-like framework to evaluate their systems and approaches, producing system effectiveness scores computed over reference collections. While numerical results are common practice for system comparison, we believe that visual comparisons can also help researchers. To this end, we developed an interface that allows IR scientists to compare system effectiveness. It relies on the output of the trec_eval tool. At this stage, the interface supports analyses for ad hoc retrieval. This paper presents the interface.
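Since the interface builds on trec_eval output, the following minimal sketch illustrates how per-query effectiveness scores could be extracted and compared for two systems. It assumes the trec_eval binary is on the PATH; the file names (qrels.txt, systemA.run, systemB.run) and the choice of MAP as the measure are illustrative assumptions, not details taken from the paper.

import subprocess

def per_query_scores(qrels_path, run_path, measure="map"):
    """Run trec_eval with per-query output (-q) and collect one score
    per topic for the given measure. Returns {topic_id: score}."""
    out = subprocess.run(
        ["trec_eval", "-q", "-m", measure, qrels_path, run_path],
        capture_output=True, text=True, check=True,
    ).stdout
    scores = {}
    for line in out.splitlines():
        name, topic, value = line.split()
        # trec_eval also prints an "all" summary row; keep per-topic rows only.
        if name == measure and topic != "all":
            scores[topic] = float(value)
    return scores

# Hypothetical run files: compare two systems topic by topic,
# the kind of per-topic data a visual comparison interface would plot.
run_a = per_query_scores("qrels.txt", "systemA.run")
run_b = per_query_scores("qrels.txt", "systemB.run")
for topic in sorted(run_a, key=int):  # TREC ad hoc topic IDs are numeric
    b = run_b.get(topic, 0.0)
    print(f"topic {topic}: A={run_a[topic]:.4f} B={b:.4f} diff={run_a[topic] - b:+.4f}")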
Experiments using TREC-style topic descriptions and relevance judgments have recently been carried o...
This paper documents the program and the outcome of Dagstuhl Seminar 13441 “Evaluation Methodologies...
Introduction. Evaluation is highly important for designing, developing and maintaining effective inf...
Evaluation has always been a strong element of Information Retrieval (IR) research, much of our focu...
Visualization of search results is an essential step in the textual Information Retrieval (IR) proce...
In information retrieval (IR), research aiming to reduce the cost of retrieval system evaluations ha...
Batch evaluation techniques are often used to measure and compare the performance ...
Information Retrieval (IR) research has traditionally focused on serving the best results for a sing...
Modern large retrieval environments tend to overwhelm their users by their large output. Since all d...
This paper is a personal take on the history of evaluation experiments in information retrieval. It ...
We propose a novel method of analysing data gathered from TREC or similar information retrieval eval...
Comparative evaluations of information retrieval systems using test collec...
Objective: Information Retrieval (IR) is strongly rooted in experimentation where new and better way...
The existence and use of standard test collections in information retrieval experimentation allows r...