In this work, we investigate approaches to engineer better topic sets in information retrieval test collections. By recasting the TREC evaluation exercise from one of building more effective systems to an exercise in building better topics, we present two possible approaches to quantify topic "goodness": topic ease and topic set predictivity. A novel interpretation of a well-known result and a twofold analysis of data from several TREC editions lead to a result that has been neglected so far: both topic ease and topic set predictivity have changed significantly across the years, sometimes in a perhaps undesirable way.
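To make the two notions concrete, below is a minimal sketch under conventions common in the TREC evaluation literature: topic ease operationalized as the mean average precision (AP) a topic elicits across all systems, and the predictivity of a topic subset as the Kendall's tau correlation between the system ranking that subset induces and the ranking induced by the full topic set. The AP matrix, the predictivity function, and both operationalizations are illustrative assumptions for this sketch, not necessarily the exact definitions used in the paper.

import numpy as np
from scipy.stats import kendalltau

# Hypothetical system-by-topic matrix of average precision (AP) scores,
# as one would obtain from trec_eval output; shape (n_systems, n_topics).
# All values are made up for illustration.
ap = np.array([
    [0.31, 0.12, 0.45, 0.08],
    [0.28, 0.15, 0.50, 0.05],
    [0.35, 0.10, 0.41, 0.11],
])

# Topic ease: the mean effectiveness a topic elicits across all systems.
# An "easy" topic is one on which most systems score well.
topic_ease = ap.mean(axis=0)

def predictivity(subset):
    """Kendall's tau between the system ranking induced by a topic subset
    and the ranking induced by the full topic set (higher means the
    subset is more predictive of the overall system ordering)."""
    full_scores = ap.mean(axis=1)               # per-system MAP on all topics
    subset_scores = ap[:, subset].mean(axis=1)  # per-system MAP on the subset
    tau, _ = kendalltau(full_scores, subset_scores)
    return tau

print("topic ease per topic:", topic_ease)
print("predictivity of topics {0, 2}:", predictivity([0, 2]))

A subset with tau close to 1.0 ranks systems almost exactly as the full topic set does, which is the sense in which a small, well-chosen topic set can stand in for a larger one.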