Although at first sight the web track might seem a copy of the ad hoc track, we discovered that some small adjustments had to be made to our systems to run the web evaluation. As we expected, the basic language-model-based IR model worked effectively on this data. Blind feedback methods, however, seem less effective on web data. We also experimented with rescoring the documents using several algorithms that exploit link information. These methods yielded no positive results.
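As a point of reference for the "basic language model based IR model" this abstract mentions, below is a minimal sketch of query-likelihood retrieval with Dirichlet smoothing, a standard formulation of that approach. It is not the authors' implementation; the corpus, the smoothing parameter mu, and all names are illustrative assumptions.

```python
# Sketch of Dirichlet-smoothed query-likelihood scoring (assumed setup,
# not the system described in the abstract above).
import math
from collections import Counter

def dirichlet_score(query_terms, doc_terms, collection_tf, collection_len, mu=2000.0):
    """Log query likelihood log P(q | d) under a Dirichlet-smoothed unigram LM."""
    doc_tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for t in query_terms:
        p_coll = collection_tf.get(t, 0) / collection_len  # background model
        if p_coll == 0.0:
            continue  # skip terms unseen anywhere in the collection
        p = (doc_tf.get(t, 0) + mu * p_coll) / (doc_len + mu)
        score += math.log(p)
    return score

# Toy usage: rank two documents for the query "web retrieval".
docs = {
    "d1": "language models for web retrieval".split(),
    "d2": "link analysis of web graphs".split(),
}
collection = [t for d in docs.values() for t in d]
coll_tf, coll_len = Counter(collection), len(collection)
query = "web retrieval".split()
ranking = sorted(docs, key=lambda d: dirichlet_score(query, docs[d], coll_tf, coll_len),
                 reverse=True)
print(ranking)  # "d1" should rank above "d2"
```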
both for the first time. For the Robust track we applied our existing information retrieval system t...
This paper presents a new approach to improve retrieval effectiveness by using concepts, examples, a...
We propose a novel method of analysing data gathered from TREC or similar information retrieval eval...
Experiments using TREC-style topic descriptions and relevance judgments have recently been carried o...
In this paper, we report on our TREC experiments with the ClueWeb09 document collection. We partici...
Evaluation in information retrieval (IR) is crucial. Since the seventies, researche...
We measure the WT10g test collection, used in the TREC-9 and TREC 2001 Web Tracks, and the .GOV test ...
We describe the participation of the University of Amsterdam's ILPS group in the web, blog, ent...
A frozen 18.5 million page snapshot of part of the Web has been created to enable and encourage mean...
We describe our participation in the TREC 2003 Robust and Web tracks. For the Robust track, we exp...
The Mirror DBMS is a prototype database system especially designed for multimedia and web retrieval....
Anchor text has been proven effective in previous TREC experiments on the homepage finding task [1] and s...
(MSRC) team this year continue to explore issues in IR from a perspective very close to that of the ...
This paper investigates the agreement of relevance assessments between official TREC judgme...
In this paper we examine the extent to which implicit feedback (where the system attempts to estimat...