There is a long history of repeatable and comparable evaluation in Information Retrieval (IR). However, thus far, no shared test collection exists that has been designed to support interactive lifelog retrieval. In this paper we introduce the LSC2018 collection, which is designed to evaluate the performance of interactive retrieval systems. We describe the features of the dataset and report on the outcome of the first Lifelog Search Challenge (LSC), which used the dataset in an interactive competition at ACM ICMR 2018.