Abstract. In this paper we present the results of a user study that was conducted in combination with a submission to TRECVID 2003. The search behavior of students querying an interactive video-retrieval system was analyzed. In total, 242 searches by 39 students on 24 topics were assessed. Questionnaire data, logged user actions on the system, and a quality measure of each search provided by TRECVID were studied. Analysis of the results at various stages in the retrieval process suggests that retrieval based on transcriptions of the speech in video data adds more to the average precision of the result than content-based retrieval. The latter is particularly useful in providing the user with an overview of the dataset and thus an indication of the success...