We describe our second participation in the TRECVID video retrieval evaluation, which includes one high-level feature extraction run, three manual search runs, and one interactive search run. All of these runs used a system trained on the common development collection. Only visual and textual information were used: the visual information consisted of color, texture, and edge-based low-level features, and the textual information consisted of the speech transcript provided in the collection. With the experience gained from this second participation, we are in the process of building a system for automatic classification and indexing of video archives.
While it would seem that digital video libraries should benefit from access mechanisms directed to t...
In this paper we give an overview of the four TRECVID tasks submitted by COST292, European network o...
In this paper we present and discuss the system we developed for the search task of the TRECVID 2002...
We describe our third participation, that includes one high-level feature extraction run, and two ma...
We describe our fourth participation, that includes two high-level feature extraction runs, and one ...
TRECVID is an annual exercise which encourages research in information retrieval from digital video ...
TRECVID, an annual retrieval evaluation benchmark organized by NIST, encourages research in informat...
The Fischlar-TRECVid-2004 system was developed for Dublin City University's participation in the 200...
Many research groups worldwide are now investigating techniques which can support information retrie...
In this paper, we describe our experiments for TRECVID 2004 for the Search task. In the interactive ...
In this paper we give an outline of the Físchlár system developed to enable participation in the int...
In the first part of this paper we describe our experiments in the automatic and interactive search ...
Dublin City University participated in the Feature Extraction task and the Search task of the TREC-2...
The TREC Video Retrieval Evaluation (TRECVID) 2009 was a TREC-style video analysis and retrieval eva...
The TREC Video Retrieval Evaluation is a multiyear, international effort, funded by the US Advanced ...