We describe our fourth participation in the TRECVID video retrieval evaluation, which includes two high-level feature extraction runs and one manual search run. All of these runs used a system trained on the common development collection. Only visual information, consisting of color-, texture-, and edge-based low-level features, was used.
In this paper, we describe our experiments in high-level features extraction and interactive topic s...
In this paper we summarize our TRECVID 2014 video retrieval experiments. The MediaMill team particip...
We describe our third participation, that includes one high-level feature extraction run, and two ma...
We describe our second-time participation, that includes one high-level feature extraction run, and ...
Bilkent University Multimedia Database Group (BILMDG) participated in two tasks at TRECVID 2008: con...
In this paper we give an overview of the four TRECVID tasks submitted by COST292, European network o...
Successful and effective content-based access to digital video requires fast, accurate and scalabl...
In this paper we describe the K-Space participation in TRECVid 2006. K-Space participated in two ...
We participate in two tasks of TRECVID 2009: high-level feature extraction (HLFE) and search. This p...
TRECVID is an annual exercise which encourages research in information retrieval from digital video ...
In this paper we describe our TRECVID 2008 video retrieval experiments. The MediaMill team participa...
In this paper, we describe our approaches and experiments in semantic video classification (high-lev...
Many research groups worldwide are now investigating techniques which can support information retrie...