Our main focus for this year was on setting up a flexible retrieval environment rather than on evaluating novel video retrieval approaches. In this structured abstract the submitted runs are briefly described.
High-level feature extraction. We experimented with feature detectors based on visual information only, and compared Weibull-based and GMM-based detectors.
• LL-HF-WB-VisOnly: Region-based Weibull models, visual only
• LL-HF-WBNWC-VisOnly: Extended region-based Weibull models, visual only
• LL-HF-GMMQGM-VisOnly: GMM-based models, query generation variant
• LL-HF-GMMDGM-VisOnly: GMM-based models, document generation variant
We found large differences across topics: some models work well for one topic, others for the next. Future research has to...
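As a hedged illustration of the query-generation versus document-generation distinction named in the GMM runs above, the minimal Python sketch below scores one shot both ways. It assumes placeholder region features and scikit-learn's GaussianMixture; the actual feature pipeline, model sizes, and data of the submitted runs are not specified here.

import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder data: rows are region feature vectors (hypothetical 8-D features).
rng = np.random.default_rng(0)
concept_regions = rng.normal(0.0, 1.0, size=(500, 8))  # training regions for one concept
shot_regions = rng.normal(0.2, 1.0, size=(40, 8))       # regions of one test shot

# Document-generation view: fit a GMM on the concept's training regions and
# measure how well it explains the shot's regions (mean log-likelihood).
concept_gmm = GaussianMixture(n_components=4, random_state=0).fit(concept_regions)
doc_gen_score = concept_gmm.score(shot_regions)

# Query-generation view: fit a GMM on the shot's own regions and measure how
# well it explains the concept's training regions.
shot_gmm = GaussianMixture(n_components=2, random_state=0).fit(shot_regions)
query_gen_score = shot_gmm.score(concept_regions)

print(f"document-generation score: {doc_gen_score:.3f}")
print(f"query-generation score:    {query_gen_score:.3f}")

Under either view, a higher mean log-likelihood indicates a better match between shot and concept, and shots can be ranked per concept by this score.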
We participate in two tasks of TRECVID 2009: high-level feature extraction (HLFE) and search. This p...
In this paper we describe the current performance of our MediaMill system as presented in the TRECVI...
In this paper, we describe our experiments in high-level feature extraction and interactive topic s...
In this paper we give an overview of the four TRECVID tasks submitted by COST292, European network o...
In this paper we describe our TRECVID 2007 experiments. The MediaMill team participated in two tasks...
In this paper we give an overview of the four TRECVID tasks submitted by COST292, European network o...
In this paper we describe our TRECVID 2006 experiments. The MediaMill team participated in two tasks...
In the first part of this paper we describe our experiments in the automatic and interactive search ...
In this paper we describe our approach for jointly modeling the text part and the visual part of mul...
In this paper we describe our TRECVID 2008 video retrieval experiments. The MediaMill team participa...
We describe our second-time participation, which includes one high-level feature extraction run, and ...
We describe our third participation, which includes one high-level feature extraction run, and two ma...
In this paper, the two different applications based on the Schema Reference System that were develop...
In this paper, we describe our experiments for TRECVID 2004 for the Search task. In the interactive ...
In this paper we describe our TRECVID 2009 video retrieval experiments. The MediaMill team participa...