We briefly describe “CuVid,” Columbia University’s video search engine, a system that enables semantic multimodal search over broadcast news video collections. The system was developed and first evaluated for the NIST TRECVID 2005 benchmark and later expanded to include a large number (374) of visual concept detectors. Our focus is a comparative study of the strengths and weaknesses of search methods built on individual modalities (keyword, image, near-duplicate, and semantic concept) and on their combinations, without requiring advanced tools and interfaces for interactive search.
We combine in this paper automatic learning of a large lexicon of semantic concepts with traditional...
In this paper we describe our TRECVID 2011 video retrieval experiments. The MediaMill team participa...
In this technical demonstration we showcase the current version of the MediaMill system, a search en...
In this paper we describe our TRECVID 2009 video retrieval experiments. The MediaMill team participa...
In this paper we describe the current performance of our MediaMill system as presented in the TRECVI...
In this paper we present our MediaMill video search engine. The basis for the engine is a semantic i...
In this paper we present the methods and visualizations used in the MediaMill video search engine. T...
In this paper we describe our TRECVID 2008 video retrieval experiments. The MediaMill team participa...
In this paper we describe our TRECVID 2009 video retrieval experiments. The MediaMill team partici...
In this paper we present the methods and visualizations used in the MediaMill video search engine. T...
In this paper we describe our TRECVID 2007 experiments. The MediaMill team participated in two tasks...
In this paper we describe our TRECVID 2006 experiments. The MediaMill team participated in two tasks...
This year the UvA-MediaMill team participated in the Feature Extraction and Search Task. We develope...
In this paper, we review 300 references on video retrieval, indicating when text-only solutions are ...
In this paper we describe our TRECVID 2012 video retrieval experiments. The MediaMill team participa...