Content is increasingly available in multiple modalities (such as images, text, and video), each of which provides a different representation of some entity. The cross-modal retrieval problem is: given the representation of an entity in one modality, find its best representation in all other modalities. We propose a novel approach to this problem based on pairwise classification. The approach applies seamlessly to both settings, whether ground-truth annotations for the entities are present or absent. In the latter case, the approach considers both positive and unlabelled links that arise in standard cross-modal retrieval datasets. Empirical comparisons show improvements over state-of-the-art methods for cross-modal retrieval.
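The pairwise-classification idea above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual model: the joint feature map (an elementwise product of the two modality vectors) and the logistic classifier are placeholders, and unlabelled pairs are simply treated as negatives, one common heuristic in positive-unlabelled settings.

```python
import math

def pair_features(x, y):
    # Joint representation of a cross-modal pair (illustrative choice:
    # elementwise product of the image and text feature vectors).
    return [a * b for a, b in zip(x, y)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(pairs, labels, dim, epochs=200, lr=0.5):
    # Logistic regression over pair features via plain SGD.
    # labels: 1 = known positive link, 0 = unlabelled (treated as negative).
    w = [0.0] * dim
    for _ in range(epochs):
        for (x, y), t in zip(pairs, labels):
            f = pair_features(x, y)
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)))
            g = p - t  # gradient of the log loss w.r.t. the logit
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
    return w

def score(w, x, y):
    # Retrieval score: classifier's probability that the pair is a true link.
    return sigmoid(sum(wi * fi for wi, fi in zip(w, pair_features(x, y))))

# Toy data: two "images" and two "texts" with hypothetical 2-d features.
images = {"img0": [1.0, 0.0], "img1": [0.0, 1.0]}
texts = {"txt0": [1.0, 0.0], "txt1": [0.0, 1.0]}
pairs = [
    (images["img0"], texts["txt0"]), (images["img1"], texts["txt1"]),
    (images["img0"], texts["txt1"]), (images["img1"], texts["txt0"]),
]
labels = [1, 1, 0, 0]  # first two are positive links; rest unlabelled
w = train(pairs, labels, dim=2)

# Cross-modal retrieval: rank all texts for a query image by score.
best = max(texts, key=lambda t: score(w, images["img0"], texts[t]))
```

Retrieval then amounts to scoring every candidate in the target modality and returning the top-ranked item; in the toy example, `best` resolves to the text that shares a positive link with the query image.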
Conference of 5th ACM International Conference on Multimedia Retrieval, ICMR 2015 ; Conference Date:...
Cross-modal retrieval has been attracting increasing attention because of the explosion of multi-mod...
In order to exploit the abundant potential information of the unlabeled data and contribute to analy...
Cross-modal retrieval is an important field of research today because of the abundance of multi-medi...
Conference of 2016 ACM Workshop on Vision and Language Integration Meets Multimedia Fusion, Iv and L...
Cross-modal retrieval aims to find relevant data of different modalities, such as images and text. I...
Cross-modal retrieval has attracted significant attention due to the increasing use of multi-modal d...
© 2017 Association for Computing Machinery. Cross-modal retrieval aims to enable flexible retrieval ...