Cross-modal retrieval has been attracting increasing attention because of the explosion of multi-modal data, e.g., texts and images. Most supervised cross-modal retrieval methods learn discriminative common subspaces that minimize the heterogeneity between modalities by exploiting label information. However, these methods neglect the fact that, in practice, the given labels of the training data may be incomplete (i.e., some labels are missing). Such low-quality labels lead to a less effective subspace and, consequently, unsatisfactory retrieval performance. To tackle this, we propose a novel model that simultaneously performs label completion and cross-modal retrieval. Specifically, we assume the to-be-learned common subspace can be joi...
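As a rough illustration of the idea sketched in this abstract (jointly learning a common subspace and completing missing labels), the following minimal NumPy sketch alternates between ridge-regressing each modality onto the current label matrix and re-estimating the missing label entries from the shared projection. The function name, the ridge formulation, and the averaging-based completion heuristic are assumptions made for illustration; this is not the authors' actual algorithm.

```python
import numpy as np

def fit_common_subspace(X_img, X_txt, Y, observed_mask, n_iters=5, reg=1e-2):
    """Alternate between (i) ridge-regressing each modality onto the current
    label matrix and (ii) re-estimating missing labels from the shared projection.

    X_img: (n, d_img) image features; X_txt: (n, d_txt) text features;
    Y: (n, c) multi-label matrix; observed_mask: (n, c), 1 where a label is known.
    """
    Y_hat = Y * observed_mask  # start from the observed labels, zeros elsewhere
    for _ in range(n_iters):
        # Ridge solution W = (X^T X + reg*I)^{-1} X^T Y_hat, so that X @ W ~ Y_hat.
        W_img = np.linalg.solve(X_img.T @ X_img + reg * np.eye(X_img.shape[1]),
                                X_img.T @ Y_hat)
        W_txt = np.linalg.solve(X_txt.T @ X_txt + reg * np.eye(X_txt.shape[1]),
                                X_txt.T @ Y_hat)
        # Simple completion heuristic: fill missing entries from the averaged
        # projections of both modalities, keeping observed labels fixed.
        Z = 0.5 * (X_img @ W_img + X_txt @ W_txt)
        Y_hat = observed_mask * Y + (1 - observed_mask) * Z
    return W_img, W_txt, Y_hat
```

At retrieval time, a query from one modality would be projected with its learned W and matched against projected items of the other modality, e.g., by cosine similarity in the shared space.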
Most cross-modal retrieval methods based on subspace learning just focus on learning the projection ...
Cross-modal retrieval is an important field of research today because of the abundance of multi-medi...
Cross-modal retrieval aims to enable flexible retrieval experience across different modalities (e.g....
Cross-modal retrieval has attracted significant attention due to the increasing use of multi-modal d...
In order to exploit the abundant potential information of the unlabeled data and contribute to analy...
Cross-modal retrieval aims to find relevant data of different modalities, such as images and text. I...
The core of existing cross-modal retrieval approaches is to close the gap between diffe...
Content is increasingly available in multiple modalities (such as images, text, and video), each of ...
A better similarity mapping function across heterogeneous high-dimensional features is very desirabl...
The heterogeneity-gap between different modalities brings a significant challen...