Models that acquire semantic representations from both linguistic and perceptual input are of interest to researchers in NLP because of the obvious parallels with human language learning. Performance advantages of the multi-modal approach over language-only models have been clearly established when models are required to learn concrete noun concepts. However, such concepts are comparatively rare in everyday language. In this work, we present a new means of extending the scope of multi-modal models to more commonly-occurring abstract lexical concepts via an approach that learns multi-modal embeddings. Our architecture outperforms previous approaches in combining input from distinct modalities, and propagates perceptual information on ...
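The combination of linguistic and perceptual input described above can be illustrated with a generic middle-fusion sketch. This is not the paper's actual architecture; the vector values, the `alpha` weighting, and the concatenation scheme are all illustrative assumptions, standing in for whatever text and image features a real multi-modal model would learn.

```python
import numpy as np

def l2_normalize(v):
    # Scale a vector to unit length (no-op for the zero vector).
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def fuse(text_vec, image_vec, alpha=0.5):
    # Middle fusion: normalize each modality separately, weight,
    # and concatenate into a single multi-modal embedding.
    # `alpha` balances the linguistic vs. perceptual contribution.
    return np.concatenate([alpha * l2_normalize(text_vec),
                           (1 - alpha) * l2_normalize(image_vec)])

def cosine(u, v):
    # Standard cosine similarity between two embeddings.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors standing in for a word embedding and an image feature
# (hypothetical values; real models would learn these from data).
cat_text, cat_img = np.array([1.0, 0.2, 0.0]), np.array([0.9, 0.1])
dog_text, dog_img = np.array([0.8, 0.3, 0.1]), np.array([0.8, 0.2])

sim = cosine(fuse(cat_text, cat_img), fuse(dog_text, dog_img))
```

Because each modality is normalized before concatenation, neither input's raw magnitude dominates the fused representation; `alpha` is the single knob controlling the modality balance.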
In this paper we introduce MCA-NMF, a computational model of the acquisition of multi-modal concepts...
Multimodal models have been proven to outperform text-based models on learning semantic word represe...
This paper explores the possibility of learning a semantically-relevant lexicon from images and spee...
Multi-modal models that learn semantic representations from both linguistic and perceptual input o...
Models that learn semantic representations from both linguistic and perceptual input outperform tex...
Multimodal models have been proven to outperform text-based approaches on learning semantic represen...
Philosophers and linguists have suggested that the meaning of a concept can be represented by a rule...
Research suggests that concepts are distributed across brain regions specialized for processing info...
How are abstract concepts grounded in perceptual experiences for shaping human conceptual knowledge?...
How is semantic information from different modalities integrated and stored? If related ideas are ...