In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level embeddings from textual and visual input. The two modalities are encoded as vectors of attributes and are obtained automatically from text and images, respectively. We evaluate our model on its ability to simulate similarity judgments and concept categorization. On both tasks, our approach outperforms baselines and related models.
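To make the architecture described above concrete, here is a minimal sketch of a bimodal stacked autoencoder: each modality (a textual and a visual attribute vector) gets its own first-layer autoencoder, and a second, joint autoencoder fuses the two hidden codes into one grounded embedding. This is an illustrative assumption, not the paper's exact setup; PyTorch, the layer dimensions, the sigmoid activations, and the plain MSE reconstruction loss are all choices made here for brevity.

```python
import torch
import torch.nn as nn

class BimodalStackedAutoencoder(nn.Module):
    """Sketch of a stacked autoencoder over textual and visual attribute
    vectors (all dimensions are illustrative, not taken from the paper)."""

    def __init__(self, text_dim=500, image_dim=400, hidden_dim=300, joint_dim=200):
        super().__init__()
        # First layer: one autoencoder per modality.
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.Sigmoid())
        self.text_dec = nn.Sequential(nn.Linear(hidden_dim, text_dim), nn.Sigmoid())
        self.img_enc = nn.Sequential(nn.Linear(image_dim, hidden_dim), nn.Sigmoid())
        self.img_dec = nn.Sequential(nn.Linear(hidden_dim, image_dim), nn.Sigmoid())
        # Second layer: joint autoencoder over the concatenated modality codes.
        self.joint_enc = nn.Sequential(nn.Linear(2 * hidden_dim, joint_dim), nn.Sigmoid())
        self.joint_dec = nn.Sequential(nn.Linear(joint_dim, 2 * hidden_dim), nn.Sigmoid())

    def forward(self, text_x, img_x):
        h_text = self.text_enc(text_x)
        h_img = self.img_enc(img_x)
        # The joint code serves as the grounded, higher-level embedding.
        joint = self.joint_enc(torch.cat([h_text, h_img], dim=-1))
        h_rec = self.joint_dec(joint)
        h_text_rec, h_img_rec = h_rec.chunk(2, dim=-1)
        # Reconstruct both modalities from the shared code.
        return self.text_dec(h_text_rec), self.img_dec(h_img_rec), joint

model = BimodalStackedAutoencoder()
text_x, img_x = torch.rand(8, 500), torch.rand(8, 400)
text_rec, img_rec, embedding = model(text_x, img_x)
loss = (nn.functional.mse_loss(text_rec, text_x)
        + nn.functional.mse_loss(img_rec, img_x))
```

For the evaluations mentioned in the abstract, the joint `embedding` would be the representation compared across words, e.g. via cosine similarity for simulating similarity judgments or as input features for concept categorization.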
Paper presented at: The 2012 Joint Conference on Empirical Methods in Natural Language Process...
The mathematical representation of semantics is a key issue for Natural Language Processing (NLP). A...
We present a distributional semantic model combining text- and image-based features. We evaluate thi...
Zarrieß S, Schlangen D. Deriving continuous grounded meaning representations from referentially struc...
Paper presented at: the 51st Annual Meeting of the Association for Computational Linguistics, ...
Humans possess a rich semantic knowledge of words and concepts which captures the perceivable physi...
Models that learn semantic representations from both linguistic and perceptual input outperform tex...
This paper introduces a novel approach to learn visually grounded meaning repr...
Deep compositional models of meaning acting on distributional representations of words in order to p...
Models that acquire semantic representations from both linguistic and perceptual input are of inte...
Understanding the meaning of linguistic expressions is a fundamental task of natural language proces...
How are abstract concepts grounded in perceptual experiences for shaping human conceptual knowledge?...
Words are not detached individuals but part of a beautiful interconnected web of related concepts, a...
Symbolic approaches have dominated NLP as a means to model syntactic and semantic aspects of natural...