In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level representations from textual and visual input. The visual modality is encoded via vectors of attributes obtained automatically from images. We create a new large-scale taxonomy of 600 visual attributes representing more than 500 concepts and 700K images. We use this dataset to train attribute classifiers and integrate their predictions with text-based distributional models of word meaning. We evaluate our model on its ability to simulate word similarity judgments and concept categorization. On both tasks, our model yields a better fit to behavioral data compa...
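The stacked-autoencoder idea described above can be sketched in a few lines: train one autoencoder layer on concatenated text and visual-attribute vectors, then train a second layer on the first layer's codes. This is a minimal numpy-only sketch under stated assumptions (tied weights, sigmoid units, squared-error loss, random toy data), not the paper's actual architecture or training regime.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, hidden, epochs=200, lr=0.5):
    """Train one tied-weight autoencoder layer on the rows of X;
    return the learned encoder function."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden))
    b = np.zeros(hidden)   # encoder bias
    c = np.zeros(d)        # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        R = sigmoid(H @ W.T + c)      # decode with tied weights
        delta_out = (R - X) * R * (1 - R)          # output-layer error signal
        delta_hid = (delta_out @ W) * H * (1 - H)  # backprop to hidden layer
        W -= lr * (X.T @ delta_hid + delta_out.T @ H) / n
        b -= lr * delta_hid.mean(axis=0)
        c -= lr * delta_out.mean(axis=0)
    return lambda X_: sigmoid(X_ @ W + b)

# Toy data standing in for the two modalities (hypothetical dimensions).
text = rng.random((20, 4))     # text-based distributional vectors
visual = rng.random((20, 4))   # visual-attribute classifier outputs
X = np.hstack([text, visual])  # early fusion by concatenation

# Stack two layers: train the first, feed its codes to the second.
enc1 = train_autoencoder(X, hidden=6)
H1 = enc1(X)
enc2 = train_autoencoder(H1, hidden=3)
embedding = enc2(H1)           # bimodal representation, one row per concept
print(embedding.shape)
```

The resulting `embedding` rows could then be compared with cosine similarity against behavioral similarity judgments, as in the evaluation described above.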
When we communicate with each other, a large chunk of what we express is conveyed by the words we us...
Word embeddings have been very successful in many natural language processing tasks, but they charac...
In recent years, distributional models (DMs) have shown great success in representing lexical seman...
Paper presented at: the 51st Annual Meeting of the Association for Computational Linguistics, ...
Humans possess a rich semantic knowledge of words and concepts which captures the perceivable physi...
We present a distributional semantic model combining text- and image-based features. We evaluate thi...
The distributional hypothesis states that the meaning of a concept is defined through the contexts i...
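The distributional hypothesis can be made concrete with a toy example: represent each word by the counts of words occurring near it, and compare words by cosine similarity. The corpus, window size, and helper functions below are all hypothetical illustrations, not any specific model from the works cited here.

```python
from collections import Counter
from math import sqrt

# Tiny toy corpus; "cat" and "dog" appear in similar contexts.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the dog ate the bone",
    "the cat ate the mouse",
]

def context_vector(target, window=2):
    """Count words within +/-window positions of each occurrence of target."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    return dot / (sqrt(sum(x * x for x in u.values()))
                  * sqrt(sum(x * x for x in v.values())))

# Words with similar contexts get similar vectors, hence high cosine.
sim = cosine(context_vector("cat"), context_vector("dog"))
print(sim)
```

Real distributional models differ mainly in scale and in reweighting the raw counts (e.g. with PMI), but the underlying principle is the one shown here.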
Distributional models such as Latent Semantic Analysis (LSA, Landauer, Dumais 1997) generate semanti...
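LSA, mentioned above, builds a word-by-document count matrix and applies a truncated singular value decomposition, keeping only the strongest latent dimensions. A minimal sketch with a hypothetical toy matrix (the words, counts, and choice of k=2 are illustrative assumptions, not data from any cited work):

```python
import numpy as np

# Toy word-by-document count matrix (rows: words, cols: documents).
words = ["cat", "dog", "bone", "mouse", "stock", "bond"]
X = np.array([
    [2, 1, 0, 0],   # cat
    [1, 2, 0, 0],   # dog
    [0, 1, 0, 0],   # bone
    [1, 0, 0, 0],   # mouse
    [0, 0, 2, 1],   # stock
    [0, 0, 1, 2],   # bond
], dtype=float)

# LSA: truncated SVD keeps the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]   # k-dimensional word representations

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

i, j, m = words.index("cat"), words.index("dog"), words.index("stock")
sim_animal = cosine(word_vecs[i], word_vecs[j])    # same topic: high
sim_cross = cosine(word_vecs[i], word_vecs[m])     # different topic: low
print(sim_animal, sim_cross)
```

The dimensionality reduction is what lets LSA relate words that never co-occur in the same document but share latent topics.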
Learning and representing semantics is one of the most important tasks that significantly contribute...
Opportunities for associationist learning of word meaning, where a word is heard or read contemporan...
Models that learn semantic representations from both linguistic and perceptual input outperform tex...
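A common baseline for combining linguistic and perceptual input, simpler than the autoencoder approach above, is weighted concatenation of the per-modality vectors after L2 normalization. The vectors and the mixing weight `alpha` below are hypothetical placeholders:

```python
import numpy as np

def fuse(text_vec, visual_vec, alpha=0.5):
    """Combine modalities by weighted concatenation of L2-normalized vectors."""
    t = text_vec / np.linalg.norm(text_vec)
    v = visual_vec / np.linalg.norm(visual_vec)
    return np.concatenate([alpha * t, (1 - alpha) * v])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical per-modality vectors for two concepts.
text = {"cat": np.array([0.9, 0.1, 0.3]), "dog": np.array([0.8, 0.2, 0.4])}
visual = {"cat": np.array([0.7, 0.6]), "dog": np.array([0.6, 0.7])}

sim = cosine(fuse(text["cat"], visual["cat"]),
             fuse(text["dog"], visual["dog"]))
print(sim)
```

Tuning `alpha` controls how much each modality contributes to the similarity estimate; models like the stacked autoencoder instead learn the fusion from data.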
Zarrieß S, Schlangen D. Deriving continuous grounded meaning representations from referentially struc...