Since modern word embeddings are motivated by the distributional hypothesis and are therefore based on local co-occurrences of words, it is only to be expected that synonyms and antonyms can have very similar embeddings. Contrary to the widespread assumption that such embeddings therefore cannot tell the two relations apart, this paper shows that modern embeddings contain information that distinguishes synonyms from antonyms despite the small cosine similarities between the corresponding vectors. This information is encoded in the geometry of the embeddings and can be extracted with a manifold learning procedure, or {\em contrasting map}. Such a map is trained on a small labeled subset of the data and can produce new embeddings that explicitly highlight specific semantic attributes of a word. The new embeddings...
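The abstract describes the contrasting map only at a high level, so the Python sketch below illustrates the general idea rather than the paper's actual method: a single linear projection is trained on a small labeled set of word pairs with a margin-based contrastive loss, so that synonym pairs score high and antonym pairs fall below a margin in the mapped space. The architecture, loss, margin value, and all identifiers (ContrastingMap, contrastive_loss, the random stand-in data) are assumptions made for illustration.

# Minimal sketch of a "contrasting map": a learned projection that pulls
# synonym pairs together and pushes antonym pairs apart in the new space.
# The paper's exact architecture and loss are not given in the abstract;
# this assumes a single linear map with a margin-based contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastingMap(nn.Module):
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so dot products below are cosine similarities.
        return F.normalize(self.proj(x), dim=-1)

def contrastive_loss(za, zb, is_synonym, margin=0.5):
    # Cosine similarity in the mapped space: synonyms should score high,
    # antonyms should fall below the margin.
    sim = (za * zb).sum(dim=-1)
    pos = (1.0 - sim) * is_synonym                  # pull synonyms together
    neg = F.relu(sim - margin) * (1 - is_synonym)   # push antonyms apart
    return (pos + neg).mean()

# Toy training loop on a small labeled subset of word pairs.
dim = 300
model = ContrastingMap(dim, 64)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# ea, eb: pretrained embeddings of the two words in each pair;
# y = 1 for synonyms, 0 for antonyms (random stand-ins here).
ea, eb = torch.randn(512, dim), torch.randn(512, dim)
y = torch.randint(0, 2, (512,)).float()

for _ in range(100):
    opt.zero_grad()
    loss = contrastive_loss(model(ea), model(eb), y)
    loss.backward()
    opt.step()

In practice the stand-in tensors would be replaced by pretrained vectors (e.g. word2vec or GloVe) for pairs drawn from a labeled synonym/antonym lexicon; only the small projection is trained, while the original embeddings stay fixed.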