In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging semantic similarity. Applying this method to publicly available pre-trained word vectors leads to new state-of-the-art performance on the SimLex-999 dataset. We also show how the method can be used to tailor the word vector space for the downstream task of dialogue state tracking, resulting in robust improvements across different dialogue domains.
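To make the idea concrete, the sketch below implements a simplified counter-fitting loop: antonym pairs are repelled until their cosine distance reaches a margin, and synonym pairs are attracted toward each other. It is a minimal illustration under stated assumptions, not the published procedure: it assumes hinge-style margins (distance of at least 1.0 for antonyms, 0.0 for synonyms), takes a heuristic step along the pair's difference vector rather than the exact gradient of a cosine hinge loss, and omits the vector-space-preservation term and tuned hyperparameters used in the full method; the names `counter_fit`, `delta`, and `gamma` are our own.

```python
import numpy as np

def cosine_distance(u, w):
    """Cosine distance between two vectors (1 - cosine similarity)."""
    return 1.0 - np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))

def counter_fit(vectors, synonyms, antonyms,
                epochs=20, lr=0.1, delta=1.0, gamma=0.0):
    """Inject antonymy and synonymy constraints into word vectors.

    vectors  : dict mapping word -> np.ndarray (modified in place)
    synonyms : iterable of (word, word) pairs to attract
    antonyms : iterable of (word, word) pairs to repel
    delta    : minimum cosine distance enforced between antonyms (assumed)
    gamma    : maximum cosine distance allowed between synonyms (assumed)
    """
    for _ in range(epochs):
        # Antonym repel: if a pair is closer than the margin, push apart.
        for u, w in antonyms:
            if u in vectors and w in vectors:
                if cosine_distance(vectors[u], vectors[w]) < delta:
                    diff = vectors[u] - vectors[w]
                    vectors[u] += lr * diff
                    vectors[w] -= lr * diff
        # Synonym attract: if a pair is farther than the margin, pull together.
        for u, w in synonyms:
            if u in vectors and w in vectors:
                if cosine_distance(vectors[u], vectors[w]) > gamma:
                    diff = vectors[u] - vectors[w]
                    vectors[u] -= lr * diff
                    vectors[w] += lr * diff
        # Re-normalise so cosine geometry stays well behaved across epochs.
        for word in vectors:
            vectors[word] /= np.linalg.norm(vectors[word])
    return vectors
```

Because every vector is re-normalised to unit length after each epoch, Euclidean and cosine distance are monotonically related on the unit sphere, so stepping along the difference vector moves each pair in the intended direction even though it is only an approximation of the true gradient.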