Given a set of instances of some relation, the relation induction task is to predict which other word pairs are likely to be related in the same way. While it is natural to use word embeddings for this task, standard approaches based on vector translations turn out to perform poorly. To address this issue, we propose two probabilistic relation induction models. The first model is based on translations, but uses Gaussians to explicitly model the variability of these translations and to encode soft constraints on the source and target words that may be chosen. In the second model, we use Bayesian linear regression to encode the assumption that there is a linear relationship between the vector representations of related words, which is consider...
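The first model described above can be sketched in a few lines: given seed pairs of a relation, fit a Gaussian over their embedding offsets and rank candidate pairs by likelihood. The toy 4-dimensional embeddings and word choices below are hypothetical, chosen only to make the offset pattern visible; a diagonal covariance is assumed for simplicity.

```python
import numpy as np

# Hypothetical toy embeddings (real models use 100-300 dimensions).
emb = {
    "paris":  np.array([0.9, 0.1, 0.3, 0.0]),
    "france": np.array([0.1, 0.8, 0.3, 0.1]),
    "rome":   np.array([0.8, 0.2, 0.4, 0.1]),
    "italy":  np.array([0.0, 0.9, 0.4, 0.2]),
    "tokyo":  np.array([0.9, 0.0, 0.2, 0.1]),
    "japan":  np.array([0.1, 0.7, 0.2, 0.3]),
}

# Seed instances of the capital-of relation.
seed_pairs = [("paris", "france"), ("rome", "italy")]

# Translation (offset) vectors for the seed pairs.
diffs = np.stack([emb[t] - emb[s] for s, t in seed_pairs])

# Fit a diagonal Gaussian over the offsets to model their variability.
mu = diffs.mean(axis=0)
var = diffs.var(axis=0) + 1e-6  # smoothing to avoid zero variance

def log_likelihood(source, target):
    """Log-density of the pair's offset under the fitted Gaussian."""
    d = emb[target] - emb[source]
    return -0.5 * np.sum((d - mu) ** 2 / var + np.log(2 * np.pi * var))

# Candidate pairs are ranked by likelihood; the correctly ordered pair
# should score higher than its reversal.
print(log_likelihood("tokyo", "japan") > log_likelihood("japan", "tokyo"))
```

Modeling the offsets with a full Gaussian, rather than a single averaged translation vector, is what lets the model tolerate the variability of translations that makes the plain vector-offset method perform poorly.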
The maintenance of wordnets and lexical knowledge bases typically relies on time-consuming manual ef...
In this research, we manually create high-quality datasets in the digital humanities domain for the ...
The semantic representation of words is a fundamental task in natural language processing and text m...
Word embedding models such as GloVe rely on co-occurrence statistics to learn vector representations...
Semantic relations are core to how humans understand and express concepts in the real world using la...
One of the most remarkable properties of word embeddings is the fact that they capture certain types...
We present SeVeN (Semantic Vector Networks), a hybrid resource that encodes relationships between wo...
Various NLP problems -- such as the prediction of sentence similarity, entailm...
Methods for representing the meaning of words in vector spaces purely using the information distribu...
Computational models of verbal analogy and relational similarity judgments can employ different type...
We present a novel learning method for word embeddings designed for relation classification. Our wor...
A semantic relation between two given words a and b can be represented using two complementary sourc...
Identifying the relations that exist between words (or entities) is important for various natural la...
The development of a model to quantify semantic similarity and relatedness between words has be...