Word embedding techniques heavily rely on the abundance of training data for individual words. Given the Zipfian distribution of words in natural language texts, many words appear only rarely, or not at all, in the training data. In this paper we put forward a technique that exploits the knowledge encoded in lexical resources, such as WordNet, to induce embeddings for unseen words. Our approach adapts graph embedding and cross-lingual vector space transformation techniques in order to merge lexical knowledge encoded in ontologies with that derived from corpus statistics. We show that the approach can provide consistent performance improvements across multiple evaluation benchmarks: in-vitro, on multiple rare word simila...
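The abstract names the ingredients of the approach (a graph embedding of the lexical resource plus a cross-lingual-style vector space transformation) without showing how they fit together. Below is a minimal sketch, not the authors' code, assuming that `graph_vecs` holds embeddings of WordNet nodes (e.g. produced by a random-walk graph embedding) and `corpus_vecs` holds pre-trained corpus embeddings; a plain least-squares linear map stands in for whatever transformation the paper actually uses, and all names are illustrative.

```python
# Sketch: induce vectors for words missing from the corpus embeddings by
# mapping their WordNet graph embeddings into the corpus embedding space
# with a least-squares linear transformation (cross-lingual-mapping style).
import numpy as np

def induce_unseen_vectors(graph_vecs, corpus_vecs):
    # Words present in both spaces anchor the transformation.
    shared = sorted(set(graph_vecs) & set(corpus_vecs))
    X = np.stack([graph_vecs[w] for w in shared])   # source: graph space
    Y = np.stack([corpus_vecs[w] for w in shared])  # target: corpus space

    # Linear map W minimising ||XW - Y||_F over the shared vocabulary.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # Project words that have a graph vector but no corpus vector.
    return {w: v @ W for w, v in graph_vecs.items() if w not in corpus_vecs}
```

In this sketch the induced vectors live in the same space as the corpus embeddings, so they can simply be appended to the pre-trained vocabulary.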
Word embeddings — distributed word representations that can be learned from unlabelled data — have b...
Word vector specialisation (also known as retrofitting) is a portable, light-weight approach to fine...
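This snippet only names word vector specialisation; as a concrete point of reference, the following is a minimal sketch of the best-known retrofitting update (Faruqui et al., 2015), with uniform weights and illustrative data structures (`vectors`, a word-to-vector dict, and `lexicon`, a word-to-neighbours dict) that are simplifying assumptions rather than details taken from the paper.

```python
# Sketch of the uniform-weight retrofitting update: each vector is pulled
# towards its lexicon neighbours while staying close to its original value.
import numpy as np

def retrofit(vectors, lexicon, iterations=10):
    new_vecs = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for word, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new_vecs]
            if word not in new_vecs or not nbrs:
                continue
            # Average of the original vector and the current neighbour vectors.
            total = vectors[word].copy()
            for n in nbrs:
                total += new_vecs[n]
            new_vecs[word] = total / (len(nbrs) + 1)
    return new_vecs
```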
Continuous word representations that can capture the semantic information in the corpus are the buil...
We put forward an approach that exploits the knowledge encoded in lexical resources in order to indu...
Word embeddings are a key component of high-performing natural language processing (NLP) systems, bu...
Creating word embeddings that reflect semantic relationships encoded in lexical knowledge resources ...
Real-valued word embeddings have transformed natural language processing (NLP) applications, recogni...
Vector space word representations are typically learned using only co-occurrence statistics from tex...
Motivations like domain adaptation, transfer learning, and feature learning have fueled interest in ...
Recent methods for learning word embeddings, like GloVe or Word2Vec, succeeded in spatial representat...
Methods for representing the meaning of words in vector spaces purely using the information distribu...
Word embedding is a feature learning technique that aims to map words from a vocabulary into ve...
Pre-trained word embeddings are often used to initialize deep learning models for text classificatio...
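A minimal sketch of the initialisation step this snippet refers to, assuming PyTorch and illustrative inputs (`pretrained`, a word-to-vector dict, and `vocab`, a word-to-index dict); none of the names or settings are taken from the papers above.

```python
# Sketch: build the embedding layer of a text classifier from pre-trained
# word vectors; words missing from the pre-trained set keep random vectors.
import numpy as np
import torch
import torch.nn as nn

def build_embedding_layer(pretrained, vocab, dim, freeze=False):
    weights = np.random.normal(scale=0.1, size=(len(vocab), dim))
    for word, idx in vocab.items():
        if word in pretrained:
            weights[idx] = pretrained[word]  # known words get pre-trained vectors
    return nn.Embedding.from_pretrained(
        torch.tensor(weights, dtype=torch.float32), freeze=freeze
    )
```

Whether the layer is frozen or fine-tuned alongside the rest of the model is a per-task choice, hence the `freeze` flag in the sketch.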