Word embeddings trained on natural corpora (e.g., newspaper collections, Wikipedia, or the Web) excel at capturing thematic similarity ("topical relatedness") for word pairs such as 'coffee' and 'cup' or 'bus' and 'road'. However, they are less successful on pairs showing taxonomic similarity, like 'cup' and 'mug' (near synonyms) or 'bus' and 'train' (types of public transport). Moreover, purely taxonomy-based embeddings (e.g., those trained on random walks over WordNet's structure) outperform natural-corpus embeddings on taxonomic similarity but underperform them on thematic similarity. Previous work suggests that performance gains in both types of similarity can be achieved by enriching natural-corpus embeddings with taxonomic information fro...
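The contrast the abstract above draws between thematic and taxonomic similarity is usually measured via cosine similarity between word vectors. A minimal sketch, using hand-made toy vectors (the numbers and the tiny dimensionality are invented for illustration; real embeddings such as word2vec or GloVe have hundreds of dimensions learned from corpora):

```python
import numpy as np

# Toy 4-dimensional vectors, invented purely for illustration.
embeddings = {
    "coffee": np.array([0.9, 0.8, 0.1, 0.0]),
    "cup":    np.array([0.8, 0.9, 0.2, 0.1]),
    "mug":    np.array([0.1, 0.9, 0.8, 0.1]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# A thematically related pair vs. a taxonomically related pair.
thematic  = cosine_similarity(embeddings["coffee"], embeddings["cup"])
taxonomic = cosine_similarity(embeddings["cup"], embeddings["mug"])
print(f"coffee~cup: {thematic:.3f}, cup~mug: {taxonomic:.3f}")
```

With corpus-trained vectors the thematic pair tends to score higher, which is exactly the bias the enrichment approaches surveyed here try to correct.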
In our participation in the task we wanted to test three different kinds of relatedness algorithms: ...
This is a resource description paper that describes the creation and properties of a set of pseudo-c...
Computing pairwise word semantic similarity is widely used and serves as a bui...
Creating word embeddings that reflect semantic relationships encoded in lexical knowledge resources ...
This archive contains a collection of computational models called word embeddings. These are vectors...
Modelling taxonomic and thematic relatedness is important for building AI with comprehensive natural...
Giesen J, Kahlmeyer P, Nussbaum F, Zarrieß S. Leveraging the Wikipedia Graph for Evaluating Word Emb...
The digital era floods us with an excessive amount of text data. To make sense of such data automati...
Recent trends suggest that neural-network-inspired word embedding models outperform traditional coun...
We consider the following problem: given neural language models (embeddings) each of which is traine...
Text and Knowledge Bases are complementary sources of information. Given the success of distributed ...
Do continuous word embeddings encode any useful information for constituency parsing? We isolate thr...
Modelling semantic similarity plays a fundamental role in lexical semantic applications. A natural w...
Word embedding models have been an important contribution to natural language processing; following ...