Cross-lingual word embeddings aim to bridge the gap between high-resource and low-resource languages by making it possible to learn multilingual word representations even without any direct bilingual signal. The lion's share of existing methods are projection-based approaches that map pre-trained embeddings into a shared latent space, mostly via an orthogonal transformation, which assumes that the language vector spaces are isomorphic. However, this assumption does not necessarily hold, especially for morphologically rich languages. In this paper, we propose a self-supervised method to refine the alignment of unsupervised bilingual word embeddings. The proposed model moves vectors of words and their corresponding translations closer ...
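As context for the projection-based baseline the abstract refers to, below is a minimal NumPy sketch of the standard orthogonal (Procrustes) mapping between two embedding spaces. It illustrates the common baseline only, not the refinement method proposed in the paper; the function name, toy data, and noise level are assumptions made for the example.

import numpy as np

def orthogonal_procrustes(X, Y):
    # Closed-form orthogonal map W minimizing ||X W - Y||_F, given source
    # vectors X (n, d) and target vectors Y (n, d) for n seed translation pairs.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
d, n = 50, 200
# Toy data in which the isomorphism assumption holds by construction:
# the "target" space is a random rotation of the "source" space plus noise.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
X = rng.normal(size=(n, d))
Y = X @ Q + 0.01 * rng.normal(size=(n, d))
W = orthogonal_procrustes(X, Y)
print("relative alignment error:", np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))

When the two spaces are not isomorphic, as the abstract notes for morphologically rich languages, no orthogonal W fits well even with a perfect seed dictionary, which is what motivates refining the alignment after the initial projection.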
Word embeddings - dense vector representations of a word’s distributional semantics - are an indispe...
Cross-lingual word embeddings are an increasingly important resource in cross-lingual methods for N...
Building bilingual lexica from non-parallel data is a long-standing natural language processing rese...
One of the notable developments in current natural language processing is the practical efficacy of ...
Cross-lingual word embeddings are becoming increasingly important in multilingual NLP. Recently, i...
Cross-lingual embeddings are vector space representations where word translations tend to be co-loca...
Word embeddings have become a standard resource in the toolset of any Natural Language Processing p...
Cross-language learning allows one to use training data from one ...
The ability to accurately align concepts between languages can provide significant benefits in many ...
Recent research has discovered that a shared bilingual word embedding space can be induced by projec...
After introducing the necessary background based on a review of the literature, this paper presents ...
We present InstaMap, an instance-based method for learning projection-based cross-lingual word embed...