Recently, it has been found that monolingual English language models can be used as knowledge bases. Instead of structured knowledge base queries, masked sentences such as “Paris is the capital of [MASK]” are used as probes. We translate the established benchmarks TREx and GoogleRE into 53 languages. Working with mBERT, we investigate three questions. (i) Can mBERT be used as a multilingual knowledge base? Most prior work only considers English. Extending research to multiple languages is important for diversity and accessibility. (ii) Is mBERT’s performance as a knowledge base language-independent, or does it vary from language to language? (iii) A multilingual model is trained on more text, e.g., mBERT is trained on 104 Wikipedias. Can mBERT...
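A minimal sketch of the cloze-style probing described in this abstract, assuming the Hugging Face `transformers` library and the public `bert-base-multilingual-cased` checkpoint; the probe sentence and `top_k` value are illustrative choices, not taken from the paper.

```python
# Cloze-style knowledge probing of mBERT: fill a [MASK] slot and inspect the
# ranked candidates, as in "Paris is the capital of [MASK]".
from transformers import pipeline

# bert-base-multilingual-cased is the standard public mBERT checkpoint.
fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

# The fill-mask pipeline returns a list of candidate fillers with scores.
for prediction in fill_mask("Paris is the capital of [MASK].", top_k=5):
    print(f'{prediction["token_str"]:>12}  {prediction["score"]:.3f}')
```

The same probe can be run on translations of the template to compare how well the model recalls the fact in different languages.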
Large pre-trained masked language models have become state-of-the-art solutions for many NLP problem...
Multilingual language models exhibit better performance for some languages than for others (Singh et...
In this paper, we introduce DOCmT5, a multilingual sequence-to-sequence language model pretrained wi...
Multilingual language models such as mBERT have seen impressive cross-lingual transfer to a variety ...
Transfer learning based on pretraining language models on a large amount of ra...
Multilingual language models are widely used to extend NLP systems to low-resource languages. Howeve...
While pretrained language models (PLMs) primarily serve as general purpose text encoders that can be...
Pre-trained multilingual language models play an important role in cross-lingual natural language un...
We present Belebele, a multiple-choice machine reading comprehension (MRC) dataset spanning 122 lang...
Large pretrained masked language models have become state-of-the-art solutions for many NLP problems...
For many (minority) languages, the resources needed to train large models are not available. We inve...
Large Language Models (LLMs), trained predominantly on extensive English data, often exhibit limitat...
Knowledge-enhanced language representation learning has shown promising results across various knowl...
It has been shown that multilingual BERT (mBERT) yields high quality multilingual representations ...