Cross-lingual pre-training has achieved great successes using monolingual and bilingual plain text corpora. However, most pre-trained models neglect multilingual knowledge, which is language agnostic but comprises abundant cross-lingual structure alignment. In this paper, we propose XLM-K, a cross-lingual language model incorporating multilingual knowledge in pre-training. XLM-K augments existing multilingual pre-training with two knowledge tasks, namely Masked Entity Prediction Task and Object Entailment Task. We evaluate XLM-K on MLQA, NER and XNLI. Experimental results clearly demonstrate significant improvements over existing multilingual language models. The results on MLQA and NER exhibit the superiority of XLM-K in knowledge related...
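To make the first of the two knowledge tasks above more concrete, here is a minimal PyTorch sketch of what a masked entity prediction objective could look like: the encoder state at each masked entity mention is scored against a knowledge-base entity vocabulary and trained with cross-entropy. The class name, tensor shapes, and the single linear projection are illustrative assumptions, not the authors' exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedEntityPredictionHead(nn.Module):
    # Hypothetical head: scores the encoder state at each masked entity
    # mention against a knowledge-base entity vocabulary.
    def __init__(self, hidden_size: int, num_entities: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_entities)

    def forward(self, hidden_states, mention_positions, entity_labels):
        # hidden_states:     (batch, seq_len, hidden_size) from any transformer encoder
        # mention_positions: (batch, mentions) token indices of the masked entity mentions
        # entity_labels:     (batch, mentions) gold entity ids from the knowledge base
        batch_idx = torch.arange(hidden_states.size(0)).unsqueeze(-1)
        mention_states = hidden_states[batch_idx, mention_positions]  # (batch, mentions, hidden)
        logits = self.proj(mention_states)                            # (batch, mentions, entities)
        return F.cross_entropy(logits.view(-1, logits.size(-1)), entity_labels.view(-1))

# Toy usage with random tensors standing in for real encoder outputs.
head = MaskedEntityPredictionHead(hidden_size=768, num_entities=50_000)
loss = head(torch.randn(2, 128, 768),
            torch.randint(0, 128, (2, 4)),
            torch.randint(0, 50_000, (2, 4)))
print(loss)

In practice such a head would share the encoder with the standard masked language modeling objective; the two losses would then be summed during pre-training.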
Masked language models have quickly become the de facto standard when processing text. Recently, sev...
Traditional approaches to supervised learning require a generous amount of labeled data for good gen...
Language model pre-training has achieved success in many natural language processing tasks. Existing...
The main goal behind state-of-the-art pretrained multilingual models such as multilingual BERT and X...
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great...
Although multilingual pretrained models (mPLMs) enabled support of various natural language processi...
The lack of publicly available evaluation data for low-resource languages limits progress in Spoken ...
Cross-lingual Machine Reading Comprehension (xMRC) is a challenging task due to the lack of training...
Recent research has shown promise in multilingual modeling, demonstrating how a single model is capa...
In this thesis, we present a new approach to Automatic Post-Editing (APE) that uses the cross-...
Large pre-trained masked language models have become state-of-the-art solutions for many NLP problem...
Pre-trained multilingual language models play an important role in cross-lingual natural language un...
Large language models appear to learn facts from the large text corpora they are trained on. Such fa...
Pre-training and fine-tuning have achieved great success in the natural language processing field. The stan...