Massively multilingual pre-trained language models, such as mBERT and XLM-RoBERTa, have received significant attention in the recent NLP literature for their strong capability for cross-lingual zero-shot transfer of NLP tasks. This is especially promising because a large number of languages have no or very little labeled data for supervised learning. Moreover, substantially improved performance on low-resource languages without any significant degradation of accuracy for high-resource languages leads us to believe that these models will help attain a fairer distribution of language technologies, despite the prevailing unfair and extremely skewed distribution of resources across the world's languages. Nevertheless, these models, and th...
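A minimal sketch of the cross-lingual zero-shot transfer setup this abstract describes: fine-tune a multilingual encoder on labeled data in one language, then evaluate on another language with no labeled examples. It assumes the HuggingFace `transformers` and `datasets` libraries and the XNLI benchmark, none of which the excerpt itself names.

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Multilingual encoder shared across ~100 languages.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3)  # XNLI: entailment/neutral/contradiction

def encode(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, padding="max_length", max_length=128)

# Fine-tune on English labeled data only ...
train_en = load_dataset("xnli", "en", split="train").map(encode, batched=True)
# ... and evaluate on Swahili, for which the model sees no labeled examples.
test_sw = load_dataset("xnli", "sw", split="test").map(encode, batched=True)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, -1) == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xlmr-xnli", num_train_epochs=1,
                           per_device_train_batch_size=32),
    train_dataset=train_en,
    eval_dataset=test_sw,
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())  # zero-shot accuracy on the unseen language
```

The gap between the English score and the Swahili score under this setup is exactly the kind of per-language performance disparity the fairness critiques below are concerned with.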
Evaluating bias, fairness, and social impact in monolingual language models is...
How cross-linguistically applicable are NLP models, specifically language models? A fair comparison ...
Although recent Massively Multilingual Language Models (MMLMs) like mBERT and XLMR support around 10...
NLP technologies are uneven for the world's languages as the state-of-the-art models are only availa...
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal langu...
Recently, work in NLP has shifted to few-shot (in-context) learning, with large language models (LLM...
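A minimal sketch of the few-shot (in-context) learning setup this abstract refers to: the task is specified to an LLM through a handful of labeled examples placed in the prompt, with no gradient updates. The sentiment task, example texts, and labels here are invented purely for illustration.

```python
# Few labeled demonstrations, written directly into the prompt.
examples = [
    ("The film was a delight.", "positive"),
    ("I want my money back.", "negative"),
]
query = "The acting was wooden and the plot made no sense."

# Build the in-context prompt: demonstrations first, then the unlabeled query.
prompt = "\n".join(f"Review: {text}\nSentiment: {label}"
                   for text, label in examples)
prompt += f"\nReview: {query}\nSentiment:"
print(prompt)  # send this string to any instruction-following LLM
```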
Multilingual pre-trained language models perform remarkably well on cross-lingual transfer for downs...
Large pre-trained masked language models have become state-of-the-art solutions for many NLP problem...
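A minimal sketch of the masked-language-model objective behind these models: predict a token that has been replaced by a mask symbol. It assumes the HuggingFace `transformers` pipeline API, which the excerpt does not name; XLM-R uses `<mask>` as its mask token.

```python
from transformers import pipeline

# Fill-mask pipeline over a pre-trained multilingual masked LM.
fill = pipeline("fill-mask", model="xlm-roberta-base")
for pred in fill("The capital of France is <mask>."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")  # top candidates
```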