Since the advent of automatic evaluation, tasks within Natural Language Processing (NLP), including Machine Translation, have been able to make better use of both time and labor resources. Later, multilingual pre-trained models (MLMs) uplifted many languages' capacity to participate in NLP research. Contextualized representations generated by these MLMs both influence several downstream tasks and have inspired practitioners to better make sense of them. We propose the adoption of BERTScore, coupled with contrastive learning, for machine translation evaluation in lieu of BLEU, the industry-leading metric. While BERTScore computes a similarity score for each token in a candidate and reference sentence, it does away with exact...
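The token-level matching described above can be sketched as follows. This is a minimal illustration of BERTScore-style greedy matching, not the authors' implementation: it assumes contextual token embeddings (e.g. from an MLM) are already computed and passed in as arrays, and the function name `bertscore_f1` is hypothetical.

```python
import numpy as np

def bertscore_f1(cand_emb, ref_emb):
    """Greedy-matching F1 over token embeddings, in the style of BERTScore.

    cand_emb, ref_emb: (n_tokens, dim) arrays of contextual token embeddings
    for the candidate and reference sentences, respectively.
    """
    # L2-normalize so dot products are cosine similarities.
    c = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    sim = c @ r.T                       # pairwise cosine similarity matrix
    precision = sim.max(axis=1).mean()  # each candidate token -> best reference match
    recall = sim.max(axis=0).mean()     # each reference token -> best candidate match
    return 2 * precision * recall / (precision + recall)
```

Unlike BLEU's exact n-gram overlap, each token is scored by its best soft match in the other sentence, so paraphrases with no surface overlap can still score highly.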
Large pre-trained masked language models have become state-of-the-art solutions for many NLP problem...
Translations generated by current statistical systems often have a large variance, in terms of their...
In this paper we introduce MoBiL, a hybrid Monolingual, Bilingual and Language modelling feature set...
BERTScore (Zhang et al., 2020), a recently proposed automatic metric for machine translation quality...
We use referential translation machines (RTMs) for predicting translation performance. RTMs...
Assessing the quality of candidate translations involves diverse linguistic facets. However, most au...
Deep learning models like BERT, a stack of attention layers with an unsupervis...
Discriminative training, a.k.a. tuning, is an important part of Statistical Machine Translation. Thi...
Machine translation has advanced considerably in recent years, primarily due to the availability of ...
Statistical machine translation is an approach particularly dependent on huge amounts of parallel bil...
We present the first ever results showing that tuning a machine translation system against a seman...
The problem of evaluating machine translation (MT) systems is more challenging than it may first app...
Traditional machine translation evaluation metrics such as BLEU and WER have been widely used, but t...
We present a pairwise learning-to-rank approach to machine translation evaluation that learns to di...