This paper describes a simple evaluation metric for MT which attempts to overcome the well-known deficits of the standard BLEU metric from a slightly different angle. It employs Levenshtein's edit distance for establishing alignment between the MT output and the reference translation in order to reflect the morphological properties of highly inflected languages. It also incorporates a very simple measure expressing the differences in word order. The paper also includes evaluation on the data from the previous SMT workshop for several language pairs.
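The abstract does not spell out the alignment procedure, but the Levenshtein distance it relies on is standard dynamic programming. As a minimal sketch (the function name, the token-level granularity, and the uniform unit costs here are assumptions, not the paper's exact formulation):

```python
def levenshtein(hyp, ref):
    """Edit distance between two sequences: minimum number of
    insertions, deletions, and substitutions (cost 1 each) turning
    hyp into ref. Works on strings (characters) or token lists."""
    m, n = len(hyp), len(ref)
    # prev[j] = distance between hyp[:i-1] and ref[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete hyp[i-1]
                          curr[j - 1] + 1,     # insert ref[j-1]
                          prev[j - 1] + cost)  # substitute / match
        prev = curr
    return prev[n]

print(levenshtein("kitten", "sitting"))  # 3
```

Applied at the character level, such a distance partially credits morphological variants of the same lemma (e.g. differing inflectional endings), which is the stated motivation for highly inflected languages.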
Linguistic metrics based on syntactic and semantic information have proven very effective for Automa...
Automatic evaluation of machine translation (MT) is based on the idea that the quality of the MT out...
State-of-the-art MT systems use so called log-linear model, which combines several components to pre...
This paper describes the latest version of the ATEC metric for automatic MT evaluation, with paramet...
A number of approaches to Automatic MT Evaluation based on deep linguistic knowledge have been sugge...
Meteor is an automatic metric for Machine Translation evaluation which has been demonstrated to have...
Automatic evaluation metrics for Machine Translation (MT) systems, such as BLEU or NIST, are now w...
Automatic Machine Translation (MT) evaluation metrics have traditionally been evaluated by the corre...
Meteor is an automatic metric for Ma-chine Translation evaluation which has been demonstrated to hav...
Machine translation industry is working well but they have been facing problem in postediting. MT-ou...
Machine translation translates a text from one language to another, while text simplification conver...
Evaluation of machine translation (MT) output is a challenging task. In most cases, there is no sing...