Machine Translation (MT) systems are more complex to evaluate than they first appear, since many different translations of the same source can be equally acceptable. The task becomes even more challenging in practice, because activities such as incremental system development and error analysis require that evaluation be carried out automatically. While a variety of automated metrics, including BLEU, have been proposed and shown useful for discriminating among MT systems at a large scale, they still correlate poorly with human judgments at the sentence level. This paper proposes a new class of metrics based on machine learning. The need for large sets of human judgments as training data may be eliminated by a new method that classifies ...
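The sentence-level evaluation problem described above can be made concrete with a small, self-contained sketch: a smoothed BLEU-style score for a single sentence pair, plus a Pearson correlation between metric scores and human judgments. This is an illustrative simplification (add-1 smoothing, single reference, whitespace tokenization), not the exact metric or method from the paper.

```python
import math
from collections import Counter

def sentence_bleu(reference, hypothesis, max_n=4):
    """Smoothed sentence-level BLEU-style score: geometric mean of
    add-1-smoothed clipped n-gram precisions times a brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        # Clipped overlap: each hypothesis n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = sum(hyp_ngrams.values())
        precisions.append((overlap + 1) / (total + 1))  # add-1 smoothing
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty discourages overly short hypotheses.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_avg)

def pearson(xs, ys):
    """Pearson correlation, e.g. between metric scores and human judgments."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Correlating `sentence_bleu` scores for a test set against per-sentence human adequacy or fluency ratings (via `pearson`) is the standard way a metric's sentence-level reliability is assessed.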
We investigate whether it is possible to automatically evaluate the output of automatic text simplif...
The problem of evaluating machine translation (MT) systems is more challenging than it may first app...
Recent studies suggest that machine learning can be applied to develop good automatic evaluation m...
Automatic evaluation metrics are fast and cost-effective measurements of the quality of a Machine Tr...
State-of-the-art MT systems use a so-called log-linear model, which combines several components to pre...
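The log-linear combination mentioned in this abstract scores a candidate translation as a weighted sum of component log-scores, score(e, f) = Σᵢ λᵢ hᵢ(e, f), and the decoder picks the highest-scoring candidate. A minimal sketch, with made-up component names and weights for illustration only:

```python
# Hypothetical component scores for candidate translations: each feature
# (e.g. translation model "tm", language model "lm", length penalty "len")
# contributes a log-score h_i; the weights lambda_i are tuned on held-out
# data. All names and numbers below are illustrative assumptions.
def loglinear_score(features, weights):
    """Log-linear model: score(e, f) = sum_i lambda_i * h_i(e, f)."""
    return sum(weights[name] * h for name, h in features.items())

def best_candidate(candidates, weights):
    """Return the candidate translation maximizing the log-linear score."""
    return max(candidates, key=lambda c: loglinear_score(c["features"], weights))

weights = {"tm": 1.0, "lm": 0.5, "len": -0.2}
candidates = [
    {"text": "translation a", "features": {"tm": -2.0, "lm": -1.0, "len": 5}},
    {"text": "translation b", "features": {"tm": -1.5, "lm": -2.0, "len": 6}},
]
```

Here `best_candidate(candidates, weights)` selects "translation a" (score -3.5 vs. -3.7); in a real system the weights would be tuned to maximize an automatic metric on a development set.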
Automatic Machine Translation (MT) evaluation is an active field of research, with a handful of new ...
Evaluation of machine translation (MT) is a difficult task, both for humans, and using automatic met...
This paper examines the motivation, design, and practical results of several types of human evaluati...
Automatic Machine Translation (MT) evaluation metrics have traditionally been evaluated by the corre...
Automatic metrics are fundamental for the development and evaluation of machine translation systems....
Many machine translation (MT) evaluation metrics have been shown to correlate better with human judg...
Any scientific endeavour must be evaluated in order to assess its correctness. In many applied scien...
Machine translation evaluation is a very important activity in machine translation development. Auto...