As the performance of machine translation has improved, the need for a human-like automatic evaluation metric has been increasing. The use of multiple reference translations against a system translation (a hypothesis) has been adopted as a strategy to improve the performance of such evaluation metrics. However, preparing multiple references is highly expensive and impractical. In this study, we propose an automatic evaluation method for machine translation that uses source sentences as additional pseudo-references. The proposed method evaluates a translation hypothesis via regression to assign a real-valued score. The model takes the paired source, reference, and hypothesis sentences together as input. A pre-trained large-scale cross-lingual ...
Evaluation of machine translation output is an important task. Various human evaluation techniques a...
Evaluation of machine translation is one of the most important issues in this field. We have already...
We describe a large-scale investigation of the correlation between human judgments of machine translation ...
Most evaluation metrics for machine translation (MT) require reference translations for each sentence ...
Reliably evaluating Machine Translation (MT) through automated metrics is a long-standing problem. O...
Automatic evaluation metrics are fast and cost-effective measurements of the quality of a Machine Translation ...
Any scientific endeavour must be evaluated in order to assess its correctness. In many applied sciences ...
In the past few decades machine translation research has made major progress. A researcher now has a...
We investigate the problem of predicting the quality of sentences produced by machine translation systems ...
Assessing the quality of candidate translations involves diverse linguistic facets. However, most au...
Evaluation measures for machine translation depend on several common methods, such as preprocessing ...