This paper presents the results of the WMT12 shared tasks, which included a translation task, a task for machine translation evaluation metrics, and a task for run-time estimation of machine translation quality. We conducted a large-scale manual evaluation of 103 machine translation systems submitted by 34 teams. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 12 evaluation metrics. We introduced a new quality estimation task this year, and evaluated submissions from 11 teams.
This paper presents the results of the premier shared task organized alongside the Conference on Mac...
Machine Translation Quality Estimation predicts quality scores for translations produced by Machin...
Evaluation of machine translation (MT) output is a challenging task. In most cases, there is no sing...
This paper presents the results of the WMT11 shared tasks, which included a translation task, a syst...
This paper presents the results of the WMT09 shared tasks, which included a translation task, a syst...
This paper presents the results of the WMT14 shared tasks, which included a standard news translatio...
This paper presents the results of the WMT14 shared tasks, which included a standard news translatio...
Title: Measures of Machine Translation Quality Author: Matouš Macháček Department: Institute of Form...
Test data for the WMT18 QE task. Train data can be downloaded from http://hdl.handle.net/11372/LRT-2...
Training and development data for the WMT18 QE task. Test data will be published as a separate item....
This paper describes the Universitat d’Alacant submissions (labelled as UAlacant) for the machine t...
This paper presents the use of consensus among Machine Translation (MT) systems for the WMT14 Qualit...
This paper presents the results of the WMT15 shared tasks, which included a standard news translatio...
Most evaluation metrics for machine translation (MT) require reference translations for each sentenc...
This paper presents the results of the WMT17 Metrics Shared Task. We asked participants of this task...