This work describes an analysis of the nature and causes of MT errors observed by different evaluators under the guidance of different quality criteria: adequacy, comprehension, and an unspecified generic mixture of adequacy and fluency. We report results for three language pairs, two domains, and eleven MT systems. Our findings indicate that, although some of the identified phenomena depend on the domain and/or language, the following set of phenomena can be considered generally challenging for modern MT systems: rephrasing groups of words, translating ambiguous source words, translating noun phrases, and mistranslations. Furthermore, we show that the quality criterion also has an impact on error perception. Our...
Since the emergence of the first fully automatic machine translation (MT) systems over fifty years a...
In this paper we present a corpus-based method to evaluate the translation quality of machine transl...
This paper presents a quantitative fine-grained manual evaluation approach to comparing the performa...
This work presents a detailed analysis of translation errors perceived by readers as comprehensibili...
We propose facilitating the error annotation task of translation quality assessment by introducing a...
Existing automated MT evaluation methods often require expert human translations. These are produced...
The evaluation of errors made by Machine Translation (MT) systems still needs human effort despite ...
This paper aims to automatically identify which linguistic phenomena represent barriers to better MT...
Quality Estimation (QE) and error analysis of Machine Translation (MT) output remain active areas in...
The ArisToCAT project aims to assess the comprehensibility of ‘raw’ (unedited) MT output for readers...
In order to improve the symbiosis between machine translation (MT) system and post-editor, it is not...
Current Machine Translation (MT) systems achieve very good results on a growing variety of language ...