Recently, novel MT evaluation metrics have been presented which go beyond pure string matching, and which correlate better with human judgements than existing metrics. Other research in this area has presented machine learning methods which learn directly from human judgements. In this paper, we present a novel combination of dependency- and machine learning-based approaches to automatic MT evaluation, and demonstrate higher correlation with human judgement than existing state-of-the-art methods. In addition, we examine the extent to which our novel method can be generalised across different tasks and domains.
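To make the combination concrete, the sketch below illustrates one plausible instantiation, not the implementation described in the paper: it assumes dependency parses are available as sets of labelled (head, relation, dependent) triples, derives simple overlap features between hypothesis and reference, and fits an SVM regressor to human judgement scores. All function names, the feature set, and the toy data are hypothetical.

```python
# Illustrative sketch only: dependency-overlap features fed to a
# machine-learned regressor trained on human judgements.
from sklearn.svm import SVR

def dep_f_score(hyp_triples, ref_triples):
    """F1 over dependency triples shared by hypothesis and reference."""
    if not hyp_triples or not ref_triples:
        return 0.0
    overlap = len(hyp_triples & ref_triples)
    precision = overlap / len(hyp_triples)
    recall = overlap / len(ref_triples)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def features(hyp_triples, ref_triples):
    """One feature vector per hypothesis/reference pair.

    Here: labelled-triple F1 and unlabelled (head, dependent) F1;
    a real metric would use a much richer dependency feature set.
    """
    unlab = lambda triples: {(h, d) for h, _, d in triples}
    return [dep_f_score(hyp_triples, ref_triples),
            dep_f_score(unlab(hyp_triples), unlab(ref_triples))]

# Toy data: one hypothesis/reference pair as labelled dependency triples.
hyp = {("saw", "nsubj", "John"), ("saw", "dobj", "Mary")}
ref = {("saw", "nsubj", "John"), ("saw", "dobj", "Mary"),
       ("saw", "advmod", "yesterday")}

# Hypothetical human adequacy judgements on a 1-5 scale.
X = [features(hyp, ref), features(ref, ref)]
y = [3.5, 5.0]

# Learn to map dependency features to human scores, then predict.
model = SVR(kernel="rbf").fit(X, y)
print(model.predict([features(hyp, ref)]))
```

The design point the sketch captures is the division of labour: linguistically informed dependency features supply the signal that plain string matching misses, while the learned regressor calibrates those features directly against human judgements.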