We present an approach to feature weight optimization for document-level decoding. This is an essential task for enabling future development of discourse-level statistical machine translation, as it allows easy integration of discourse features in the decoding process. We extend the framework of sentence-level feature weight optimization to the document level. We show experimentally that we can get competitive and relatively stable results when using a standard set of features, and that this framework also allows us to optimize document-level features, which can be used to model discourse phenomena.
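The scoring model behind this kind of document-level decoder can be sketched as a standard log-linear combination of feature functions evaluated over a whole document, searched with hill climbing in the style of Docent. The feature names and the neighbourhood function below are illustrative assumptions for the sketch, not the actual implementation described in the abstract:

```python
# Sketch of document-level decoding with a weighted linear feature model.
# A "document" is a list of translated sentences; each feature maps a
# document to a real-valued score. Both features are hypothetical examples.

def length_penalty(doc):
    # Penalize longer outputs (negative total word count).
    return -sum(len(s.split()) for s in doc)

def repetition_score(doc):
    # Toy discourse-level feature: count word reuse across the document,
    # rewarding lexically consistent translations.
    words = [w for s in doc for w in s.split()]
    return len(words) - len(set(words))

FEATURES = [length_penalty, repetition_score]

def model_score(doc, weights):
    # Log-linear model: score(d) = sum_i w_i * h_i(d).
    return sum(w * h(doc) for w, h in zip(weights, FEATURES))

def hill_climb(doc, weights, neighbours, steps=100):
    # Docent-style local search: start from an initial document state and
    # accept any neighbouring state that improves the model score.
    current, best = doc, model_score(doc, weights)
    for _ in range(steps):
        candidate = neighbours(current)
        score = model_score(candidate, weights)
        if score > best:
            current, best = candidate, score
    return current, best
```

Feature weight optimization then amounts to an outer loop that adjusts `weights` so that the decoder's preferred documents score well under an evaluation metric, exactly as in sentence-level tuning but with document-scoped features.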
Several recent papers claim to have achieved human parity at sentence-level machine translation (MT)...
This article shows how the automatic disambiguation of discourse connectives can improve Statistica...
Document-level machine translation focuses on the translation of entire documents from a source to a...
This thesis addresses the technical and linguistic aspects of discourse-level processing in phrase-b...
In this paper we investigate the technique of extending the Moses Statistical Machine Translation (S...
Independence between sentences is an assumption deeply entrenched in the models and algorithms used ...
In this thesis we investigate several methods to improve the quality of statistical machine tran...
We describe Docent, an open-source decoder for statistical machine translation that breaks with the ...
We study the impact of source length and verbosity of the tuning dataset on the performance of para...
This paper investigates varying the decoder weight of the language model (LM) when translating diffe...
Most of the current SMT systems work at sentence level. They translate a text assuming that sentence...
There have been many recent investigations into methods to tune SMT systems using large numbers of s...
Weights of the various components in a standard Statistical Machine Translation model are usually es...
During decoding, the Statistical Machine Translation (SMT) decoder travels ove...
The neural revolution in machine translation has made it easier to model larger contexts beyond the...