State-of-the-art Transformer-based neural machine translation (NMT) systems still follow the standard encoder-decoder framework, in which the source sentence is represented by an encoder with a self-attention mechanism. Though a Transformer-based encoder may effectively capture general information in its resulting source sentence representation, the backbone information, which stands for the gist of the sentence, is not specifically focused on. In this paper, we propose an explicit sentence compression method to enhance the source sentence representation for NMT. In practice, an explicit sentence compression goal is used to learn the backbone information of a sentence. We propose three ways, including backbone source-side fusion, target-s...
The dominant neural machine translation (NMT) models that are based on the encoder-decoder architecture ...
Transformer-based neural machine translation (NMT) has achieved state-of-the-art performance in the ...
Machine translation has its roots in the domain of textual processing that focuses on the usage ...
Although end-to-end Neural Machine Translation (NMT) has achieved remarkable progress in the past tw...
Sharing source and target side vocabularies and word embeddings has been a popular practice in neura...
Most neural machine translation models are implemented as a conditional language model framework com...
The attention-based encoder-decoder is an effective architecture for neural machine translation (NMT),...
In interactive machine translation (MT), human translators correct errors in automatic translations ...
Pre-training and fine-tuning have become the de facto paradigm in many natural language processing (...
We propose to achieve explainable neural machine translation (NMT) by changing the output representa...
Though early successes of Statistical Machine Translation (SMT) systems are attributed in part to th...
Humans benefit from communication but suffer from language barriers. Machine translation (MT) aims t...
Embedding matrices are key components in neural natural language processing (NLP) models that are re...
Neural machine translation (NMT) conducts end-to-end translation with a source language encoder and ...
The utility of linguistic annotation in neural machine translation seemed to have been established in...