As a new neural machine translation approach, Non-Autoregressive machine Translation (NAT) has attracted attention recently due to its high efficiency in inference. However, the high efficiency has come at the cost of not capturing the sequential dependency on the target side of translation, which causes NAT to suffer from two kinds of translation errors: 1) repeated translations (due to indistinguishable adjacent decoder hidden states), and 2) incomplete translations (due to incomplete transfer of source-side information via the decoder hidden states). In this paper, we propose to address these two problems by improving the quality of decoder hidden representations via two auxiliary regularization terms in the training process of an NAT mod...
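The abstract does not spell out the exact form of the two auxiliary regularization terms, but a minimal sketch of what such terms might look like is given below, assuming a PyTorch-style NAT decoder that exposes its hidden states: one term penalizes high similarity between adjacent decoder hidden states (against repeated translations), and the other asks the hidden states to reconstruct the source tokens (against incomplete translations). The names SourceReconstructor, similarity_regularization, reconstruction_regularization, nat_training_loss and all coefficients are hypothetical, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SourceReconstructor(nn.Module):
    # Hypothetical helper: predicts the source tokens back from the decoder
    # hidden states through a single attention layer, so those states must
    # retain source-side information for this loss to decrease.
    def __init__(self, src_vocab_size, hidden_size, max_src_len=256):
        super().__init__()
        # hidden_size must be divisible by num_heads
        self.pos_query = nn.Embedding(max_src_len, hidden_size)  # one query per source position
        self.attn = nn.MultiheadAttention(hidden_size, num_heads=4, batch_first=True)
        self.out = nn.Linear(hidden_size, src_vocab_size)

    def forward(self, dec_hidden, src_len):
        bsz = dec_hidden.size(0)
        pos = torch.arange(src_len, device=dec_hidden.device)
        queries = self.pos_query(pos).unsqueeze(0).expand(bsz, -1, -1)  # [B, S, H]
        ctx, _ = self.attn(queries, dec_hidden, dec_hidden)             # attend over decoder states
        return self.out(ctx)                                            # [B, S, V_src]

def similarity_regularization(dec_hidden):
    # Penalize high cosine similarity between adjacent decoder hidden states,
    # pushing neighbouring positions apart (targets repeated translations).
    cos = F.cosine_similarity(dec_hidden[:, :-1], dec_hidden[:, 1:], dim=-1)
    return ((1.0 + cos) / 2.0).mean()  # maps similarity in [-1, 1] to a loss in [0, 1]

def reconstruction_regularization(dec_hidden, src_tokens, reconstructor):
    # Cross-entropy of reconstructing the source sentence from the decoder
    # hidden states (targets incomplete translations).
    logits = reconstructor(dec_hidden, src_tokens.size(1))  # [B, S, V_src]
    return F.cross_entropy(logits.transpose(1, 2), src_tokens)

def nat_training_loss(translation_loss, dec_hidden, src_tokens, reconstructor,
                      alpha=0.5, beta=0.5):
    # Total loss: the usual NAT translation loss plus the two auxiliary terms,
    # weighted by hypothetical coefficients alpha and beta.
    return (translation_loss
            + alpha * similarity_regularization(dec_hidden)
            + beta * reconstruction_regularization(dec_hidden, src_tokens, reconstructor))

Here translation_loss would be the standard NAT cross-entropy over target tokens, and alpha and beta would be tuned on a validation set; none of these values are claimed by the paper itself.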
Neural Machine Translation (NMT) is a new approach to machine translation that has made great progre...
Non-autoregressive neural machine translation (NAT) models are proposed to accelerate the inference ...
How do we perform efficient inference while retaining high translation quality? Existing neural mach...
Non-autoregressive translation (NAT) models, which remove the dependence on previous target tokens f...
Non-autoregressive neural machine translation (NAT) models suffer from the multi-modality problem th...
Non-autoregressive neural machine translation (NAT) generates each target word in parallel and has a...
Non-Autoregressive Neural Machine Translation (NAT) achieves significant decoding speedup through ge...
Although Neural Machine Translation (NMT) has achieved remarkable progress in the past several years...
Without a real bilingual corpus available, unsupervised Neural Machine Translation (NMT) typically req...
Unsupervised NMT (UNMT) tackles translation using only monolingual corpora in the two languages involved. Since there is no explici...
Non-autoregressive (NAR) generation, which was first proposed in neural machine translation (NMT) to ...
Non-autoregressive translation (NAT) models remove the dependence on previous target tokens and gene...
Neural machine translation (NMT) has become the de facto standard in the machine translation communi...
Non-autoregressive machine translation (NAT) has recently made great progress....
Although end-to-end Neural Machine Translation (NMT) has achieved remarkable progress in the past tw...