Recurrent neural networks (RNNs) have for years represented the state of the art in neural machine translation. Recently, new architectures have been proposed that can leverage parallel computation on GPUs better than classical RNNs. Faster training and inference, combined with different sequence-to-sequence modeling, also lead to performance improvements. While the new models depart completely from the original recurrent architecture, we decided to investigate how to make RNNs more efficient. In this work, we propose a new recurrent NMT architecture, called Simple Recurrent NMT, built on a class of fast and weakly-recurrent units that use layer normalization and multiple attentions. Our experiments on the WMT14 English-to-German and WMT16 ...
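Since the abstract only names the ingredients of the architecture (weakly-recurrent units, layer normalization, multiple attentions), a minimal sketch may help. The following PyTorch code assumes an SRU-style cell, in which all matrix multiplications depend only on the input and can therefore run in parallel over time, leaving just a cheap element-wise recurrence. The class WeaklyRecurrentUnit and every design choice in it are illustrative assumptions, not the paper's exact unit.

    import torch
    import torch.nn as nn

    class WeaklyRecurrentUnit(nn.Module):
        """Hypothetical SRU-style weakly-recurrent cell with layer norm."""

        def __init__(self, d_model):
            super().__init__()
            # One fused input projection for candidate, forget gate, reset gate.
            self.proj = nn.Linear(d_model, 3 * d_model)
            self.norm = nn.LayerNorm(d_model)

        def forward(self, x):
            # x: (seq_len, batch, d_model); projections run over the whole
            # sequence at once, so they parallelize across time steps.
            z, f, r = self.proj(x).chunk(3, dim=-1)
            f, r = torch.sigmoid(f), torch.sigmoid(r)
            c = torch.zeros_like(x[0])
            outputs = []
            for t in range(x.size(0)):
                # Only this element-wise recurrence is sequential.
                c = f[t] * c + (1.0 - f[t]) * z[t]
                # Highway-style connection to the raw input.
                h = r[t] * torch.tanh(c) + (1.0 - r[t]) * x[t]
                outputs.append(h)
            return self.norm(torch.stack(outputs))

Because the heavy projections run once over the whole sequence, only the element-wise loop is sequential; that is the property that makes units of this kind faster to train than LSTMs.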
The present work takes into account the compactness and efficiency of Recurrent Neural Networks (RNN...
The Transformer translation model (Vaswani et al., 2017), which relies on self-attention mechanisms, ...
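As background for the self-attention mechanism named above: the core operation of the Transformer is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V (Vaswani et al., 2017). A minimal sketch, with function name and tensor shapes chosen here for illustration:

    import math
    import torch

    def scaled_dot_product_attention(q, k, v, mask=None):
        """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
        # q, k, v: (batch, heads, seq_len, d_k)
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        if mask is not None:
            # Block attention to masked positions (padding or future tokens).
            scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        return weights @ v

Since every position attends to every other position in one batched matrix product, the whole operation parallelizes over the sequence, unlike a step-by-step recurrence.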
The attention-based encoder-decoder is an effective architecture for neural machine translation (NMT), ...
With economic globalization and the rapid development of the Internet, the connections between diffe...
It has been shown that increasing model depth improves the quality of neural machine translation. Ho...
We introduce recurrent neural network-based Minimum Translation Unit (MTU) models which make predict...
Humans benefit from communication but suffer from language barriers. Machine translation (MT) aims t...
Recurrent Neural Networks (RNNs) have been very successful in many state-of-the-art solutions for na...
Past years have witnessed rapid developments in Neural Machine Translation (NMT). Most recently, wit...
Recurrent neural networks (RNN) have been successfully applied to various Natural Language...
We participated in the WMT 2016 shared news translation task on the English ↔ Chinese language pair. Ou...
In this era of globalization, one is quite likely to come across people or communities who do not share...
Increasing the capacity of recurrent neural networks (RNN) usually involves augmenting the size...
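To see why "augmenting the size" is costly: in a vanilla RNN the recurrent weight matrix alone is d_h x d_h, so the parameter count (and the per-step compute) grows roughly quadratically with the hidden size. A back-of-the-envelope check, with illustrative dimensions:

    def rnn_param_count(d_in, d_h):
        # Vanilla RNN: W_x (d_h x d_in) + W_h (d_h x d_h) + bias (d_h).
        return d_h * d_in + d_h * d_h + d_h

    # Doubling the hidden size roughly quadruples the recurrent parameters.
    for d_h in (512, 1024, 2048):
        print(d_h, rnn_param_count(512, d_h))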