This paper presents a scalable method for integrating compositional morphological representations into a vector-based probabilistic language model. Our approach is evaluated in the context of log-bilinear language models, rendered suitably efficient for implementation inside a machine translation decoder by factoring the vocabulary. We perform both intrinsic and extrinsic evaluations, presenting results on a range of languages which demonstrate that our model learns morphological representations that both perform well on word similarity tasks and lead to substantial reductions in perplexity. When used for translation into morphologically rich languages with large vocabularies, our models obtain improvements of up to 1.2 BLEU points relative...
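The abstract above mentions making the log-bilinear model efficient enough for a decoder "by factoring the vocabulary". A minimal sketch of that general idea, class-based factoring of the softmax, is below. This is an illustrative assumption, not the paper's exact model: each word is assigned to one class, and the word probability is computed as p(class | context) × p(word | class, context), so normalisation runs over the classes plus one class's members rather than the full vocabulary.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def factored_word_prob(hidden, class_emb, word_emb, word2class, class_words, w):
    """Class-factored probability of word w given a context vector.

    hidden      : context representation (d,)
    class_emb   : class output embeddings (num_classes, d)  -- assumed names
    word_emb    : word output embeddings (vocab_size, d)
    word2class  : word id -> class id
    class_words : class id -> ordered list of member word ids
    """
    c = word2class[w]
    # Step 1: distribution over classes given the context.
    p_class = softmax(class_emb @ hidden)
    # Step 2: distribution over words *within that class only*;
    # this is where the savings over a full-vocabulary softmax come from.
    members = class_words[c]
    p_word_in_class = softmax(word_emb[members] @ hidden)
    return p_class[c] * p_word_in_class[members.index(w)]
```

Because each class's inner distribution sums to one, summing the factored probability over every word in the vocabulary still yields exactly 1, so this remains a proper language-model distribution.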
Treating morphologically complex words (MCWs) as atomic units in translation would not yield a desi...
It is well known that good language models improve performance of speech recognition. One requiremen...
We present a morphology-aware nonparametric Bayesian model of language whose prior distribution u...
Neural machine translation (NMT) models are typically trained with fixed-size input and output vocab...
The requirement for neural machine translation (NMT) models to use fixed-size input and output vocab...
Neural architectures are prominent in the construction of language models (LMs). However, word-leve...
Compositional generalisation refers to the ability to understand and generate a potentially infinite...
Abstract We propose a language-independent approach for improving statistical machine translation fo...
We present a joint morphological-lexical language model (JMLLM) for use in statistical machine trans...
Translation into morphologically-rich languages challenges neural machine translation (NMT) models w...
This thesis addresses some of the challenges of translating morphologically rich languages (MRLs). W...
Translating into morphologically rich languages is difficult. Although the coverage of lemmas may...
We propose a novel pipeline for translation into morphologically rich languages which consists of tw...
In this paper, a novel algorithm for incorporating morphological knowledge into statistical machine...