State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success. Here we investigate whether supervision with hierarchical structure enhances learning of a range of grammatical dependencies, a question that has previously been addressed only for subject-verb agreement. Using controlled experimental methods from psycholinguistics, we compare the performance of word-based LSTM models versus two models that represent hierarchical structure and deploy it in left-to-right processing: Recurrent Neural Network Grammars (RNNGs) (Dyer et al., 2016) and a in...
We investigate the extent to which the behavior of neural network language models reflects increment...
We present a dataset for evaluating the grammatical sophistication of language models (LMs). We cons...
We explored the syntactic information encoded implicitly by neural machine translation (NMT) models ...
There is a growing interest in investigating what neural NLP models learn about language. A prominen...
One of the key features of natural languages is that they exhibit long-distance filler-gap dependenc...
Filler-gap dependencies are among the most challenging syntactic constructions for computational m...
Syntax — the study of the hierarchical structure of language — has long featured as a prominent rese...
This repository contains the raw results (by-word information-theoretic measures for the experimenta...
This paper revisits the question of what LSTMs know about the syntax of filler-gap dependencies in E...
In the field of natural language processing (NLP), recent research has shown that deep neural networ...
Artificial neural networks have become remarkably successful on many natural language processing tas...
Recurrent neural networks (RNNs) are exceptionally good models of distributions over natural languag...