We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different than conventiona...
Historically, models of human language assume that sentences have a symbolic structure and that this...
Recent advances in deep learning have provided fruitful applications for natural language processing...
As potential candidates for explaining human cognition, connectionist models of sentence processing ...
Representation learning is a fundamental problem in natural language processing. This paper studies ...
Different from other sequential data, sentences in natural language are structured by linguistic gra...
Syntactic parsing is a key component of natural language understanding and, traditionally, has a sym...
This dissertation explores the use of linguistic structure to inform the structure and parameterizat...
Recurrent neural networks (RNNs) are exceptionally good models of distributions over natural languag...
In the field of natural language processing (NLP), recent research has shown that deep neural networ...
Sentence embedding is an effective feature representation for most deep learning-based NLP tasks. On...
Natural language processing (NLP) is one of the most important technologies of the information age. ...
Neural sequence models have been applied with great success to a variety of tasks in natural languag...
Natural Language Generation (NLG) is the task of generating natural language (e.g., English sentenc...
Learning to construct text representations in end-to-end systems can be difficult, as natural langua...