This thesis investigates the role of linguistically motivated generative models of syntax and semantic structure in natural language processing (NLP). Syntactic well-formedness is crucial in language generation, but most statistical models do not account for the hierarchical structure of sentences. Many applications requiring natural language understanding rely on structured semantic representations to enable querying, inference and reasoning. Yet most semantic parsers produce domain-specific or inadequately expressive representations. We propose a series of generative transition-based models for dependency syntax which can be applied as both parsers and language models while being amenable to supervised or unsupervised learning. Two model...
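The transition-based dependency parsing mentioned in this abstract can be illustrated with a minimal sketch. The code below implements the standard arc-standard transition system (SHIFT, LEFT-ARC, RIGHT-ARC) driven by a gold-tree oracle; it is illustrative only, as real parsers score transitions with a learned model rather than consulting gold heads, and the function name is hypothetical.

```python
def arc_standard_oracle_parse(words, gold_heads):
    """Reconstruct a projective gold tree via arc-standard transitions.

    gold_heads[i] is the head index of word i (-1 for the root).
    Returns arcs as sorted (dependent, head) pairs.
    """
    stack, buf, arcs = [], list(range(len(words))), set()

    def attached(dep):
        return dep in {d for d, _ in arcs}

    def all_deps_attached(head):
        # A word may be reduced only after all its dependents are attached.
        return all(attached(d) for d, g in enumerate(gold_heads) if g == head)

    while buf or len(stack) > 1:
        if len(stack) >= 2:
            s1, s0 = stack[-2], stack[-1]
            if gold_heads[s1] == s0:                  # LEFT-ARC: s1 <- s0
                arcs.add((s1, s0))
                stack.pop(-2)
                continue
            if gold_heads[s0] == s1 and all_deps_attached(s0):
                arcs.add((s0, s1))                    # RIGHT-ARC: s1 -> s0
                stack.pop()
                continue
        stack.append(buf.pop(0))                      # SHIFT
    return sorted(arcs)

# e.g. "She ate fish": "She" and "fish" both depend on "ate"
print(arc_standard_oracle_parse(["She", "ate", "fish"], [1, -1, 1]))
# -> [(0, 1), (2, 1)]
```

The same transition sequence that the oracle produces here is what a generative or discriminative transition-based model would be trained to predict step by step.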
This thesis presents and investigates a dependency-based recursive neural network model applied to t...
Graduation date: 2017. Machine learning models for natural language processing have traditionally reli...
Thesis (Ph.D.)--University of Washington, 2022. Natural language processing (NLP) is having a paradigm...
Recurrent neural networks (RNNs) are exceptionally good models of distributions over natural languag...
In this paper we develop novel algorithmic ideas for building a natural language parser grounded upo...
Parsing sentences into syntax trees can benefit downstream applications in NLP. Transition-based par...
Syntax — the study of the hierarchical structure of language — has long featured as a prominent rese...
We describe a deterministic shift-reduce parsing model that combines the advantages of connectionism...
Paper presented at the 2016 Conference of the North American Chapter of the Association for Com...
We propose a neural graphical model for parsing natural language sentences into their logical repres...
We present a neural-symbolic learning model of sentence production which displays strong semantic sy...