It is often assumed that ‘grounded’ learning tasks are beyond the scope of grammatical inference techniques. In this paper, we show that the grounded task of learning a semantic parser from ambiguous training data, as discussed in Kim and Mooney (2010), can be reduced to a probabilistic context-free grammar learning task in a way that yields state-of-the-art results. We further show that additionally letting our model learn the language’s canonical word order improves its performance and leads to semantic parsing F-scores higher than any previously reported in the literature.
We present a polynomial update time algorithm for the inductive inference of a...
The task of unsupervised induction of probabilistic context-free grammars (PCFGs) has attracted a lo...
We explore the problem of automatic grammar correction and extend the work of [Park and Levy, 2011]....
Communicating with natural language interfaces is a long-standing, ultimate goal for artificial ...
Treebank parsing can be seen as the search for an optimally refined grammar consistent with a coarse...
This paper describes the Omphalos Context-Free Language Learning Competition held as part of the Int...
The problem of identifying a probabilistic context free grammar has two aspects: the first is determ...
We present a polynomial update time algorithm for the inductive inference of a large ...
One of the key challenges in grounded language acquisition is resolving the intentions of the expres...
Grammatical inference is a branch of computational learning theory that attacks the problem of learn...
Unsupervised learning algorithms have been derived for several statistical models of Englis...
This thesis considers the problem of assigning a sentence its syntactic structure, which may be disc...
Much is still unknown about how children learn language, but it is clear that they perform “grounded...
This article uses semi-supervised Expectation Maximization (EM) to learn lexico-syntactic dependenci...
We present a probabilistic generative model for learning semantic parsers from ambiguous supervision...