Natural language understanding (NLU) aims to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework for training statistical models without expensive, fully annotated data. In particular, the input to our framework is a set of sentences labeled only with abstract semantic annotations. These annotations encode the underlying semantic structural relations without explicit word/semantic-tag alignment. The proposed framework automatically induces derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models: conditional random fields (CRFs) and hidden Markov support vector machines (HM-SVMs).
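To make the setting concrete, the following is a minimal, illustrative sketch (not the paper's implementation): with abstract, unaligned annotations, each training sentence constrains, but does not determine, its word-level labeling. The snippet enumerates the label sequences consistent with one annotation; a model such as a CRF can then be trained by marginalizing over exactly this set instead of a single gold sequence. The tag names and the OUTSIDE filler label are our own toy choices.

```python
from itertools import product

OUTSIDE = "O"  # filler label for words not covered by the annotation

def consistent_labelings(words, annotation):
    """Yield every word-level label sequence that realizes the abstract
    annotation: each annotated tag occurs at least once, and no tag outside
    the annotation (other than the filler) is used."""
    tags = sorted(annotation) + [OUTSIDE]
    for seq in product(tags, repeat=len(words)):
        if annotation.issubset(seq):
            yield seq

sentence = "flights from boston to denver".split()
annotation = {"FROMLOC.CITY", "TOLOC.CITY"}

labelings = list(consistent_labelings(sentence, annotation))
print(len(labelings), "consistent labelings, e.g.:")
print(list(zip(sentence, labelings[0])))
```

Exhaustive enumeration is exponential in sentence length and is shown here only to visualize the hypothesis space; in practice such training would sum over the constrained labeling lattice with dynamic programming rather than listing it.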