Statistical approaches to natural language processing generally obtain their parameters by maximum likelihood estimation (MLE). MLE approaches, however, may fail to achieve good performance on difficult tasks, because discrimination and robustness are not taken into consideration during estimation. Motivated by that concern, this paper proposes a discrimination- and robustness-oriented learning algorithm for minimizing the error rate. In an evaluation of the robust learning procedure on a corpus of 1,000 sentences, 64.3% of the sentences are assigned their correct syntactic structures, while only a 53.1% accuracy rate is obtained with the MLE approach. In addition, parameters are usually estimated poo...
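The MLE baseline this abstract criticizes amounts to relative-frequency estimation: each outcome's probability is its count divided by the total. A minimal sketch, using a hypothetical toy corpus of grammar-rule choices (the rule names are purely illustrative):

```python
from collections import Counter

def mle_estimate(observations):
    """Maximum likelihood estimate for a categorical distribution:
    each outcome's probability is its relative frequency in the data."""
    counts = Counter(observations)
    total = sum(counts.values())
    return {outcome: c / total for outcome, c in counts.items()}

# Hypothetical toy corpus: grammar-rule choices observed in a treebank.
rules = ["NP->Det N", "NP->Det N", "NP->Pron", "NP->Det N", "NP->N"]
probs = mle_estimate(rules)
print(probs["NP->Det N"])  # 0.6
```

Such estimates maximize the likelihood of the training data but, as the abstract notes, do not directly optimize discrimination between competing analyses, which is why an error-rate-minimizing objective can outperform them.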
We propose a preliminary model of a practical parameter setting procedure that aims at bridging the ...
Ambiguity is a pervasive and important aspect of natural language. Ambiguities, which are disambigu...
The input data to grammar learning algorithms often consist of overt forms that do not contain full ...
Natural language is highly ambiguous, on every level. This article describes a fast broad-coverage s...
This study shows that using computational linguistic models is beneficial for descriptive linguistic...
Despite recent advances in statistical machine learning that significantly improve performance, the ...
It is often assumed that when natural language processing meets the real world, the ideal of aiming ...
This thesis demonstrates that several important kinds of natural language ambiguities can be resolve...
Syntactic ambiguity abounds in natural language, yet humans have no difficulty coping with it. In fac...
Lexical ambiguity resolution is a pervasive problem in natural language processing. An important exa...
We present a dataset for evaluating the grammatical sophistication of language models (LMs). We cons...
This paper deals with the interaction between two problems that arise in human language learning, st...