Proceedings of the 16th Nordic Conference of Computational Linguistics NODALIDA-2007. Editors: Joakim Nivre, Heiki-Jaan Kaalep, Kadri Muischnek and Mare Koit. University of Tartu, Tartu, 2007. ISBN 978-9985-4-0513-0 (online) ISBN 978-9985-4-0514-7 (CD-ROM) pp. 372-379
Traditionally, parsers are evaluated against gold standard test data. This can cause problems if th...
Recent years have seen an increasing interest in developing standards for linguistic annotation, wit...
This paper presents a thorough examination of the validity of three evaluation measures on parser ou...
Recent studies focussed on the question whether less-configurational languages like German are harder ...
In the last decade, the Penn treebank has become the standard data set for evaluating parsers. The f...
Proceedings of the Sixth International Workshop on Treebanks and Linguistic Theories. Editors: Ko...
We argue that the current dominant paradigm in parser evaluation work, which combines use of the Pen...
When a statistical parser is trained on one treebank, one usually tests it on another portion of the...
Quantitative evaluation of parsers has traditionally centered around the PARSEVAL measures of crossi...
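The PARSEVAL measures mentioned above score a parse against a gold-standard tree by comparing their labeled constituent brackets. As a minimal sketch, assuming constituents are represented as (label, start, end) span tuples (an illustrative representation, not taken from any specific evaluation toolkit), labeled precision, recall, and F1 can be computed like this:

```python
from collections import Counter

def parseval_scores(gold, predicted):
    """Labeled bracketing precision/recall/F1 over two lists of
    (label, start, end) constituent spans, counted as multisets."""
    gold_c, pred_c = Counter(gold), Counter(predicted)
    matched = sum((gold_c & pred_c).values())  # multiset intersection
    precision = matched / sum(pred_c.values()) if predicted else 0.0
    recall = matched / sum(gold_c.values()) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: the predicted VP span differs from the gold one.
gold = [("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)]
pred = [("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)]
p, r, f = parseval_scores(gold, pred)  # 2 of 3 brackets match exactly
```

The full PARSEVAL suite also reports crossing-bracket counts, which this sketch omits; in practice tools such as evalb additionally apply parameterized exclusions (e.g. ignoring punctuation) before matching.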
This paper is a contribution to the ongoing discussion on treebank annotation schemes and their impa...