We develop a method for detecting errors in semantic predicate-argument annotation, based on the variation n-gram error detection method. After establishing an appropriate data representation, we detect inconsistencies by searching for identical text with varying annotation. By remaining data-driven, we are able to detect inconsistencies arising from errors at lower layers of annotation.
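The core of the variation n-gram idea described above — searching for identical stretches of text whose annotation varies across occurrences — can be sketched as follows. This is a minimal illustration only, assuming simple token-level labels (e.g. POS tags); the function name, corpus format, and the choice of the middle token as the variation nucleus are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

def variation_nuclei(tagged_corpus, n=3):
    """Find n-grams of identical text whose middle token (the
    'variation nucleus') is labeled inconsistently across occurrences."""
    contexts = defaultdict(set)  # n-gram word tuple -> labels seen on nucleus
    for sentence in tagged_corpus:  # each sentence: list of (token, label)
        for i in range(len(sentence) - n + 1):
            window = sentence[i:i + n]
            words = tuple(tok for tok, _ in window)
            nucleus_label = window[n // 2][1]  # label of the middle token
            contexts[words].add(nucleus_label)
    # n-grams carrying more than one distinct nucleus label are
    # candidates for annotation inconsistencies (possible errors)
    return {words: labels for words, labels in contexts.items()
            if len(labels) > 1}

# Toy corpus: "run" is tagged NOUN in one occurrence, VERB in the other,
# inside an otherwise identical trigram context.
corpus = [
    [("the", "DET"), ("run", "NOUN"), ("ended", "VERB")],
    [("the", "DET"), ("run", "VERB"), ("ended", "VERB")],
]
print(variation_nuclei(corpus, n=3))
# flags ("the", "run", "ended") with labels {"NOUN", "VERB"}
```

In practice the method also needs heuristics to separate genuine ambiguity from error (longer contexts make variation more likely to signal an error), which this sketch omits.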
We investigate how disagreement in natural language inference (NLI) annotation arises. We developed ...
We describe a novel method for detecting errors in task-based human-computer (HC) dialogues by autom...
This paper describes a statistical approach to detect annotation errors in dependency treebanks. As ...
Annotated data is an essential ingredient in natural language processing for training and evaluating...
This paper discusses an automatic, data-driven approach to treebank error detection. The approach ad...
We introduce a method for error detection in automatically annotated text, aimed at supporting the c...
Proceedings of the Sixth International Workshop on Treebanks and Linguistic Theories. Editors: Ko...
Recent work on error detection has shown that the quality of manually annotated corpora can be subst...
Automatic inconsistency detection in parsed corpora is significantly helpful for building more and l...
Thesis Abstract Akshay Aggarwal July 2020 This thesis attempts correction of some errors and inco...
In this thesis, we investigate methods for automatic detection, and to some extent correction, of gr...
This paper describes a methodology for supporting the task of annotating sentiment in natural langua...
While corpus-based research relies on human-annotated corpora, it is often said that a non-negli...
In this dissertation, I investigate valencies and syntactically relevant semantic categories in Nort...
We present an error mining tool that is designed to help human annotators to find errors and incons...