We present an error mining tool that is designed to help human annotators find errors and inconsistencies in their annotation. The output of the underlying algorithm is accessible via a graphical user interface, which provides two aggregate views: a list of potential errors in context and a distribution over labels. The user can always directly access the actual sentence containing the potential error, thus enabling annotators to quickly judge whether the found candidate is indeed incorrectly labeled.
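To make the "distribution over labels" view concrete, here is a minimal sketch of how such an aggregate could be computed. It assumes a corpus represented as (token, label, sentence_id) triples; the function names, threshold values, and flagging heuristic (majority-label agreement) are illustrative assumptions, not the tool's actual algorithm.

```python
from collections import Counter, defaultdict

def label_distributions(corpus):
    """Group annotations by token and count how often each label occurs.

    `corpus` is an iterable of (token, label, sentence_id) triples.
    Returns per-token label counts plus the sentence ids in which each
    (token, label) pair was seen, so a candidate error can be traced
    back to its actual context.
    """
    dists = defaultdict(Counter)
    contexts = defaultdict(list)
    for token, label, sent_id in corpus:
        dists[token][label] += 1
        contexts[(token, label)].append(sent_id)
    return dists, contexts

def candidate_errors(dists, contexts, min_agreement=0.9, min_count=5):
    """Flag tokens whose label distribution is suspiciously mixed.

    A token is reported when its majority label accounts for less than
    `min_agreement` of its occurrences; the minority labellings are
    yielded as potential errors together with their sentence ids.
    """
    for token, counts in dists.items():
        total = sum(counts.values())
        if total < min_count:
            continue
        majority_label, majority_count = counts.most_common(1)[0]
        if majority_count / total < min_agreement:
            for label, count in counts.items():
                if label != majority_label:
                    yield token, label, count, contexts[(token, label)]

# Example: a token tagged NOUN 40 times and VERB once would be surfaced,
# together with the sentence id of the lone VERB annotation, so an
# annotator can jump straight to that sentence and judge it.
```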
Annotated data is an essential ingredient in natural language processing for training and evaluating...
We introduce an error mining technique for automatically detecting errors in r...
The paper describes a corpus of texts produced by non-native speakers of Czech. We discuss its annota...
We introduce a method for error detection in automatically annotated text, aimed at supporting the c...
Recent work on error detection has shown that the quality of manually annotated corpora can be subst...
Error coding of second-language learner text, that is, detecting, correcting and annotating errors, ...
In this thesis, we investigate methods for automatic detection, and to some extent correction, of gr...
Shortage of available training data is holding back progress in the area of automated error detectio...
Existing tools for annotating errors in learner corpora are developed for languages other than Arabi...
This article presents a platform, named ACCOLÉ, for the collaborative annotati...
This is the accompanying data for our paper "Annotation Error Detection: Analyzing the Past and Pres...
While the corpus-based research relies on human annotated corpora, it is often said that a non-negli...