Inter-annotator consistency is a concern for any corpus building effort relying on human annotation. Adjudication is an effective way to locate and correct discrepancies of various kinds, but it can also be difficult and time-consuming. This paper introduces the Linguistic Data Consortium (LDC)'s model for decision point-based annotation and adjudication, and describes the annotation tools developed to enable this approach for the Automatic Content Extraction (ACE) Program. Using a customized user interface incorporating decision points, we improved adjudication efficiency over 2004 annotation rates, despite increased annotation task complexity. We examine the factors that lead to more efficient, less demanding adjudication. We further di...