Arle Richard Lommel, Maja Popovic, Aljoscha Burchardt
Assessing Inter-Annotator Agreement for Translation Error Annotation
MTE: Workshop on Automatic and Manual Metrics for Operational Translation Evaluation, LREC, Reykjavik, Iceland, 2014
 
One of the key requirements for demonstrating the validity and reliability of an assessment method is that annotators are able to apply it consistently. This paper explores inter-annotator agreement (IAA) for a translation error classification task and investigates some of the factors that contribute to low IAA.
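A common way to quantify the agreement the abstract refers to is Cohen's kappa, which corrects raw agreement for chance. The sketch below is an illustration only, not the paper's method; the error labels and data are hypothetical.

```python
from collections import Counter

def cohen_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(x == y for x, y in zip(ann_a, ann_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    ca, cb = Counter(ann_a), Counter(ann_b)
    p_e = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical error annotations for four translation segments.
a = ["Mistranslation", "Omission", "Mistranslation", "Terminology"]
b = ["Mistranslation", "Mistranslation", "Mistranslation", "Terminology"]
print(round(cohen_kappa(a, b), 3))
```

Note that kappa can be markedly lower than raw agreement when one label dominates, which is one reason error-annotation studies often report low IAA despite substantial surface overlap.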
 