DFKI-LT - Involving language professionals in the evaluation of machine translation
Language Resources and Evaluation, volume 48, number 4
Significant breakthroughs in machine translation only seem possible if human translators are brought into the loop. While automatic evaluation and scoring mechanisms such as BLEU have enabled the fast development of systems, it is not clear how systems can meet real-world (quality) requirements in industrial translation scenarios today. The taraXÜ project has paved the way for wide usage of multiple machine translation outputs through various feedback loops in system development. The project has integrated human translators into the development process, thus collecting feedback for possible improvements. This paper describes results from a detailed human evaluation in which the performance of different types of translation systems was compared and analysed via ranking, error analysis and post-editing.
Files: BibTeX, s10579-014-9286-z, evaluation-revised.pdf