Publication
Fine-Grained Evaluation of English-Russian MT in 2025: Linguistic Challenges Mirroring Human Translator Training
Shushen Manakhimova; Maria Kunilovskaya; Ekaterina Lapshinova-Koltunski; Eleftherios Avramidis
In: Barry Haddow; Tom Kocmi; Philipp Koehn; Christof Monz (eds.). Proceedings of the Tenth Conference on Machine Translation. Conference on Machine Translation (WMT-25), located at EMNLP 2025, November 8-9, Suzhou, China, Pages 866-877, ISBN 979-8-89176-341-8, Association for Computational Linguistics, 11/2025.
Abstract
We analyze how English-Russian machine translation (MT) systems submitted to WMT25 perform on linguistically challenging translation tasks, similar to problems used in university professional translator training. We assess the ten top-performing systems using a fine-grained test suite containing 465 manually devised test items covering 55 lexical, grammatical, and discourse phenomena in 13 categories. By applying pass/fail rules with human adjudication and micro/macro aggregates, we observe three performance tiers. Our ranking broadly aligns with the official WMT25 ranking but reveals notable shifts. Our findings show that in 2025, even top-performing MT systems still struggle with translation problems that require deep understanding and rephrasing, much like human novices do. The best systems exhibit creativity and can handle such challenges well, often producing natural translations rather than word-for-word renditions. However, persistent structural and lexical problems remain: literal word-order carry-overs, misused verb forms, and rigid phrase translations are common, mirroring errors typically seen in beginner translator assignments.
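The micro/macro aggregates mentioned above can be illustrated with a minimal sketch (not the authors' code), assuming each test item is labelled with a phenomenon category and a boolean pass/fail outcome: the micro score weights every item equally, while the macro score averages per-category pass rates so small categories count as much as large ones.

```python
from collections import defaultdict

def micro_macro_pass_rates(items):
    """items: iterable of (category, passed) pairs; returns (micro, macro)."""
    total = passed = 0
    per_cat = defaultdict(lambda: [0, 0])  # category -> [passed, total]
    for cat, ok in items:
        total += 1
        passed += ok
        per_cat[cat][0] += ok
        per_cat[cat][1] += 1
    micro = passed / total  # every item weighted equally
    macro = sum(p / t for p, t in per_cat.values()) / len(per_cat)  # categories weighted equally
    return micro, macro

# Hypothetical example with three categories of different sizes
results = [("lexis", True), ("lexis", True), ("lexis", False),
           ("grammar", True), ("discourse", False)]
print(micro_macro_pass_rates(results))  # (0.6, 0.5555...)
```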
