
Publication

Device-Type Influence in Crowd-based Natural Language Translation Tasks (short paper)

Michael Barz; Neslihan Büyükdemircioglu; Rikhu Prasad Surya; Tim Polzehl; Daniel Sonntag
In: Lora Aroyo; Anca Dumitrache; Praveen Paritosh; Alexander J. Quinn; Chris Welty; Alessandro Checco; Gianluca Demartini; Ujwal Gadiraju; Cristina Sarasua (eds.). Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing, and Short Paper Proceedings of the 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management (SAD 2018 and CrowdBias 2018). Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management (CrowdBias-2018), located at the 6th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018), July 5, Zürich, Switzerland, Pages 93-97, CEUR Workshop Proceedings (CEUR-WS.org), Vol. 2276, CEUR-WS.org, 12/2018.

Abstract

The effect of users’ interaction devices and platforms (mobile vs. desktop) should be taken into account when evaluating the performance of translation tasks in crowdsourcing contexts. We investigate the influence of device type and platform in a crowd-based translation workflow. We implement a crowd translation workflow and use it to translate a subset of the IWSLT parallel corpus from English to Arabic. In addition, we consider output from a state-of-the-art machine translation system, which can serve as translation candidates in a human computation workflow. The results of our experiment suggest that, when assessing the quality of machine translations, users on mobile devices systematically rate translations lower than users on desktop devices. The perceived quality of shorter sentences is generally higher than that of longer sentences.
