Publication

Crowdsourcing Quality of Experience Experiments

Sebastian Egger-Lampl, Judith Redi, Tobias Hoßfeld, Matthias Hirth, Sebastian Möller, Babak Naderi, Christian Keimel, Dietmar Saupe

In: Daniel Archambault, Helen Purchase, Tobias Hossfeld (Eds.): Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments: Dagstuhl Seminar 15481, Dagstuhl Castle, Germany, November 22--27, 2015, Revised Contributions. Chapter 2, pages 154-190, Lecture Notes in Computer Science, ISBN 978-3-319-66435-4, Springer International Publishing, Cham, 2017.

Abstract

Crowdsourcing enables new possibilities for QoE evaluation by moving the evaluation task from the traditional laboratory environment into the Internet, allowing researchers to easily access a global pool of workers for the evaluation task. This not only makes it possible to include a more diverse population and real-life environments in the evaluation, but also significantly reduces the turn-around time and increases the number of subjects participating in an evaluation campaign, thereby circumventing bottlenecks in traditional laboratory setups. In order to utilise these advantages, this chapter discusses the differences between laboratory-based and crowd-based QoE evaluation.

Further Links

German Research Center for Artificial Intelligence