Evaluating the translation of speech to virtually-performed sign language on AR glasses

Lan Thao Nguyen, Florian Schicktanz, Aeneas Stankowski, Eleftherios Avramidis

In: Proceedings of the Thirteenth International Conference on Quality of Multimedia Experience (QoMEX 2021), June 14-17, 2021, Montreal/Virtual, Quebec, Canada. IEEE, June 2021.


This paper describes the proof-of-concept evaluation of a system that translates speech into virtually performed sign language on augmented reality (AR) glasses. A discovery phase based on interviews confirmed the idea of a signing avatar displayed within the user's field of vision through AR glasses. In the evaluation of the first prototype through a Wizard-of-Oz experiment, the presented AR solution received a high acceptance rate among deaf and hard-of-hearing persons. However, the machine-learning-based method used to generate sign language from video still lacks the accuracy required to fully preserve comprehensibility. Signed sentences with dominant arm movements were understood better than sentences relying mainly on finger movements, which take place in only a small visible interaction space.


German Research Center for Artificial Intelligence (DFKI)