Publication
Transfer Learning from Visual Speech Recognition to Mouthing Recognition in German Sign Language
Dinh Nam Pham; Eleftherios Avramidis
In: 2025 19th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2025), May 26-30, 2025, Clearwater, FL, USA. IEEE, May 2025.
Abstract
Sign Language Recognition (SLR) systems primarily focus on manual gestures, but non-manual features such as mouth movements, specifically mouthing, provide valuable linguistic information. This work directly classifies mouthing instances into their corresponding spoken-language words while exploring the potential of transfer learning from Visual Speech Recognition (VSR) to mouthing recognition in German Sign Language. We leverage three VSR datasets: one in English, one in German with unrelated words, and one in German containing the same target words as the mouthing dataset, in order to investigate the impact of task similarity in this setting. Our results demonstrate that multi-task learning improves accuracy on both mouthing recognition and VSR, as well as model robustness, suggesting that mouthing recognition should be treated as a task distinct from, but related to, VSR. This research contributes to the field of SLR by proposing knowledge transfer from VSR to SLR datasets with limited mouthing annotations.
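The abstract describes a multi-task setup in which VSR and mouthing recognition are trained jointly. Below is a minimal sketch of how such a setup could look, assuming a shared visual encoder over mouth-region clips with a separate classification head per task; all names, layer choices, and class counts here are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MultiTaskLipModel(nn.Module):
    """Hypothetical multi-task model: one shared visual encoder,
    two task-specific heads (VSR words vs. mouthing words)."""

    def __init__(self, feat_dim=512, n_vsr_classes=500, n_mouthing_classes=100):
        super().__init__()
        # Shared front-end over mouth-region clips of shape (B, C, T, H, W).
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Task-specific classification heads over the shared features.
        self.vsr_head = nn.Linear(feat_dim, n_vsr_classes)
        self.mouthing_head = nn.Linear(feat_dim, n_mouthing_classes)

    def forward(self, clips, task):
        feats = self.encoder(clips)
        return self.vsr_head(feats) if task == "vsr" else self.mouthing_head(feats)


# Joint training: alternate batches from the VSR and mouthing datasets,
# so gradients from both tasks update the shared encoder.
model = MultiTaskLipModel()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

dummy_batches = [  # (task, clips, word labels) with random stand-in data
    ("vsr", torch.randn(2, 3, 16, 64, 64), torch.randint(0, 500, (2,))),
    ("mouthing", torch.randn(2, 3, 16, 64, 64), torch.randint(0, 100, (2,))),
]
for task, clips, labels in dummy_batches:
    optimizer.zero_grad()
    logits = model(clips, task)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```

In this sketch, transfer occurs through the shared encoder: features learned from the (larger) VSR data are reused by the mouthing head, which is one plausible reading of why task similarity between the VSR dataset and the mouthing target words matters.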