
Publication

Watch your Tongue: A point-tracking visualisation system in Blender

Kristy James; Ingmar Steiner; Martijn Wieling; Alexander Hewer; Angelika Braun
In: Workshop on Feedback in Pronunciation Training, Tholey, Germany, November 4-6, 2015.

Abstract

We present a new software tool for the visualisation of EMA data, using 3D animation in a game engine. The tool displays the movement of articulators in real time, extrapolating from point-tracking data to a basic representation of tongue and lip movement, with plans to include more accurate tongue and palate models in the future. It is written in Python and reads data into Blender, an animation and game engine, in real time. In addition, Blender game-like resources have been developed, so that a basic mouth 'scene' is provided, whose appearance and behaviour the user can fully customise to their own needs. The tool can display pre-recorded data from various data formats, which may be of use in demonstrating recorded data from different speakers, and can also render a live data stream from an articulograph, which could be adapted to provide online feedback for pronunciation training. In both modes, game controls allow the user to choose their preferred viewpoint and set game parameters, whilst the researcher can set other parameters before streaming commences. Furthermore, effort has been taken to incorporate several modalities: in static data mode, simultaneously recorded ultrasound videos can be overlaid on the image (motivated by the data described in Steiner et al., 2014), and in live mode a webcam recording can also be shown. Competing systems using proprietary software already demonstrate that some degree of placement feedback can be given using real-time 3D animation (Katz et al., 2014), and feedback for the acquisition of foreign phones (Levitt and Katz, 2010) and phonetic correction in L2 (Badin et al., 2010) has been investigated. The degree to which EMA data can describe differences in L2 accent, and the differences between native and foreign speech, is also an active area of research (Wieling et al., 2015). The open-source nature of this package, together with the ease of scripting in Python, means that it is ideally suited to experimenting with visual feedback and feedback timing, as well as individualising placement feedback to a subject's articulator locations.
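The live-streaming and multimodal overlays described above go beyond a short example, but the core idea of driving Blender objects from point-tracking data can be illustrated with a minimal sketch using Blender's standard Python API (bpy). This is not the authors' code: the coil names, CSV layout, and millimetre-to-Blender-unit scaling below are illustrative assumptions, and the script assumes a scene that already contains one object per coil.

    # Minimal sketch (not the published tool): keyframe EMA coil positions
    # onto Blender objects using the standard bpy API.
    import bpy
    import csv

    # Hypothetical coil names; the scene must contain objects with these names.
    COIL_NAMES = ["tongue_tip", "tongue_body", "tongue_back", "lower_lip", "upper_lip"]
    SCALE = 0.01  # assumed conversion from articulograph millimetres to Blender units

    def load_ema_frames(path):
        """Read an assumed CSV export: one row per frame, x/y/z columns per coil."""
        frames = []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                values = [float(v) for v in row]
                # Group the flat list into (x, y, z) triples, one per coil.
                frames.append([tuple(values[i:i + 3]) for i in range(0, len(values), 3)])
        return frames

    def animate(frames):
        """Insert one location keyframe per coil and per data frame."""
        for frame_idx, coils in enumerate(frames):
            for name, (x, y, z) in zip(COIL_NAMES, coils):
                obj = bpy.data.objects[name]
                obj.location = (x * SCALE, y * SCALE, z * SCALE)
                obj.keyframe_insert(data_path="location", frame=frame_idx)

    animate(load_ema_frames("/path/to/ema_export.csv"))  # placeholder path

Replacing the file reader with a socket client that updates object locations on a timer would approximate the live-streaming mode; the static mode corresponds to keyframing as above.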
