We are currently conducting a feasibility study for the Bundesministerium für Arbeit und Soziales (German Federal Ministry of Labour and Social Affairs) to explore the potential of avatars for (semi-)automatic sign language translation. In addition to surveying existing work, we will conduct structured expert interviews and empirical tests to determine the practical limits of avatar-based sign language communication.
Gaze is known to be an important social cue in face-to-face communication, indicating the focus of attention. Speaker gaze can influence object perception and situated utterance comprehension by directing both interlocutors' visual attention towards the same object, such that grounding and disambiguation of referring expressions are facilitated. The precise temporal and causal processes involved in on-line gaze following during concurrent utterance comprehension are, however, still largely unknown. Specifically, the alignment of referential gaze and speech cues may be essential to such a benefit. We rely on eye-tracking studies exploiting a virtual character to systematically investigate how speaker gaze influences listeners' on-line comprehension.
Staudte, M., Heloir, A., Crocker, M. and Kipp, M. Speaker gaze affects listener comprehension beyond visual attention shifts. Talk at the 24th Annual CUNY Conference on Human Sentence Processing, 2011.
Staudte, M., Heloir, A., Crocker, M. and Kipp, M. (submitted) On the importance of gaze and speech alignment for efficient communication. Gesture Workshop (GW 2011), 2011.
Staudte, M., Crocker, M., Heloir, A. and Kipp, M. (submitted) Speaker gaze affects utterance comprehension beyond visual attention shifts. 33rd Annual Conference of the Cognitive Science Society, 2011.
Embodied agents are a powerful paradigm for current and future multimodal interfaces, yet require high effort and expertise for their creation, assembly and animation control. Therefore, open animation engines and high-level control languages are required to make embodied agents accessible to researchers and developers. In this paper, we present EMBR, a new realtime character animation engine that offers a high degree of animation control via the EMBRScript language. We argue that a new layer of control, the animation layer, is necessary to keep the higher-level control layers (behavioral/functional) consistent and slim, while allowing a unified and abstract access to the animation engine, e.g. for the procedural animation of nonverbal behavior.
Evaluating data-driven style translation of motion
This work presents an empirical evaluation of a ``Style Transformation'' method, which consists of automatically applying a style to a neutral input motion in order to generate an appropriate style variant. The transformation parameters are extracted from an existing captured sequence. This data-driven method can be used either to enhance artist-generated gesture animations or to modify captured motion sequences according to a desired style. Although we used French Sign Language gesture data to perform the experiments described in this paper, the method may be applied to any kind of skeleton-based character animation.
Heloir, A., Gibet, S., Multon, F. and Courty, N. Captured Motion Data Processing for Real-Time Synthesis of Sign Language. In Gesture in Human-Computer Interaction and Simulation, Springer, 2005, 168--171 (short paper).
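As a rough sketch of the data-driven idea described above (not the actual method evaluated in the paper), one could extract simple amplitude and tempo parameters from a styled capture and reapply them to a neutral motion. All function names and the choice of statistics here are illustrative assumptions:

```python
import numpy as np

def extract_style(neutral, styled):
    """Derive simple transformation parameters (per-joint amplitude gain
    and a duration ratio) by comparing a styled capture to its neutral
    version. Motions are arrays of shape (frames, joint_angles)."""
    gain = styled.std(axis=0) / (neutral.std(axis=0) + 1e-8)
    tempo = len(styled) / len(neutral)
    return gain, tempo

def apply_style(motion, gain, tempo):
    """Apply the extracted parameters to a new neutral motion: scale
    joint-angle deviations about the mean and resample in time."""
    mean = motion.mean(axis=0)
    styled = mean + (motion - mean) * gain
    n_out = max(2, int(round(len(motion) * tempo)))
    # linear resampling of each joint channel to the new duration
    t_in = np.linspace(0.0, 1.0, len(motion))
    t_out = np.linspace(0.0, 1.0, n_out)
    return np.stack([np.interp(t_out, t_in, styled[:, k])
                     for k in range(styled.shape[1])], axis=1)
```

A real style-transformation model would of course capture far richer spatio-temporal structure than these two statistics; the sketch only shows where extracted parameters plug into a neutral input.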
Qualitative and quantitative evaluation of style in sign language gestures
We propose a qualitative and quantitative analysis of styled gesture motion data. The temporal, spatial and structural differences between styled gestures are compared with the phenomena described in the literature on sign language phonology. It is well known that gesture style modifies the temporal and spatial aspects of gestures. In this paper, we address how style may also influence the structural organization of some lexical units. We then briefly present new insights for taking into account both the spatio-temporal and structural variations induced by style in existing gesture specification frameworks.
Temporal alignment of communicative gesture sequences
We address the problem of temporal alignment applied to captured communicative gestures conveying different styles. We propose a representation space that is more robust to the spatial variability induced by style. By extending a multilevel dynamic time warping algorithm, we show how temporal correspondences between gesture sequences can be established while avoiding the jerkiness introduced by standard time warping methods.
Heloir, A., Courty, N., Gibet, S. and Multon, F. Temporal alignment of communicative gesture sequences. Computer Animation and Virtual Worlds, 2006, 17, 347--357.
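For background, the standard dynamic time warping baseline that the multilevel variant builds on can be sketched as follows; the per-frame feature vectors and the Euclidean frame distance are illustrative assumptions:

```python
import numpy as np

def dtw(seq_a, seq_b):
    """Standard dynamic time warping between two motion sequences given
    as (frames x features) arrays; returns the total alignment cost and
    the warping path as a list of (i, j) frame pairs."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # frame distance
            cost[i, j] = d + min(cost[i - 1, j - 1],  # match both frames
                                 cost[i - 1, j],      # hold a frame of seq_b
                                 cost[i, j - 1])      # hold a frame of seq_a
    # backtrack from (n, m) to recover the optimal alignment path
    path, (i, j) = [], (n, m)
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min(((i - 1, j - 1), (i - 1, j), (i, j - 1)),
                   key=lambda p: cost[p])
    path.reverse()
    return cost[n, m], path
```

A time map recovered this way is globally optimal but can be locally non-smooth (frames held or skipped abruptly), which is the kind of jerkiness the abstract above refers to.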
Modeling sign language discourse
Gibet, S. and Heloir, A. Formalisme de description des gestes de la langue des signes française pour la génération du mouvement de signeurs virtuels [A description formalism for French Sign Language gestures for generating the movements of virtual signers]. TAL, special issue on sign language, vol. 48, 115--149.
Gibet, S., Heloir, A., Courty, N., Kamp, J.F., Gorce, P., Rezzoug, N., Multon, F. and Pelachaud, C. Virtual agent for deaf signing gestures. AMSE, Journal of the Association for the Advancement of Modelling and Simulation Techniques in Enterprises (Special edition HANDICAP), 2006.
Topological indexation of 3D meshes
This work was dedicated to the topological characterization and indexation of 3D surfaces based on Reeb graphs.
This exploratory work was conducted between January 2004 and July 2004, during my master's thesis. For a complete investigation of the subject, I recommend the thesis manuscript of Julien Tierny.