Learning a Continuous Control of Motion Style from Natural Examples

Janis Sprenger, Han Du, Noshaba Cheema, Erik Herrmann, Klaus Fischer, Philipp Slusallek

In: Motion, Interaction and Games (MIG '19), ACM SIGGRAPH Conference on Motion in Games, October 28-30, 2019, Newcastle upon Tyne, United Kingdom, pages 31-1. ISBN 978-1-4503-6994-7. ACM, 2019.


The simulation of humanoid avatars is relevant for a multitude of applications, such as movies, games, simulation environments for autonomous vehicles, and virtual avatars. To achieve realistic and believable characters, it is important that simulated motion exhibits a natural style matching the character's characteristics. A female avatar, for example, should move in a female style, and different characters should vary in how strongly they express this style. However, manually defining, or even acting out, a natural female or male style is non-trivial. Previous work on style transfer is insufficient here, as its style examples are not necessarily natural depictions of female or male locomotion. We propose a novel data-driven method to infer style information from individual samples of male and female motion capture data. For this purpose, the motion of 12 female and 12 male participants was captured in an experimental setting. A neural-network-based motion model is trained for each participant, and the style dimension is learned in the latent representation of these models. A linear style model is thus inferred on top of the motion models. It can be used to synthesize network models of different style expressiveness on a continuous scale while retaining the performance and content of the original network model. A user study supports the validity of our approach while highlighting issues with simpler approaches to inferring style.
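The linear style model described above can be pictured as a direction in the motion models' latent space, with a scalar controlling expressiveness on a continuous scale. The following is a minimal sketch of that idea only, using placeholder data and hypothetical names; it is not the authors' implementation.

```python
import numpy as np

# Placeholder latent codes standing in for encodings produced by
# trained motion models; rows are motion samples, columns are
# latent dimensions (all values here are synthetic).
rng = np.random.default_rng(0)
z_female = rng.normal(0.5, 1.0, size=(100, 16))
z_male = rng.normal(-0.5, 1.0, size=(100, 16))

# A simple linear style axis: the normalized difference between
# the class means in latent space.
style_direction = z_female.mean(axis=0) - z_male.mean(axis=0)
style_direction /= np.linalg.norm(style_direction)

def stylize(z, alpha):
    """Shift a latent code along the style axis.

    alpha is continuous: positive values move toward one style,
    negative values toward the other, and its magnitude controls
    how strongly the style is expressed.
    """
    return z + alpha * style_direction

# Example: a neutral latent code pushed halfway along the style axis.
z_neutral = np.zeros(16)
z_styled = stylize(z_neutral, 0.5)
```

The same interpolation idea carries over to blending between per-participant models, with the key property that intermediate `alpha` values yield intermediate style expressiveness rather than a hard switch between classes.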


German Research Center for Artificial Intelligence (DFKI)