Publication

Learning 3D joint constraints from vision-based motion capture datasets

Pramod Narasimha Murthy, Hammad Tanveer Butt, Sandesh Hiremath, Alireza Khoshhal, Didier Stricker

In: MVA 2019, IAPR Conference on Machine Vision Applications (MVA-2019), May 27-31, Tokyo, Japan. Springer, 2019.

Abstract

Realistic estimation and synthesis of articulated human motion must satisfy anatomical constraints on joint angles. We use a data-driven approach to learn human joint limits from 3D motion capture datasets. Joint constraints are represented with a new formulation (s1, s2, τ) based on the swing-twist representation in exponential-map form. We apply this parameterization to the Human3.6M dataset to create a lookup map for each joint. These maps enable us to generate 'synthetic' datasets covering the entire rotation space of a given joint. A set of neural network discriminators is then trained on the synthetic datasets to classify joint rotations as valid or invalid. The discriminators achieve accuracies of 94.4-99.4% across joints. We validate the precision-accuracy trade-off of the discriminators and qualitatively evaluate the classified poses with an interactive tool. The learned discriminators can serve as 'priors' for human pose estimation and motion synthesis.
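To make the (s1, s2, τ) idea concrete, below is a minimal sketch of a standard swing-twist decomposition of a joint rotation, with the swing expressed as a 2D exponential map in the plane perpendicular to the twist axis. The function names, the choice of basis vectors for the swing plane, and the sign conventions are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_from_axis_angle(axis, angle):
    """Unit quaternion for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])

def swing_twist_params(q, twist_axis=(0.0, 0.0, 1.0)):
    """Decompose unit quaternion q = swing * twist and return (s1, s2, tau):
    tau is the twist angle about `twist_axis`; (s1, s2) are the components of
    the swing exponential map in an (assumed) orthonormal basis of the plane
    perpendicular to the twist axis."""
    a = np.asarray(twist_axis, float)
    a /= np.linalg.norm(a)
    # Project the quaternion's vector part onto the twist axis.
    proj = np.dot(q[1:], a) * a
    twist = np.array([q[0], *proj])
    norm = np.linalg.norm(twist)
    if norm < 1e-9:
        # Singular case: a pure 180-degree swing leaves the twist undefined.
        twist = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        twist /= norm
    twist_conj = twist * np.array([1.0, -1.0, -1.0, -1.0])
    swing = quat_mul(q, twist_conj)          # so that q = swing * twist
    tau = 2.0 * np.arctan2(np.dot(twist[1:], a), twist[0])
    # Swing as an exponential map (angle * unit axis); its axis lies in the
    # plane perpendicular to the twist axis.
    sw_angle = 2.0 * np.arccos(np.clip(swing[0], -1.0, 1.0))
    sv = swing[1:]
    sn = np.linalg.norm(sv)
    exp_map = (sw_angle / sn) * sv if sn > 1e-9 else np.zeros(3)
    # Illustrative choice of basis vectors spanning the swing plane.
    b1 = np.cross(a, [1.0, 0.0, 0.0])
    if np.linalg.norm(b1) < 1e-6:
        b1 = np.cross(a, [0.0, 1.0, 0.0])
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(a, b1)
    return np.dot(exp_map, b1), np.dot(exp_map, b2), tau
```

For example, composing a swing of 0.4 rad about x with a twist of 0.7 rad about z and decomposing the result recovers τ ≈ 0.7 and a swing magnitude sqrt(s1² + s2²) ≈ 0.4. Per-joint lookup maps over such (s1, s2, τ) triples can then label samples of the rotation space as valid or invalid for discriminator training.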

Projects

Further Links

MVA_2019.pdf (pdf, 2 MB)

German Research Center for Artificial Intelligence