In: MVA 2019. IAPR Conference on Machine Vision Applications (MVA-2019), May 27-31, Tokyo, Japan, Springer, 2019.
Abstract
Realistic estimation and synthesis of articulated human motion must satisfy anatomical constraints on joint angles. We use a data-driven approach to learn human joint limits from 3D motion capture datasets, representing joint constraints with a new (s1, s2, τ) formulation based on the swing-twist representation in exponential-map form. Applying this parameterization to the Human3.6M dataset, we create a lookup map for each joint. These maps let us generate 'synthetic' datasets spanning the entire rotation space of a given joint. A set of neural-network discriminators is then trained on the synthetic datasets to distinguish valid from invalid joint rotations. The discriminators achieve accuracies of 94.4% to 99.4% across joints. We validate the precision-accuracy trade-off of the discriminators and qualitatively evaluate the classified poses with an interactive tool. The learned discriminators can serve as 'priors' for human pose estimation and motion synthesis.
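The abstract does not spell out the (s1, s2, τ) parameterization, so the following is a hedged reconstruction of the swing-twist decomposition in exponential-map form it refers to, assuming the twist axis is the bone's longitudinal axis e_3 (the paper may orient the frame differently):

\[
  R(s_1, s_2, \tau)
    = R_{\mathrm{swing}}\, R_{\mathrm{twist}}
    = \exp\!\big( [(s_1, s_2, 0)^{\top}]_{\times} \big)\,
      \exp\!\big( \tau\, [e_3]_{\times} \big),
\]

where [v]_× denotes the skew-symmetric matrix with [v]_× x = v × x. Intuitively, the swing (s_1, s_2) tilts the bone away from its rest direction, while the twist τ rotates the bone about its own axis.

As a minimal sketch of the discriminator idea (not the authors' code), the snippet below trains one per-joint binary classifier on a synthetic (s1, s2, τ) dataset. The synthetic_labels function is a hypothetical stand-in for the Human3.6M-derived lookup map described in the abstract:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def synthetic_labels(samples):
    # Hypothetical validity test standing in for the paper's
    # Human3.6M lookup map: a simple swing cone plus a twist bound.
    s1, s2, tau = samples.T
    return ((s1**2 + s2**2 < 1.5**2) & (np.abs(tau) < 0.8)).astype(int)

# Sample the entire (s1, s2, tau) rotation space of one joint.
X = rng.uniform(-np.pi, np.pi, size=(50_000, 3))
y = synthetic_labels(X)

# One small MLP discriminator per joint (a single joint shown here).
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
clf.fit(X, y)
print(f"train accuracy: {clf.score(X, y):.3f}")

Sampling the full rotation cube rather than only observed poses is what gives the discriminator negative examples; the reported 94.4% to 99.4% accuracies come from the paper's own experiments, not from this sketch.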
@inproceedings{pub10489,
  author    = {Murthy, Pramod Narasimha and Butt, Hammad Tanveer and Hiremath, Sandesh and Khoshhal, Alireza and Stricker, Didier},
  title     = {Learning 3D joint constraints from vision-based motion capture datasets},
  booktitle = {MVA 2019. IAPR Conference on Machine Vision Applications (MVA-2019), May 27-31, Tokyo, Japan},
  year      = {2019},
  publisher = {Springer}
}