Publication

Dilated Temporal Fully-Convolutional Network for Semantic Segmentation of Motion Capture Data

Noshaba Cheema, Somayeh Hosseini, Janis Sprenger, Erik Herrmann, Han Du, Klaus Fischer, Philipp Slusallek

In: Thabo Beeler, Nils Thuerey, Melina Skouras (Eds.). Eurographics/ACM SIGGRAPH Symposium on Computer Animation - Posters. ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA-2018), July 11-13, Paris, France, pages 5-6, SCA Posters '18, ISBN 978-3-03868-070-3, The Eurographics Association, 2018.

Abstract

Semantic segmentation of motion capture sequences plays a key part in many data-driven motion synthesis frameworks. It is a preprocessing step in which long recordings of motion capture sequences are partitioned into smaller segments. Afterwards, additional methods such as statistical modeling can be applied to each group of structurally similar segments to learn an abstract motion manifold. The segmentation, however, often remains a manual task, which increases the effort and cost of generating large-scale motion databases. We therefore propose an automatic framework for semantic segmentation of motion capture data using a dilated temporal fully-convolutional network. Our model outperforms a state-of-the-art model in action segmentation, as well as three networks for sequence modeling. We further show that our model is robust against highly noisy training labels.
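The abstract describes the architecture only at a high level. The sketch below illustrates the general idea of a dilated temporal fully-convolutional network producing per-frame labels for a motion capture sequence; it assumes PyTorch, and the layer count, channel sizes, and dilation rates are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch (not the authors' exact architecture) of a dilated temporal
# fully-convolutional network for per-frame labeling of motion capture data.
# Input: (batch, num_pose_channels, num_frames); output: per-frame class logits.
import torch
import torch.nn as nn

class DilatedTemporalFCN(nn.Module):
    def __init__(self, in_channels, num_classes, hidden=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers = []
        channels = in_channels
        for d in dilations:
            # "same" padding keeps the temporal length unchanged at every layer,
            # while increasing dilation widens the receptive field exponentially
            layers += [
                nn.Conv1d(channels, hidden, kernel_size=3, dilation=d, padding=d),
                nn.BatchNorm1d(hidden),
                nn.ReLU(),
            ]
            channels = hidden
        self.backbone = nn.Sequential(*layers)
        # 1x1 convolution maps features to per-frame class scores (fully convolutional)
        self.classifier = nn.Conv1d(hidden, num_classes, kernel_size=1)

    def forward(self, x):
        # x: (batch, in_channels, num_frames) -> logits: (batch, num_classes, num_frames)
        return self.classifier(self.backbone(x))

# Example: 63 pose channels, 10 motion classes, a 300-frame clip
model = DilatedTemporalFCN(in_channels=63, num_classes=10)
logits = model(torch.randn(2, 63, 300))  # shape (2, 10, 300), one prediction per frame
```

Because the network is fully convolutional over time, it accepts sequences of arbitrary length and labels every frame in a single forward pass, which is what makes it suitable for segmenting long, unsegmented recordings.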


DTFCN.pdf (PDF, 336 KB)
