Publication

3D Semantic Label Transfer and Matching in Human-Robot Collaboration

Szilvia Szeier; Benjámin Baffy; Gábor Baranyi; Joul Skaf; László Kopácsi; Daniel Sonntag; Gábor Sörös; András Lőrincz
Learning to Generate 3D Shapes and Scenes, ECCV 2022 Workshop, 10/2022.

Abstract

Semantic 3D maps are highly useful for human-robot collaboration and joint task planning. We build upon an existing real-time 3D semantic reconstruction pipeline and extend it with semantic matching across human and robot viewpoints, which is required if class labels differ or are missing due to different perspectives during collaborative reconstruction. We use deep recognition networks, which usually perform well from higher (human) viewpoints but perform worse from low, ground-level robot viewpoints. Therefore, we propose several approaches for acquiring semantic labels for unusual perspectives. We group the pixels from the lower viewpoint, project voxel class labels from the upper perspective to the lower perspective, and apply majority voting to obtain labels for the robot. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using a robot car equipped with an RGBD camera. The proposed approach provides high-quality semantic segmentation from the robot perspective, with accuracy similar to the human perspective. Furthermore, as computations run close to real time, the approach enables interactive applications.
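
The label-transfer step described in the abstract (projecting voxel labels from the human view into the robot view and applying per-segment majority voting) can be illustrated with a minimal sketch. This is not the authors' implementation; the names (K, T_world_to_cam, segments) and the pinhole projection setup are illustrative assumptions, and occlusion handling is omitted for brevity.

```python
import numpy as np

def project_voxels_to_image(voxel_centers, voxel_labels, K, T_world_to_cam, img_shape):
    """Project labeled voxel centers (world frame) into the robot camera image.

    Returns a sparse label image; -1 marks pixels with no projected label.
    (Nearest-voxel occlusion handling is omitted in this sketch.)
    """
    h, w = img_shape
    # Transform voxel centers into the robot camera frame (homogeneous coordinates).
    pts_h = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    pts_cam = (T_world_to_cam @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0
    pts_cam, labels = pts_cam[in_front], voxel_labels[in_front]
    # Pinhole projection with camera intrinsics K.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    label_img = np.full((h, w), -1, dtype=int)
    label_img[v[valid], u[valid]] = labels[valid]
    return label_img

def majority_vote_per_segment(label_img, segments):
    """Assign each pixel group of the robot view the most frequent projected label."""
    out = np.full(label_img.shape, -1, dtype=int)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        votes = label_img[mask]
        votes = votes[votes >= 0]  # ignore pixels that received no projected label
        if votes.size:
            out[mask] = np.bincount(votes).argmax()
    return out
```

In this sketch, `segments` stands for any grouping of robot-view pixels (e.g., superpixels or instance masks); each group receives the label that the upper-viewpoint voxels most frequently project onto it.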
