Joint segmentation and tracking of object surfaces in depth movies along human/robot manipulations
Document type: Conference report
Rights access: Open Access
A novel framework for joint segmentation and tracking of object surfaces in depth videos is presented. Initially, the 3D colored point cloud obtained with the Kinect camera is used to segment the scene into surface patches, each described by a quadratic function. The computed segments, together with their functional descriptions, are then used to partition the depth image of the subsequent frame consistently with the preceding one. This way, solutions established in previous frames can be reused, which improves both the efficiency of the algorithm and the coherency of the segmentations throughout the movie. The algorithm is tested on scenes showing human and robot manipulations of objects. We demonstrate that the method can successfully segment and track the human/robot arm and the object surfaces during the manipulations. The performance is evaluated quantitatively by measuring the temporal coherency of the segmentations and the segmentation covering against ground truth. The method provides a visual front-end designed for robotic applications, and can potentially be used in the context of manipulation recognition, visual servoing, and robot-grasping tasks.
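The "surface patches defined by quadratic functions" mentioned above can be illustrated with a least-squares fit of a quadratic depth model to a point patch. This is a hedged sketch of that idea, not the authors' implementation; the function names and the choice of NumPy are assumptions made here for illustration only.

```python
# Illustrative sketch (not the paper's code): fit a quadratic surface
# z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to an Nx3 point patch,
# the kind of functional segment description the abstract refers to.
import numpy as np

def fit_quadratic_surface(points):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f.

    points: (N, 3) array of 3D points from one surface patch.
    Returns the coefficient vector (a, b, c, d, e, f).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Design matrix with one column per quadratic basis function.
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def surface_residual(points, coeffs):
    """Mean absolute depth error of points w.r.t. the fitted surface;
    a low residual on a new frame suggests the old segment model still
    explains the data, so the previous solution can be reused."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    return float(np.mean(np.abs(A @ coeffs - z)))
```

In a tracking setting, re-evaluating `surface_residual` on the next frame's points is one plausible way to decide whether a stored segment description still fits, which mirrors the reuse of previous-frame solutions described in the abstract.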
Citation: Dellen, B.; Husain, S.; Torras, C. Joint segmentation and tracking of object surfaces in depth movies along human/robot manipulations. In: International Conference on Computer Vision Theory and Applications. "VISAPP 2013 - Proceedings of the International Conference on Computer Vision Theory and Applications". Barcelona: 2013, p. 244-251.