DUST: dual union of spatio-temporal subspaces for monocular multiple object 3D reconstruction
Document type: Conference report
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights access: Open Access
European Commission's project: EC-H2020-687534-LOGIMATIC
We present an approach to reconstruct the 3D shape of multiple deforming objects from incomplete 2D trajectories acquired by a single camera. Additionally, we simultaneously provide spatial segmentation (i.e., we identify each of the objects in every frame) and temporal clustering (i.e., we split the sequence into primitive actions). This advances existing work, which only tackled the problem for a single object with non-occluded tracks. To handle several objects at a time from partial observations, we model point trajectories as a union of spatial and temporal subspaces, and optimize the parameters of both modalities, the non-observed point tracks and the 3D shape via augmented Lagrange multipliers. The algorithm is fully unsupervised and results in a formulation that does not require initialization. We thoroughly validate the method on challenging scenarios with several human subjects performing different activities that involve complex motions and close interaction. We show that our approach achieves state-of-the-art 3D reconstruction results, while also providing spatial and temporal segmentation.
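A core ingredient the abstract describes is recovering non-observed point tracks by exploiting low-rank structure in the measurement matrix, optimized with Lagrange-multiplier updates. The sketch below illustrates that idea in isolation with a generic singular value thresholding (SVT) scheme for low-rank completion of a trajectory matrix; it is not the paper's algorithm (the dual spatio-temporal subspace union, object assignments, and 3D shape terms are all omitted), and the function names, default parameters, and stopping rule are illustrative choices only.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete_trajectories(W, mask, tau=None, delta=1.2, n_iters=1000):
    """Fill in missing entries of a 2F x P trajectory matrix W by
    nuclear-norm minimization subject to matching the observed entries
    (mask == True), using the SVT / Uzawa iteration on the Lagrange
    multiplier matrix Y. Illustrative sketch only; the paper couples
    this kind of completion with subspace and shape estimation.
    """
    if tau is None:
        tau = 5.0 * np.sqrt(W.size)  # common heuristic scale for tau
    Y = np.zeros_like(W)             # dual (multiplier) variable
    X = np.zeros_like(W)
    for _ in range(n_iters):
        X = svt(Y, tau)                               # primal update
        Y = Y + delta * np.where(mask, W - X, 0.0)    # dual ascent on
                                                      # observed entries
    return X
```

On a synthetic rank-2 matrix with most entries observed, the unobserved entries are recovered to small relative error; in the paper's setting the low-rank prior would instead come from the learned union of subspaces.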
© 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation: Agudo, A., Moreno-Noguer, F. DUST: dual union of spatio-temporal subspaces for monocular multiple object 3D reconstruction. In: IEEE Conference on Computer Vision and Pattern Recognition. "Proceedings of the 2017 IEEE Computer Society Conference on Computer Vision and Pattern Recognition". Honolulu: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 1513-1521.