Segmentation and 3D Reconstruction of Non-Rigid Shape from RGB Video
Document type: Conference report
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights access: Open Access
Project: Understanding Human Motion to Adapt a Robot's Behavior (AEI-TIN2017-90086-R)
In this paper we propose an unsupervised and unified approach to simultaneously recover time-varying 3D shape, camera motion, and a temporal clustering of deformations, all from partial 2D point tracks in an RGB video and without assuming any pre-trained model. As the data are drawn from sequentially ordered images, we fully exploit this information to constrain all the model parameters we estimate. We present an energy-based formulation that is solved efficiently, estimating all model parameters in the same loop via augmented Lagrange multipliers in polynomial time while enforcing similarities between images at every level. Validation is done on a wide variety of human video sequences, including articulated and continuous motion, and for both dense and missing tracks. Our approach is shown to outperform state-of-the-art solutions in terms of both 3D reconstruction and clustering.
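The abstract mentions solving an energy-based formulation via augmented Lagrange multipliers. As a minimal illustration of that general technique (a hypothetical toy problem, not the paper's actual energy or constraints), the sketch below minimizes a simple quadratic subject to one linear constraint by alternating an exact inner minimization with a multiplier update:

```python
import numpy as np

# Hypothetical toy problem, NOT the paper's energy: minimize
#   0.5 * ||x - c||^2   subject to   a^T x = b
# via the augmented Lagrangian
#   L(x, lam) = 0.5*||x - c||^2 + lam*(a^T x - b) + 0.5*rho*(a^T x - b)^2

rng = np.random.default_rng(0)
n = 5
c = rng.standard_normal(n)   # data term target (arbitrary)
a = rng.standard_normal(n)   # constraint direction (arbitrary)
b = 1.0                      # constraint value

rho, lam = 10.0, 0.0         # penalty weight and Lagrange multiplier
x = np.zeros(n)
for _ in range(50):
    # x-update: the inner minimization is a linear solve here,
    #   (I + rho * a a^T) x = c - lam*a + rho*b*a
    A = np.eye(n) + rho * np.outer(a, a)
    x = np.linalg.solve(A, c - lam * a + rho * b * a)
    # multiplier update on the constraint residual
    lam += rho * (a @ x - b)

print(abs(a @ x - b))  # constraint residual shrinks toward zero
```

In the paper's setting the unknowns (shape, camera motion, cluster assignments) would each get such an update inside one loop; this sketch only shows the alternation between a primal solve and the dual multiplier step.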
© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation: Agudo, A. Segmentation and 3D Reconstruction of Non-Rigid Shape from RGB Video. In: IEEE International Conference on Image Processing. "2020 IEEE International Conference on Image Processing (ICIP)". Institute of Electrical and Electronics Engineers (IEEE), 2020, pp. 2845-2849. DOI 10.1109/ICIP40778.2020.9190750.