Multi-view 3D face reconstruction in the wild using siamese networks
Document type: Conference report
Publisher: Computer Vision Foundation
Rights access: Open Access
In this work, we present a novel learning-based approach to reconstruct 3D faces from a single image or from multiple images. Our method uses a simple yet powerful architecture based on siamese neural networks that extracts relevant features from each view while keeping the models small. Instead of minimizing multiple objectives, we propose to simultaneously learn the 3D shape and the individual camera poses with a single-term loss based on the reprojection error, which generalizes from one view to multiple views. This allows us to globally optimize the whole scene without tuning any hyperparameters and to achieve low reprojection errors, which is important for subsequent texture generation. Finally, we train our model on a large-scale dataset with more than 6,000 facial scans. We report competitive results in the 3DFAW 2019 challenge, showing the effectiveness of our method.
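The single-term reprojection loss described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weak-perspective camera model, the landmark-based formulation, and all function names here are assumptions made for the sake of the example. The key property it illustrates is that the same objective covers one view or many, simply by averaging the per-view reprojection error.

```python
import numpy as np

def project(shape3d, scale, R, t):
    """Weak-perspective projection of N 3D points to 2D.

    Rotates the shape, keeps the x/y components, then applies a
    uniform scale and a 2D translation: x2d = s * (R X)_{xy} + t.
    """
    return scale * (shape3d @ R.T)[:, :2] + t

def reprojection_loss(shape3d, cameras, observations):
    """Single-term loss: mean squared 2D reprojection error over all views.

    shape3d      : (N, 3) array, the shared 3D shape estimate
    cameras      : list of (scale, R, t) per-view camera poses
    observations : list of (N, 2) arrays of observed 2D points per view
    """
    err = 0.0
    for (scale, R, t), obs in zip(cameras, observations):
        pred = project(shape3d, scale, R, t)
        err += np.mean(np.sum((pred - obs) ** 2, axis=1))
    # Averaging over views makes the loss apply unchanged to 1..V views.
    return err / len(cameras)
```

Because both the shape and the per-view camera parameters enter the same scalar objective, a gradient-based optimizer can update all of them jointly, with no per-term weighting to tune.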
Citation: Ramon, E.; Escur, J.; Giro, X. Multi-view 3D face reconstruction in the wild using siamese networks. In: 3D Face Alignment in the Wild Challenge. "Proceedings of the 2nd 3D Face Alignment in the Wild Challenge". Computer Vision Foundation, 2019, p. 1-5.