
dc.contributor.author: Pumarola Peris, Albert
dc.contributor.author: Agudo Martínez, Antonio
dc.contributor.author: Martinez, Aleix M.
dc.contributor.author: Sanfeliu Cortés, Alberto
dc.contributor.author: Moreno-Noguer, Francesc
dc.contributor.other: Universitat Politècnica de Catalunya. Doctorat en Automàtica, Robòtica i Visió
dc.contributor.other: Institut de Robòtica i Informàtica Industrial
dc.contributor.other: Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial
dc.date.accessioned: 2020-11-11T09:18:30Z
dc.date.available: 2020-11-11T09:18:30Z
dc.date.issued: 2019-01-01
dc.identifier.citation: Pumarola, A. [et al.]. GANimation: one-shot anatomically consistent facial animation. "International journal of computer vision", 1 January 2019, vol. 128, p. 698-713.
dc.identifier.issn: 0920-5691
dc.identifier.other: http://www.iri.upc.edu/files/scidoc/2253-GANimation:-One-shot-anatomically-consistent-facial-animation.pdf
dc.identifier.uri: http://hdl.handle.net/2117/331813
dc.description: The final publication is available at link.springer.com
dc.description.abstract: Recent advances in generative adversarial networks (GANs) have shown impressive results for the task of facial expression synthesis. The most successful architecture is StarGAN (Choi et al. in CVPR, 2018), which conditions the GAN generation process on images of a specific domain, namely a set of images of people sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content and granularity of the dataset. To address this limitation, in this paper we introduce a novel GAN conditioning scheme based on action unit (AU) annotations, which describe in a continuous manifold the anatomical facial movements defining a human expression. Our approach allows controlling the magnitude of activation of each AU and combining several of them. Additionally, we propose a weakly supervised strategy to train the model, which only requires images annotated with their activated AUs, and exploit a novel self-learned attention mechanism that makes our network robust to changing backgrounds, lighting conditions and occlusions. Extensive evaluation shows that our approach goes beyond competing conditional generators both in its capability to synthesize a much wider range of expressions, ruled by anatomically feasible muscle movements, and in its capacity to deal with images in the wild. The code of this work is publicly available at https://github.com/albertpumarola/GANimation.
dc.format.extent: 16 p.
dc.language.iso: eng
dc.subject: Àrees temàtiques de la UPC::Informàtica::Automàtica i control
dc.subject.other: GAN
dc.subject.other: Face animation
dc.subject.other: Action-unit condition
dc.title: GANimation: one-shot anatomically consistent facial animation
dc.type: Article
dc.contributor.group: Universitat Politècnica de Catalunya. VIS - Visió Artificial i Sistemes Intel·ligents
dc.contributor.group: Universitat Politècnica de Catalunya. ROBiri - Grup de Robòtica de l'IRI
dc.identifier.doi: 10.1007/s11263-019-01210-3
dc.description.peerreviewed: Peer Reviewed
dc.subject.inspec: Classificació INSPEC::Pattern recognition
dc.relation.publisherversion: https://link.springer.com/article/10.1007%2Fs11263-019-01210-3
dc.rights.access: Open Access
local.identifier.drac: 25836446
dc.description.version: Postprint (author's final draft)
local.citation.author: Pumarola, A.; Agudo, A.; Martinez, A.; Sanfeliu, A.; Moreno-Noguer, F.
local.citation.publicationName: International journal of computer vision
local.citation.volume: 128
local.citation.startingPage: 698
local.citation.endingPage: 713
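
The abstract above outlines the core mechanism: a generator conditioned on a continuous vector of action-unit (AU) activations that also predicts a self-learned attention mask, so only expression-relevant regions of the input face are modified. The following is a minimal PyTorch sketch of that idea, not the authors' implementation (that is in the linked GitHub repository); the class and parameter names (AUConditionedGenerator, n_aus), the layer sizes and the exact blending formula are illustrative assumptions.

    # Minimal sketch of an AU-conditioned generator with a self-learned
    # attention mask; names, layer sizes and the blend are assumptions.
    import torch
    import torch.nn as nn

    class AUConditionedGenerator(nn.Module):
        def __init__(self, n_aus: int = 17, base_channels: int = 64):
            super().__init__()
            # The AU vector is tiled over the spatial grid and concatenated
            # with the RGB input, so the first conv sees 3 + n_aus channels.
            self.encoder = nn.Sequential(
                nn.Conv2d(3 + n_aus, base_channels, 7, padding=3),
                nn.ReLU(inplace=True),
                nn.Conv2d(base_channels, base_channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            # One head regresses a per-pixel colour image, the other a
            # single-channel attention mask in [0, 1].
            self.color_head = nn.Sequential(
                nn.Conv2d(base_channels, 3, 3, padding=1), nn.Tanh())
            self.attention_head = nn.Sequential(
                nn.Conv2d(base_channels, 1, 3, padding=1), nn.Sigmoid())

        def forward(self, image: torch.Tensor, au_target: torch.Tensor) -> torch.Tensor:
            b, _, h, w = image.shape
            au_map = au_target.view(b, -1, 1, 1).expand(b, au_target.size(1), h, w)
            features = self.encoder(torch.cat([image, au_map], dim=1))
            color = self.color_head(features)          # synthesized appearance
            attention = self.attention_head(features)  # where to keep the original
            # Blend: regions with attention close to 1 keep the input pixels,
            # so background, lighting and occlusions pass through untouched.
            return attention * image + (1.0 - attention) * color

    # Hypothetical usage: animate a face towards a target AU activation vector.
    if __name__ == "__main__":
        gen = AUConditionedGenerator(n_aus=17)
        face = torch.rand(1, 3, 128, 128) * 2 - 1   # image in [-1, 1], matching Tanh
        target_aus = torch.rand(1, 17)              # continuous AU activations
        animated = gen(face, target_aus)
        print(animated.shape)                       # torch.Size([1, 3, 128, 128])

The attention-based blend is what the abstract credits for robustness to changing backgrounds, lighting conditions and occlusions: wherever the mask is close to 1 the original pixels are kept, and the generator only has to synthesize the regions driven by the target AUs.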


All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication or transformation of this work are prohibited without permission of the copyright holder.