Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN

Date
2021
Journal Title
Computer Graphics Forum
Journal ISSN
1467-8659
Publisher
© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd
Abstract
Recent advances in generative adversarial networks (GANs) have shown tremendous success in facial expression generation tasks. However, generating vivid and expressive facial expressions at the Action Unit (AU) level is still challenging, because automatic facial expression analysis for AU intensity is itself an unsolved and difficult task. In this paper, we propose a novel synthesis‐by‐analysis approach that leverages the power of the GAN framework and a state‐of‐the‐art AU detection model to achieve better results for AU‐driven facial expression generation. Specifically, we design a novel discriminator architecture by modifying a patch‐attentive AU detection network for AU intensity estimation and combining it with a global image encoder for adversarial learning, forcing the generator to produce more expressive and realistic facial images. We also introduce a balanced sampling approach to alleviate the imbalanced learning problem in AU synthesis. Extensive experimental results on DISFA and DISFA+ show that our approach outperforms the state‐of‐the‐art both quantitatively and qualitatively in terms of the photo‐realism and expressiveness of the generated facial expressions.
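
The PyTorch-style sketch below is not taken from the paper; it only illustrates the kind of two-headed discriminator the abstract describes: a patch-attention head that regresses per-AU intensities over a grid of local features, paired with a global image encoder that outputs an adversarial real/fake score. The class name, layer sizes, and attention formulation (PatchAttentiveDiscriminator, patch_dim, num_aus) are illustrative assumptions, not the authors' architecture.

# Minimal sketch (assumed PyTorch idiom, illustrative layer sizes).
import torch
import torch.nn as nn

class PatchAttentiveDiscriminator(nn.Module):
    def __init__(self, num_aus=12, patch_dim=128):
        super().__init__()
        # Shared convolutional backbone producing a grid of patch features.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, patch_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        # Patch-attention head: per-AU attention weights over patches,
        # plus per-patch AU intensity estimates.
        self.attn = nn.Linear(patch_dim, num_aus)
        self.intensity = nn.Linear(patch_dim, num_aus)
        # Global encoder head: real/fake score from pooled global features.
        self.global_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(patch_dim, 1)
        )

    def forward(self, x):
        feat = self.backbone(x)                           # (B, C, H, W) patch grid
        patches = feat.flatten(2).transpose(1, 2)         # (B, H*W, C)
        attn = torch.softmax(self.attn(patches), dim=1)   # attention over patches per AU
        au_int = (attn * self.intensity(patches)).sum(1)  # (B, num_aus) AU intensities
        realness = self.global_head(feat)                 # (B, 1) adversarial score
        return au_int, realness

# Example usage:
# d = PatchAttentiveDiscriminator()
# au_pred, realness = d(torch.randn(2, 3, 128, 128))

In the abstract's framing, the AU-intensity head plays the "analysis" role of the synthesis‐by‐analysis loop while the global head supplies the usual adversarial signal; the balanced sampling mentioned in the abstract would act on the training data and is not shown here.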
Citation
@article{10.1111:cgf.14202,
  journal   = {Computer Graphics Forum},
  title     = {{Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN}},
  author    = {Zhao, Yong and Yang, Le and Pei, Ercheng and Oveneke, Meshia Cédric and Alioscha‐Perez, Mitchel and Li, Longfei and Jiang, Dongmei and Sahli, Hichem},
  year      = {2021},
  publisher = {© 2021 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd},
  ISSN      = {1467-8659},
  DOI       = {10.1111/cgf.14202}
}