Production-Level Facial Performance Capture Using Deep Convolutional Neural Networks

Abstract
We present a real-time deep learning framework for video-based facial performance capture: the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5-10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips.
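The core inference step described above, a network that maps a single monocular video frame directly to dense 3D vertex positions of the tracked face mesh, can be illustrated with a minimal sketch. All sizes below (frame resolution, vertex count, layer shapes) are placeholder assumptions for illustration, not the architecture used in the paper, and the untrained random weights stand in for a network fitted on the 5-10 minutes of captured footage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes; the real input resolution, network depth,
# and mesh vertex count are not specified in this sketch.
FRAME_H, FRAME_W = 32, 32      # downsampled grayscale input frame
N_VERTICES = 16                # tracked mesh vertices (toy count)

def conv2d(x, k):
    """Valid 2D cross-correlation of a single-channel image."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# One conv layer + ReLU, then a fully connected regression head that
# maps image features to a flat vector of 3D vertex positions.
kernel = rng.standard_normal((3, 3)) * 0.1
feat_dim = (FRAME_H - 2) * (FRAME_W - 2)
W = rng.standard_normal((N_VERTICES * 3, feat_dim)) * 0.01
b = np.zeros(N_VERTICES * 3)

def predict_vertices(frame):
    feat = np.maximum(conv2d(frame, kernel), 0.0).ravel()  # conv + ReLU
    return (W @ feat + b).reshape(N_VERTICES, 3)           # (x, y, z) per vertex

frame = rng.standard_normal((FRAME_H, FRAME_W))
verts = predict_vertices(frame)
print(verts.shape)  # one 3D position per mesh vertex
```

In the full system, training pairs for such a regressor come from the multi-view stereo capture stage, so the monocular network learns to reproduce positions for all vertices, including self-occluded ones.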
Citation

@inproceedings{10.1145:3099564.3099581,
  author    = {Laine, Samuli and Karras, Tero and Aila, Timo and Herva, Antti and Saito, Shunsuke and Yu, Ronald and Li, Hao and Lehtinen, Jaakko},
  editor    = {Bernhard Thomaszewski and KangKang Yin and Rahul Narain},
  title     = {{Production-Level Facial Performance Capture Using Deep Convolutional Neural Networks}},
  booktitle = {Eurographics/ACM SIGGRAPH Symposium on Computer Animation},
  year      = {2017},
  publisher = {ACM},
  issn      = {1727-5288},
  isbn      = {978-1-4503-5091-4},
  doi       = {10.1145/3099564.3099581}
}