Control of Feature-point-driven Facial Animation Using a Hypothetical Face

dc.contributor.author: Su, Ming-Shing
dc.contributor.author: Ko, Ming-Tat
dc.contributor.author: Cheng, Kuo-Young
dc.date.accessioned: 2015-02-16T07:16:56Z
dc.date.available: 2015-02-16T07:16:56Z
dc.date.issued: 2001
dc.description.abstract: A new approach to generating feature-point-driven facial animation is presented. In the proposed approach, a hypothetical face is used to control the animation of a face model. The hypothetical face is constructed by connecting predefined facial feature points into a net, so that each facet of the net is represented by a Coons surface. The face model is deformed by changing the shape of the hypothetical face, which is done by moving the feature points and adjusting their tangents. Experimental results show that this hypothetical-face-based method can generate facial expressions that are visually almost identical to those of a real face.
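The abstract's facets are each represented by a Coons surface. As a minimal illustration, the sketch below evaluates a generic bilinearly blended Coons patch from four boundary curves; this is a standard textbook construction, not the paper's exact formulation (which also incorporates feature-point tangents, suggesting a Hermite-blended variant). All function names here are illustrative assumptions.

```python
def coons_patch(c0, c1, d0, d1, u, v):
    """Evaluate a bilinearly blended Coons patch at parameters (u, v) in [0, 1]^2.

    c0, c1 map u to 3D points on the bottom/top boundary curves;
    d0, d1 map v to 3D points on the left/right boundary curves.
    The curves must meet at the four corners (e.g. c0(0) == d0(0)).
    """
    def add(p, q):
        return tuple(a + b for a, b in zip(p, q))

    def scale(s, p):
        return tuple(s * a for a in p)

    # Two ruled surfaces, each linearly blending one pair of opposite boundaries
    ruled_c = add(scale(1 - v, c0(u)), scale(v, c1(u)))
    ruled_d = add(scale(1 - u, d0(v)), scale(u, d1(v)))
    # Bilinear patch through the four shared corner points
    corners = add(
        add(scale((1 - u) * (1 - v), c0(0.0)), scale(u * (1 - v), c0(1.0))),
        add(scale((1 - u) * v, c1(0.0)), scale(u * v, c1(1.0))),
    )
    # Coons construction: sum of the ruled surfaces minus the corner bilinear,
    # which reproduces all four boundary curves exactly
    return tuple(rc + rd - cb for rc, rd, cb in zip(ruled_c, ruled_d, corners))
```

For example, with the four edges of a planar unit square as boundaries, the patch reproduces the square: the boundaries interpolate exactly, and interior points blend them smoothly.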
dc.description.number: 4
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 20
dc.identifier.doi: 10.1111/1467-8659.00547
dc.identifier.issn: 1467-8659
dc.identifier.pages: 179-189
dc.identifier.uri: https://doi.org/10.1111/1467-8659.00547
dc.publisher: Blackwell Publishers Ltd and the Eurographics Association
dc.title: Control of Feature-point-driven Facial Animation Using a Hypothetical Face