Browsing by Author "Deng, Z."
Now showing 1 - 2 of 2
Item
MyEvents: A Personal Visual Analytics Approach for Mining Key Events and Knowledge Discovery in Support of Personal Reminiscence
(© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Parvinzamir, F.; Zhao, Y.; Deng, Z.; Dong, F.; Chen, Min and Benes, Bedrich

Reminiscence is an important aspect of our lives. It preserves precious memories, allows us to form our own identities and encourages us to accept the past. Our work takes advantage of modern sensor technologies to support reminiscence, enabling self-monitoring of personal activities and individual movement in space and time on a daily basis. This paper presents MyEvents, a web-based personal visual analytics platform designed for non-computing experts that allows for the collection of long-term location and movement data and the generation of event mementos. Our research focuses on two prominent goals in event reminiscence: (1) selection subjectivity and human involvement in the process of self-knowledge discovery and memento creation; and (2) the enhancement of event familiarity by presenting target events and their related information for optimal memory recall and reminiscence. A novel multi-significance event ranking model is proposed to determine significant events in the personal history according to user preferences for event category, frequency and regularity. The evaluation results show that MyEvents effectively fulfils the reminiscence goals and tasks.
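The abstract names three significance cues (event category preference, frequency and regularity) but does not publish the ranking formula. As a minimal illustrative sketch, the weighted score below combines those three cues; the weights, the Event fields and the significance() function are hypothetical and are not the authors' model.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Event:
    category: str      # e.g. "travel", "sport" (hypothetical categories)
    frequency: float   # occurrences of similar events per month
    regularity: float  # 0..1, how periodic the recurrence pattern is

def significance(event: Event,
                 category_pref: Dict[str, float],
                 w_cat: float = 0.5, w_freq: float = 0.3, w_reg: float = 0.2) -> float:
    """Weighted combination of the three cues; rare, irregular events in a
    preferred category rank highest (assumed weighting, not the paper's)."""
    cat_score = category_pref.get(event.category, 0.0)
    rarity = 1.0 / (1.0 + event.frequency)   # rarer events are more memorable
    irregularity = 1.0 - event.regularity    # one-off events beat weekly routines
    return w_cat * cat_score + w_freq * rarity + w_reg * irregularity

def rank_events(events: List[Event], prefs: Dict[str, float]) -> List[Event]:
    """Return events ordered from most to least significant."""
    return sorted(events, key=lambda e: significance(e, prefs), reverse=True)

# A one-off trip outranks a regular weekly run under these preferences.
prefs = {"travel": 1.0, "sport": 0.4}
events = [Event("travel", frequency=0.2, regularity=0.1),
          Event("sport", frequency=4.0, regularity=0.9)]
print([e.category for e in rank_events(events, prefs)])  # ['travel', 'sport']
```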
Item
Real-Time Facial Expression Transformation for Monocular RGB Video
(© 2019 The Eurographics Association and John Wiley & Sons Ltd., 2019)
Ma, L.; Deng, Z.; Chen, Min and Benes, Bedrich

This paper describes a novel real-time end-to-end system for facial expression transformation without the need for any driving source. Its core idea is to directly generate desired and photo-realistic facial expressions on top of an input monocular RGB video. Specifically, an unpaired learning framework is developed to learn the mapping between any two facial expressions in the facial blendshape space. The system then automatically transforms the source expression in an input video clip to a specified target expression through the combination of automated 3D face construction, the learned bi-directional expression mapping and automated lip correction. It can be applied to new users without additional training. Its effectiveness is demonstrated through many experiments on faces from live and online video, with different identities, ages, speech and expressions.
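The abstract says the bi-directional mapping is learned from unpaired data but does not name the architecture. As a rough sketch of what a mapping "in the facial blendshape space" operates on, the class below stands the learned model in with a fitted linear map over blendshape weight vectors; the class name, the 46-coefficient rig size and the least-squares fit are all assumptions, not the paper's method.

```python
import numpy as np

N_BLENDSHAPES = 46  # common rig size (assumed; the paper's count is not given here)

class ExpressionMapper:
    """Hypothetical stand-in for the learned bi-directional expression mapping:
    a linear map between blendshape weight vectors of two expressions."""

    def __init__(self, n: int = N_BLENDSHAPES):
        self.A = np.eye(n)       # source -> target map
        self.A_inv = np.eye(n)   # target -> source map

    def fit(self, src: np.ndarray, tgt: np.ndarray) -> None:
        # Least-squares fit on (frames x n) weight matrices. Paired data is
        # used here purely for illustration; the actual system trains on
        # unpaired expression sets.
        self.A, *_ = np.linalg.lstsq(src, tgt, rcond=None)
        self.A_inv = np.linalg.pinv(self.A)

    def to_target(self, w: np.ndarray) -> np.ndarray:
        return np.clip(w @ self.A, 0.0, 1.0)   # keep weights in the valid range

    def to_source(self, w: np.ndarray) -> np.ndarray:
        return np.clip(w @ self.A_inv, 0.0, 1.0)

# Per frame the full pipeline would be: track the face, fit blendshape
# weights, map them, then re-render the face with the mapped weights
# (plus the lip correction the paper describes).
mapper = ExpressionMapper()
weights = np.random.default_rng(0).uniform(0, 1, (1, N_BLENDSHAPES))
print(mapper.to_target(weights).shape)  # (1, 46)
```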