Academic Paper

Generating Facial Expression Data: Computational and Experimental Evidence
Document Type
Conference
Source
Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, pp. 94-96
Subject
embodied conversational agents
facial action coding system (facs)
facial expressions
machine learning
non-verbal communication
Language
English
Abstract
It is crucial that natural-looking Embodied Conversational Agents (ECAs) display various verbal and non-verbal behaviors, including facial expressions. The generation of credible facial expressions has been approached by means of different methods, yet remains difficult because of the limited availability of naturalistic data. To infuse more variability into the facial expressions of ECAs, we proposed a model that treats the temporal dynamics of facial behaviors as a countable-state Markov process. Once trained, the model was able to output new sequences of facial expressions from an existing dataset containing facial videos with Action Unit (AU) encodings. The approach was validated by having both computer software and human observers identify facial emotions from videos. Half of the videos employed new sequences of facial expressions generated by the model, while the other half contained sequences selected directly from the original dataset. We found no statistically significant evidence that the newly generated facial expression sequences could be differentiated from the original ones, demonstrating that the model was able to generate new facial expression data indistinguishable from the original data. Our proposed approach could be used to expand the amount of labelled facial expression data in order to create new training sets for machine learning methods.
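The abstract describes modeling the temporal dynamics of facial behavior as a countable-state Markov process and sampling new AU sequences from it. A minimal sketch of that general idea is shown below; the state labels, function names, and toy data are illustrative assumptions, not the authors' actual implementation or dataset.

```python
import random
from collections import defaultdict

def fit_transitions(sequences):
    """Estimate Markov transition probabilities from observed state sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    # Normalise per-state counts into transition probabilities.
    return {s: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for s, nxt in counts.items()}

def sample_sequence(trans, start, length, seed=0):
    """Generate a new sequence by walking the fitted Markov chain."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(length - 1):
        nxt = trans.get(seq[-1])
        if not nxt:  # absorbing state with no observed outgoing transition
            break
        states, probs = zip(*nxt.items())
        seq.append(rng.choices(states, weights=probs)[0])
    return seq

# Toy AU-combination states (e.g. "AU6+AU12", a FACS smile) stand in for real encodings.
data = [["neutral", "AU6+AU12", "AU6+AU12", "neutral"],
        ["neutral", "AU6+AU12", "neutral"]]
trans = fit_transitions(data)
print(sample_sequence(trans, "neutral", 4))
```

Such sampled sequences could then drive an ECA's face, which matches the paper's stated goal of expanding labelled training data.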
