Academic Article

Facial Chirality: From Visual Self-Reflection to Robust Facial Feature Learning
Document Type
Periodical
Source
IEEE Transactions on Multimedia, vol. 24, pp. 4275-4284, 2022
Subject
Components, Circuits, Devices and Systems
Communication, Networking and Broadcast Technologies
Computing and Processing
General Topics for Engineers
Faces
Feature extraction
Transformers
Reflection
Robustness
Facial features
Face recognition
Facial expression
visual chirality
feature disentanglement
deep learning
vision transformer
Language
English
ISSN
1520-9210
1941-0077
Abstract
As a fundamental vision task, facial expression recognition has made substantial progress recently. However, recognition performance often degrades significantly in real-world scenarios due to the lack of robust facial features. In this paper, we propose an effective facial feature learning method that takes advantage of facial chirality to discover discriminative features for facial expression recognition. Most previous studies implicitly assume that human faces are symmetric; our work reveals that facial asymmetry can instead be a crucial cue. Given a face image and its reflection, without additional labels, we decouple the emotion-invariant facial features from the input image pair to better capture the emotion-related facial features. Moreover, because our model aligns the emotion-related features of the image pair to enhance recognition performance, the value of precise facial landmark alignment as a pre-processing step is reconsidered in this paper. Experiments show that the learned emotion-related features outperform state-of-the-art methods on several facial expression recognition benchmarks as well as real-world occlusion datasets, demonstrating the effectiveness and robustness of the proposed model.
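The abstract's core idea is to pair each face image with its mirror reflection and align the emotion-related part of their features while the emotion-invariant part absorbs identity and asymmetry cues. The snippet below is a minimal, hypothetical sketch of that pairing and alignment step in PyTorch; the toy encoder, the hard split of the embedding into two halves, and the MSE alignment loss are illustrative assumptions, not the paper's actual architecture (which is transformer-based) or training objective.

```python
# Illustrative sketch only: pair a face image with its horizontal reflection
# and align the emotion-related features of the pair. All module names and
# the loss are placeholders for exposition, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Hypothetical stand-in for the paper's backbone network."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        f = self.net(x)
        # Split the embedding into an emotion-invariant part (identity, pose, ...)
        # and an emotion-related part; this hard split is a simplification.
        half = f.shape[1] // 2
        return f[:, :half], f[:, half:]

encoder = ToyEncoder()
img = torch.rand(8, 3, 112, 112)        # batch of face crops
img_mirror = torch.flip(img, dims=[3])  # horizontal reflection of each face

_, emo_orig = encoder(img)
_, emo_mirror = encoder(img_mirror)

# Align emotion-related features of each image and its reflection so that
# chirality (left/right asymmetry) is pushed into the other feature branch.
align_loss = F.mse_loss(emo_orig, emo_mirror)
print(align_loss.item())
```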