Academic Paper

Transformer embedded spectral-based graph network for facial expression recognition
Document Type
Original Paper
Source
International Journal of Machine Learning and Cybernetics. 15(6):2063-2077
Subject
Graph convolution
Facial expression
Transformer encoder
Action units
Language
English
ISSN
1868-8071
1868-808X
Abstract
Deep graph convolution networks, which exploit relations among facial muscle movements for facial expression recognition (FER), have achieved great success. Due to their limited receptive field, however, existing graph convolution operations struggle to model long-range muscle movement relations, which play a crucial role in FER. To alleviate this issue, we introduce the transformer encoder into graph convolution networks: the vision transformer enables all facial muscle movements to interact within a global receptive field and to model more complex relations. Specifically, we construct facial graph data by cropping regions of interest (ROIs) associated with facial action units, and each ROI is represented by the hidden-layer representation of a deep auto-encoder. To effectively extract features from the constructed facial graph data, we propose a novel transformer embedded spectral-based graph convolution network (TESGCN), in which the transformer encoder is exploited to capture complex relations among facial ROIs for FER. Compared to vanilla graph convolution networks, we empirically show the superiority of the proposed model through extensive experiments across four facial expression datasets. Moreover, the proposed TESGCN has only 80K parameters and a 0.41MB model size, and achieves results comparable to existing lightweight networks.
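The pipeline the abstract describes — ROI node features passed through a spectral-based graph convolution for local relations, then a transformer-style self-attention for long-range relations — can be sketched minimally in NumPy. This is not the authors' implementation: the ROI count, feature sizes, random adjacency, single-head attention, and 7-class output head are all illustrative assumptions.

```python
import numpy as np

def normalized_adjacency(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, the standard
    # propagation rule of spectral-based graph convolution networks
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_layer(X, A_norm, W):
    # One graph convolution: aggregate neighboring ROI features, project, ReLU
    return np.maximum(A_norm @ X @ W, 0.0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head self-attention: every ROI node attends to every other node,
    # supplying the global receptive field that graph convolution lacks
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    return softmax(Q @ K.T / np.sqrt(K.shape[1])) @ V

rng = np.random.default_rng(0)
n_rois, d_in, d_hid = 8, 16, 16            # hypothetical: 8 action-unit ROIs
X = rng.standard_normal((n_rois, d_in))    # auto-encoder hidden representations per ROI
A = (rng.random((n_rois, n_rois)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T             # symmetric ROI adjacency (illustrative)

A_norm = normalized_adjacency(A)
H = gcn_layer(X, A_norm, rng.standard_normal((d_in, d_hid)))   # local relations
Z = self_attention(H, *(rng.standard_normal((d_hid, d_hid)) for _ in range(3)))  # long-range relations
logits = Z.mean(axis=0) @ rng.standard_normal((d_hid, 7))      # pool nodes -> 7 expression classes
print(logits.shape)
```

The sketch only illustrates the ordering of the two stages; the paper's actual TESGCN layer sizes and graph construction differ.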