Academic Paper

Time-Continuous Audiovisual Fusion with Recurrence vs Attention for In-The-Wild Affect Recognition
Document Type
Conference
Source
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 2381-2390, Jun. 2022
Subject
Computing and Processing
Training
Recurrent neural networks
Face recognition
Computational modeling
Speech recognition
Network architecture
Data models
Language
English
ISSN
2160-7516
Abstract
This paper presents our contribution to the 3rd Affective Behavior Analysis in-the-Wild (ABAW) challenge. Exploiting the complementarity among multimodal data streams is of vital importance for recognising dimensional affect from in-the-wild audiovisual data, as the affect-wise contribution of the involved modalities may change over time. Recurrence and attention are two of the most widely used mechanisms in the literature for modelling the temporal dependencies of audiovisual data sequences. To clearly understand the performance differences between recurrent and attention models in audiovisual affect recognition, we present a comprehensive evaluation of fusion models based on LSTM-RNNs, self-attention, and cross-modal attention, trained for valence and arousal estimation. In particular, we study the impact of a key design choice: the modelling complexity of the CNN backbones that provide features to the temporal models, with and without end-to-end learning. We train the audiovisual affect recognition models on the in-the-wild Aff-wild2 corpus, systematically tuning the hyper-parameters involved in the network architecture design and training optimisation. Our extensive evaluation of the audiovisual fusion models indicates that, under various experimental settings, attention models may not necessarily be the optimal choice over RNNs for time-continuous multimodal fusion in emotion recognition.
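
To make the two fusion families compared in the abstract concrete, below is a minimal PyTorch sketch of a recurrent (LSTM) fusion head and a cross-modal attention fusion head for frame-level valence/arousal regression. This is not the authors' implementation: the class names, layer sizes (audio_dim, video_dim, hidden_dim, d_model, n_heads), and output bounding are illustrative assumptions, shown only to clarify how the two mechanisms combine per-frame audio and visual features over time.

```python
# Hypothetical sketch (not the ABAW submission's code): two fusion mechanisms
# for time-continuous valence/arousal estimation from audiovisual features.
import torch
import torch.nn as nn


class LSTMFusion(nn.Module):
    """Recurrent fusion: concatenate modality features, model time with an LSTM."""

    def __init__(self, audio_dim=128, video_dim=512, hidden_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(audio_dim + video_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # valence and arousal per frame

    def forward(self, audio, video):  # (B, T, audio_dim), (B, T, video_dim)
        fused, _ = self.lstm(torch.cat([audio, video], dim=-1))
        return torch.tanh(self.head(fused))  # predictions bounded to [-1, 1]


class CrossModalAttentionFusion(nn.Module):
    """Attention fusion: each modality attends to the other across time."""

    def __init__(self, audio_dim=128, video_dim=512, d_model=256, n_heads=4):
        super().__init__()
        self.proj_a = nn.Linear(audio_dim, d_model)
        self.proj_v = nn.Linear(video_dim, d_model)
        self.a2v = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.v2a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(2 * d_model, 2)

    def forward(self, audio, video):
        a, v = self.proj_a(audio), self.proj_v(video)
        # Audio queries attend over video keys/values, and vice versa.
        a_att, _ = self.a2v(query=a, key=v, value=v)
        v_att, _ = self.v2a(query=v, key=a, value=a)
        return torch.tanh(self.head(torch.cat([a_att, v_att], dim=-1)))


if __name__ == "__main__":
    audio = torch.randn(2, 50, 128)   # 2 clips, 50 frames of audio features
    video = torch.randn(2, 50, 512)   # matching visual (CNN backbone) features
    print(LSTMFusion()(audio, video).shape)                 # (2, 50, 2)
    print(CrossModalAttentionFusion()(audio, video).shape)  # (2, 50, 2)
```

In both sketches the per-frame features would come from the CNN backbones mentioned in the abstract; training them end-to-end versus freezing them is exactly the design choice the paper evaluates.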