Journal Article

Audio Representation Learning by Distilling Video as Privileged Information
Document Type
Periodical
Source
IEEE Transactions on Artificial Intelligence, 5(1):446-456, Jan. 2024
Subject
Computing and Processing
Representation learning
Knowledge engineering
Support vector machines
Deep learning
Speaker recognition
Emotion recognition
Audiovisual representation learning
deep learning
knowledge distillation
learning using privileged information (LUPI)
multimodal data
Language
English
ISSN
2691-4581
Abstract
Deep audio representation learning using multimodal audiovisual data often leads to better performance than unimodal approaches. However, in real-world scenarios, both modalities are not always available at inference time, causing performance degradation in models trained for multimodal inference. In this article, we propose a novel approach for deep audio representation learning using audiovisual data when the video modality is absent at inference. For this purpose, we adopt teacher–student knowledge distillation under the framework of learning using privileged information (LUPI). While previous methods proposed for LUPI use soft labels generated by the teacher, our method uses embeddings learned by the teacher to train the student network. We integrate our method into two different settings: sequential data, where the features are divided into multiple segments over time, and nonsequential data, where the entire features are treated as one whole segment. In the nonsequential setting, both the teacher and student networks consist of an encoder component and a task header. We use the embeddings produced by the encoder component of the teacher to train the encoder of the student, while the task header of the student is trained using ground-truth labels. In the sequential setting, the networks have an additional aggregation component placed between the encoder and the task header. We use two sets of embeddings produced by the encoder and the aggregation component of the teacher to train the student. As in the nonsequential setting, the task header of the student network is trained using ground-truth labels. We test our framework on two different audiovisual tasks, namely, speaker recognition and speech emotion recognition. Through these experiments, we show that by treating the video modality as privileged information for the main goal of audio representation learning, our method yields considerable improvements over audio-only recognition as well as over prior works that use LUPI.
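The following is a minimal, illustrative sketch (in PyTorch, not the authors' code) of the nonsequential setting described in the abstract: the student's encoder is trained to match embeddings from an audiovisual teacher, while the student's task header is trained on ground-truth labels. The layer sizes, the MSE embedding loss, and the weighting factor alpha are assumptions made for illustration only.

```python
# Hypothetical sketch of embedding-based LUPI distillation (nonsequential setting).
# The teacher sees audio + video features; the student sees audio only.
import torch
import torch.nn as nn

class Net(nn.Module):
    """Encoder component followed by a task header (e.g., a classifier)."""
    def __init__(self, in_dim, emb_dim, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())
        self.task_header = nn.Linear(emb_dim, num_classes)

    def forward(self, x):
        emb = self.encoder(x)
        return emb, self.task_header(emb)

# Dimensions are illustrative assumptions.
teacher = Net(in_dim=128 + 512, emb_dim=64, num_classes=10)  # audiovisual teacher
student = Net(in_dim=128, emb_dim=64, num_classes=10)        # audio-only student

emb_loss = nn.MSELoss()            # match student embeddings to the teacher's
task_loss = nn.CrossEntropyLoss()  # supervised loss on ground-truth labels
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
alpha = 0.5                        # assumed balance between the two losses

def train_step(audio, video, labels):
    with torch.no_grad():  # teacher is kept fixed while training the student
        t_emb, _ = teacher(torch.cat([audio, video], dim=-1))
    s_emb, logits = student(audio)  # video is absent for the student
    loss = alpha * emb_loss(s_emb, t_emb) + (1 - alpha) * task_loss(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example usage with random tensors standing in for real features.
loss = train_step(torch.randn(8, 128), torch.randn(8, 512), torch.randint(0, 10, (8,)))
```

In the sequential setting described in the abstract, the same idea would be applied twice, with an additional distillation term matching the student's aggregation-level embeddings to the teacher's.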