Academic Paper

T-CLAP: Temporal-Enhanced Contrastive Language-Audio Pretraining
Document Type
Working Paper
Source
Subject
Computer Science - Sound
Computer Science - Computation and Language
Computer Science - Machine Learning
Electrical Engineering and Systems Science - Audio and Speech Processing
Language
Abstract
Contrastive language-audio pretraining (CLAP) has been developed to align the representations of audio and language, achieving remarkable performance in retrieval and classification tasks. However, current CLAP models struggle to capture temporal information within audio and text features, presenting substantial limitations for tasks such as audio retrieval and generation. To address this gap, we introduce T-CLAP, a temporal-enhanced CLAP model. We use Large Language Models (LLMs) and mixed-up strategies to generate temporal-contrastive captions for audio clips from extensive audio-text datasets. Subsequently, a new temporal-focused contrastive loss is designed to fine-tune the CLAP model by incorporating these synthetic data. We conduct comprehensive experiments and analyses on multiple downstream tasks. T-CLAP shows improved capability in capturing the temporal relationships of sound events and outperforms state-of-the-art models by a significant margin.
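For context, the symmetric contrastive objective that CLAP-style models build on can be sketched as below. This is a minimal, generic InfoNCE formulation, not the paper's temporal-focused loss; all names and the temperature value are illustrative assumptions.

```python
import numpy as np

def clap_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired audio/text embeddings.

    audio_emb, text_emb: (batch, dim) arrays where row i of each is a matched
    pair. This is a generic CLAP-style sketch, not T-CLAP's temporal loss.
    """
    # L2-normalize so dot products are cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = (a @ t.T) / temperature      # (batch, batch) similarity matrix
    labels = np.arange(len(a))            # matched pairs lie on the diagonal

    def cross_entropy(lg, lb):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lb)), lb].mean()

    # Average the audio-to-text and text-to-audio directions
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))
```

T-CLAP's contribution is to fine-tune against captions whose event *order* is contrasted (e.g. "A then B" vs. "B then A"), so the loss additionally penalizes matching an audio clip to a temporally shuffled caption; the sketch above shows only the standard pairwise alignment term.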
Comment: Preprint submitted to IEEE MLSP 2024