Academic Article

Self-Supervised Representation Learning: Introduction, advances, and challenges
Document Type
Periodical
Source
IEEE Signal Processing Magazine, 39(3):42-62, May 2022
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Computing and Processing
Representation learning
Deep learning
Annotations
Computational efficiency
Self-supervised learning
Language
English
ISSN
1053-5888 (Print)
1558-0792 (Electronic)
Abstract
Self-supervised representation learning (SSRL) methods aim to provide powerful, deep feature learning without the requirement of large annotated data sets, thus alleviating the annotation bottleneck—one of the main barriers to the practical deployment of deep learning today. These techniques have advanced rapidly in recent years, with their efficacy approaching and sometimes surpassing fully supervised pretraining alternatives across a variety of data modalities, including image, video, sound, text, and graphs. This article introduces this vibrant area, including key concepts, the four main families of approaches and associated state-of-the-art techniques, and how self-supervised methods are applied to diverse modalities of data. We further discuss practical considerations including workflows, representation transferability, and computational cost. Finally, we survey major open challenges in the field that provide fertile ground for future work.
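
As a concrete illustration of one of the families of approaches the abstract refers to, the following is a minimal sketch of a contrastive self-supervised objective (a SimCLR-style NT-Xent loss). The use of PyTorch, the batch size, embedding dimension, and temperature are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of a contrastive SSRL objective (SimCLR-style NT-Xent loss).
# Assumes PyTorch; the random tensors stand in for encoder outputs on two
# augmented views of the same batch of unlabeled examples.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views z1, z2 of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, d) stacked embeddings
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float('-inf'))     # exclude self-similarity
    # The positive for each sample is its other augmented view, offset by N.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Example usage with placeholder embeddings (batch of 8, dimension 128).
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

In practice the two views would come from passing differently augmented copies of the same inputs through a shared encoder; the resulting representations are what get transferred to downstream tasks, as discussed in the article.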