Academic paper

Self-Supervised Image Colorization for Semantic Segmentation of Urban Land Cover
Document Type
Conference
Source
2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 3468-3471, Jul. 2021
Subject
Aerospace
Geoscience
Photonics and Electrooptics
Signal Processing and Analysis
Integrated circuits
Image segmentation
Semantics
Decision making
Training data
Benchmark testing
Task analysis
Image colorization
Feature learning
Self-supervision
Transfer learning
Semantic segmentation
Language
English
ISSN
2153-7003
Abstract
The task of semantic segmentation plays a central role in the analysis of remotely sensed imagery: each image pixel is assigned to a particular class, yielding semantic knowledge in the form of a classification map that facilitates decision-making processes. Nowadays, semantic segmentation is mainly solved with supervised pre-training, which needs plenty of labels to learn a mapping function that produces useful features. As an alternative, self-supervised learning (SSL) techniques explore the data itself, find supervision signals, and solve a so-called pretext task to obtain robust representations. The current work investigates image colorization (IC) as a pretext task to learn feature representations, which are then transferred to a U-Net for predicting semantic segmentations of urban scenes. The study examines two benchmark datasets for validation and generation of classification maps. The results show that the features learned through colorization achieve accurate segmentation results, both when using unlabeled ImageNet training data and when using the actual datasets. The latter contain up to half a million examples, a modest amount compared to the number of annotated images in ImageNet.
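To illustrate the colorization pretext task described in the abstract, the sketch below shows how a self-supervised (input, target) pair can be built from an unlabeled RGB image: the grayscale luminance is the model input and the original colors are the prediction target, so the supervision signal comes entirely from the data itself. This is a minimal, hypothetical sketch (the function name, luminance weights, and array layout are assumptions, not the authors' implementation).

```python
import numpy as np

def make_colorization_pair(rgb):
    """Build an (input, target) pair for a colorization pretext task.

    No manual labels are needed: the network receives the grayscale
    luminance and must predict the original colors of the same image.
    """
    rgb = rgb.astype(np.float32) / 255.0
    # Luminance via ITU-R BT.601 weights (an assumed choice) as model input.
    gray = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    # Input shape (H, W, 1), target shape (H, W, 3).
    return gray[..., None], rgb

# Example: one synthetic 64x64 RGB image stands in for unlabeled data.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
x, y = make_colorization_pair(img)
```

An encoder trained on many such pairs can then have its weights transferred to the U-Net encoder for the downstream segmentation task.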