Academic Article

Domain Invariant and Compact Prototype Contrast Adaptation for Hyperspectral Image Classification
Document Type
Periodical
Source
IEEE Transactions on Geoscience and Remote Sensing, 62:1-14, 2024
Subject
Geoscience
Signal Processing and Analysis
Self-supervised learning
Feature extraction
Prototypes
Adaptation models
Hyperspectral imaging
Three-dimensional displays
Sun
Contrastive learning
hyperspectral image classification (HSIC)
prototype contrast adaptation
Language
ISSN
0196-2892
1558-0644
Abstract
Contrastive learning achieves good performance on hyperspectral image classification (HSIC), but its application to cross-scene classification remains challenging due to domain shifts. Domain adaptation (DA) techniques can reduce domain discrepancy and transfer a model between two domains. Recently, instance-level contrast adaptation methods have been used to connect two related domains and extract domain-invariant features. However, such methods are sensitive to noisy samples and learn only low-level discriminative features. To solve these problems, a novel domain invariant and compact prototype contrast adaptation (DIC-proCA) framework is proposed for HSIC. In DIC-proCA, a prototype is introduced into the contrastive learning framework; serving as a representative embedding of semantically similar samples, it captures class representativeness and alleviates the negative impact of outliers. Taking into account both the class representativeness of the prototype and the discriminability of individual samples, a bidirectional interdomain instance-to-prototype contrastive loss is proposed. It explicitly expresses feature relationships between categories across the two domains and thereby extracts domain-invariant features. Meanwhile, instance-level contrastive learning after data augmentation facilitates the mining of compact discriminative features within the target domain. In addition, a label smoothing (LS) strategy makes clusters within each domain more compact and evenly separated, improving the model's generalizability. Experiments on three cross-scene HSIC tasks demonstrate that the proposed DIC-proCA outperforms several advanced DA algorithms.
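The abstract's central idea, pulling each instance toward the prototype (class-mean embedding) of its class under an InfoNCE-style objective, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names (`class_prototypes`, `instance_to_prototype_loss`) and the temperature value are assumptions, and the bidirectional interdomain version would apply the same loss in both source-to-target and target-to-source directions.

```python
import math

def l2_normalize(v):
    """Unit-normalize a vector (zero vectors pass through unchanged)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def class_prototypes(embeddings, labels, num_classes):
    """Hypothetical helper: mean embedding per class, L2-normalized."""
    dim = len(embeddings[0])
    sums = [[0.0] * dim for _ in range(num_classes)]
    counts = [0] * num_classes
    for e, y in zip(embeddings, labels):
        for i, x in enumerate(e):
            sums[y][i] += x
        counts[y] += 1
    return [l2_normalize([s / max(c, 1) for s in proto])
            for proto, c in zip(sums, counts)]

def instance_to_prototype_loss(embeddings, labels, prototypes, tau=0.1):
    """InfoNCE-style loss: softmax over cosine similarities to all class
    prototypes, maximizing the likelihood of each instance's own class."""
    total = 0.0
    for e, y in zip(embeddings, labels):
        e = l2_normalize(e)
        logits = [sum(a * b for a, b in zip(e, p)) / tau for p in prototypes]
        m = max(logits)  # log-sum-exp with max-shift for numerical stability
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_z - logits[y]  # negative log-likelihood of true class
    return total / len(embeddings)
```

In a cross-domain setting, prototypes would typically be built from labeled source-domain embeddings while the loss is applied to (pseudo-labeled) target instances, so that averaging over many samples per class dampens the influence of noisy outliers relative to instance-to-instance contrast.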