Academic Paper

Deep Continuous Matching Network for more Robust Multi-Modal Remote Sensing Image Patch Matching
Document Type
Conference
Source
IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, pp. 6057-6060, Jul. 2023
Subject
Aerospace
Components, Circuits, Devices and Systems
Fields, Waves and Electromagnetics
Geoscience
Signal Processing and Analysis
Keywords
Degradation
Representation learning
Image matching
Optical imaging
Feature extraction
Robustness
Radar polarimetry
descriptor learning
continuous learning
modality invariance
image patch matching
Language
English
ISSN
2153-7003
Abstract
Due to the powerful feature extraction capabilities of deep neural networks, traditional approaches are gradually being replaced by deep learning approaches in image matching tasks. For multi-modal image patch matching, the deep model should mainly learn modality-invariant features. For multi-modal images with rotation transformation (RT), the deep model should learn modality-invariant and rotation-invariant features simultaneously. However, a model trained for the latter task performs worse on the former task. The main reason is that the modality invariance of the features degenerates. This paper proposes a deep multi-modal remote sensing image matching network (DCMNet) that combines descriptor learning and continuous learning to solve this problem. Firstly, DCMNet is trained to learn modality-invariant features for multi-modal image patch matching. Then, DCMNet is optimized for multi-modal image patch matching with RT. In the latter learning process, we reduce the change of the parameters that are important for learning modality-invariant features. Experiments demonstrate the effectiveness and robustness of DCMNet in alleviating the modality invariance degradation problem of features.
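The two-stage scheme in the abstract (train for modality invariance first, then fine-tune for rotation while restricting changes to parameters important for the first stage) resembles regularization-based continual learning such as Elastic Weight Consolidation. Below is a minimal NumPy sketch of that general idea; the function names, the diagonal-importance weights, and the quadratic-penalty form are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def importance_penalty(theta, theta_star, importance, lam=1.0):
    """EWC-style penalty: (lam/2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameters during stage-two (RT) training
    theta_star -- parameters frozen after stage-one training
    importance -- per-parameter importance (e.g. diagonal Fisher info)
    """
    diff = np.asarray(theta, float) - np.asarray(theta_star, float)
    return 0.5 * lam * float(np.sum(np.asarray(importance, float) * diff ** 2))

def penalty_grad(theta, theta_star, importance, lam=1.0):
    """Gradient of the penalty w.r.t. theta: lam * F * (theta - theta*)."""
    diff = np.asarray(theta, float) - np.asarray(theta_star, float)
    return lam * np.asarray(importance, float) * diff

# Toy usage: parameter 0 is "important" for modality invariance,
# parameter 1 is not, so moving parameter 0 is penalized much harder.
theta_star = np.array([1.0, -2.0])   # after stage-one training
importance = np.array([10.0, 0.1])   # hypothetical importance weights
theta = np.array([1.5, 0.0])         # drifted during stage-two training

p = importance_penalty(theta, theta_star, importance)   # 1.45
g = penalty_grad(theta, theta_star, importance)         # [5.0, 0.2]
```

Adding this penalty to the stage-two matching loss pulls important parameters back toward their stage-one values, which is one common way to keep the first task's (here, modality-invariant) features from degrading.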