Journal Article

Cross-City Semantic Segmentation (C2Seg) in Multimodal Remote Sensing: Outcome of the 2023 IEEE WHISPERS C2Seg Challenge
Document Type
Periodical
Source
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 17:8851-8862, 2024
Subject
Geoscience
Signal Processing and Analysis
Power, Energy and Industry Applications
Analytical models
Semantic segmentation
Urban areas
Benchmark testing
Signal processing
Data models
Sensors
Artificial intelligence (AI)
cross-city
deep learning
hyperspectral
land cover
multimodal benchmark datasets
remote sensing
semantic segmentation
Language
English
ISSN
1939-1404 (print)
2151-1535 (electronic)
Abstract
Given the ever-growing availability of remote sensing data (e.g., Gaofen in China, Sentinel in the EU, and Landsat in the USA), multimodal remote sensing techniques have been garnering increasing attention and have made extraordinary progress in various Earth observation (EO)-related tasks. The data acquired by different platforms can provide diverse and complementary information. The joint exploitation of multimodal remote sensing has proven effective in improving existing methods for land-use/land-cover segmentation in urban environments. To boost technical breakthroughs and accelerate the development of EO applications across cities and regions, one important task is to build novel cross-city semantic segmentation models based on modern artificial intelligence technologies and emerging multimodal remote sensing data, yielding semantic segmentation models with high transferability among different cities and regions. The Cross-City Semantic Segmentation contest is organized in conjunction with the 13th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS).