Academic Paper

Fuse-Seg: semantic division of Cityscapes based on RGB and Fusion of Thermal Data
Document Type
Conference
Source
2024 International Conference on Emerging Systems and Intelligent Computing (ESIC), pp. 136-142, Feb. 2024
Subject
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
General Topics for Engineers
Signal Processing and Analysis
Temperature distribution
Uncertainty
Thermal resistance
Semantics
Lighting
Data integration
Data collection
Semantic segmentation
Urban scenes
Autonomous driving
Computer vision
RGB data
Thermal data
Language
Abstract
Semantic segmentation of urban scenes is an important stage in many autonomous-vehicle applications, and it has advanced significantly with progress in deep learning. The majority of available semantic segmentation networks employ data from a single sensory modality, typically images in RGB format from visible-light cameras. However, when illumination conditions are poor, that is, in low lighting or total darkness, the segmentation performance of these networks is susceptible to degradation. Our research shows that thermal images, produced by thermographic cameras, are resistant to poor lighting conditions. Consequently, in this article we propose FuseSeg, a method that improves the accuracy and robustness of semantic segmentation of Cityscapes scenes by utilising both RGB and thermal data. RGB data carries the conventional colour information captured by standard cameras, while thermal data captures the infrared emission of objects in the scene. Combining thermal data with RGB data allows FuseSeg to overcome the limitations of using RGB data alone, particularly in difficult situations such as dim lighting or occlusion. By fusing the two modalities, the proposed approach aims to increase the precision and reliability of urban scene understanding.
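The abstract does not specify how the two modalities are combined, so the following is only a rough illustration, not the paper's actual FuseSeg architecture. Two common baselines for RGB-thermal fusion are input-level fusion (stacking the thermal channel onto the RGB channels before the network) and feature-level fusion (element-wise combination of feature maps from two modality-specific encoders). A minimal NumPy sketch, with all function names hypothetical:

```python
import numpy as np

def early_fusion(rgb, thermal):
    """Input-level fusion (illustrative): stack a 1-channel thermal map
    onto a 3-channel RGB image, giving an (H, W, 4) input tensor."""
    return np.concatenate([rgb, thermal], axis=-1)

def feature_fusion(rgb_feat, thermal_feat):
    """Feature-level fusion (illustrative): element-wise sum of
    same-shaped feature maps from two modality-specific encoders."""
    return rgb_feat + thermal_feat

# Toy data: a 4x4 "image" in each modality.
rgb = np.zeros((4, 4, 3))      # RGB image, 3 channels
thermal = np.ones((4, 4, 1))   # thermal image, 1 channel

fused_input = early_fusion(rgb, thermal)
print(fused_input.shape)       # (4, 4, 4)
```

The appeal of fusion in either form is the same one the abstract states: when poor illumination degrades the RGB channels, the thermal channel still carries usable signal about object boundaries.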