Academic Paper

Spatio-Temporal Self-Attention Network for Fire Detection and Segmentation in Video Surveillance
Document Type
article
Source
IEEE Access, Vol. 10, pp. 1259-1275 (2022)
Subject
Fire detection
early detection
disaster management
small-sized fire
video fire segmentation
semi-supervised
Electrical engineering. Electronics. Nuclear engineering
TK1-9971
Language
English
ISSN
2169-3536
Abstract
Approaches based on Convolutional Neural Networks (CNNs) are popular for various image and video tasks due to their state-of-the-art performance. However, for problems like object detection and segmentation, CNNs still struggle with objects of arbitrary shape and size, occlusions, and varying viewpoints. This makes them largely unsuitable for fire detection and segmentation, since flames can have an unpredictable scale and shape. In this paper, we propose a method that detects and segments fire regions with special consideration of their arbitrary sizes and shapes. Specifically, our approach uses a self-attention mechanism to augment spatial characteristics with temporal features, allowing the network to reduce its reliance on spatial factors like shape or size and to exploit robust spatio-temporal dependencies. Our pipeline has two stages: in the first stage, we extract region proposals using spatio-temporal features, and in the second stage, we classify whether each region proposal is flame or not. Due to the scarcity of large fire datasets, we adopt a transfer learning strategy, pre-training our classifier on the ImageNet dataset. Additionally, our spatio-temporal network requires only semi-supervision: a single ground-truth segmentation mask per frame-sequence input. Experimentally, our proposed method significantly outperforms state-of-the-art fire detection methods, with a 2-4% relative improvement in F1-score for large-scale fires and a nearly 60% relative improvement for small fires at a very early stage.
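
To make the abstract's core idea concrete, the following is a minimal PyTorch sketch of a spatio-temporal self-attention block in the spirit described above: per-frame CNN features attend jointly across space and time, so each spatial location can borrow evidence from other frames. The module name, tensor layout, and layer sizes are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SpatioTemporalSelfAttention(nn.Module):
    """Hypothetical sketch: self-attention over all (frame, location) tokens."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, channels, height, width) per-frame CNN features
        b, t, c, h, w = feats.shape
        # Flatten every (frame, location) pair into one token sequence so that
        # attention spans both space and time.
        tokens = feats.permute(0, 1, 3, 4, 2).reshape(b, t * h * w, c)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)  # residual connection + norm
        # Restore the (batch, time, channels, height, width) layout.
        return tokens.reshape(b, t, h, w, c).permute(0, 1, 4, 2, 3)


# Usage: 2 clips of 4 frames, each with a 64-channel 16x16 feature map.
x = torch.randn(2, 4, 64, 16, 16)
out = SpatioTemporalSelfAttention(64)(x)
print(out.shape)  # torch.Size([2, 4, 64, 16, 16])
```

In a two-stage pipeline like the one the abstract outlines, a block of this kind would sit between the per-frame feature extractor and the region-proposal stage, so that proposals are scored on temporally augmented features rather than on single-frame appearance alone.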