Academic Paper

Rethinking Deep Supervision for Brain Tumor Segmentation
Document Type
Periodical
Source
IEEE Transactions on Artificial Intelligence, 5(5):2103-2116, May 2024
Subject
Computing and Processing
Decoding
Tumors
Image segmentation
Transformers
Three-dimensional displays
Head
Encoding
Brain tumor segmentation
deep supervision
transformer
Language
English
ISSN
2691-4581
Abstract
Accurate segmentation of brain tumors is crucial for diagnostic evaluation and clinical planning. Convolution-based and Transformer-based models have shown promising results in automatic brain tumor segmentation. In these models, a deep supervision strategy has been widely adopted for parameter optimization. As a key part of this strategy, the segmentation head is responsible for generating early segmentations in the training phase. However, although it contains informative cues valuable for decoder refinement, the segmentation head is usually discarded during inference. In this work, we propose a novel approach called deep supervision guided transformer (DSGT) for brain tumor segmentation. DSGT leverages the informative cues within the segmentation head to guide decoding by building guided heads upon a Transformer-based decoder. Specifically, we first extract semantic features from the segmentation head and then design two guided modules for feature refinement and noise removal to generate sample-wise guiding maps. The resulting maps are fed back to the decoder for auxiliary guidance. Our experiments on two publicly available brain tumor segmentation datasets, BraTS2019 and BraTS2020, demonstrate that DSGT achieves superior segmentation performance compared with state-of-the-art approaches.
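The deep supervision strategy mentioned in the abstract attaches auxiliary segmentation heads to intermediate decoder stages and supervises each against (downsampled) ground truth during training. A minimal NumPy sketch of the resulting multi-scale loss is given below; it assumes multi-class cross-entropy and nearest-neighbour label downsampling, and all function names and weights are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, labels):
    # logits: (H, W, C) class scores; labels: (H, W) integer class map.
    p = softmax(logits)
    h, w = labels.shape
    picked = p[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.mean(np.log(picked + 1e-12))

def deep_supervision_loss(head_logits, labels, weights):
    """Weighted sum of per-head losses. Each auxiliary head predicts at a
    coarser resolution, so the label map is downsampled by striding
    (nearest-neighbour) to match that head's output size."""
    total = 0.0
    for logits, w in zip(head_logits, weights):
        stride = labels.shape[0] // logits.shape[0]
        total += w * cross_entropy(logits, labels[::stride, ::stride])
    return total
```

At inference time a conventional pipeline keeps only the final head and discards the auxiliary ones; the paper's contribution is to instead feed cues derived from those heads back into the decoder as guiding maps.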