Journal Article

Reliable Mutual Distillation for Medical Image Segmentation Under Imperfect Annotations
Document Type
Periodical
Source
IEEE Transactions on Medical Imaging, 42(6):1720-1734, Jun. 2023
Subject
Bioengineering
Computing and Processing
Image segmentation
Reliability
Annotations
Noise measurement
Data models
Training
Cleaning
Imperfect annotation
mutual distillation
neural network
medical image segmentation
Language
English
ISSN
0278-0062
1558-254X
Abstract
Convolutional neural networks (CNNs) have made enormous progress in medical image segmentation. The learning of CNNs depends on a large amount of training data with fine annotations. The workload of data labeling can be significantly relieved by collecting imperfect annotations, which only match the underlying ground truths coarsely. However, label noise, which is systematically introduced by the annotation protocols, severely hinders the learning of CNN-based segmentation models. Hence, we devise a novel collaborative learning framework in which two segmentation models cooperate to combat label noise in coarse annotations. First, the complementary knowledge of the two models is exploited by having each model clean training data for the other. Second, to further alleviate the negative impact of label noise and make full use of the training data, the specific reliable knowledge of each model is distilled into the other model with augmentation-based consistency constraints. A reliability-aware sample selection strategy is incorporated to guarantee the quality of the distilled knowledge. Moreover, we employ joint data and model augmentations to broaden the use of reliable knowledge. Extensive experiments on two benchmarks showcase the superiority of our proposed method over existing methods under annotations with different noise levels. For example, our approach improves existing methods by nearly 3% DSC on the lung lesion segmentation dataset LIDC-IDRI under annotations with an 80% noise ratio. Code is available at: https://github.com/Amber-Believe/ReliableMutualDistillation.
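The abstract does not specify how "cleaning" and reliability-aware selection are implemented; a common heuristic in co-teaching-style frameworks is small-loss selection, where each network keeps the lowest-loss samples and hands them to its peer as presumed-clean training data. The sketch below illustrates that generic idea only (function names and the `keep_ratio` parameter are assumptions, not the paper's actual API):

```python
import numpy as np

def select_reliable(losses, keep_ratio):
    """Small-loss selection: keep the fraction of samples with the
    lowest per-sample loss, treating them as reliably annotated.
    Returns the selected sample indices."""
    losses = np.asarray(losses, dtype=float)
    k = max(1, int(round(keep_ratio * len(losses))))
    return np.argsort(losses)[:k]

def cross_clean(losses_a, losses_b, keep_ratio=0.5):
    """Each model cleans data for its peer: model A's low-loss picks
    are used to train model B, and vice versa (co-teaching-style
    exchange of complementary knowledge). `keep_ratio` is typically
    tied to an estimate of the annotation noise rate."""
    idx_for_b = select_reliable(losses_a, keep_ratio)
    idx_for_a = select_reliable(losses_b, keep_ratio)
    return idx_for_a, idx_for_b

# Example: with 50% of samples kept, the two lowest-loss samples
# under model A (indices 0 and 2) are selected to train model B.
idx_for_a, idx_for_b = cross_clean([0.1, 0.9, 0.2, 0.8],
                                   [0.7, 0.1, 0.9, 0.3])
```

The distillation step described in the abstract would then add a consistency loss on these selected samples under joint data and model augmentations, which this sketch does not cover.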