Academic Article

Where and How to Transfer: Knowledge Aggregation-Induced Transferability Perception for Unsupervised Domain Adaptation
Document Type
Periodical
Source
IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(3):1664-1681, Mar. 2024
Subject
Computing and Processing
Bioengineering
Semantics
Visualization
Prototypes
Adaptation models
Task analysis
Knowledge engineering
Uncertainty
Transfer learning
unsupervised domain adaptation
semantic segmentation
medical lesions diagnosis
Language
English
ISSN
0162-8828
2160-9292
1939-3539
Abstract
Unsupervised domain adaptation without accessing expensive annotation processes of target data has achieved remarkable successes in semantic segmentation. However, most existing state-of-the-art methods cannot explore whether semantic representations across domains are transferable or not, which may result in the negative transfer brought by irrelevant knowledge. To tackle this challenge, in this paper, we develop a novel Knowledge Aggregation-induced Transferability Perception (KATP) module for unsupervised domain adaptation, which is a pioneering attempt to distinguish transferable or untransferable knowledge across domains. Specifically, the KATP module is designed to quantify which semantic knowledge across domains is transferable, by incorporating the transferability information propagation from constructed global category-wise prototypes. Based on KATP, we design a novel KATP Adaptation Network (KATPAN) to determine where and how to transfer. The KATPAN contains a transferable appearance translation module $\mathcal{T}_A(\cdot)$ and a transferable representation augmentation module $\mathcal{T}_R(\cdot)$, where both modules construct a virtuous circle of performance promotion. $\mathcal{T}_A(\cdot)$ develops a transferability-aware information bottleneck to highlight where to adapt transferable visual characterizations and modality information; $\mathcal{T}_R(\cdot)$ explores how to augment transferable representations while abandoning untransferable information, and promotes the translation performance of $\mathcal{T}_A(\cdot)$ in return. Comprehensive experiments on several representative benchmark datasets and a medical dataset support the state-of-the-art performance of our model.
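To make the prototype-based idea in the abstract concrete, the following is a minimal sketch (not the authors' code) of how per-class "transferability" scores might be estimated from global category-wise prototypes. The function names, the batch-mean prototype construction, and the use of cosine similarity as the transferability proxy are all illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: category-wise prototypes and a rough per-class
# transferability score. All design choices here are assumptions.
import numpy as np

def class_prototypes(features: np.ndarray, labels: np.ndarray,
                     num_classes: int) -> np.ndarray:
    """Mean feature vector per class; zero vector for absent classes."""
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def transferability_scores(src_protos: np.ndarray,
                           tgt_protos: np.ndarray) -> np.ndarray:
    """Cosine similarity between matched source/target prototypes,
    used here as a stand-in transferability measure (higher = more
    alignable semantic knowledge for that class)."""
    eps = 1e-8  # avoid division by zero for empty-class prototypes
    num = (src_protos * tgt_protos).sum(axis=1)
    den = (np.linalg.norm(src_protos, axis=1)
           * np.linalg.norm(tgt_protos, axis=1) + eps)
    return num / den
```

In a full pipeline, low-scoring classes could be down-weighted during adaptation to reduce the negative transfer the abstract warns about; the actual KATP module propagates transferability information rather than thresholding a single similarity.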