Academic Paper

Source-Free Domain Adaptation for RGB-D Semantic Segmentation with Vision Transformers
Document Type
Conference
Source
2024 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), pp. 607-616, Jan. 2024
Subject
Bioengineering
Computing and Processing
Engineering Profession
Adaptation models
Image color analysis
Semantic segmentation
Semantics
Transformers
Feature extraction
Data models
Language
English
ISSN
2690-621X
Abstract
With the increasing availability of depth sensors, multimodal frameworks that combine color information with depth data are gaining interest. However, ground truth data for semantic segmentation is burdensome to provide, making domain adaptation a significant research area. Yet most domain adaptation methods are not able to effectively handle multimodal data. Specifically, we address the challenging source-free domain adaptation setting, where adaptation is performed without reusing source data. We propose MISFIT: MultImodal Source-Free Information fusion Transformer, a depth-aware framework which injects depth data into a segmentation module based on vision transformers at multiple stages, namely at the input, feature, and output levels. Color and depth style transfer helps early-stage domain alignment, while re-wiring self-attention between modalities creates mixed features, allowing the extraction of better semantic content. Furthermore, a depth-based entropy minimization strategy is proposed to adaptively weight regions at different distances. Our framework, which is also the first approach using RGB-D vision transformers for source-free semantic segmentation, shows noticeable performance improvements over standard strategies.
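To make the depth-based entropy minimization described in the abstract concrete, the following is a minimal PyTorch sketch. It is an illustrative assumption, not the paper's implementation: the function name depth_weighted_entropy_loss, the per-image normalization of depth to [0, 1], and the linear weighting 1 + alpha * depth are all hypothetical stand-ins for whatever distance-dependent weighting MISFIT actually uses.

import torch
import torch.nn.functional as F

def depth_weighted_entropy_loss(logits, depth, alpha=1.0):
    # logits: (B, C, H, W) raw segmentation scores from the transformer head.
    # depth:  (B, 1, H, W) depth map spatially aligned with the logits.
    # alpha:  hypothetical knob for how strongly distant regions are emphasized.
    probs = F.softmax(logits, dim=1)
    # Per-pixel Shannon entropy, normalized by log(C) so it lies in [0, 1].
    num_classes = logits.shape[1]
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    entropy = entropy / torch.log(torch.tensor(float(num_classes)))
    # Normalize depth per image to [0, 1]; the actual method may normalize differently.
    d = depth.squeeze(1)
    d_min = d.amin(dim=(1, 2), keepdim=True)
    d_max = d.amax(dim=(1, 2), keepdim=True)
    d = (d - d_min) / (d_max - d_min + 1e-8)
    # Adaptively weight pixels by distance before averaging the entropy.
    weights = 1.0 + alpha * d
    return (weights * entropy).mean()

In a source-free setting, an unsupervised loss of this kind would be minimized on target images alone, alongside the framework's other adaptation objectives, since source data cannot be reused.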