Academic Paper

Self-Distillation for Unsupervised 3D Domain Adaptation
Document Type
Conference
Source
2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 4155-4166, Jan. 2023
Subject
Computing and Processing
Engineering Profession
Point cloud compression
Training
Computer vision
Three-dimensional displays
Art
Benchmark testing
Graph neural networks
Algorithms: 3D computer vision
Machine learning architectures, formulations, and algorithms (including transfer, low-shot, semi-, self-, and un-supervised learning)
Language
ISSN
2642-9381
Abstract
Point cloud classification is a popular task in 3D vision. However, previous works usually assume that point clouds at test time are obtained with the same procedure or sensor as those at training time. Unsupervised Domain Adaptation (UDA), instead, breaks this assumption and tries to solve the task on an unlabeled target domain, leveraging only a supervised source domain. For point cloud classification, recent UDA methods try to align features across domains via auxiliary tasks such as point cloud reconstruction, which, however, do not optimize the discriminative power of the feature space in the target domain. In contrast, in this work we focus on obtaining a discriminative feature space for the target domain by enforcing consistency between a point cloud and its augmented version. We then propose a novel iterative self-training methodology that exploits Graph Neural Networks in the UDA context to refine pseudo-labels. We perform extensive experiments and set the new state of the art on standard UDA benchmarks for point cloud classification. Finally, we show how our approach can be extended to more complex tasks such as part segmentation.
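The consistency idea in the abstract — matching a model's prediction on a point cloud with its prediction on an augmented view — can be sketched as a loss function. This is a minimal NumPy illustration, not the paper's implementation: the augmentation (z-axis rotation plus jitter) and the cross-entropy form of the consistency term are assumptions for the sketch, and `augment` and `consistency_loss` are hypothetical names.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def augment(points, jitter=0.01, seed=0):
    """Hypothetical augmentation: random rotation about the up (z) axis
    plus small Gaussian jitter. `points` has shape (N, 3)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T + rng.normal(scale=jitter, size=points.shape)

def consistency_loss(p_clean, logits_aug):
    """Cross-entropy between the (detached) class distribution predicted
    on the clean cloud and the prediction on its augmented view.
    Minimizing this pushes the two predictions to agree."""
    q = softmax(logits_aug)
    return float(-(p_clean * np.log(q + 1e-12)).sum(axis=-1).mean())
```

In practice the clean-view prediction would come from a frozen or EMA copy of the classifier so that gradients flow only through the augmented branch; here it is simply a fixed probability array.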