Journal Article

GPT-COPE: A Graph-Guided Point Transformer for Category-Level Object Pose Estimation
Document Type
Periodical
Source
IEEE Transactions on Circuits and Systems for Video Technology, 34(4):2385-2398, Apr. 2024
Subject
Components, Circuits, Devices and Systems
Communication, Networking and Broadcast Technologies
Computing and Processing
Signal Processing and Analysis
Shape
Point cloud compression
Pose estimation
Three-dimensional displays
Feature extraction
Solid modeling
Kernel
Object pose estimation
shape reconstruction
3D graph convolution
vision transformer
Language
English
ISSN
1051-8215 (print)
1558-2205 (electronic)
Abstract
Category-level object pose estimation aims to predict the 6D pose and 3D metric size of objects from given categories. Due to significant intra-class shape variations among different instances, existing methods have mainly focused on estimating dense correspondences between observed point clouds and their canonical representations, i.e., the normalized object coordinate space (NOCS). Subsequently, a similarity transformation is applied to recover the object pose and size. Despite these efforts, current approaches still cannot fully exploit the geometric features intrinsic to individual instances, thus limiting their ability to handle objects with complex structures (e.g., cameras). To overcome this issue, this paper introduces GPT-COPE, which leverages a graph-guided point transformer to explore distinctive geometric features from the observed point cloud. Specifically, our GPT-COPE employs a Graph-Guided Attention Encoder to extract multiscale geometric features in a local-to-global manner and utilizes an Iterative Non-Parametric Decoder to aggregate the multiscale geometric features from finer scales to coarser scales without learnable parameters. After obtaining the aggregated geometric features, the object NOCS coordinates and shape are regressed through a shape prior adaptation mechanism, and the object pose and size are obtained using the Umeyama algorithm. The multiscale network design enables the network to perceive the overall shape and structural information of the object, which is beneficial for handling objects with complex structures. Experimental results on the NOCS-REAL and NOCS-CAMERA datasets demonstrate that our GPT-COPE achieves state-of-the-art performance and significantly outperforms existing methods. Furthermore, our GPT-COPE shows superior generalization ability compared to existing methods on the large-scale in-the-wild dataset Wild6D and achieves better performance on the REDWOOD75 dataset, which involves objects with unconstrained orientations.
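The final step named in the abstract, recovering pose and size from predicted NOCS coordinates, is the standard Umeyama least-squares similarity-transform fit. Below is a minimal NumPy sketch of that fit; the function name and array conventions are illustrative and not taken from the paper's code.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Least-squares similarity transform (s, R, t) with dst ~ s * R @ src.T + t.

    src: (N, 3) predicted NOCS coordinates (canonical space)
    dst: (N, 3) corresponding observed points (camera frame)
    """
    assert src.shape == dst.shape and src.shape[1] == 3
    n = src.shape[0]
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst

    # Cross-covariance between the centered point sets.
    cov = dst_c.T @ src_c / n
    U, d, Vt = np.linalg.svd(cov)

    # Reflection guard: force a proper rotation (det R = +1).
    sign = np.ones(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        sign[-1] = -1.0

    R = U @ np.diag(sign) @ Vt
    var_src = (src_c ** 2).sum() / n
    s = (d * sign).sum() / var_src          # isotropic scale
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

With (s, R, t) recovered, R and t give the 6D pose; in NOCS-style pipelines the 3D metric size is then typically obtained by scaling the extents of the predicted canonical shape by s.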