Academic Journal

Unseen Object Instance Segmentation for Robotic Environments
Document Type
Periodical
Source
IEEE Transactions on Robotics, 37(5):1343-1359, Oct. 2021
Subject
Robotics and Control Systems
Computing and Processing
Components, Circuits, Devices and Systems
Three-dimensional displays
Image segmentation
Semantics
Two dimensional displays
Robots
Training
Noise measurement
Robot perception
sim-to-real
unseen object instance segmentation
Language
English
ISSN
1552-3098
1941-0468
Abstract
In order to function in unstructured environments, robots need the ability to recognize unseen objects. We take a step in this direction by tackling the problem of segmenting unseen object instances in tabletop environments. However, the type of large-scale real-world dataset required for this task typically does not exist for most robotic settings, which motivates the use of synthetic data. Our proposed method, unseen object instance segmentation (UOIS)-Net, separately leverages synthetic RGB and synthetic depth for unseen object instance segmentation. UOIS-Net is composed of two stages: first, it operates only on depth to produce object instance center votes in 2D or 3D and assembles them into rough initial masks. Second, these initial masks are refined using RGB. Surprisingly, our framework is able to learn from synthetic RGB-D data where the RGB is nonphotorealistic. To train our method, we introduce a large-scale synthetic dataset of random objects on tabletops. We show that our method can produce sharp and accurate segmentation masks, outperforming state-of-the-art methods on unseen object instance segmentation. We also show that our method can segment unseen objects for robot grasping.