Academic Paper

SG-Grasp: Semantic Segmentation Guided Robotic Grasp Oriented to Weakly Textured Objects Based on Visual Perception Sensors
Document Type
Periodical
Source
IEEE Sensors Journal, 23(22):28430-28441, Nov. 2023
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Robotics and Control Systems
Keywords
Robots
Grasping
Semantic segmentation
Robot sensing systems
Sensors
Service robots
Feature extraction
Red-green-blue and depth (RGB-D) sensors
robotic grasp
robotic operation
weakly textured objects
Language
English
ISSN
Print: 1530-437X
Electronic: 1558-1748
CD: 2379-9153
Abstract
Weakly textured objects are frequently manipulated by industrial and domestic robots, and the two most common types are transparent and reflective objects; however, their unique visual properties present challenges even for advanced grasp detection algorithms. Many existing algorithms rely heavily on depth information, which ordinary red-green-blue and depth (RGB-D) sensors cannot provide accurately for transparent and reflective objects. To overcome this limitation, we propose an innovative solution that uses semantic segmentation to effectively segment weakly textured objects and guide grasp detection. By using only red-green-blue (RGB) images from RGB-D sensors, our segmentation algorithm (RTSegNet) achieves state-of-the-art performance on the newly proposed TROSD dataset. Importantly, our method enables robots to grasp transparent and reflective objects without requiring retraining of the grasp detection network (which is trained solely on the Cornell dataset). Real-world robot experiments demonstrate the robustness of our approach in grasping commonly encountered weakly textured objects; furthermore, results obtained from various datasets validate the effectiveness and robustness of our segmentation algorithm. Code and video are available at: https://github.com/meiguiz/SG-Grasp.
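The abstract describes a two-stage pipeline: an RGB-only segmentation network (RTSegNet) isolates the transparent or reflective object, and the resulting mask guides a grasp detector trained only on the Cornell dataset. The minimal Python sketch below illustrates that control flow under stated assumptions; segment_rgb, detect_grasps, and sg_grasp are hypothetical placeholders standing in for the paper's networks, not the authors' actual API.

# Hypothetical sketch of the SG-Grasp control flow from the abstract:
# an RGB-only segmentation yields a mask of the weakly textured object,
# and that mask filters the candidates from a grasp detector.
# All functions are illustrative stand-ins, not the authors' code.

import numpy as np

def segment_rgb(rgb: np.ndarray) -> np.ndarray:
    """Placeholder for RTSegNet inference: returns a boolean mask
    marking object pixels (here, a toy brightness threshold)."""
    gray = rgb.mean(axis=2)
    return gray > 0.8  # stand-in for a learned segmentation

def detect_grasps(rgb: np.ndarray) -> list[tuple[int, int, float]]:
    """Placeholder grasp detector: returns (row, col, score) candidates
    over the whole image, as a Cornell-trained network might."""
    rng = np.random.default_rng(0)
    h, w, _ = rgb.shape
    return [(int(rng.integers(h)), int(rng.integers(w)), float(rng.random()))
            for _ in range(50)]

def sg_grasp(rgb: np.ndarray) -> tuple[int, int, float] | None:
    """Keep only grasp candidates whose center lies inside the
    segmentation mask, then pick the highest-scoring one."""
    mask = segment_rgb(rgb)
    candidates = [g for g in detect_grasps(rgb) if mask[g[0], g[1]]]
    return max(candidates, key=lambda g: g[2], default=None)

if __name__ == "__main__":
    image = np.random.default_rng(1).random((480, 640, 3))
    print(sg_grasp(image))

The key design point the sketch captures is that segmentation, not depth, supplies the object prior: because the mask comes from RGB alone, the unreliable depth readings on transparent and reflective surfaces never enter the grasp-selection step.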