Academic Paper

Robotic Understanding of Spatial Relationships Using Neural-Logic Learning
Document Type
Conference
Source
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8358-8365, Oct. 2020
Subject
Robotics and Control Systems
Training
Grounding
Neural networks
Robot sensing systems
Feature extraction
Spatial databases
Robots
spatial constraints
neural-logic learning
logic rules
cognitive human-robot interaction
deep learning in robotics and automation
Language
ISSN
2153-0866
Abstract
Understanding spatial relations among objects is critical in many robotic applications such as grasping, manipulation, and obstacle avoidance. Humans can readily reason about objects' spatial relations from a glimpse of a scene based on prior knowledge of spatial constraints. This paper proposes a neural-logic learning framework that enables a robot to learn and reason about spatial relations among objects from raw RGB-D data by following logic rules on spatial constraints. The neural-logic network consists of three blocks: a grounding block, a spatial logic block, and an inference block. The grounding block extracts high-level features from the raw sensory data. The spatial logic block predicates fundamental spatial relations by training a neural network with spatial constraints. The inference block infers complex spatial relations from the predicated fundamental relations. Simulations and robotic experiments evaluate the performance of the proposed method.
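The three-block pipeline described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the centroid/extent features, the predicate names (`above`, `contact`, `on`), and the contact threshold are hypothetical stand-ins, and the neural feature extractor is replaced by simple geometry so the example stays self-contained.

```python
# Hypothetical sketch of a neural-logic pipeline with grounding,
# spatial logic, and inference blocks (not the paper's actual code).

def grounding_block(rgbd_object):
    """Extract high-level features from raw data.
    Here a placeholder: 3D centroid and bounding-box extent."""
    return {"centroid": rgbd_object["centroid"], "extent": rgbd_object["extent"]}

def spatial_logic_block(feat_a, feat_b):
    """Predicate fundamental spatial relations as soft truth values in [0, 1]."""
    dz = feat_a["centroid"][2] - feat_b["centroid"][2]
    # Vertical gap between the two bounding boxes.
    gap = abs(dz) - 0.5 * (feat_a["extent"][2] + feat_b["extent"][2])
    above = 1.0 if dz > 0 else 0.0
    contact = max(0.0, 1.0 - max(gap, 0.0) / 0.05)  # soft contact within 5 cm
    return {"above": above, "contact": contact}

def inference_block(predicates):
    """Infer a complex relation via a logic rule:
    on(A, B) <- above(A, B) AND contact(A, B),
    with conjunction realized by the product t-norm."""
    return predicates["above"] * predicates["contact"]

# Toy scene: a cup resting on a table.
cup = {"centroid": (0.0, 0.0, 0.80), "extent": (0.08, 0.08, 0.10)}
table = {"centroid": (0.0, 0.0, 0.70), "extent": (1.00, 0.60, 0.05)}

on_score = inference_block(
    spatial_logic_block(grounding_block(cup), grounding_block(table))
)
print(f"on(cup, table) = {on_score:.2f}")
```

In the paper's framework the soft predicates would come from a trained neural network rather than hand-coded geometry, but the composition step (fuzzy conjunction of fundamental predicates under a logic rule) follows the same pattern.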