Academic Paper

Autonomous Exploration Under Uncertainty via Deep Reinforcement Learning on Graphs
Document Type
Conference
Source
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6140-6147, Oct. 2020
Subject
Robotics and Control Systems
Location awareness
Uncertainty
Supervised learning
Reinforcement learning
Robot sensing systems
Real-time systems
Sensors
Language
English
ISSN
2153-0866
Abstract
We consider an autonomous exploration problem in which a range-sensing mobile robot is tasked with accurately mapping the landmarks in an a priori unknown environment efficiently in real-time; it must choose sensing actions that both curb localization uncertainty and achieve information gain. For this problem, belief space planning methods that forward-simulate robot sensing and estimation may often fail in real-time implementation, scaling poorly with increasing size of the state, belief and action spaces. We propose a novel approach that uses graph neural networks (GNNs) in conjunction with deep reinforcement learning (DRL), enabling decision-making over graphs containing exploration information to predict a robot's optimal sensing action in belief space. The policy, which is trained in different random environments without human intervention, offers a real-time, scalable decision-making process whose high-performance exploratory sensing actions yield accurate maps and high rates of information gain.
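The core idea in the abstract, scoring candidate sensing actions by running a GNN over an exploration graph, can be illustrated with a minimal sketch. This is not the authors' implementation: the node features (here, notional information gain and localization uncertainty per candidate), the mean-aggregation message passing, and the scalar weights are all illustrative assumptions; in the paper such a policy is trained end-to-end with deep RL.

```python
import math

def gnn_action_scores(features, adjacency, w_self, w_neigh):
    """One round of mean-aggregation message passing over the exploration
    graph, followed by a tanh nonlinearity and a linear readout.
    Returns one scalar score per node (candidate sensing action)."""
    n = len(features)
    scores = []
    for i in range(n):
        neigh = [j for j in range(n) if adjacency[i][j]]
        if neigh:
            # mean of neighbour feature vectors
            agg = [sum(features[j][k] for j in neigh) / len(neigh)
                   for k in range(len(features[i]))]
        else:
            agg = [0.0] * len(features[i])
        # combine self and aggregated neighbour features
        hidden = [math.tanh(w_self * features[i][k] + w_neigh * agg[k])
                  for k in range(len(features[i]))]
        scores.append(sum(hidden))  # linear readout: sum of hidden units
    return scores

# Toy exploration graph of 3 candidate actions; each node carries
# illustrative 2-d features (expected info gain, pose confidence).
features = [[0.9, 0.9], [0.1, 0.8], [0.5, 0.5]]
adjacency = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]

scores = gnn_action_scores(features, adjacency, w_self=1.0, w_neigh=0.5)
best = max(range(len(scores)), key=lambda i: scores[i])
```

A trained policy would replace the hand-set scalar weights with learned weight matrices and pick `best` greedily (or sample from a softmax over `scores`) as the next sensing action.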