Academic Article

Imitation of path-tracking behavior by end-to-end learning of vision and action - Investigation of a method to collect datasets and train them offline - / 視覚と行動のend-to-end学習による経路追従行動の模倣 ―データセットを収集してオフラインで訓練する手法の検討―
Document Type
Journal Article
Source
The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec). 2023, :2-G07
Subject
End-to-end learning
Navigation
Offline
Language
Japanese
ISSN
2424-3124
Abstract
We investigate a method for learning vision-based path-following behavior offline from pre-collected images and actions. Our previous method learned such behavior online; its key feature is that it imitates the path-following behavior generated by LiDAR-based self-localization, using camera images as the input to the learned policy. However, this online imitation learning required a long training time. We therefore attempt to shorten the training time through offline learning. Furthermore, we clarify how much visual information around the path is required to apply the method to a real robot. Experiments verified that the proposed approach shortens the training time and clarified the visual information that is required.
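The abstract describes an offline behavioral-cloning setup: camera images paired with the actions produced by a LiDAR-based path follower are collected in advance, and a vision-based policy is then trained on that fixed dataset. The sketch below is a minimal, hypothetical illustration of such a training loop; the class names, network architecture, dataset format, and hyperparameters are assumptions, not details taken from the paper.

```python
# Minimal sketch of offline imitation (behavioral cloning) from pre-collected
# image/action pairs. All names, shapes, and hyperparameters are illustrative
# assumptions; the paper's actual network and dataset are not specified here.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class ImageActionDataset(Dataset):
    """Pre-collected (camera image, action) pairs, where the actions are the
    commands produced by a LiDAR-based path follower during data collection."""
    def __init__(self, images, actions):
        self.images = images    # tensor [N, 3, 64, 64], normalized camera frames
        self.actions = actions  # tensor [N, 1], e.g. angular-velocity commands

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.actions[idx]

class VisionPolicy(nn.Module):
    """Small CNN mapping a camera image to a steering command."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 13 * 13, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_offline(dataset, epochs=20, lr=1e-3):
    """Supervised regression of the vision policy onto the recorded actions."""
    policy = VisionPolicy()
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, actions in loader:
            optimizer.zero_grad()
            loss = loss_fn(policy(images), actions)  # imitate LiDAR-based actions
            loss.backward()
            optimizer.step()
    return policy
```

Because the dataset is fixed, a loop like this can run for many epochs without operating the robot, which is consistent with the abstract's claim that offline learning shortens the training time compared with the earlier online approach.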
