Academic paper

An Implementation of Vision Based Deep Reinforcement Learning for Humanoid Robot Locomotion
Document Type
Conference
Source
2019 IEEE International Symposium on INnovations in Intelligent SysTems and Applications (INISTA), pp. 1-5, Jul. 2019
Subject
Bioengineering
Computing and Processing
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Humanoid robots
Reinforcement learning
Training
Cameras
Robot vision systems
Deep reinforcement learning
humanoid robots
locomotion skills
control
Language
Abstract
Deep reinforcement learning (DRL) is a promising approach for controlling humanoid robot locomotion. However, readings from sensors such as an IMU, gyroscope, and GPS alone are not sufficient for robots to learn locomotion skills. In this article, we aim to demonstrate the success of vision-based DRL. We propose, for the first time, a new vision-based deep reinforcement learning algorithm for the locomotion of the Robotis-op2 humanoid robot. In the experimental setup, we construct the locomotion task for the humanoid robot in a specific environment in the Webots software. We use Double Dueling Q Networks (D3QN) and Deep Q Networks (DQN), both of which are deep reinforcement learning algorithms. We present the performance of the vision-based DRL algorithms on a locomotion experiment. The experimental results show that D3QN outperforms DQN in terms of stable locomotion and training speed, and that vision-based DRL algorithms can be successfully applied to other complex environments and applications.
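As background for the abstract above: D3QN combines two standard refinements of DQN, the dueling architecture (which splits the Q-function into a state value and per-action advantages) and the double-DQN target (which uses the online network to select the next action and the target network to evaluate it). The following is a minimal, hypothetical sketch of those two pieces in plain Python; it is not the authors' implementation, and the function names are illustrative only.

```python
# Hypothetical sketch of the two components of D3QN (not the paper's code):
# dueling aggregation and the double-DQN bootstrap target.

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    Subtracting the mean advantage keeps V and A identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN target: the online network chooses the next action,
    the target network evaluates it, reducing overestimation bias."""
    if done:
        return reward
    best_action = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best_action]
```

In a vision-based setting such as the one the paper describes, `value` and `advantages` would come from the heads of a convolutional network fed with camera images; the aggregation and target computation are unchanged.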