Academic Paper

Learning assistive strategies from a few user-robot interactions: Model-based reinforcement learning approach
Document Type
Conference
Source
2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 3346-3351, May 2016
Subject
Computing and Processing
Robotics and Control Systems
Signal Processing and Analysis
Robot kinematics
Learning (artificial intelligence)
Electromyography
Data models
Robot control
Uncertainty
Language
Abstract
Designing an assistive strategy for exoskeletons is a key ingredient in movement assistance and rehabilitation. While several approaches have been explored, most studies are based on mechanical models of the human user, i.e., rigid-body dynamics or the Center of Mass (CoM)-Zero Moment Point (ZMP) inverted pendulum model, or focus only on periodic movements using oscillator models. Moreover, the interactions between the user and the robot are often not considered explicitly because they are difficult to model. In this paper, we propose to learn assistive strategies directly from interactions between the user and the robot. We formulate the learning of assistive strategies as a policy search problem. To alleviate the heavy burden of data acquisition on the user, we exploit a data-efficient model-based reinforcement learning framework. To validate the effectiveness of our approach, we developed an experimental platform composed of a real subject, an electromyography (EMG) measurement system, and a simulated robot arm, and conducted a learning experiment on an assistive control task for the robot arm. As a result, proper assistive strategies that achieve the robot control task and reduce the user's EMG signals are acquired from only 30 seconds of interaction.
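The core idea in the abstract, learning a dynamics model from a few interactions and then running policy search on that model instead of on the user, can be illustrated with a toy sketch. This is not the paper's actual algorithm (which involves EMG signals and a robot arm); it is a minimal, assumed example on a 1-D linear system: collect 30 transitions, fit the model by least squares, and search for a feedback gain that minimizes cost under the learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

# True (unknown) 1-D dynamics x' = a*x + b*u; used only to generate data,
# standing in for the real user-robot interaction.
a_true, b_true = 0.9, 0.5

def step(x, u):
    return a_true * x + b_true * u

# 1) Collect a small batch of interactions with random exploratory actions.
X, U, Xn = [], [], []
x = 1.0
for _ in range(30):  # "a few interactions": 30 transitions
    u = rng.uniform(-1.0, 1.0)
    xn = step(x, u)
    X.append(x); U.append(u); Xn.append(xn)
    x = xn

# 2) Fit a dynamics model from the data (least squares on [x, u]).
A = np.column_stack([X, U])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, np.array(Xn), rcond=None)

# 3) Policy search on the learned model: choose gain k for u = -k*x
#    minimizing the predicted quadratic cost over a simulated rollout.
def model_cost(k, x0=1.0, horizon=20):
    x, cost = x0, 0.0
    for _ in range(horizon):
        x = a_hat * x + b_hat * (-k * x)
        cost += x * x
    return cost

gains = np.linspace(0.0, 2.0, 201)
k_best = gains[np.argmin([model_cost(k) for k in gains])]
print(f"learned model: a={a_hat:.2f}, b={b_hat:.2f}, best gain k={k_best:.2f}")
```

Because the policy is evaluated on the learned model, step 3 costs the user nothing; only step 1 requires real interaction time, which is what makes the model-based formulation data-efficient.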