Academic Article

Hardware-in-the-Loop Soft Robotic Testing Framework Using an Actor-Critic Deep Reinforcement Learning Algorithm
Document Type
Periodical
Source
IEEE Robotics and Automation Letters, 8(9):6076-6082, Sep. 2023
Subject
Robotics and Control Systems
Computing and Processing
Components, Circuits, Devices and Systems
Soft robotics
Robots
Robot sensing systems
Sensors
Actuators
Pneumatic systems
Flexible printed circuits
AI-enabled robotics
hardware-software integration in robotics
machine learning for robot control
modeling, control, and learning for soft robots
soft robot materials and design
reinforcement learning
Language
English
ISSN
2377-3766
2377-3774
Abstract
Polymer-based soft robots are difficult to characterize due to their non-linear behavior. This difficulty is compounded by multiple additional degrees of freedom of movement, which add complexity to any proposed control strategy. This work proposes and demonstrates a modular framework for testing, debugging, and characterizing soft robots using the Robot Operating System (ROS), enabling model-free deep reinforcement learning control strategies through hardware-in-the-loop system training. The framework is demonstrated using an actor-critic algorithm to learn a locomotion policy for a two-actuator pneu-net soft robot with integrated resistive flex sensors. Convergent locomotion studies showed an 89.5% increase in the likelihood of reaching the end-of-frame design goal versus random oracle actuation vectors.
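The abstract's actor-critic approach can be illustrated with a minimal sketch. This is not the paper's implementation (which trains a deep network on hardware-in-the-loop sensor data via ROS); it is a tabular one-step actor-critic on a hypothetical toy chain environment, showing how a shared TD error drives both the critic (value) and actor (policy) updates. All environment details, hyperparameters, and names here are illustrative assumptions.

```python
import math
import random

# Toy stand-in for the robot environment: a 5-state chain where the
# agent moves left/right and is rewarded for reaching the final state
# (loosely analogous to the paper's end-of-frame locomotion goal).
N_STATES = 5
ACTIONS = (-1, +1)          # move left, move right
GOAL = N_STATES - 1

def softmax(prefs):
    """Convert action preferences into a probability distribution."""
    m = max(prefs)
    e = [math.exp(p - m) for p in prefs]
    s = sum(e)
    return [x / s for x in e]

def train(episodes=500, alpha_pi=0.1, alpha_v=0.2, gamma=0.95, seed=0):
    rng = random.Random(seed)
    theta = [[0.0, 0.0] for _ in range(N_STATES)]  # actor: preferences
    value = [0.0] * N_STATES                       # critic: state values
    for _ in range(episodes):
        s = 0
        for _ in range(50):                        # step cap per episode
            probs = softmax(theta[s])
            a = 0 if rng.random() < probs[0] else 1
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0
            done = s2 == GOAL
            # One-step TD error; it drives BOTH updates below.
            td = r + (0.0 if done else gamma * value[s2]) - value[s]
            value[s] += alpha_v * td               # critic update
            for i in range(2):                     # actor (policy-gradient)
                grad = (1.0 if i == a else 0.0) - probs[i]
                theta[s][i] += alpha_pi * td * grad
            s = s2
            if done:
                break
    return theta, value

theta, value = train()
# After training, the policy should prefer moving right toward the goal.
```

The key structural point mirrored from the abstract's method is the split of roles: the critic estimates state value and supplies the TD error, and the actor adjusts its policy in the direction weighted by that error. In the paper this loop closes through physical pneumatic actuation and resistive flex-sensor feedback rather than a simulated transition function.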