Academic Paper

It Takes Two: Learning to Plan for Human-Robot Cooperative Carrying
Document Type
Conference
Source
2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 7526-7532, May 2023
Subject
Robotics and Control Systems
Recurrent neural networks
Navigation
Predictive models
Human in the loop
Trajectory
Planning
Noise measurement
Language
English
Abstract
Cooperative table-carrying is a complex task due to the continuous nature of the action and state spaces, the multimodality of strategies, and the need for instantaneous adaptation to other agents. In this work, we present a method for predicting realistic motion plans for cooperative human-robot teams on this task. Using a Variational Recurrent Neural Network (VRNN) to model the variation in the trajectory of a human-robot team across time, we are able to capture the distribution over the team's future states while leveraging information from interaction history. The key to our approach is leveraging human demonstration data to generate trajectories that synergize well with humans at test time in a receding-horizon fashion. A comparison between a baseline sampling-based planner, RRT (Rapidly-exploring Random Trees), and the VRNN planner in centralized planning shows that the VRNN generates motion more similar to the distribution of human-human demonstrations than the RRT does. Results from a human-in-the-loop user study show that the VRNN planner outperforms decentralized RRT on task-related metrics and is significantly more likely to be perceived as human than the RRT planner. Finally, we demonstrate the VRNN planner on a real robot paired with a human teleoperating another robot.
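To illustrate the receding-horizon planning loop the abstract describes, the minimal sketch below samples candidate future trajectories from a model, scores them, and executes only the first waypoint before replanning. Note this is a hypothetical illustration, not the authors' code: the simple Gaussian sampler here merely stands in for the paper's learned VRNN (which would condition on interaction history), and all function names and parameters are assumptions.

```python
import numpy as np

def sample_rollouts(state, goal, rng, n_samples=64, horizon=8, gain=0.1):
    """Sample candidate future trajectories, shape (n_samples, horizon, 2).

    Stand-in for the paper's VRNN: a goal-directed mean step plus Gaussian
    noise gives a distribution over future team states.
    """
    mean_step = gain * (goal - state)                      # move a fraction toward the goal
    noise = 0.02 * rng.standard_normal((n_samples, horizon, 2))
    deltas = mean_step + noise                             # per-step displacements
    return state + np.cumsum(deltas, axis=1)               # integrate into waypoints

def plan_step(state, goal, rng):
    """One receding-horizon iteration: sample, score, execute first waypoint."""
    rollouts = sample_rollouts(state, goal, rng)
    costs = np.linalg.norm(rollouts[:, -1] - goal, axis=1)  # distance-to-goal at horizon end
    best = rollouts[np.argmin(costs)]
    return best[0]  # only the first waypoint is executed; then we replan

# Toy usage: a single agent navigating to a goal by replanning every step.
rng = np.random.default_rng(0)
state, goal = np.zeros(2), np.array([1.0, 1.0])
for _ in range(30):
    state = plan_step(state, goal, rng)
```

In the paper's setting, the sampler would be the VRNN's decoder and the executed waypoint would drive the robot's side of the carried table; the replan-every-step structure is what allows instantaneous adaptation to the human partner.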