Journal Article

Reverse Control for Humanoid Robot Task Recognition
Document Type
Periodical
Source
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42(6):1524-1537, Dec. 2012
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
General Topics for Engineers
Robotics and Control Systems
Power, Energy and Industry Applications
Humanoid robots
Robot control
Null space
Grasping
Reverse engineering
Humanoid robot
inverse kinematics
task-function formalism
task recognition
Language
English
ISSN
1083-4419
1941-0492
Abstract
Efficient methods to perform motion recognition have been developed using statistical tools. These methods rely on primitive learning in a suitable space, for example, the latent space of the joint angles and/or adequate task spaces. Learned primitives are often sequential: a motion is segmented along the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition therefore cannot be limited to one task per consecutive segment of time. The method presented in this paper exploits the knowledge of which tasks the robot is able to perform and of how the motion is generated from this set of known controllers to reverse engineer an observed motion. This analysis is intended to recognize the parallel tasks that were used to generate a motion. The method relies on the task-function formalism and on projection into the null space of a task to decouple the controllers. The approach is successfully applied to a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
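For readers unfamiliar with the null-space projection mentioned in the abstract, the sketch below illustrates the generic prioritized task-space control idea it builds on, not the paper's specific recognition algorithm: a secondary task command is projected into the null space of the primary task's Jacobian so that it cannot disturb the primary task. The function names and the damped pseudo-inverse are illustrative assumptions.

```python
import numpy as np

def damped_pinv(J, damping=1e-6):
    """Damped pseudo-inverse, robust near kinematic singularities."""
    return J.T @ np.linalg.inv(J @ J.T + damping * np.eye(J.shape[0]))

def prioritized_velocity(J1, dx1, J2, dx2):
    """Combine two task-space velocity commands with strict priority.

    The secondary command (J2, dx2) is projected into the null space of
    the primary task Jacobian J1, so it can only use the redundant
    degrees of freedom left over by the primary task.
    """
    n = J1.shape[1]                      # number of joints
    J1_pinv = damped_pinv(J1)
    N1 = np.eye(n) - J1_pinv @ J1        # null-space projector of task 1
    dq = J1_pinv @ dx1 + N1 @ damped_pinv(J2) @ dx2
    return dq

# Toy example: a 4-DOF arm, a 2-D primary task and a 1-D secondary task.
rng = np.random.default_rng(0)
J1 = rng.standard_normal((2, 4))
J2 = rng.standard_normal((1, 4))
dq = prioritized_velocity(J1, np.array([0.1, 0.0]), J2, np.array([0.05]))
print(J1 @ dq)  # reproduces the primary command (up to damping error)
```

Because the two commands are decoupled by the projector, an observed joint motion can in principle be decomposed against a known set of such task controllers, which is the kind of reverse engineering the paper addresses.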