Journal Article

Homography-Based Visual Servoing of Eye-in-Hand Robots With Exact Depth Estimation
Document Type
Periodical
Source
IEEE Transactions on Industrial Electronics, 71(4):3832-3841, Apr. 2024
Subject
Power, Energy and Industry Applications
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Robots
Cameras
Visual servoing
End effectors
Estimation
Convergence
Visualization
Collaborative robot
composite learning
parameter convergence
unknown feature position
Language
English
ISSN
0278-0046
1557-9948
Abstract
Visual servoing can effectively control robots using visual feedback, improving their intelligence and reliability. For a feature point detected by a monocular camera, the time-varying depth, which appears nonlinearly in the Jacobian matrix, is difficult to measure without prior geometric knowledge of the observed object. The depth of the feature point is therefore one of the major uncertain parameters in visual servoing. Considering unknown Cartesian feature positions, this article presents a robot-dynamics-based homography-based visual servoing (HBVS) controller for the 3-D pose regulation of eye-in-hand robot arms with monocular cameras. The uncertain depth is expressed as a linear form in the Cartesian feature position, and a composite learning law is applied to estimate the position parameters accurately, resulting in exact depth estimation. Compared with existing adaptive HBVS methods, the distinctive feature of the proposed method is that it is a dynamics-based design that guarantees exact depth estimation under a condition termed interval excitation, which is much weaker than persistent excitation. Simulations and experiments on Franka Emika Panda, a collaborative robot with seven degrees of freedom, have verified the effectiveness of the proposed method.
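Because the unknown depth is linear in the Cartesian feature position, the estimation problem becomes linearly parameterized, which is what allows a composite learning law to recover the parameters under interval excitation. The Python sketch below is only an illustrative, simplified instance of such a composite learning update, not the paper's exact HBVS formulation: the regressor, signals, and gains are assumptions chosen for the example. The estimate is driven both by the instantaneous prediction error and by data integrated over a finite exciting interval, so it converges exactly even though the excitation is not persistent.

```python
import numpy as np

# Illustrative composite-learning parameter update under interval excitation (IE).
# theta plays the role of the unknown Cartesian feature position that determines
# the depth; y = phi(t) @ theta is a measurable signal that is linear in theta.
# All signals and gains below are assumed for the sketch, not taken from the paper.

theta_true = np.array([0.3, -0.2, 1.5])   # unknown parameters (e.g., feature position)
theta_hat = np.zeros(3)                   # initial estimate

dt, T = 1e-3, 5.0
gamma, kappa = 5.0, 20.0                  # adaptation gains (assumed values)
W = np.zeros((3, 3))                      # excitation memory: integral of phi phi^T dt
b = np.zeros(3)                           # excitation memory: integral of phi * y dt

for k in range(int(T / dt)):
    t = k * dt
    # Regressor that is exciting only over a finite interval (IE rather than PE)
    if t < 1.0:
        phi = np.array([np.sin(5 * t), np.cos(3 * t), 1.0])
    else:
        phi = np.array([0.0, 0.0, 1.0])
    y = phi @ theta_true                  # measurable signal, linear in theta

    # Store interval-excitation data
    W += np.outer(phi, phi) * dt
    b += phi * y * dt

    # Composite learning law: instantaneous prediction error + stored-data term
    stored_err = W @ theta_hat - b        # equals W @ (theta_hat - theta_true)
    theta_hat = theta_hat + gamma * (phi * (y - phi @ theta_hat) - kappa * stored_err) * dt

print("parameter estimation error:", np.linalg.norm(theta_hat - theta_true))
```

In this sketch, once the stored matrix W becomes positive definite (which only requires excitation over some finite interval), the term W @ theta_hat - b is proportional to the parameter error, so the estimate keeps converging exactly even after the excitation stops; persistent excitation is not required.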