Academic Paper

Deep Direct Visual Servoing of Tendon-Driven Continuum Robots
Document Type
Conference
Source
2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), pp. 1977-1984, Aug. 2022
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Power, Energy and Industry Applications
Robotics and Control Systems
Deep learning
Limiting
Software algorithms
Pose estimation
Robot sensing systems
Feature extraction
Visual servoing
Language
English
ISSN
2161-8089
Abstract
Vision-based control offers significant potential for end-point positioning of continuum robots under physical sensing limitations. Traditional visual servoing requires feature extraction and tracking followed by full or partial pose estimation, limiting the controller's efficiency. We hypothesize that employing deep learning models and implementing direct visual servoing can effectively resolve this issue by eliminating such intermediate steps, enabling control of a continuum robot without requiring an exact system model. This paper presents the control of a single-section tendon-driven continuum robot using a modified VGG-16 deep learning network and an eye-in-hand direct visual servoing approach. The proposed algorithm is first developed in the Blender simulation environment using only one input image of the target and then implemented on a real robot. The convergence and accuracy of the results in normal, shadowed, and occluded scenes demonstrate the effectiveness and robustness of the proposed controller.
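The core idea of direct visual servoing described in the abstract can be illustrated with a minimal sketch: instead of extracting and tracking geometric features, a control error is computed directly between feature vectors of the current and target images, and the actuation (tendon displacements) is updated through a pseudo-inverse of a numerically estimated feature Jacobian. Everything below is a hypothetical toy, not the paper's implementation: `features` stands in for the modified VGG-16 extractor, and `render` replaces the camera/robot with a simple synthetic linear scene.

```python
import numpy as np

def features(image):
    # Stand-in for the deep feature extractor (a modified VGG-16 in the paper).
    return np.asarray(image).flatten()

def render(q):
    # Stand-in for the eye-in-hand camera image as a function of tendon
    # displacements q; a synthetic linear scene, purely illustrative.
    A = np.array([[1.0, 0.2],
                  [0.1, 0.9],
                  [0.3, 0.4]])
    return A @ q

def dvs_step(q, target_feat, gain=0.5, eps=1e-4):
    """One direct-visual-servoing update:
    e = f(I(q)) - f(I*);  q <- q - gain * pinv(J) @ e,
    with the feature Jacobian J estimated by finite differences."""
    e = features(render(q)) - target_feat
    J = np.zeros((e.size, q.size))
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        J[:, i] = (features(render(q + dq)) - features(render(q))) / eps
    return q - gain * np.linalg.pinv(J) @ e

# Servo from the home configuration toward a target configuration
# using only the target image's features (one reference image, as in the paper).
q_target = np.array([0.3, -0.2])
target_feat = features(render(q_target))
q = np.zeros(2)
for _ in range(50):
    q = dvs_step(q, target_feat)
```

In this linear toy the feature error contracts geometrically with the gain, so `q` converges to `q_target`; with a deep feature map and a real robot, convergence instead depends on the learned features and the local validity of the estimated Jacobian.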