Academic Paper

Speech-Vision Based Multi-Modal AI Control of a Magnetic Anchored and Actuated Endoscope
Document Type
Conference
Source
2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 403-408, Dec. 2022
Subject
Computing and Processing
Robotics and Control Systems
Deep learning
Visualization
Target tracking
Endoscopes
Instruments
Magnetic resonance imaging
Surgery
Language
English
Abstract
In minimally invasive surgery (MIS), controlling the endoscope view is crucial for the operation. Many robotic endoscope holders have been developed to address this problem. These systems rely on a joystick, a foot pedal, simple voice commands, etc., to control the robot. Such methods require extra effort from surgeons and are not intuitive enough. In this paper, we propose a speech-vision based multi-modal AI approach that integrates deep-learning-based instrument detection, automatic speech recognition, and robot visual servo control. Surgeons can communicate with the endoscope by speech to indicate their view preference, such as the instrument to be tracked. The instrument is detected by a deep learning neural network; the endoscope then takes the detected instrument as the target and follows it with the visual servo controller. This method is applied to a magnetic anchored and guided endoscope and evaluated experimentally. Preliminary results demonstrate that this approach is effective and requires little effort from the surgeon to control the endoscope view intuitively.
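The abstract describes a pipeline in which a spoken command selects the instrument to track, a detector localizes it in the endoscope image, and a visual servo controller keeps it in view. The sketch below is a hypothetical illustration of that control loop, not the authors' implementation: the function names (recognize_command, detect_instrument, send-velocity printing) are placeholders, and the proportional image-based servo law is a standard formulation assumed here for clarity.

```python
# Hypothetical sketch of a speech-vision endoscope control loop.
# All components are placeholders standing in for the ASR model,
# the deep-learning detector, and the robot velocity interface.

from dataclasses import dataclass
from typing import Optional


@dataclass
class BoundingBox:
    cx: float  # detection centre, normalized image coordinates [0, 1]
    cy: float


def recognize_command(audio_chunk: bytes) -> str:
    """Placeholder ASR: map surgeon speech to a command string."""
    return "follow left grasper"


def detect_instrument(frame, target_label: str) -> Optional[BoundingBox]:
    """Placeholder detector: return the target instrument's bounding box, if found."""
    return BoundingBox(cx=0.62, cy=0.41)


def visual_servo_step(box: BoundingBox, gain: float = 0.5) -> tuple:
    """Proportional image-based servo: drive the detection centre toward the image centre."""
    error_x = box.cx - 0.5
    error_y = box.cy - 0.5
    return (-gain * error_x, -gain * error_y)  # pan/tilt velocity command


def control_loop(frame, audio_chunk: bytes) -> None:
    """One iteration: speech selects the target, vision localizes it, servoing follows it."""
    command = recognize_command(audio_chunk)
    if command.startswith("follow "):
        target = command[len("follow "):].strip()
        box = detect_instrument(frame, target)
        if box is not None:
            vx, vy = visual_servo_step(box)
            print(f"tracking '{target}': velocity command ({vx:+.3f}, {vy:+.3f})")


if __name__ == "__main__":
    control_loop(frame=None, audio_chunk=b"")
```

In this sketch the speech channel only selects the tracking target, while the per-frame motion command comes entirely from the visual error, which matches the division of roles between speech recognition, instrument detection, and visual servoing outlined in the abstract.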