Journal Article

Learning to Estimate Palpation Forces in Robotic Surgery From Visual-Inertial Data
Document Type
Periodical
Source
IEEE Transactions on Medical Robotics and Bionics, 5(3):496-506, Aug. 2023
Subject
Bioengineering
Robotics and Control Systems
Computing and Processing
Instruments
Robots
Robot sensing systems
Force
Phantoms
Force sensors
Visualization
Force estimation
indirect force sensing
robot-assisted minimally invasive surgery
visual-inertial input
deep learning
Language
English
ISSN
2576-3202
Abstract
Surgeons cannot directly touch the patient’s tissue in robot-assisted minimally invasive procedures. Instead, they must palpate using instruments inserted into the body through trocars. This way of operating largely prevents surgeons from using haptic cues to localize visually undetectable structures such as tumors and blood vessels, motivating research on direct and indirect force sensing. We propose an indirect force-sensing method that combines monocular images of the operating field with measurements from inertial measurement units (IMUs) attached externally to the instrument shafts. Our method is thus suitable for various robotic surgery systems as well as laparoscopic surgery. We collected a new dataset using a da Vinci Si robot, a force sensor, and four different phantom tissue samples. The dataset includes 230 one-minute-long recordings of repeated bimanual palpation tasks performed by four lay operators. We evaluated several network architectures and investigated the role of the network inputs. The DenseNet vision model combined with inertial data predicted palpation forces best, achieving the lowest average root-mean-square error and the highest average coefficient of determination. Ablation studies revealed that video frames carry significantly more information than inertial signals. Finally, we demonstrated the model’s ability to generalize to unseen tissue and predict shear contact forces.
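For concreteness, below is a minimal PyTorch sketch of the kind of late-fusion visual-inertial regression network and evaluation metrics the abstract describes. The class name, layer sizes, IMU channel count, and fusion scheme are illustrative assumptions, not the paper's reported architecture; only the DenseNet backbone, the visual-plus-inertial inputs, and the RMSE and coefficient-of-determination metrics come from the abstract itself.

import torch
import torch.nn as nn
import torchvision.models as models

class VisualInertialForceNet(nn.Module):
    """Sketch: regress a contact-force vector from a video frame plus IMU data."""

    def __init__(self, imu_dim: int = 12, force_dim: int = 3):
        super().__init__()
        # DenseNet backbone for the monocular view of the operating field.
        backbone = models.densenet121(weights=None)
        feat_dim = backbone.classifier.in_features  # 1024 for densenet121
        backbone.classifier = nn.Identity()         # keep pooled features only
        self.vision = backbone
        # Small MLP branch for the inertial signals (imu_dim = 12 assumes two
        # IMUs x 3-axis accelerometer + 3-axis gyroscope; a guess, not the paper's spec).
        self.inertial = nn.Sequential(
            nn.Linear(imu_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Late fusion: concatenate both embeddings and regress the force vector.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 64, 128), nn.ReLU(),
            nn.Linear(128, force_dim),
        )

    def forward(self, frame: torch.Tensor, imu: torch.Tensor) -> torch.Tensor:
        v = self.vision(frame)   # (B, 1024) pooled visual features
        i = self.inertial(imu)   # (B, 64) inertial embedding
        return self.head(torch.cat([v, i], dim=1))  # (B, force_dim)

# The two evaluation metrics named in the abstract.
def rmse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Root-mean-square error between predicted and measured forces."""
    return torch.sqrt(torch.mean((pred - target) ** 2))

def r2(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Coefficient of determination (R^2): 1 - SS_res / SS_tot."""
    ss_res = torch.sum((target - pred) ** 2)
    ss_tot = torch.sum((target - target.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

The late-fusion design (independent visual and inertial encoders joined before a regression head) is one plausible reading of "including inertial data" alongside a DenseNet vision model, and it makes the abstract's ablation natural: dropping the inertial branch isolates how much information the video frames carry on their own.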