Academic Paper

Learning Surgical Motion Pattern from Small Data in Endoscopic Sinus and Skull Base Surgeries
Document Type
Conference
Source
2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 7751-7757, May 2021
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
General Topics for Engineers
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Medical robotics
Robot kinematics
Instruments
Surgery
Kinematics
Aerospace electronics
Skull
Surgical Motion Pattern
Learn from Small Data
Supervised Autonomy
Robotic Surgery
Gaussian Process
Language
English
ISSN
2577-087X
Abstract
Existing studies have demonstrated that surgical motion patterns are strongly correlated with surgical outcomes. Real surgeries are complicated, however, and harvesting surgical data is expensive. Consequently, existing research on surgical motion patterns has focused on specific, concise surgical tasks or simple surgical procedures. This paper presents a surgical motion pattern modeling technique that uses small data yet can be applied to virtually any Endoscopic Sinus and Skull Base Surgery (ESSBS). The proposed method reduces the dimensionality of the feature space by projecting surgical instrument motions into the endoscope coordinate frame, drawing on human expert domain knowledge. It then extracts kinematic features and learns the motion pattern with Gaussian Process learning techniques. Compared with existing surgical motion pattern modeling methods, the proposed method: 1) learns the motion model from small data; 2) applies generally to ESSBSs, because it neither assumes nor depends on specific surgical tasks; and 3) provides informative results in real time for optimizing surgical motions and improving surgical outcomes. The proposed method was verified by predicting surgical skill levels in cadaver surgeries. The results show that the real-time prediction precision is higher than 81% and the offline accumulated precision reaches 100%.
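
As an illustration of the pipeline the abstract describes (project instrument motion into the endoscope coordinate frame, extract kinematic features, fit a Gaussian Process), the following Python sketch uses scikit-learn. Everything in it is an assumption made for illustration: the to_endoscope_frame and kinematic_features helpers, the position/velocity/acceleration feature choice, the RBF-plus-noise kernel, the next-position regression target, and the synthetic trajectory. The paper's actual features, kernel, and prediction targets are not specified in this record.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def to_endoscope_frame(T_world_endoscope, p_world):
    """Express a point given in the world frame in the endoscope frame.
    T_world_endoscope is a hypothetical 4x4 homogeneous endoscope pose."""
    T_endo_world = np.linalg.inv(T_world_endoscope)
    return (T_endo_world @ np.append(p_world, 1.0))[:3]

def kinematic_features(positions, dt):
    """One plausible kinematic feature set: position, velocity, and
    acceleration stacked per time step (finite differences)."""
    vel = np.gradient(positions, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    return np.hstack([positions, vel, acc])

# Synthetic instrument trajectory in the world frame (illustration only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)
p_world = np.column_stack([np.sin(t), np.cos(t), 0.1 * t])
p_world += 0.005 * rng.standard_normal(p_world.shape)

# A fixed endoscope pose: identity rotation, translated 5 cm along z.
T_world_endoscope = np.eye(4)
T_world_endoscope[2, 3] = 0.05

# Project every instrument sample into the endoscope frame, mirroring
# the abstract's dimensionality-reduction step.
p_endo = np.array([to_endoscope_frame(T_world_endoscope, p) for p in p_world])

# Regress the next position from the current kinematic state so the GP
# posterior captures the motion pattern together with its uncertainty.
X = kinematic_features(p_endo, dt=t[1] - t[0])[:-1]
y = p_endo[1:]

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gp.predict(X[:5], return_std=True)
print(mean.shape, std.shape)  # predicted next positions and their std. dev.

Because the GP posterior returns a predictive standard deviation alongside each mean, observed motion can be scored sample-by-sample against the learned pattern, which is one plausible way a real-time signal like the abstract's skill-level prediction could be produced.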