Academic Journal Article

Personalised pose estimation from single-plane moving fluoroscope images using deep convolutional neural networks.
Document Type
article
Source
PLoS ONE, Vol 17, Iss 6, p e0270596 (2022)
Subject
Medicine
Science
Language
English
ISSN
1932-6203
Abstract
Measuring joint kinematics is a key requirement for a wide range of biomechanical research and applications. While X-ray based systems avoid the soft-tissue artefacts arising in skin-based measurement systems, extracting an object's pose (translation and rotation) from X-ray images is a time-consuming and expensive task. Based on approximately 106,000 annotated images of knee implants, collected over the last decade with our moving fluoroscope during activities of daily living, we trained a deep-learning model to automatically estimate the 6D poses of the femoral and tibial implant components. By pretraining a single stage of our architecture on renderings of the implant geometries, our approach offers personalised predictions of the implant poses, even for unseen subjects. Our approach predicted the pose of both implant components to within approximately 0.75 mm (in-plane translation), 25 mm (out-of-plane translation), and 2° (all Euler-angle rotations) for 50% of the test samples. When evaluated over 90% of the test samples, which included heavy occlusions and low-contrast images, translation errors remained below 1.5 mm (in-plane) and 30 mm (out-of-plane), while rotations were predicted to within 3-4°. Importantly, this approach now allows pose estimation to be performed in a fully automated manner.
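
The abstract does not specify the network architecture, so the following is only a minimal sketch of the general technique it describes: a convolutional network regressing a 6D pose (three translations, three Euler angles) for one implant component from a single-channel fluoroscope image, with the same weights usable for pretraining on synthetic renderings and fine-tuning on annotated images. The class name ImplantPoseNet, the ResNet-18 backbone, the loss, and all hyperparameters are assumptions for illustration, not the authors' method.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class ImplantPoseNet(nn.Module):
        """Hypothetical 6D pose regressor for one implant component.

        Maps a single-channel fluoroscope image to a 6-vector:
        (tx, ty, tz) translation and (rx, ry, rz) Euler angles.
        """
        def __init__(self):
            super().__init__()
            backbone = resnet18(weights=None)
            # Fluoroscope images are single-channel; adapt the first conv layer.
            backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                       padding=3, bias=False)
            backbone.fc = nn.Identity()  # expose the 512-dim feature vector
            self.backbone = backbone
            self.head = nn.Linear(512, 6)  # 3 translations + 3 Euler angles

        def forward(self, x):
            return self.head(self.backbone(x))

    # The same loop serves both phases: first on renderings of the implant
    # geometry with known poses (pretraining), then on annotated fluoroscope
    # images (fine-tuning). Data loading is omitted.
    model = ImplantPoseNet()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()

    def train_step(images, poses):
        optimiser.zero_grad()
        loss = loss_fn(model(images), poses)
        loss.backward()
        optimiser.step()
        return loss.item()

    # Smoke test with random tensors standing in for real data.
    images = torch.randn(4, 1, 224, 224)
    poses = torch.randn(4, 6)
    print(train_step(images, poses))

In practice one such regressor (or output head) would be needed per implant component, and the paper reports markedly lower accuracy for the out-of-plane translation, which is the expected hard case for single-plane imaging.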