Academic Paper

Global Multi-modal 2D/3D Registration via Local Descriptors Learning
Document Type
Working Paper
Source
Subject
Electrical Engineering and Systems Science - Image and Video Processing
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Machine Learning
Language
English
Abstract
Multi-modal registration is a required step for many image-guided procedures, especially ultrasound-guided interventions that require anatomical context. While a number of such registration algorithms are already available, they all require a good initialization to succeed due to the challenging appearance of ultrasound images and the arbitrary coordinate system in which they are acquired. In this paper, we present a novel approach to the problem of registering an ultrasound sweep to a pre-operative image. We learn dense keypoint descriptors from which we then estimate the registration. We show that our method overcomes the challenges inherent to registration with freehand ultrasound sweeps, namely the multi-modality and multi-dimensionality of the data, the lack of precise ground truth, and the small number of training examples. We derive a registration method that is fast, generic, fully automatic, requires no initialization, and naturally generates visualizations that aid interpretability and explainability. Our approach is evaluated on a clinical dataset of paired MR volumes and ultrasound sequences.
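To make the descriptor-based pipeline concrete, the sketch below shows how a rigid transform could be recovered once keypoints from the ultrasound sweep and the MR volume have been matched via their learned descriptors. This is a minimal illustration of the generic closed-form Kabsch/Procrustes step, not the authors' implementation; the function name, array shapes, and the assumption of already-matched 3D correspondences are all hypothetical.

```python
# Minimal sketch (assumed setup, not the paper's code): estimate a rigid
# transform from 3D keypoint correspondences obtained by matching learned
# descriptors between an ultrasound sweep and a pre-operative MR volume.
import numpy as np

def estimate_rigid_transform(src_pts: np.ndarray, dst_pts: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src_pts onto dst_pts.

    src_pts, dst_pts: (N, 3) arrays of matched keypoint coordinates.
    Returns a 3x3 rotation R and a translation vector t of length 3.
    """
    src_c = src_pts.mean(axis=0)
    dst_c = dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy usage: recover a known rotation and translation from synthetic matches.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:                          # ensure det(R_true) = +1
    R_true[:, 0] *= -1
t_true = np.array([5.0, -2.0, 1.0])
R_est, t_est = estimate_rigid_transform(pts, pts @ R_true.T + t_true)
assert np.allclose(R_est, R_true, atol=1e-6) and np.allclose(t_est, t_true, atol=1e-6)
```

In practice such a closed-form fit would typically be wrapped in an outlier-robust scheme (e.g., RANSAC) because descriptor matches across modalities are rarely all correct; that robustification is likewise an assumption here, not a statement about the paper's method.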
Comment: This preprint was submitted to MICCAI 2022 and has not undergone post-submission improvements or corrections. The Version of Record of this contribution will be published in Springer LNCS