Academic Journal Article

GMCNet: A Generative Multi-Resolution Framework for Cardiac Registration
Document Type
Periodical
Source
IEEE Access, vol. 11, pp. 8185-8198, 2023
Subject
Strain
Image registration
Magnetic resonance imaging
Learning systems
Optimization
Convolutional neural networks
Biomedical imaging
Generative adversarial networks
cardiac cine MRI
deformable registration
generative network
learning-free framework
multi-resolution
Language
English
ISSN
2169-3536
Abstract
Deformable image registration plays a crucial role in estimating cardiac deformation from a sequence of images. However, existing registration methods primarily process images as pairs rather than processing all images in a sequence jointly. This study proposes a novel end-to-end learning-free generative multi-resolution convolutional neural network (GMCNet) whose primary focus is the registration of image sequences. Although learning-based methods have achieved high registration performance, that performance depends on learning from a large number of samples, which are difficult to obtain and may bias the framework toward a specific data domain. The proposed learning-free method eliminates the need for a dedicated training set while exploiting the capabilities of neural networks to achieve accurate deformation fields. Because its architecture shares parameters across the sequence, GMCNet can be used for groupwise as well as pairwise registration. The proposed method was evaluated on three different clinical cardiac magnetic resonance imaging datasets and compared quantitatively against nine state-of-the-art learning-based and optimization-based algorithms. The proposed method outperformed the other methods in all comparisons and yielded average Dice metric values ranging from 0.85 to 0.88 across the datasets. Different aspects of GMCNet are also explored by assessing 1) its robustness; 2) its performance on pairwise registration; 3) the influence of the spatial transformation in a controlled environment; and 4) the impact of different multi-resolution structures. The results demonstrate that using temporal information to estimate the deformation fields leads to more accurate registration and improved robustness under different noise levels. Moreover, the proposed method requires no training images; its predictions are therefore not domain-specific, and it can be applied to any sequence of images.
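
As a rough illustration of the learning-free idea described in the abstract, the sketch below optimizes a small convolutional network's weights directly on the images to be registered, so no training set is involved. It is a minimal single-resolution, pairwise sketch in PyTorch under stated assumptions: the names FlowCNN, warp, and register, the MSE similarity term, the smoothness weight, and all hyperparameters are illustrative and do not reproduce the authors' GMCNet architecture or its multi-resolution, groupwise design.

    # Minimal sketch: per-pair ("learning-free") deformable registration.
    # The CNN weights are fitted to this single image pair only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FlowCNN(nn.Module):
        """Tiny CNN mapping the stacked (moving, fixed) pair to a 2-D flow field."""
        def __init__(self, ch=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, 2, 3, padding=1),   # 2 output channels: (dx, dy)
            )

        def forward(self, x):
            return self.net(x)

    def warp(image, flow):
        """Warp `image` (N,1,H,W) with a dense displacement `flow` (N,2,H,W)."""
        n, _, h, w = image.shape
        # Identity sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        # Add the predicted displacement (assumed to be in normalized units).
        grid = grid + flow.permute(0, 2, 3, 1)
        return F.grid_sample(image, grid, align_corners=True)

    def register(moving, fixed, iters=200, lam=0.01):
        """Optimize the CNN on this single pair and return the deformation field."""
        model = FlowCNN()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        pair = torch.cat([moving, fixed], dim=1)
        for _ in range(iters):
            flow = model(pair)
            warped = warp(moving, flow)
            sim = F.mse_loss(warped, fixed)        # image similarity term
            smooth = (flow.diff(dim=-1).abs().mean()
                      + flow.diff(dim=-2).abs().mean())  # flow smoothness
            loss = sim + lam * smooth
            opt.zero_grad()
            loss.backward()
            opt.step()
        return model(pair)

According to the abstract, GMCNet extends this per-pair idea by sharing network parameters across an entire cine sequence and across multiple resolutions, enabling groupwise registration; the sketch above covers only the pairwise, single-resolution case.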