Academic Paper

Sparse-View Cone Beam CT Reconstruction Using Data-Consistent Supervised and Adversarial Learning From Scarce Training Data
Document Type
Periodical
Source
IEEE Transactions on Computational Imaging, vol. 9, pp. 13-28, 2023
Subject
Signal Processing and Analysis
Computing and Processing
General Topics for Engineers
Geoscience
Image reconstruction
Three-dimensional displays
Training
Computed tomography
Training data
Imaging
Image edge detection
Sparse-views
computed tomography
machine learning
deep learning
image reconstruction
Language
ISSN
2573-0436
2333-9403
2334-0118
Abstract
Reconstruction of CT images from a limited set of projections through an object is important in several applications, ranging from medical imaging to industrial settings. As the number of available projections decreases, traditional reconstruction techniques such as the FDK algorithm and model-based iterative reconstruction methods perform poorly. Recently, data-driven methods such as deep learning-based reconstruction have garnered much attention in applications because they yield better performance when enough training data is available. However, even these methods have their limitations when training data is scarce. This work focuses on image reconstruction in such settings, i.e., when both the number of available CT projections and the amount of training data are extremely limited. We adopt a sequential reconstruction approach over several stages, using an adversarially trained shallow network for ‘destreaking’ followed by a data-consistency update in each stage. To deal with the challenge of limited data, we train on image subvolumes and aggregate patches during testing. To deal with the computational challenge of learning on 3D datasets for 3D reconstruction, we use a hybrid 3D-to-2D mapping network for the ‘destreaking’ part. Comparisons to other methods over several test examples indicate that the proposed method has much potential when both the number of projections and available training data are highly limited.
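The staged reconstruction the abstract describes (destreak, then enforce data consistency, repeated over several stages) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random matrix `A` stands in for a 3D cone-beam projector, and a simple shrinkage stands in for the adversarially trained destreaking network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse-view forward model: A maps a (flattened) image to a small
# number of projections. The paper uses a 3D cone-beam projector; this
# random matrix is an illustrative stand-in only.
n_vox, n_proj = 64, 16
A = rng.standard_normal((n_proj, n_vox)) / np.sqrt(n_vox)
x_true = rng.standard_normal(n_vox)
y = A @ x_true                          # measured sparse-view projections

def destreak(x):
    """Stand-in for the adversarially trained shallow 'destreaking'
    network (a hybrid 3D-to-2D CNN in the paper); mild shrinkage plays
    the role of the learned denoiser here."""
    return 0.95 * x

def data_consistency(x, step=0.5, iters=20):
    """Gradient steps on ||Ax - y||^2 pull the estimate back toward
    agreement with the measured projections."""
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)
    return x

# Sequential reconstruction: destreak, then data-consistency, per stage.
x = A.T @ y                             # crude initial reconstruction
r_init = np.linalg.norm(A @ x - y)
for stage in range(5):
    x = data_consistency(destreak(x))
r_final = np.linalg.norm(A @ x - y)
print(r_final < 0.1 * r_init)
```

The data-consistency step is what distinguishes this scheme from a pure post-processing denoiser: each stage's output is constrained to remain compatible with the measured projections.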
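The patch-aggregation idea mentioned for testing can be sketched as below. `patch_aggregate` is a hypothetical helper (not the authors' exact scheme), and a 2D array stands in for a 3D subvolume for brevity: overlapping patches are processed independently and overlaps are averaged via a weight map.

```python
import numpy as np

def patch_aggregate(vol, patch, stride, fn):
    """Apply fn to overlapping patches of a square array and average
    the overlaps. Illustrates patch-based inference when full volumes
    do not fit in memory; hypothetical, not the paper's exact method."""
    out = np.zeros_like(vol, dtype=float)
    weight = np.zeros_like(vol, dtype=float)
    n = vol.shape[0]
    starts = list(range(0, n - patch + 1, stride))
    if starts[-1] != n - patch:
        starts.append(n - patch)        # make sure the boundary is covered
    for i in starts:
        for j in starts:
            out[i:i + patch, j:j + patch] += fn(vol[i:i + patch, j:j + patch])
            weight[i:i + patch, j:j + patch] += 1.0
    return out / weight

vol = np.arange(36, dtype=float).reshape(6, 6)
# Sanity check with the identity: aggregating unmodified patches
# must reproduce the input exactly.
recon = patch_aggregate(vol, patch=4, stride=2, fn=lambda p: p)
print(np.allclose(recon, vol))          # True
```

In practice `fn` would be the trained destreaking network applied to each subvolume, with the averaging suppressing seams between neighboring patches.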