Academic Article

Deep Regularized Compound Gaussian Network for Solving Linear Inverse Problems
Document Type
Periodical
Source
IEEE Transactions on Computational Imaging, vol. 10, pp. 399-414, 2024
Subject
Signal Processing and Analysis
Computing and Processing
General Topics for Engineers
Geoscience
Iterative methods
Inverse problems
Estimation
Imaging
Compounds
Training
Vectors
Machine learning
deep neural networks
inverse problems
nonlinear programming
least squares methods
Language
English
ISSN
2573-0436
2333-9403
2334-0118
Abstract
Incorporating prior information into inverse problems, e.g., via maximum-a-posteriori estimation, is an important technique for facilitating robust inverse problem solutions. In this paper, we devise two novel approaches for linear inverse problems that permit problem-specific statistical prior selections within the compound Gaussian (CG) class of distributions. The CG class subsumes many commonly used priors in signal and image reconstruction methods, including those of sparsity-based approaches. The first method developed is an iterative algorithm, called generalized compound Gaussian least squares (G-CG-LS), that minimizes a regularized least squares objective function where the regularization enforces a CG prior. G-CG-LS is then unrolled, or unfolded, to furnish our second method, which is a novel deep regularized (DR) neural network, called DR-CG-Net, that learns the prior information. A detailed computational theory on convergence properties of G-CG-LS and thorough numerical experiments for DR-CG-Net are provided. Due to the comprehensive nature of the CG prior, these experiments show that DR-CG-Net outperforms competitive prior art methods in tomographic imaging and compressive sensing, especially in challenging low-training scenarios.
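As a rough illustration of the regularized least-squares and unrolling ideas described in the abstract (not the authors' G-CG-LS or DR-CG-Net themselves), the sketch below unrolls a generic proximal-gradient solver for min_c 0.5*||y - A c||^2 + R(c) into a fixed number of layers. The forward operator A, the soft-threshold proximal step (a stand-in for the CG-based regularizer), and the per-layer step sizes and thresholds are all hypothetical choices made here for illustration.

```python
import numpy as np

def soft_threshold(x, lam):
    # Stand-in proximal operator; the paper's CG-based regularizer would play this role.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_solver(A, y, num_layers=10, step=None, lam=None):
    """Unroll proximal-gradient iterations for min_c 0.5*||y - A c||^2 + R(c)
    into `num_layers` layers with per-layer parameters (hypothetical defaults)."""
    m, n = A.shape
    # Default step size 1/L with L = ||A||_2^2 (spectral norm squared).
    step = np.full(num_layers, 1.0 / np.linalg.norm(A, 2) ** 2) if step is None else step
    lam = np.full(num_layers, 0.05) if lam is None else lam
    c = np.zeros(n)
    for k in range(num_layers):
        grad = A.T @ (A @ c - y)                                  # gradient of the data-fidelity term
        c = soft_threshold(c - step[k] * grad, step[k] * lam[k])  # regularization (prox) step
    return c

# Toy usage: recover a sparse vector from underdetermined linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))
c_true = np.zeros(64)
c_true[rng.choice(64, 5, replace=False)] = rng.standard_normal(5)
y = A @ c_true
c_hat = unrolled_solver(A, y, num_layers=200)
print(np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true))
```

In a deep unrolled network such as the DR-CG-Net described above, per-layer parameters of this kind, and in this case the regularizer encoding the prior itself, would be learned from training data rather than fixed in advance.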