Academic Article

Unsupervised MRI Super Resolution Using Deep External Learning and Guided Residual Dense Network With Multimodal Image Priors
Document Type
Periodical
Source
IEEE Transactions on Emerging Topics in Computational Intelligence, 7(2):426-435, April 2023
Subject
Computing and Processing
Training
Image resolution
Magnetic resonance imaging
Three-dimensional displays
Signal resolution
Medical diagnostic imaging
Feature extraction
Super resolution
deep learning
unsupervised learning
Language
English
ISSN
2471-285X
Abstract
Deep learning techniques have led to state-of-the-art image super-resolution on natural images. Normally, pairs of high-resolution and low-resolution images are used to train the deep learning models. These techniques have also been applied to medical image super-resolution, but medical images differ from natural images in several important ways. First, high-resolution images for training are difficult to obtain in real clinical settings because of the limitations of imaging systems and clinical requirements. Second, high-resolution images of other modalities are often available (e.g., high-resolution T1-weighted images can guide the enhancement of low-resolution T2-weighted images). In this paper, we propose an unsupervised image super-resolution technique based on simple prior knowledge of human anatomy. This technique does not require high-resolution T2-weighted images (T2WI) of the target for training. Furthermore, we present a guided residual dense network, which combines a residual dense network with a guided deep convolutional neural network to enhance low-resolution images by referring to high-resolution images of a different modality from the same subject. Experiments on a publicly available brain MRI database show that the proposed method outperforms state-of-the-art methods.
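The abstract describes a two-input design: a residual dense trunk that refines the low-resolution T2-weighted image while a guidance branch injects features from the high-resolution T1-weighted image of the same subject. The following is a minimal PyTorch sketch of that idea only; the class names, channel widths, block counts, and the early-fusion point are illustrative assumptions and not the authors' exact architecture.

```python
# Hypothetical sketch of a guided residual dense network:
# an LR T2w slice is upsampled to the T1w guidance grid, fused with
# guidance features, refined by residual dense blocks, and corrected
# by a global residual over the interpolated input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDenseBlock(nn.Module):
    """Densely connected convolutions with local feature fusion and a residual skip."""
    def __init__(self, channels=64, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True))
            for i in range(n_layers))
        # local feature fusion: 1x1 conv back to the trunk width
        self.fuse = nn.Conv2d(channels + n_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual learning

class GuidedRDN(nn.Module):
    """Enhances a low-resolution T2w slice using a high-resolution T1w slice as guidance."""
    def __init__(self, channels=64, n_blocks=4):
        super().__init__()
        self.t2_head = nn.Conv2d(1, channels, 3, padding=1)   # LR T2w feature extractor
        self.t1_head = nn.Conv2d(1, channels, 3, padding=1)   # HR T1w guidance extractor
        self.fusion = nn.Conv2d(2 * channels, channels, 1)    # merge the two branches
        self.blocks = nn.Sequential(*[ResidualDenseBlock(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, t2_lr, t1_hr):
        # bring the LR T2w slice onto the guidance grid before feature extraction
        t2_up = F.interpolate(t2_lr, size=t1_hr.shape[-2:], mode="bicubic",
                              align_corners=False)
        fused = self.fusion(torch.cat([self.t2_head(t2_up), self.t1_head(t1_hr)], dim=1))
        # global residual: the network only predicts the correction to the bicubic estimate
        return t2_up + self.tail(self.blocks(fused))

if __name__ == "__main__":
    t2_lr = torch.rand(1, 1, 64, 64)     # low-resolution T2-weighted slice
    t1_hr = torch.rand(1, 1, 128, 128)   # high-resolution T1-weighted guidance slice
    sr = GuidedRDN()(t2_lr, t1_hr)
    print(sr.shape)                      # torch.Size([1, 1, 128, 128])
```

In this sketch the guidance is fused once, before the residual dense blocks; the paper's network may inject guidance features at other depths, and its unsupervised training signal (built from anatomical priors rather than paired HR T2WI) is not modeled here.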