Academic Paper

OCT Image Segmentation Using Neural Architecture Search and SRGAN
Document Type
Conference
Source
2020 25th International Conference on Pattern Recognition (ICPR), pp. 6425-6430, Jan. 2021
Subject
Computing and Processing
Signal Processing and Analysis
Training
Image segmentation
Optical coherence tomography
Microprocessors
Superresolution
Computer architecture
Object segmentation
Language
English
Abstract
Medical image segmentation is a critical field in computer vision, and with the growing acclaim of deep learning based models, research in this field is constantly expanding. Optical coherence tomography (OCT) is a non-invasive method that produces depth-resolved scans of the human retina. It has been hypothesized that the thickness of the retinal layers extracted from OCT could be an efficient and effective biomarker for early diagnosis of Alzheimer's disease (AD). In this work, we aim to design a self-training model architecture for the task of segmenting the retinal layers in OCT scans. Neural architecture search (NAS) is a subfield of the AutoML domain that has significantly improved the accuracy of machine vision tasks. We integrate the NAS algorithm with a Unet auto-encoder architecture as its backbone, and then employ the proposed model to segment the retinal nerve fiber layer in our preprocessed OCT images with the aim of AD diagnosis. Before the modeling stage, we trained a super-resolution generative adversarial network (SRGAN) on the raw OCT scans to improve image quality. In our architecture search strategy, different primitive operations are suggested to find the down- and up-sampling Unet cell blocks, and the binary gate method is applied to make the search more practical. Our architecture search method is empirically evaluated by training both the Unet and the NAS-Unet from scratch. Specifically, the proposed NAS-Unet significantly outperforms the baseline human-designed architecture, achieving 95.1% in the mean Intersection over Union metric and 79.1% in the Dice similarity coefficient.
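
The SRGAN preprocessing step can be illustrated with a short sketch. The generator below is a toy stand-in, not the authors' network (which follows the published SRGAN design); it shows only how a generator with a pixel-shuffle upsampling stage would enlarge a raw low-quality OCT B-scan before segmentation.

import torch
import torch.nn as nn

class TinySRGenerator(nn.Module):
    """Toy stand-in for an SRGAN generator with a single 2x upsampling stage."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.PReLU(),
            nn.Conv2d(64, 4, 3, padding=1),  # 4 = 1 channel * 2^2 for pixel shuffle
            nn.PixelShuffle(2),              # rearranges channels into 2x spatial resolution
        )

    def forward(self, x):
        return self.body(x)

lr_scan = torch.randn(1, 1, 128, 256)   # hypothetical low-quality OCT B-scan
sr_scan = TinySRGenerator()(lr_scan)
print(sr_scan.shape)                     # torch.Size([1, 1, 256, 512])

In the paper this generator would be trained adversarially against a discriminator on the raw scans; only its inference use is sketched here.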
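The binary gate method mentioned in the search strategy keeps only one sampled candidate path active per forward pass, so search-time memory stays close to that of a single compact model. Below is a minimal, hypothetical sketch of such a gated mixed operation; the candidate primitives and class names are assumptions for illustration, not the paper's actual search space, and the straight-through gradient trick used to update the architecture parameters in practice is omitted for brevity.

import torch
import torch.nn as nn

class BinaryGateMixedOp(nn.Module):
    """Activates one candidate primitive per forward pass via a binary gate."""
    def __init__(self, channels):
        super().__init__()
        # Hypothetical candidate primitives for a Unet down-/up-sampling cell.
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.MaxPool2d(3, stride=1, padding=1),
        ])
        # Architecture parameters (logits) learned during the search.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        # Sample a single active path from the softmax over alpha;
        # only that op runs, so the other paths cost no activation memory.
        probs = torch.softmax(self.alpha, dim=0)
        idx = torch.multinomial(probs, 1).item()
        return self.ops[idx](x)

x = torch.randn(1, 16, 64, 64)
print(BinaryGateMixedOp(16)(x).shape)  # torch.Size([1, 16, 64, 64])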
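For reference, the two reported evaluation metrics can be computed as below. The helper functions are illustrative, assuming binary foreground/background masks; they are not taken from the paper's code.

import numpy as np

def mean_iou(pred, target, eps=1e-7):
    """Mean Intersection over Union over the background and foreground classes."""
    ious = []
    for cls in (0, 1):
        p, t = pred == cls, target == cls
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append((inter + eps) / (union + eps))
    return float(np.mean(ious))

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient for the foreground class."""
    inter = np.logical_and(pred == 1, target == 1).sum()
    return float((2 * inter + eps) / ((pred == 1).sum() + (target == 1).sum() + eps))

pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target))  # ~0.583
print(dice(pred, target))      # 0.8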