Academic Paper

LoGo Transformer: Hierarchy Lightweight Full Self-Attention Network for Corneal Endothelial Cell Segmentation
Document Type
Conference
Source
2023 International Joint Conference on Neural Networks (IJCNN), pp. 1-7, Jun. 2023
Subject
Components, Circuits, Devices and Systems
Computing and Processing
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Image segmentation
Convolution
Computational modeling
Neural networks
Semantics
Liver
Transformers
Corneal endothelial cell segmentation
Transformer
Lightweight
Robustness
ISSN
2161-4407
Abstract
Corneal endothelial cell segmentation plays an important role in quantifying clinical indicators for evaluating corneal health. Although Convolutional Neural Networks (CNNs) are widely used for medical image segmentation, their receptive fields are limited. Transformers outperform convolution in modeling long-range dependencies but lack local inductive bias, so pure transformer networks are difficult to train on small medical image datasets. Moreover, Transformer networks cannot be effectively deployed on specular microscopes because they are parameter-heavy and computationally complex. To this end, we find that appropriately limiting attention spans and modeling information at different granularities can introduce local constraints and enhance attention representations. This paper explores a hierarchical, fully self-attentional lightweight network for medical image segmentation, using Local and Global (LoGo) transformers to separately model attention representations at low-level and high-level layers. Specifically, the local efficient transformer (LoTr) layer decomposes features into finer-grained elements to model local attention representations, while the global axial transformer (GoTr) builds long-range dependencies across the entire feature space. With this hierarchical structure, we gradually and efficiently aggregate semantic features from different levels. Experimental results on segmentation of the corneal endothelial cells, the ciliary body, and the liver demonstrate the accuracy, effectiveness, and robustness of our method. Compared with CNN and hybrid CNN-Transformer state-of-the-art (SOTA) methods, the LoGo transformer obtains the best results.
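The GoTr branch described in the abstract is built on axial attention, which attends along the height axis and then the width axis so that each position interacts with its full row and column rather than with all H×W positions at once, reducing the cost of full self-attention from O((HW)²) to O(HW·(H+W)). A minimal single-head sketch of this idea (all names, shapes, and the single-head simplification are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x, Wq, Wk, Wv):
    """Single-head axial attention sketch (illustrative, not the paper's code).

    Attends along the height axis, then the width axis, so each position
    mixes with its row and column instead of all H*W positions at once.
    """
    H, W, C = x.shape
    # Height axis: for each column w, attention over the H positions.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores_h = np.einsum('hwc,gwc->whg', q, k) / np.sqrt(C)
    x = np.einsum('whg,gwc->hwc', softmax(scores_h), v)
    # Width axis: re-project and attend over the W positions in each row.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores_w = np.einsum('hwc,hgc->hwg', q, k) / np.sqrt(C)
    return np.einsum('hwg,hgc->hwc', softmax(scores_w), v)

rng = np.random.default_rng(0)
H, W, C = 8, 8, 16  # toy feature-map size, chosen arbitrarily
x = rng.standard_normal((H, W, C))
Wq, Wk, Wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
y = axial_attention(x, Wq, Wk, Wv)
print(y.shape)  # (8, 8, 16): output keeps the input feature-map shape
```

Each axial pass computes H (or W) small attention maps of size H×H (or W×W), which is what makes this global branch tractable for a lightweight network; the LoTr branch instead restricts attention to finer-grained local elements.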