Academic Paper

LHU-Net: A Light Hybrid U-Net for Cost-Efficient, High-Performance Volumetric Medical Image Segmentation
Document Type: Working Paper
Source:
Subject:
Electrical Engineering and Systems Science - Image and Video Processing
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Machine Learning
Language: English
Abstract:
The rise of Transformer architectures has revolutionized medical image segmentation, leading to hybrid models that combine Convolutional Neural Networks (CNNs) and Transformers for improved accuracy. However, these models often suffer from increased complexity and overlook the interplay between spatial and channel features, which is vital for segmentation precision. We introduce LHU-Net, a streamlined hybrid U-Net for volumetric medical image segmentation that first analyzes spatial features and then channel features for effective feature extraction. Evaluated on five benchmark datasets (Synapse, LA, Pancreas, ACDC, BraTS 2018), LHU-Net demonstrates superior efficiency and accuracy, notably achieving a Dice score of 92.66 on ACDC with 85% fewer parameters and a quarter of the computational demand of leading models. This performance, achieved without pre-training, extra data, or model ensembles, and with fewer than 11 million parameters, sets a new benchmark for computational efficiency and accuracy in segmentation, showing that high accuracy and low computational cost in medical image segmentation are not mutually exclusive. Our implementation of LHU-Net is freely available to the research community on GitHub (https://github.com/xmindflow/LHUNet).
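
The abstract's central design idea, attending to spatial features before channel features within a volumetric block, can be illustrated with a minimal PyTorch sketch. The module below is an assumption for illustration only: the class name SpatialThenChannelBlock, the multi-head spatial self-attention, and the squeeze-and-excitation-style channel gate are not taken from the paper; the authors' actual blocks are defined in the linked repository.

```python
# Illustrative sketch only: a 3D block that mixes spatial positions first,
# then reweights channels, mirroring the "spatial then channel" ordering
# described in the abstract. All module choices here are assumptions.
import torch
import torch.nn as nn


class SpatialThenChannelBlock(nn.Module):
    """Hypothetical block: spatial self-attention followed by channel gating."""

    def __init__(self, channels: int, num_heads: int = 4, reduction: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        # Spatial mixing: multi-head self-attention over voxel positions.
        self.spatial_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Channel mixing: squeeze-and-excitation-style gate over feature maps.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, d, h, w = x.shape
        # (B, C, D, H, W) -> (B, D*H*W, C) token sequence for spatial attention.
        tokens = self.norm(x.flatten(2).transpose(1, 2))
        attn_out, _ = self.spatial_attn(tokens, tokens, tokens)
        x = x + attn_out.transpose(1, 2).reshape(b, c, d, h, w)
        # Channel attention: scale each feature map by its learned gate.
        gate = self.channel_gate(x).view(b, c, 1, 1, 1)
        return x * gate


if __name__ == "__main__":
    block = SpatialThenChannelBlock(channels=32)
    volume = torch.randn(1, 32, 8, 16, 16)  # (B, C, D, H, W)
    print(block(volume).shape)  # torch.Size([1, 32, 8, 16, 16])
```

Applying the spatial mixing before the channel gate lets the channel weights be computed from already spatially contextualized features, which is one plausible reading of the ordering emphasized in the abstract.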