Journal Article

GrapeLeafNet: A Dual-Track Feature Fusion Network With Inception-ResNet and Shuffle-Transformer for Accurate Grape Leaf Disease Identification
Document Type
Periodical
Source
IEEE Access, vol. 12, pp. 19612-19624, 2024
Subject
Diseases
Feature extraction
Deep learning
Computer architecture
Support vector machines
Convolutional neural networks
Crops
Plant diseases
Grape leaf disease
deep learning
transformer
CNN
attention
Language
English
ISSN
2169-3536
Abstract
Grapes are a widely cultivated horticultural crop, prized for their distinctive flavor and nutritional value. However, the crop is highly susceptible to diseases that can cause significant reductions in yield and quality, resulting in considerable financial losses. Early and accurate identification of these diseases is therefore essential to managing their spread. Traditionally, identifying grape leaf diseases has relied on expert knowledge and visual inspection; with the advent of deep learning, disease patterns can now be recognized directly from images of infected leaves. In this research, we propose GrapeLeafNet, a novel dual-track feature fusion network for grape leaf disease identification, combining Inception-ResNet blocks with CBAM for local feature extraction and a Shuffle-Transformer for global feature extraction. The first track uses Inception-ResNet blocks to represent features at multiple scales, with CBAM capturing salient channel and spatial dependencies. The second track employs a Shuffle-Transformer to model long-range dependencies and complex global features in images. The features from both tracks are then fused using coordinate attention, enabling the network to capture both local and global contextual information. Experimental results on the grape leaf disease subset of the PlantVillage dataset demonstrate the effectiveness of the proposed network, which achieves an accuracy of 99.56%.
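The sketch below is a minimal PyTorch illustration of the dual-track idea the abstract describes, written under several assumptions: the module names (LocalTrack, GlobalTrack, GrapeLeafNetSketch), channel widths, and the four-class output are illustrative only; the Shuffle-Transformer track is stood in for by a plain nn.TransformerEncoder, and the coordinate-attention fusion is approximated by a simple sigmoid gate over concatenated features. It is not the authors' implementation.

# Hypothetical sketch of the dual-track fusion described in the abstract.
# Names, channel sizes, and the fusion/gating step are illustrative assumptions.
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Minimal convolutional block attention: channel gate, then spatial gate."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_mlp(x)                        # channel attention
        s = torch.cat([x.mean(1, keepdim=True),
                       x.amax(1, keepdim=True)], dim=1)    # avg + max spatial maps
        return x * self.spatial(s)                         # spatial attention


class LocalTrack(nn.Module):
    """CNN track: multi-scale (Inception-style) branches with a residual merge,
    followed by CBAM; a stand-in for the Inception-ResNet blocks."""
    def __init__(self, in_ch: int = 3, out_ch: int = 64):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1)
        self.branch1 = nn.Conv2d(out_ch, out_ch // 2, 1)
        self.branch3 = nn.Conv2d(out_ch, out_ch // 2, 3, padding=1)
        self.merge = nn.Conv2d(out_ch, out_ch, 1)
        self.cbam = CBAM(out_ch)

    def forward(self, x):
        x = torch.relu(self.stem(x))
        y = torch.cat([self.branch1(x), self.branch3(x)], dim=1)
        x = torch.relu(x + self.merge(y))                  # residual connection
        return self.cbam(x)


class GlobalTrack(nn.Module):
    """Transformer track over patch tokens (placeholder for Shuffle-Transformer)."""
    def __init__(self, in_ch: int = 3, dim: int = 64, patch: int = 16):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        t = self.embed(x).flatten(2).transpose(1, 2)       # B, N, dim patch tokens
        return self.encoder(t).mean(dim=1)                 # pooled global feature


class GrapeLeafNetSketch(nn.Module):
    """Fuse pooled local and global features and classify; 4 classes assumed
    (the PlantVillage grape subset: black rot, esca, leaf blight, healthy)."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.local, self.global_ = LocalTrack(), GlobalTrack()
        self.gate = nn.Sequential(nn.Linear(128, 128), nn.Sigmoid())  # simple fusion gate
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        f_local = self.local(x).mean(dim=(2, 3))           # B, 64 pooled local features
        f_global = self.global_(x)                         # B, 64 global features
        fused = torch.cat([f_local, f_global], dim=1)      # B, 128
        return self.head(fused * self.gate(fused))         # gated fusion, then classify


if __name__ == "__main__":
    logits = GrapeLeafNetSketch()(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 4])

The gated fusion here is only a lightweight substitute for the paper's coordinate attention; swapping in a faithful coordinate-attention or Shuffle-Transformer module would not change the overall two-track structure shown above.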