Journal Article

Improving Pipeline Magnetic Flux Leakage (MFL) Detection Performance With Mixed Attention Mechanisms (AMs) and Deep Residual Shrinkage Networks (DRSNs)
Document Type
Periodical
Source
IEEE Sensors Journal, 24(4):5162-5171, Feb. 2024
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Robotics and Control Systems
Pipelines
Training
Welding
Transportation
Measurement
Mathematical models
Feature extraction
Attention mechanisms (AMs)
deep learning
deep residual shrinkage network (DRSN)
defect detection
magnetic flux leakage (MFL)
oil and gas pipeline
Language
ISSN
1530-437X
1558-1748
2379-9153
Abstract
Magnetic flux leakage (MFL) detection is one of the most commonly used nondestructive testing methods and plays a crucial role in ensuring pipeline safety during transportation. However, identifying abnormal MFL signals still relies on manual interpretation, which leads to missed detections and false alarms. In addition, MFL data acquisition is prone to noise interference. To address these challenges, this article proposes a method that integrates comprehensive transfer learning (TL), attention mechanisms [including squeeze-and-excitation (SE), coordinate attention (CA), the convolutional block attention module (CBAM), and efficient channel attention (ECA)], and deep residual shrinkage networks (DRSNs). The approach improves the model's training efficiency and recognition accuracy while suppressing the high-noise interference that affects MFL images during data acquisition. Furthermore, the article applies the Grad-CAM++ algorithm to visualize the model's recognition logic and to achieve preliminary localization of abnormal MFL features. Experimental results demonstrate that attention mechanisms significantly enhance the model's recognition performance, with a best accuracy of 99.7%. Moreover, under high-noise interference, DRSNs effectively improve the model's anti-interference capability, with the largest gain reaching 11.4%.
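The noise suppression the abstract attributes to DRSNs comes from soft thresholding applied inside residual blocks, which shrinks feature activations toward zero and zeroes out small-magnitude (noise-dominated) responses. A minimal NumPy sketch of that operation follows; note that in an actual DRSN the threshold is learned per channel by a small attention subnetwork, whereas the fixed value here is purely illustrative:

```python
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    """Soft thresholding: shrink each value toward zero by tau,
    setting entries with magnitude below tau exactly to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Toy feature vector: large entries survive (shrunk), small ones vanish.
features = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
print(soft_threshold(features, tau=0.5))  # -> [-1.   0.   0.   0.   1.5]
```

In a DRSN residual block this denoised output is added back to the identity shortcut, so the network only subtracts what it estimates to be noise rather than replacing the features wholesale.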