Journal Article

A Novel Parameter Adaptive Dual Channel MSPCNN Based Single Image Dehazing for Intelligent Transportation Systems
Document Type
Periodical
Source
IEEE Transactions on Intelligent Transportation Systems, 24(3):3027-3047, Mar. 2023
Subject
Transportation
Aerospace
Communication, Networking and Broadcast Technologies
Computing and Processing
Robotics and Control Systems
Signal Processing and Analysis
Brightness
Computational modeling
Channel estimation
Seals
Real-time systems
Image color analysis
Feature extraction
Image dehazing
radiance
pulse coupled neural network
fusion
PYNQ-Z2
Language
English
ISSN
1524-9050
1558-0016
Abstract
Visibility in intelligent transportation systems is severely degraded by bad weather conditions such as fog and haze. Recent studies have observed that major road accidents worldwide are linked to low visibility and inclement weather. Single image dehazing attempts to restore a haze-free image from an unconstrained hazy image. We propose a dehazing method that cascades two models using a novel parameter-adaptive dual-channel modified simplified pulse coupled neural network (PA-DC-MSPCNN). The first model uses a new color channel to remove haze from images. The second model is an improved brightness-preserving model (I-GIHE), which retains the brightness of the image while improving gradient strength. A PA-DC-MSPCNN-based fusion integrates the results of these two models to produce a pleasing haze-free image. Furthermore, the proposed approach is deployed on a Xilinx Zynq SoC by exploiting the recently released PYNQ platform. The dehazing system runs on a PYNQ-Z2 all-programmable SoC, where the camera feed is input through the FPGA unit and the dehazing algorithm executes on the ARM core. This configuration achieves real-time processing speed for image dehazing. Dehazing results are analyzed on both synthetic and real-world hazy images: synthetic hazy images are acquired from the O-HAZE, I-HAZE, SOTS, and FRIDA datasets, while real-world hazy images are taken from the RailSem19 and E-TUVD datasets and the internet. Twelve cutting-edge approaches are chosen for comparison. The proposed method is also evaluated on underwater and low-light images. Extensive experiments indicate that it outperforms state-of-the-art methods in both qualitative and quantitative performance.
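The abstract describes fusing the outputs of two enhancement models with a PCNN-based rule. The paper's actual PA-DC-MSPCNN uses adaptive parameters and dual channels that are not specified here; the following is only a minimal sketch of the general idea, assuming a generic simplified PCNN with fixed, hypothetical parameters (`beta`, `alpha_theta`, `v_theta` are illustrative, not the authors' values): each neuron's firing time measures local stimulus strength, and at each pixel the fused image keeps the source whose neuron fires earlier.

```python
import numpy as np

def convolve2d_same(x, k):
    """3x3 'same' convolution with zero padding, NumPy only."""
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def spcnn_fire_times(img, iterations=20, beta=0.2, alpha_theta=0.2, v_theta=20.0):
    """Run a simplified PCNN over a grayscale image normalized to [0, 1] and
    return the iteration at which each neuron first fires.  Brighter /
    stronger local stimulus -> earlier firing.  Parameter values here are
    illustrative placeholders, not the adaptive ones from the paper."""
    S = img.astype(np.float64)                      # external stimulus
    Y = np.zeros_like(S)                            # firing output
    theta = np.ones_like(S)                         # dynamic threshold
    first_fire = np.full(S.shape, iterations + 1, dtype=np.float64)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])            # linking weights
    for t in range(1, iterations + 1):
        L = convolve2d_same(Y, kernel)              # linking from neighbors
        U = S * (1.0 + beta * L)                    # internal activity
        Y = (U > theta).astype(np.float64)          # pulse where U exceeds theta
        newly = (Y > 0) & (first_fire > iterations)
        first_fire[newly] = t
        # threshold decays exponentially, then jumps where a pulse occurred
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
    return first_fire

def pcnn_fuse(a, b, **kw):
    """Fuse two enhanced versions of the same scene: at each pixel keep the
    source whose PCNN neuron fires earlier (stronger local response)."""
    ta, tb = spcnn_fire_times(a, **kw), spcnn_fire_times(b, **kw)
    return np.where(ta <= tb, a, b)
```

In a full pipeline, `a` and `b` would be the single-channel outputs of the two models (the color-channel dehazer and the brightness-preserving I-GIHE stage); the paper's method additionally adapts the PCNN parameters per image and operates on dual channels, which this sketch omits.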