Journal Article

Single Traffic Image Deraining via Similarity-Diversity Model
Document Type
Periodical
Source
IEEE Transactions on Intelligent Transportation Systems, 25(1):90-103, Jan. 2024
Subject
Transportation
Aerospace
Communication, Networking and Broadcast Technologies
Computing and Processing
Robotics and Control Systems
Signal Processing and Analysis
Rain
Degradation
Image restoration
Atmospheric modeling
Correlation
Task analysis
Image color analysis
Deep learning
single traffic image deraining
imaging model
similarity
diversity
Language
English
ISSN
1524-9050
1558-0016
Abstract
Single traffic image deraining based on deep learning is a vital branch of image preprocessing and is of great help to intelligent monitoring and driving navigation systems. Established deraining methods are typically derived from one specific imaging model, neglecting the underlying correlations between different weather models and thereby limiting their applicability in real scenarios. To ameliorate this issue, in this work we first explore the inherent relationship between the rain model and the established haze model. We discover that the two models share similar degradations in the low-frequency components (i.e., similarity) but exhibit diverse degradations in the high-frequency areas (i.e., diversity). Based on these observations, we develop a Similarity-Diversity model to describe these characteristics. We then introduce a novel deep neural network, the deep similarity-diversity network (DSDNet), which embeds the similarity-diversity model to restore the rain-free background. Extensive experiments show that the proposed method outperforms other state-of-the-art deraining techniques. In addition, we deploy the proposed algorithm with the Google Vision API for object recognition, obtaining satisfactory results both qualitatively and quantitatively.
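
The similarity-diversity observation can be made concrete with a simple frequency decomposition. The Python sketch below splits an image into low- and high-frequency components with a Gaussian low-pass filter and compares rainy and hazy renderings of the same scene. This is only an illustration of the idea stated in the abstract, not the paper's actual decomposition or network; the synthetic rain and haze formation used here (sparse additive streak-like noise vs. atmospheric-style attenuation toward a constant airlight) is an assumption standing in for the real imaging models, and all names are hypothetical.

import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split(img, sigma=3.0):
    # Gaussian low-pass gives the low-frequency component;
    # the residual carries the high frequencies.
    low = gaussian_filter(img, sigma=sigma)
    return low, img - low

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

rng = np.random.default_rng(0)
# Smooth synthetic grayscale background standing in for a clean scene.
scene = gaussian_filter(rng.random((128, 128)), sigma=2.0)
# Toy rain: sparse additive streak-like noise on top of the scene.
rainy = np.clip(scene + 0.4 * (rng.random((128, 128)) > 0.97), 0.0, 1.0)
# Toy haze: attenuation of the scene blended toward a constant airlight.
hazy = 0.6 * scene + 0.4 * 0.9

r_low, r_high = frequency_split(rainy)
h_low, h_high = frequency_split(hazy)
print("low-frequency similarity :", cosine_similarity(r_low, h_low))    # expected: high
print("high-frequency similarity:", cosine_similarity(r_high, h_high))  # expected: noticeably lower

Under this toy setup the two degraded images remain strongly correlated in the low-frequency band, while their high-frequency residuals diverge because the additive streaks concentrate in high frequencies and the haze-style attenuation preserves structure. This is the pattern the abstract describes as similarity and diversity.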