Academic Journal Article

Fingerphoto Deblurring Using Attention-Guided Multi-Stage GAN
Document Type
Periodical
Source
IEEE Access, 11:82709-82727, 2023
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Fingerprint recognition
Generative adversarial networks
Cameras
Distortion
Generators
Biometrics (access control)
Photography
Biometrics
contactless fingerprints
fingerphoto deblurring
generative adversarial networks
guided attention
multi-stage generative architecture
Language
English
ISSN
2169-3536
Abstract
Fingerphoto images acquired from mobile cameras, low-quality sensors, or crime scenes suffer from various acquisition distortions, making identity verification a challenge for automated identification systems. One significant type of photometric distortion that notably reduces the quality of a fingerphoto is image blur. This paper proposes a deep fingerphoto deblurring model to restore ridge information degraded by blurring. As the core of our model, we employ a conditional Generative Adversarial Network (cGAN) to learn the distribution of natural ridge patterns, and we introduce several modifications to enhance the quality of the reconstructed (deblurred) fingerphotos. First, we develop a multi-stage GAN that learns the ridge distribution in a coarse-to-fine framework, enabling the model to keep the ridge deblurring process consistent across resolutions. Second, we propose a guided attention module that helps the generator focus mainly on blurred regions. Third, we incorporate a deep fingerphoto verifier as an auxiliary adaptive loss function that forces the generator to preserve identity information during deblurring. Finally, we evaluate the effectiveness of the proposed model through extensive experiments on multiple public fingerphoto datasets as well as real-world blurred fingerphotos. In particular, our method achieves improvements of 5.2 dB in PSNR, 8.7% in AUC, and 7.6% in EER over a state-of-the-art deblurring method.
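The guided attention idea described in the abstract, re-weighting generator features so that blurred regions receive more emphasis, can be illustrated with a minimal NumPy sketch. Everything here (the `guided_attention` function, the `blur_map` input standing in for the learned attention branch) is a hypothetical illustration, not the paper's actual module.

```python
import numpy as np

def sigmoid(x):
    """Squash raw scores into the [0, 1] range."""
    return 1.0 / (1.0 + np.exp(-x))

def guided_attention(features, blur_map):
    """Gate generator features by a blur-attention mask.

    features : (H, W, C) feature map from the generator.
    blur_map : (H, W) raw scores estimating how blurred each location is
               (hypothetical stand-in for a learned attention branch).
    Returns the features scaled so that strongly blurred regions
    pass through with higher weight than sharp ones.
    """
    attention = sigmoid(blur_map)            # per-pixel weight in [0, 1]
    return features * attention[..., None]   # broadcast weight over channels

# Toy example: 4x4 feature map with 8 channels; the left half of the
# image is flagged as strongly blurred, the right half as sharp.
feats = np.ones((4, 4, 8))
blur_scores = np.zeros((4, 4))
blur_scores[:, :2] = 4.0                     # high blur score on the left
out = guided_attention(feats, blur_scores)
# Left-half features are weighted near 1, right-half features near 0.5.
```

In the paper's multi-stage setting, such a mask would be predicted per stage and learned jointly with the generator; this sketch only shows the gating arithmetic.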