Academic Paper

Diffuse and Restore: A Region-Adaptive Diffusion Model for Identity-Preserving Blind Face Restoration
Document Type
Conference
Source
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 6331-6340, Jan. 2024
Subject
Computing and Processing
Degradation
Adaptation models
Image synthesis
Face recognition
Diffusion processes
Generative adversarial networks
Image restoration
Algorithms
Biometrics, face, gesture, body pose
Generative models for image, video, 3D, etc.
Low-level and physics-based vision
Language
ISSN
2642-9381
Abstract
Blind face restoration (BFR) from severely degraded face images in the wild is a highly ill-posed problem. Owing to complex, unknown degradation, existing generative methods typically struggle to restore realistic details when the input is of poor quality. Recently, diffusion-based approaches have been used successfully for high-quality image synthesis. For BFR, however, it is important to balance the fidelity of the restored image against the reconstructed identity information: minor changes in certain facial regions can alter the identity or degrade perceptual quality. Motivated by this observation, we present a conditional diffusion-based framework for BFR that alleviates the drawbacks of existing diffusion-based approaches through a region-adaptive strategy. Specifically, an identity-preserving conditioner network recovers as much identity information as possible from the input image and uses it to guide the reverse diffusion process, particularly at the facial locations that contribute most to identity. This yields significant improvements in both perceptual quality and face-recognition scores over existing GAN- and diffusion-based restoration models. Our approach outperforms prior art on a range of real and synthetic datasets, particularly for severely degraded face images.
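The core idea of region-adaptive guidance can be illustrated with a minimal sketch: apply the identity conditioner's correction to the denoiser's noise estimate more strongly inside a spatial mask that marks identity-critical facial regions (eyes, nose, mouth). All function names, shapes, and the guidance formula below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def region_adaptive_guidance(eps_uncond, eps_cond, region_mask, guidance_scale=2.0):
    """Blend unconditional and identity-conditioned noise predictions.

    A per-pixel weight up-weights the conditioner's influence inside
    identity-critical regions, so the reverse diffusion step is steered
    mainly where identity is decided. Hypothetical sketch, not the
    authors' actual formulation.
    """
    # Weight is 1 outside the masked regions, guidance_scale inside.
    w = 1.0 + (guidance_scale - 1.0) * region_mask
    # Classifier-free-guidance-style blend, made spatially varying.
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy example on a 4x4 "image": mask marks a 2x2 identity-critical patch.
eps_u = np.zeros((4, 4))          # unconditional noise prediction
eps_c = np.ones((4, 4))           # identity-conditioned noise prediction
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0              # identity-critical region
out = region_adaptive_guidance(eps_u, eps_c, mask, guidance_scale=2.0)
# Inside the patch the conditioned correction is amplified (value 2.0);
# outside it the plain conditioned estimate is used (value 1.0).
```

In a full sampler, such a blended noise estimate would replace the single prediction in each reverse diffusion step; the mask could come from a face parser or landmark detector.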