Academic Paper

Multi-View Face Recognition Using Deep Attention-Based Face Frontalization
Document Type
Conference
Source
2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6, Jul. 2021
Subject
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Signal Processing and Analysis
Solid modeling
Three-dimensional displays
Face recognition
Conferences
Buildings
Benchmark testing
Generative adversarial networks
Multi-view face recognition
face frontalization
3D morphable model
attentional GAN
Language
English
ISSN
1945-788X
Abstract
Face frontalization has been widely used in face recognition to alleviate the distribution discrepancy between multi-view faces. Given a profile face, existing models learn to synthesize a frontal face from the entire image region indiscriminately, often resulting in unsatisfactory frontalization caused by a lack of synthesis focus and interference from trivial background regions. This paper proposes a novel Deep Attention-based Face Frontalization (DAFF) method to address these issues explicitly. We first inject the 3D spatial prior of the input face into an encoder-decoder model. This process locates the discriminative foreground for decomposing meaningful convolutional embeddings. We then propose a novel objective that serves as geometric guidance for the generator, directing more attention to the target's essential regions. These attentional constraints allow recovery refinement at both the embedding and texture levels. Extensive experiments show that DAFF achieves satisfactory frontalization and competitive recognition performance on both constrained and in-the-wild benchmarks.
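The attentional refinement described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes PyTorch, a precomputed spatial attention mask (e.g. derived from a 3D face prior), and a hypothetical identity embedder `embed_fn`, and shows one plausible way to weight reconstruction losses at both the texture and embedding levels.

```python
# Minimal sketch (not the DAFF authors' code) of attention-weighted frontalization
# losses at the texture and embedding levels. All names here are hypothetical.
import torch
import torch.nn.functional as F


def attention_weighted_losses(synthesized, frontal_gt, attn_mask,
                              embed_fn, lambda_embed=0.1):
    """Combine texture- and embedding-level losses modulated by an attention mask.

    synthesized : (B, 3, H, W) generator output (synthesized frontal face)
    frontal_gt  : (B, 3, H, W) ground-truth frontal face
    attn_mask   : (B, 1, H, W) spatial weights in [0, 1], e.g. from a 3D face prior,
                  emphasizing discriminative facial regions over background
    embed_fn    : callable mapping an image batch to identity embeddings
    """
    # Texture level: L1 reconstruction error, weighted so that facial regions
    # highlighted by the mask dominate and trivial background is down-weighted.
    pixel_err = (synthesized - frontal_gt).abs()
    texture_loss = (attn_mask * pixel_err).mean()

    # Embedding level: encourage the synthesized face to share identity features
    # with the ground truth, computed on the attention-masked images.
    emb_syn = embed_fn(synthesized * attn_mask)
    emb_gt = embed_fn(frontal_gt * attn_mask)
    embed_loss = 1.0 - F.cosine_similarity(emb_syn, emb_gt, dim=-1).mean()

    return texture_loss + lambda_embed * embed_loss


if __name__ == "__main__":
    # Toy usage with random tensors and a stand-in "embedder" (global pooling)
    # in place of a real face-recognition backbone.
    B, H, W = 2, 128, 128
    syn = torch.rand(B, 3, H, W)
    gt = torch.rand(B, 3, H, W)
    mask = torch.rand(B, 1, H, W)
    embed = lambda x: x.mean(dim=(2, 3))
    print(attention_weighted_losses(syn, gt, mask, embed).item())
```

In this reading, the mask plays the role of the geometric guidance mentioned in the abstract: regions the prior marks as discriminative contribute more to both the pixel-wise and identity-preserving terms, while background pixels are suppressed.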