Academic Paper
Physically-guided Disentangled Implicit Rendering for 3D Face Modeling
Document Type
Conference
Source
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 20321-20331, Jun. 2022
ISSN
2575-7075
Abstract
This paper presents a novel Physically-guided Disentangled Implicit Rendering (PhyDIR) framework for high-fidelity 3D face modeling. The motivation comes from two observations: widely-used graphics renderers rely on excessive approximations that fall short of photo-realistic imaging, while neural rendering methods produce superior appearances but are too entangled to support 3D-aware operations. Hence, we learn to disentangle the implicit rendering via explicit physical guidance, while guaranteeing two properties: (1) 3D-aware comprehension and (2) high-reality image formation. For the former, PhyDIR explicitly adopts 3D shading and rasterizing modules to control the renderer, which disentangles the light, facial shape, and viewpoint from the neural reasoning. Specifically, PhyDIR proposes a novel multi-image shading strategy to compensate for the monocular limitation, so that lighting variations are accessible to the neural renderer. For the latter, PhyDIR learns the face-collection implicit texture to avoid ill-posed intrinsic factorization, then leverages a series of consistency losses to constrain rendering robustness. With the disentangled method, we make 3D face modeling benefit from both kinds of rendering strategies. Extensive experiments on benchmarks show that PhyDIR outperforms state-of-the-art explicit and implicit methods on geometry and texture modeling.
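To make the disentanglement idea in the abstract concrete, below is a minimal conceptual sketch (PyTorch) of combining an explicit physical shading branch with a neural implicit texture branch: per-pixel shading is computed analytically from surface normals and spherical-harmonics lighting coefficients, and the final image is that shading map modulating a texture produced by a small neural decoder, so lighting remains an editable, physically meaningful factor. All module names, tensor shapes, and the decoder architecture here are hypothetical illustrations, not the authors' implementation or the actual PhyDIR pipeline.

```python
# Conceptual sketch only: explicit SH shading guiding a neural implicit texture.
# Names and shapes are hypothetical; this is not the PhyDIR codebase.
import torch
import torch.nn as nn


def sh_shading(normals, sh_coeffs):
    """Per-pixel 2nd-order spherical-harmonics (Lambertian-style) shading.

    normals:   (B, 3, H, W) unit surface normals, e.g. from a rasterized mesh
    sh_coeffs: (B, 9) grayscale lighting coefficients
    returns:   (B, 1, H, W) shading map
    """
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    ones = torch.ones_like(x)
    basis = torch.stack([
        0.282095 * ones,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z ** 2 - 1.0),
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),
    ], dim=1)  # (B, 9, H, W) SH basis evaluated at each normal
    return (basis * sh_coeffs[:, :, None, None]).sum(dim=1, keepdim=True)


class TextureDecoder(nn.Module):
    """Hypothetical neural decoder mapping an identity code to an implicit texture."""

    def __init__(self, code_dim=128, size=64):
        super().__init__()
        self.size = size
        self.net = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * size * size), nn.Sigmoid(),
        )

    def forward(self, code):
        tex = self.net(code)
        return tex.view(-1, 3, self.size, self.size)  # (B, 3, H, W) albedo-like texture


def render(code, normals, sh_coeffs, decoder):
    """Image = neural implicit texture modulated by the explicit physical shading."""
    texture = decoder(code)                   # appearance from the neural branch
    shading = sh_shading(normals, sh_coeffs)  # lighting from the physical branch
    return texture * shading                  # lighting stays an explicit, editable factor


if __name__ == "__main__":
    B, H, W = 2, 64, 64
    decoder = TextureDecoder()
    normals = nn.functional.normalize(torch.randn(B, 3, H, W), dim=1)
    sh = torch.zeros(B, 9)
    sh[:, 0] = 1.0                            # ambient-only lighting for the demo
    img = render(torch.randn(B, 128), normals, sh, decoder)
    print(img.shape)                          # torch.Size([2, 3, 64, 64])
```

Because the shading term is computed in closed form rather than predicted, relighting or viewpoint changes can be applied by editing the physical inputs (normals, SH coefficients) while the neural texture is left untouched, which is the kind of 3D-aware control the abstract attributes to the disentangled design.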