Academic Paper

Show Your Face: Restoring Complete Facial Images from Partial Observations for VR Meeting
Document Type
Conference
Source
2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 8673-8682, Jan. 2024
Subject
Computing and Processing
Headphones
Training
Solid modeling
Visualization
Three-dimensional displays
Computational modeling
Virtual reality
Applications
Virtual / augmented reality
Language
English
ISSN
2642-9381
Abstract
Virtual Reality (VR) headsets allow users to interact with the virtual world. However, the device physically blocks visual contact among users, causing significant inconvenience in VR meetings. To address this issue, studies have been conducted to restore human faces from images captured by Headset Mounted Cameras (HMC). Unfortunately, existing approaches rely heavily on high-resolution person-specific 3D models, which are prohibitively expensive to apply in large-scale scenarios. Our goal is to design an efficient framework for restoring users' facial data in VR meetings. Specifically, we first build a new dataset, named Facial Image Composition (FIC) data, which approximates the real HMC images from a VR headset. By leveraging the heterogeneity of the HMC images, we decompose the restoration problem into a local geometry transformation and a global color/style fusion. We then propose a lightweight 2D facial image composition network (FIC-Net), in which three independent local models transform the raw HMC patches and a global model fuses the transformed patches with a pre-recorded reference image. Finally, we also propose a stage-wise training strategy to improve the generalization of our FIC-Net. We have validated the effectiveness of the proposed FIC-Net through extensive experiments.
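The abstract describes a two-part decomposition: three independent local models that each transform one raw HMC patch (e.g., left eye, right eye, mouth), and a global model that fuses the transformed patches with a pre-recorded reference image. The PyTorch sketch below illustrates only that decomposition; the layer sizes, patch regions, paste-on-canvas fusion scheme, and all names (LocalTransform, GlobalFusion, FICNetSketch) are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the local-transform + global-fusion decomposition
# described in the abstract. All architectural details are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 conv + ReLU, preserving spatial resolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class LocalTransform(nn.Module):
    """One per-patch model: maps a raw HMC patch to a frontalized RGB patch."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, ch),
            conv_block(ch, ch),
            nn.Conv2d(ch, 3, kernel_size=3, padding=1),
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.net(patch)


class GlobalFusion(nn.Module):
    """Fuses the three transformed patches with the reference image."""

    def __init__(self, ch: int = 64):
        super().__init__()
        # Input: reference (3 ch) + three patch canvases (3 ch each) = 12 ch.
        self.net = nn.Sequential(
            conv_block(12, ch),
            conv_block(ch, ch),
            nn.Conv2d(ch, 3, kernel_size=3, padding=1),
        )

    def forward(self, reference, canvases):
        return self.net(torch.cat([reference, *canvases], dim=1))


class FICNetSketch(nn.Module):
    """Three independent local models followed by a global fusion model."""

    def __init__(self):
        super().__init__()
        self.local_models = nn.ModuleList(LocalTransform() for _ in range(3))
        self.fusion = GlobalFusion()

    def forward(self, reference, patches, boxes):
        """reference: (B,3,H,W); patches: 3 tensors of (B,3,h,w);
        boxes: 3 (top, left) offsets for pasting each transformed patch."""
        canvases = []
        for local, patch, (top, left) in zip(self.local_models, patches, boxes):
            out = local(patch)
            # Paste the transformed patch onto a blank reference-sized canvas.
            canvas = torch.zeros_like(reference)
            canvas[:, :, top:top + out.shape[2], left:left + out.shape[3]] = out
            canvases.append(canvas)
        return self.fusion(reference, canvases)


if __name__ == "__main__":
    model = FICNetSketch()
    ref = torch.randn(1, 3, 128, 128)
    patches = [torch.randn(1, 3, 32, 32) for _ in range(3)]
    boxes = [(20, 16), (20, 80), (80, 48)]  # hypothetical eye/eye/mouth offsets
    print(model(ref, patches, boxes).shape)  # torch.Size([1, 3, 128, 128])
```

One plausible reading of the stage-wise training strategy is to first optimize each local model on its patch transformation and then train the global fusion with the local models frozen; the abstract does not specify the actual schedule.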