Journal Article

Animatable Virtual Humans: Learning Pose-Dependent Human Representations in UV Space for Interactive Performance Synthesis
Document Type
Periodical
Source
IEEE Transactions on Visualization and Computer Graphics, 30(5):2644-2650, May 2024
Subject
Computing and Processing
Bioengineering
Signal Processing and Analysis
Rendering (computer graphics)
Geometry
Computational modeling
Shape
Real-time systems
X reality
Three-dimensional displays
Virtual Humans
Real-time Animation
Generative Neural Networks
Volumetric Video
XR Applications
Language
English
ISSN
Print ISSN: 1077-2626
Electronic ISSN: 1941-0506
CD-ROM ISSN: 2160-9306
Abstract
We propose a novel representation of virtual humans for highly realistic real-time animation and rendering in 3D applications. We learn pose-dependent appearance and geometry from highly accurate dynamic mesh sequences obtained by state-of-the-art multi-view video reconstruction. Learning pose-dependent appearance and geometry from mesh sequences poses significant challenges, as it requires the network to capture the intricate shape and articulated motion of the human body. However, statistical body models such as SMPL provide valuable a priori knowledge, which we leverage to constrain the dimensionality of the search space, enabling more efficient and targeted learning, and to define pose dependency. Instead of directly learning absolute pose-dependent geometry, we learn the difference between the observed geometry and the fitted SMPL model. This allows us to encode both pose-dependent appearance and geometry in the consistent UV space of the SMPL model. This approach not only ensures a high level of realism but also facilitates streamlined processing and rendering of virtual humans in real-time scenarios.
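
The abstract's core idea, storing geometry as a residual relative to a fitted SMPL mesh in SMPL's shared UV parameterization, can be illustrated with a short sketch. The Python below is not the authors' code: the function name, the map resolution, and the nearest-texel splat are illustrative assumptions, and it presumes the observed mesh has already been registered to SMPL topology (same vertex count) with per-vertex UV coordinates available. It computes per-vertex displacements between the observed and fitted meshes and scatters them into a UV-space displacement map; a real pipeline would rasterize residuals over faces and feed such maps, alongside UV-space texture, to a pose-conditioned network.

# Minimal sketch (assumed names and resolution), not the paper's implementation.
import numpy as np

NUM_SMPL_VERTS = 6890  # vertex count of the SMPL template mesh

def residual_uv_map(observed_verts: np.ndarray,
                    smpl_verts: np.ndarray,
                    uv_coords: np.ndarray,
                    resolution: int = 256) -> np.ndarray:
    """Rasterize per-vertex geometry residuals into a UV-space map.

    observed_verts, smpl_verts: (V, 3) arrays in the same coordinate frame,
        with the observed mesh registered to SMPL topology.
    uv_coords: (V, 2) per-vertex UV coordinates in [0, 1].
    Returns a (resolution, resolution, 3) displacement map; texels not hit by
    any vertex stay zero (a full pipeline would rasterize over faces instead).
    """
    # The residual, not the absolute geometry, is what gets learned/stored.
    residuals = observed_verts - smpl_verts
    # Nearest-texel splat of each vertex residual into the UV map.
    uv_px = np.clip((uv_coords * (resolution - 1)).astype(int),
                    0, resolution - 1)
    disp_map = np.zeros((resolution, resolution, 3), dtype=np.float32)
    disp_map[uv_px[:, 1], uv_px[:, 0]] = residuals
    return disp_map

# Toy usage with random data standing in for a real capture:
rng = np.random.default_rng(0)
smpl_verts = rng.normal(size=(NUM_SMPL_VERTS, 3)).astype(np.float32)
observed = smpl_verts + 0.01 * rng.normal(size=(NUM_SMPL_VERTS, 3)).astype(np.float32)
uvs = rng.random((NUM_SMPL_VERTS, 2)).astype(np.float32)
print(residual_uv_map(observed, smpl_verts, uvs).shape)  # (256, 256, 3)

Encoding both appearance and geometry residuals in the same 2D UV space is what makes the approach convenient in practice: both signals become image-like maps over a consistent parameterization, so standard 2D network architectures and texture-based rendering pipelines can handle them jointly in real time, as the abstract notes.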