Academic Journal Article

Fine Detailed Texture Learning for 3D Meshes With Generative Models
Document Type
Periodical
Source
IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12):14563-14574, Dec. 2023
Subject
Computing and Processing
Bioengineering
Three-dimensional displays
Solid modeling
Cameras
Adaptation models
Geometry
Generative adversarial networks
Image reconstruction
3D texture learning
generative adversarial networks
3D reconstruction
Language
English
ISSN
0162-8828
2160-9292
1939-3539
Abstract
This paper presents a method to achieve fine detailed texture learning for 3D models that are reconstructed from both multi-view and single-view images. The framework is posed as an adaptation problem and proceeds progressively: in the first stage we focus on learning accurate geometry, and in the second stage we focus on learning the texture with a generative adversarial network. The contributions of the paper lie in the generative learning pipeline, where we propose two improvements. First, since the learned textures should be spatially aligned, we propose an attention mechanism that relies on learnable pixel positions. Second, since the discriminator receives aligned texture maps, we augment its input with a learnable embedding, which improves the feedback to the generator. We achieve significant improvements on multi-view sequences from the Tripod dataset as well as on the single-view image datasets Pascal 3D+ and CUB. We demonstrate that our method produces superior 3D textured models compared to previous works.
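
The following is a minimal, hypothetical PyTorch sketch of the two improvements the abstract describes: self-attention over texture pixels driven by a learnable position embedding, and a discriminator whose input texture map is concatenated with a learnable embedding. All module names, tensor shapes, and hyperparameters here are illustrative assumptions, not the paper's released code or exact architecture.

# Hypothetical sketch of the two ideas named in the abstract; names and
# shapes are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class PositionalTextureAttention(nn.Module):
    """Self-attention over UV texture pixels with a learnable position code."""

    def __init__(self, channels: int, uv_size: int, heads: int = 4):
        super().__init__()
        # One learnable position vector per UV pixel (assumed parameterization).
        self.pos = nn.Parameter(torch.zeros(uv_size * uv_size, channels))
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, tex: torch.Tensor) -> torch.Tensor:
        # tex: (B, C, H, W) texture features in a spatially aligned UV space.
        b, c, h, w = tex.shape
        tokens = tex.flatten(2).transpose(1, 2)      # (B, H*W, C)
        q = k = tokens + self.pos.unsqueeze(0)       # inject learnable positions
        out, _ = self.attn(q, k, tokens)
        return out.transpose(1, 2).reshape(b, c, h, w)

class EmbeddingAugmentedDiscriminator(nn.Module):
    """Discriminator that sees the texture map plus a learnable embedding map."""

    def __init__(self, in_channels: int, embed_channels: int = 8, uv_size: int = 64):
        super().__init__()
        # Learnable embedding concatenated to every input (assumed shape).
        self.embed = nn.Parameter(torch.zeros(1, embed_channels, uv_size, uv_size))
        self.net = nn.Sequential(
            nn.Conv2d(in_channels + embed_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, 1),                       # real/fake logit
        )

    def forward(self, tex: torch.Tensor) -> torch.Tensor:
        e = self.embed.expand(tex.size(0), -1, -1, -1)
        return self.net(torch.cat([tex, e], dim=1))

if __name__ == "__main__":
    tex = torch.randn(2, 32, 64, 64)                 # toy texture features
    feats = PositionalTextureAttention(32, 64)(tex)
    logit = EmbeddingAugmentedDiscriminator(32)(feats)
    print(feats.shape, logit.shape)                  # (2, 32, 64, 64), (2, 1)

In this reading, the learnable pixel positions let the attention exploit the spatial alignment of the generated texture maps, while the learnable embedding gives the discriminator a shared spatial reference intended to improve its feedback to the generator.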