Academic Paper

A Learning-Based Method for Conditioning Neural Light Fields From Limited Inputs
Document Type
Periodical
Author
Source
IEEE Sensors Journal, 24(7):10983-10992, Apr. 2024
Subject
Signal Processing and Analysis
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Robotics and Control Systems
Rendering (computer graphics)
Light fields
Three-dimensional displays
Feature extraction
Image color analysis
Training
Cameras
Convolutional neural network (CNN)
light field network
neural radiance field (NeRF)
novel view synthesis
Language
ISSN
1530-437X
1558-1748
2379-9153
Abstract
This work proposes a novel approach for few-shot novel view synthesis based on a neural light field representation. The method leverages an implicit neural network that maps each ray directly to the color of its target pixel for a given target camera pose. This implicit network is conditioned on local ray features generated by coarse volumetric rendering from an explicit feature volume, which is built from the input images using convolutional neural networks (CNNs). Conditioning the network on local ray features enables it to generalize well to novel views of both seen and unseen scenes from sparse inputs. Moreover, using a light field network reduces the computational cost while still allowing the network to learn complex relationships between input views and target views. The approach achieves competitive performance across datasets captured by camera sensors, including LLFF data, synthetic neural radiance field (NeRF) data, and real multiview stereo (DTU) data, while offering much faster rendering speed than the baselines.
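The conditioning scheme described in the abstract can be sketched conceptually: a local feature is aggregated along each ray by coarsely sampling an explicit feature volume, then concatenated with the ray parameters and fed to an implicit network that outputs a pixel color directly. The snippet below is a minimal illustrative sketch under assumed shapes and a tiny random MLP; the volume resolution, channel count, sampling scheme, and network are all placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Explicit feature volume; in the paper this would be built from the
# input images by CNN encoders (shapes here are assumptions).
V = 8          # volume resolution per axis
C = 16         # feature channels
volume = rng.standard_normal((V, V, V, C))

def sample_volume(points):
    """Nearest-neighbor feature lookup at 3-D points in [0, 1)^3."""
    idx = np.clip((points * V).astype(int), 0, V - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]]          # (N, C)

def ray_feature(origin, direction, n_samples=4):
    """Coarse volumetric aggregation: average features along the ray."""
    t = np.linspace(0.1, 0.9, n_samples)[:, None]           # sample depths
    points = np.clip(origin + t * direction, 0.0, 1.0 - 1e-6)
    return sample_volume(points).mean(axis=0)               # (C,)

# Tiny random MLP standing in for the implicit light field network.
W1 = rng.standard_normal((6 + C, 32)) * 0.1
W2 = rng.standard_normal((32, 3)) * 0.1

def light_field(origin, direction):
    """Map one ray (plus its local feature) straight to an RGB color."""
    feat = ray_feature(origin, direction)
    x = np.concatenate([origin, direction, feat])           # condition the ray
    h = np.tanh(x @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))                  # sigmoid -> [0, 1]

color = light_field(np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0]))
print(color.shape)
```

Because the network predicts a pixel color per ray in a single query, rendering avoids the many per-ray samples of a full NeRF evaluation, which is the source of the speedup the abstract claims.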