Academic Paper

Light Field Synthesis from a Monocular Video Using Neural Radiance Fields
Document Type
Conference
Source
2024 International Conference on Electronics, Information, and Communication (ICEIC), pp. 1-4, Jan. 2024
Subject
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Fields, Waves and Electromagnetics
Photonics and Electrooptics
Keywords
Video games
Roads
Media
Rendering (computer graphics)
Light fields
Light field
monocular video
neural radiance field
Language
ISSN
2767-7699
Abstract
The light field, which captures directional light rays, has garnered substantial interest owing to the growing demand for view synthesis in immersive media and to recent advances in deep learning. However, existing light field synthesis methods focus on generating views within a limited baseline, i.e., the distance between sub-aperture images (SAIs). In this paper, we propose a novel method to compose a light field with an expanded baseline using successive frames of a monocular video. We create a synthetic, wide-baseline light field dataset derived from a video game using photorealistic rendering; it consists of continuous light field frames and depth maps of the central SAIs. The proposed network comprises two key steps: a preprocessing step that generates visible SAIs from RGBD images, and a synthesis step that constructs a Neural Radiance Field (NeRF) with RGBD supervision.
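The preprocessing step described in the abstract generates SAIs from an RGBD central view. A minimal sketch of one common way to do this is depth-based forward warping under horizontal parallax, where per-pixel disparity is d = B·f / z (baseline times focal length over depth). The function name, the simple pixel splatting, and the horizontal-only shift are illustrative assumptions, not the paper's actual implementation; occluded regions are left as holes, which a real pipeline would inpaint or mark invisible.

```python
import numpy as np

def warp_to_sai(rgb, depth, baseline, focal):
    """Forward-warp a central RGBD view to a horizontally shifted
    sub-aperture image (SAI) using per-pixel disparity d = B*f/z.
    Pixels that land outside the frame are dropped; regions no source
    pixel maps to remain zero (disocclusion holes). Illustrative sketch,
    not the paper's method."""
    h, w, _ = rgb.shape
    sai = np.zeros_like(rgb)
    # Integer disparity in pixels for each source pixel.
    disp = np.round(baseline * focal / depth).astype(int)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = xs - disp                      # target column in the shifted view
    valid = (xt >= 0) & (xt < w)        # keep only in-bounds targets
    sai[ys[valid], xt[valid]] = rgb[valid]
    return sai
```

With a constant depth map the warp reduces to a uniform horizontal shift of the image, which makes the behavior easy to verify by eye.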