Academic Article

Federated Multi-View Synthesizing for Metaverse
Document Type
Periodical
Source
IEEE Journal on Selected Areas in Communications, 42(4):867-879, Apr. 2024
Subject
Communication, Networking and Broadcast Technologies
Wireless communication
Data models
Computational modeling
Training
Streaming media
Federated learning
Transfer learning
Metaverse
virtual reality
multi-view synthesizing
federated learning
deep learning
Language
English
ISSN
0733-8716 (print)
1558-0008 (electronic)
Abstract
The metaverse is expected to provide immersive entertainment, education, and business applications. However, virtual reality (VR) transmission over wireless networks is data- and computation-intensive, making it critical to introduce novel solutions that meet stringent quality-of-service requirements. With recent advances in edge intelligence and deep learning, we have developed a novel multi-view synthesizing framework that can efficiently provide computation, storage, and communication resources for wireless content delivery in the metaverse. We propose a three-dimensional (3D)-aware generative model that uses collections of single-view images. These single-view images are transmitted to a group of users with overlapping fields of view, which avoids massive content transmission compared to transmitting tiles or whole 3D models. We then present a federated learning approach to guarantee an efficient learning process. The training performance can be improved by characterizing the vertical and horizontal data samples with a large latent feature space, while low-latency communication can be achieved with a reduced number of transmitted parameters during federated learning. We also propose a federated transfer learning framework to enable fast domain adaptation to different target domains. Simulation results have demonstrated the effectiveness of our proposed federated multi-view synthesizing framework for VR content delivery.
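The abstract's claim that low-latency communication is achieved "with a reduced number of transmitted parameters during federated learning" can be pictured with a minimal federated-averaging sketch in which only a designated slice of each client's weights is uplinked and averaged, while the rest stays on-device. Everything below (the quadratic local objective, the SHARED/PRIVATE split, the client count) is a hypothetical illustration under assumed names, not the paper's actual 3D-aware generative model or training procedure.

```python
# Minimal sketch: federated averaging over a SUBSET of parameters,
# illustrating reduced-payload communication rounds. Hypothetical
# stand-in for the paper's federated multi-view training.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
SHARED = slice(0, 8)    # first half is aggregated at the server each round
# second half stays private on-device, halving the uplink payload

# Each client k minimizes ||w - t_k||^2 on its own target t_k,
# a stand-in for local training on that client's single-view images.
targets = [rng.normal(size=DIM) for _ in range(4)]
weights = [np.zeros(DIM) for _ in targets]

def local_step(w, t, lr=0.25):
    """One gradient step on the client's local quadratic loss."""
    w -= lr * 2.0 * (w - t)
    return float(np.sum((w - t) ** 2))

for rnd in range(5):
    losses = [local_step(w, t) for w, t in zip(weights, targets)]
    # Server aggregates ONLY the shared slice (partial FedAvg):
    # half the traffic of exchanging the full parameter vector.
    avg = np.mean([w[SHARED] for w in weights], axis=0)
    for w in weights:
        w[SHARED] = avg
    print(f"round {rnd}: mean local loss {np.mean(losses):.4f}")
```

Under this toy objective the shared slice converges toward the mean of the clients' targets while each private slice fits its own client, so the sketch also hints at how per-client (e.g., per-view or per-domain) components can coexist with globally aggregated ones, the same division of labor the abstract describes for its federated and federated transfer learning frameworks.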