Academic Article

SDSR: Optimizing Metaverse Video Streaming via Saliency-Driven Dynamic Super-Resolution
Document Type
Periodical
Source
IEEE Journal on Selected Areas in Communications, 42(4):978-989, Apr. 2024
Subject
Communication, Networking and Broadcast Technologies
Streaming media
Adaptation models
Quality of experience
Super-resolution
Bandwidth
Computational modeling
Metaverse
360-degree video streaming
bitrate adaptation
Language
English
ISSN
0733-8716 (Print)
1558-0008 (Electronic)
Abstract
Metaverse (especially 360-degree) video streaming allows virtual events in the metaverse to be broadcast to a broad audience. To reduce the huge bandwidth consumption, quite a few super-resolution (SR)-enhanced 360-degree video streaming systems have been proposed. However, there is very limited work investigating how the granularity of SR models affects system performance, or how to choose a proper SR model for different video contents under diverse environmental conditions. In this paper, we first conduct a dedicated measurement study to unveil the impact of SR models of different granularities. We find that the scene of a video largely determines the effectiveness of SR models of different granularities. Based on these observations, we propose SDSR, a novel 360-degree video streaming framework with saliency-driven dynamic super-resolution. To maximize user QoE, we formulate an optimization problem and adopt model predictive control (MPC) theory for bitrate adaptation and SR model selection. To improve the effectiveness of the SR models, we leverage saliency information, which reflects users' viewing interests well, for model training. In addition, we reuse an SR model for similar chunks, exploiting the temporal redundancy of a video. Finally, we conduct extensive experiments on real traces; the results show that SDSR outperforms state-of-the-art algorithms with an improvement of up to 32.78% in average QoE.
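
To make the MPC-based decision step concrete, below is a minimal, illustrative Python sketch of joint bitrate and SR-model selection over a look-ahead horizon. It is not the paper's implementation: the candidate bitrates, SR models, QoE weights, and the sr_gain/sr_delay figures are all hypothetical placeholders, and the QoE terms (quality + smoothness - rebuffering) follow the generic MPC adaptation formulation rather than SDSR's exact objective.

import itertools

BITRATES = [1.0, 2.5, 5.0, 8.0]        # Mbps; hypothetical candidate chunk bitrates
SR_MODELS = ["none", "light", "heavy"]  # hypothetical SR granularity choices
HORIZON = 3                             # look-ahead window, in chunks
CHUNK_SEC = 2.0                         # chunk duration in seconds

def sr_gain(bitrate, model):
    # Hypothetical quality uplift from SR; larger models help low bitrates more.
    return {"none": 0.0, "light": 0.6, "heavy": 1.2}[model] * (8.0 / (bitrate + 8.0))

def sr_delay(model):
    # Hypothetical per-chunk SR inference time (seconds).
    return {"none": 0.0, "light": 0.3, "heavy": 0.9}[model]

def plan_qoe(plan, throughput, buffer, prev_quality):
    # Predicted QoE of a horizon-long plan: quality + smoothness - rebuffering.
    qoe = 0.0
    for bitrate, model in plan:
        download = bitrate * CHUNK_SEC / throughput + sr_delay(model)
        rebuffer = max(0.0, download - buffer)
        buffer = max(buffer - download, 0.0) + CHUNK_SEC
        quality = bitrate + sr_gain(bitrate, model)
        qoe += quality - 4.3 * rebuffer - abs(quality - prev_quality)
        prev_quality = quality
    return qoe

def mpc_decide(throughput_est, buffer, prev_quality):
    # Enumerate all (bitrate, SR model) plans over the horizon; keep the best,
    # apply only its first decision, then re-plan at the next chunk boundary.
    actions = list(itertools.product(BITRATES, SR_MODELS))
    best = max(itertools.product(actions, repeat=HORIZON),
               key=lambda p: plan_qoe(p, throughput_est, buffer, prev_quality))
    return best[0]

print(mpc_decide(throughput_est=3.0, buffer=4.0, prev_quality=2.5))

Exhaustive enumeration is tractable here (12 actions over a 3-chunk horizon gives 1,728 plans); a real system would also feed in a throughput predictor and, per the paper, scene/saliency information to narrow the SR-model candidates.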