Academic Paper

VisionScaling: Dynamic Deep Learning Model and Resource Scaling in Mobile Vision Applications
Document Type
Periodical
Source
IEEE Internet of Things Journal, 11(9):15523-15539, May 2024
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Computational modeling
Mobile handsets
Adaptation models
Graphics processing units
Vehicle dynamics
Servers
Energy consumption
Computation offloading
deep learning
dynamic voltage and frequency scaling (DVFS)
mobile vision service
model scaling
online convex optimization (OCO)
Language
English
ISSN
2327-4662
2372-2541
Abstract
As deep learning technology advances, mobile vision applications such as augmented reality (AR) and autonomous vehicles are becoming prevalent. The performance of such services depends heavily on the computing capabilities of heterogeneous mobile devices, dynamic service requests, the stochastic mobile network environment, and the learning models. Existing studies have optimized mobile resource allocation and learning model design independently, each side treating the other's parameters and computing/network resources as given. Moreover, they cannot reflect realistic mobile environments, since the time-varying wireless channel and service requests are assumed to follow specific distributions. Without these unrealistic assumptions, we propose VisionScaling, an algorithm that jointly optimizes learning models and processing/network resources by adapting to system dynamics, leveraging the state-of-the-art online convex optimization (OCO) framework. In every time slot, VisionScaling jointly decides 1) the learning model and the size of the input layer on the learning side and 2) the GPU clock frequency, the transmission rate, and the computation offloading policy on the resource side. We theoretically show that VisionScaling asymptotically converges to the offline optimal performance, satisfying sublinearity. Moreover, through real trace-driven simulations, we demonstrate that VisionScaling reduces dynamic regret, which captures energy consumption and processed frames per second (PFPS) under a mean average precision (mAP) constraint, by at least 24%. Finally, we show that VisionScaling attains 30.8% energy savings and improves PFPS by 39.7% while satisfying the target mAP on a testbed with an Nvidia Jetson TX2 and an edge server equipped with a high-end GPU.
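To make the OCO framework referenced in the abstract concrete, the following is a minimal, hypothetical sketch of online gradient descent tracking time-varying convex losses, with dynamic regret measured against the per-slot optima. The quadratic loss family, the function names, and all parameters are assumptions for illustration only; this is not the paper's VisionScaling algorithm, merely the generic OCO mechanism it builds on.

```python
# Illustrative OCO sketch (hypothetical, not the paper's algorithm):
# projected online gradient descent on losses f_t(x) = (x - theta_t)^2,
# where theta_t is the (unknown-in-advance) per-slot optimum.
import math

def ogd_dynamic_regret(targets, eta=0.1, lo=0.0, hi=1.0):
    """Run projected OGD over the box [lo, hi] and return dynamic regret,
    i.e., sum_t f_t(x_t) - sum_t min_x f_t(x) (the minimum is 0 here)."""
    x = (lo + hi) / 2.0                       # initial decision
    regret = 0.0
    for theta in targets:
        regret += (x - theta) ** 2            # loss incurred at current decision
        grad = 2.0 * (x - theta)              # gradient of f_t at x
        x = min(hi, max(lo, x - eta * grad))  # projected gradient step
    return regret

# Slowly drifting per-slot optima: when the comparator sequence moves slowly,
# dynamic regret stays small relative to the horizon T (sublinear growth).
T = 1000
targets = [0.5 + 0.3 * math.sin(2 * math.pi * t / T) for t in range(T)]
print(ogd_dynamic_regret(targets))
```

In the paper's setting, the decision variable would instead be the vector of learning-model, input-size, GPU-frequency, transmission-rate, and offloading choices, and the loss would capture energy and PFPS under the mAP constraint.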