Journal Article

Multi-Agent Collaborative Inference via DNN Decoupling: Intermediate Feature Compression and Edge Learning
Document Type
Periodical
Source
IEEE Transactions on Mobile Computing, 22(10):6041-6055, Oct. 2023
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Signal Processing and Analysis
Keywords
Servers
Collaboration
Quantization (signal)
Task analysis
Optimization
Training
Computational modeling
Deep reinforcement learning
mobile edge computing
multi-user
collaborative inference
hybrid action space
Language
English
ISSN
Print ISSN: 1536-1233
Electronic ISSN: 1558-0660
CD-ROM ISSN: 2161-9875
Abstract
Recently, deploying deep neural network (DNN) models via collaborative inference, which splits a pre-trained model into two parts executed on the user equipment (UE) and the edge server respectively, has become attractive. However, the large intermediate features of DNNs impede flexible decoupling, and existing approaches either focus on the single-UE scenario or define tasks simply by their required CPU cycles, ignoring the indivisibility of a single DNN layer. In this article, we study the multi-agent collaborative inference scenario, in which a single edge server coordinates the inference of multiple UEs. Our goal is to achieve fast and energy-efficient inference for all UEs. To this end, we first design a lightweight autoencoder-based method to compress the large intermediate features. We then define tasks according to the inference overhead of DNNs and formulate the problem as a Markov decision process (MDP). Finally, we propose a multi-agent hybrid proximal policy optimization (MAHPPO) algorithm to solve the optimization problem with a hybrid action space. Extensive experiments with different types of networks show that our method reduces inference latency by up to 56% and energy consumption by up to 72%.
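
For readers unfamiliar with DNN decoupling, the sketch below illustrates the general idea in PyTorch: a pre-trained model is split at a layer boundary, and a lightweight autoencoder shrinks the intermediate feature before it is sent to the edge server. The backbone (ResNet-18), the split index, and the bottleneck width are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of DNN decoupling with autoencoder-based feature
# compression. The ResNet-18 backbone, split index, and bottleneck
# width are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)
layers = list(model.children())
split = 5                                  # hypothetical split point
ue_part = nn.Sequential(*layers[:split])   # runs on the user equipment
server_part = nn.Sequential(*layers[split:-1], nn.Flatten(), layers[-1])

class FeatureAutoencoder(nn.Module):
    """Lightweight 1x1-conv autoencoder that reduces the channel
    dimension of the intermediate feature before transmission."""
    def __init__(self, channels: int, bottleneck: int):
        super().__init__()
        self.encoder = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.decoder = nn.Conv2d(bottleneck, channels, kernel_size=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(1, 3, 224, 224)            # one input image on the UE
feat = ue_part(x)                          # intermediate feature tensor
ae = FeatureAutoencoder(channels=feat.shape[1], bottleneck=8)
code = ae.encoder(feat)                    # compressed tensor sent uplink
recon = ae.decoder(code)                   # server reconstructs the feature
logits = server_part(recon)                # server finishes the inference
print(feat.numel(), "->", code.numel(), "elements transmitted")
```

In practice the autoencoder would be trained to minimize reconstruction (or task) loss at the chosen split point; the 1x1-conv bottleneck here simply shows why the uplink payload shrinks with the channel count.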
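
The "hybrid action space" mentioned in the abstract pairs a discrete choice (e.g., which layer to split at) with a continuous one (e.g., a resource level). The sketch below shows one common way to factor such a policy; the observation size, action names, and network shapes are assumptions, and this is not the paper's MAHPPO update itself.

```python
# Minimal sketch of a hybrid-action policy head, assuming each agent
# selects a discrete split point and a continuous transmit-power level.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class HybridPolicy(nn.Module):
    def __init__(self, obs_dim: int, num_splits: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())
        self.split_head = nn.Linear(64, num_splits)  # discrete: split point
        self.power_mu = nn.Linear(64, 1)             # continuous: power mean
        self.power_log_std = nn.Parameter(torch.zeros(1))

    def forward(self, obs):
        h = self.backbone(obs)
        split_dist = Categorical(logits=self.split_head(h))
        power_dist = Normal(self.power_mu(h), self.power_log_std.exp())
        return split_dist, power_dist

policy = HybridPolicy(obs_dim=10, num_splits=6)
obs = torch.randn(1, 10)                   # hypothetical per-UE observation
split_dist, power_dist = policy(obs)
split, power = split_dist.sample(), power_dist.sample()
# PPO-style joint log-prob: sum of the two factored log-probabilities.
log_prob = split_dist.log_prob(split) + power_dist.log_prob(power).sum(-1)
```

Factoring the joint distribution this way lets a PPO-style objective use a single log-probability ratio even though the action mixes discrete and continuous components.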