Academic Article

Fast Context Adaptation in Cost-Aware Continual Learning
Document Type
Periodical
Source
IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 479-494, 2024
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Costs
Resource management
Training
Task analysis
Quality of service
Simulation
Data models
Resource allocation
reinforcement learning
cost of learning
continual learning
meta-learning
mobile edge computing
Language
English
ISSN
2831-316X
Abstract
In the past few years, Deep Reinforcement Learning (DRL) has become a valuable solution to automatically learn efficient resource management strategies in complex networks with time-varying statistics. However, the increased complexity of 5G and Beyond networks requires correspondingly more complex learning agents, and the learning process itself may end up competing with users for communication and computational resources. This creates friction: on the one hand, the learning process needs resources to quickly converge to an effective strategy; on the other hand, it must be efficient, i.e., draw as few resources as possible from the users’ data plane, so as not to throttle their Quality of Service (QoS). In this paper, we investigate this trade-off, which we refer to as the cost of learning, and propose a dynamic strategy to balance the resources assigned to the data plane and those reserved for learning. With the proposed approach, a learning agent can quickly converge to an efficient resource allocation strategy and adapt to changes in the environment, in line with the Continual Learning (CL) paradigm, while minimizing the impact on users’ QoS. Simulation results show that the proposed method outperforms static allocation methods with minimal learning overhead, almost reaching the performance of an ideal out-of-band CL solution.
