Academic Journal

Task Scheduling With Multicore Edge Computing in Dense Small Cell Networks
Document Type
Periodical
Source
IEEE Access, 9:141223-141232, 2021
Subject
Aerospace
Bioengineering
Communication, Networking and Broadcast Technologies
Components, Circuits, Devices and Systems
Computing and Processing
Engineered Materials, Dielectrics and Plasmas
Engineering Profession
Fields, Waves and Electromagnetics
General Topics for Engineers
Geoscience
Nuclear Engineering
Photonics and Electrooptics
Power, Energy and Industry Applications
Robotics and Control Systems
Signal Processing and Analysis
Transportation
Task analysis
Servers
Multicore processing
Edge computing
Processor scheduling
Loading
Delays
Task scheduling
edge computing
multicore
Language
English
ISSN
2169-3536
Abstract
As a complement to cloud computing, edge computing is a computing paradigm designed for low-latency applications. Edge servers, deployed at the boundary of the Internet, bridge distributed end devices and the centralized cloud server, forming an architecture with low latency and balanced loads. Careful task scheduling, including task assignment and processor dispatching, is essential to the success of edge computing systems in dense small cell networks. Many factors must be considered, such as the servers' computing power, storage capacity, load, and bandwidth, and the tasks' sizes, delays, and partitionability. This study addresses task scheduling for multicore edge computing environments. We first show that this scheduling problem is NP-hard. An efficient and effective heuristic is then proposed to tackle it. Our Multicore Task assignment for maximum Rewards (MAR) scheme differs from most previous schemes in jointly considering three critical factors: task partitionability, multicore processing, and task properties. A task's priority is decided by its cost function, which takes into account the task's size, deadline, and partitionability, as well as the cores' loads and processing power. First, tasks from end devices are assigned to edge servers based on the servers' loads and storage. Next, tasks are assigned to the cores of the selected server. Simulations compare the proposed scheme with First-Come-First-Serve (FCFS), Shortest Task First (STF), Delay Priority Scheduling (DPS), and the Green Greedy Algorithm (GGA). They demonstrate that the task completion ratio can be significantly increased and the number of aborted tasks greatly reduced. Compared with FCFS, STF, DPS, and GGA, the improvement in task completion ratio for hotspots is up to 26%, 25%, 22%, and 9%, respectively.
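The abstract describes the MAR heuristic only at a high level: a cost-based task priority followed by a two-stage assignment (server selection, then per-core dispatching). The Python sketch below illustrates that two-stage flow under assumptions of our own; the cost function, the deadline-first ordering, and the even split of partitionable tasks across cores are illustrative placeholders, not the paper's actual formulas.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    size: float          # computation demand (e.g., CPU cycles)
    deadline: float      # latest acceptable completion time
    partitionable: bool  # whether the task can be split across cores

@dataclass
class Core:
    speed: float         # processing power (cycles per time unit)
    load: float = 0.0    # time until the core becomes free

@dataclass
class EdgeServer:
    storage: float                  # remaining storage capacity
    cores: List[Core] = field(default_factory=list)

    @property
    def load(self) -> float:
        return sum(c.load for c in self.cores)

def cost(task: Task, server: EdgeServer) -> float:
    """Hypothetical cost: lower when the server is lightly loaded and fast
    relative to the task's size and deadline; partitionable tasks are
    favored because they can spread over idle cores."""
    speed = sum(c.speed for c in server.cores)
    est_finish = server.load / len(server.cores) + task.size / speed
    slack = task.deadline - est_finish
    penalty = 0.5 if task.partitionable else 1.0
    return penalty * task.size / max(slack, 1e-9)

def schedule(tasks: List[Task], servers: List[EdgeServer]) -> None:
    # Stage 1: assign each task to the feasible server with the lowest cost,
    # respecting storage. Stage 2: dispatch the task to that server's
    # least-loaded core, or split it across cores when it is partitionable.
    for task in sorted(tasks, key=lambda t: t.deadline):       # earlier deadlines first
        feasible = [s for s in servers if s.storage >= task.size]
        if not feasible:
            continue                                            # task aborted
        server = min(feasible, key=lambda s: cost(task, s))
        server.storage -= task.size
        if task.partitionable:
            share = task.size / len(server.cores)
            for core in server.cores:                           # split evenly across cores
                core.load += share / core.speed
        else:
            core = min(server.cores, key=lambda c: c.load)      # least-loaded core
            core.load += task.size / core.speed
```

The point of the sketch is only the two-stage structure; in the paper's scheme the cost function reflects additional server and task properties not modeled here.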