Journal Article

On the Feasibility of Using Current Data Centre Infrastructure for Latency-Sensitive Applications
Document Type
Periodical
Source
IEEE Transactions on Cloud Computing, 8(3):875-888, Sep. 2020
Subject
Computing and Processing
Communication, Networking and Broadcast Technologies
Data centers
Cloud computing
Time factors
Data models
Edge computing
Servers
Extraterrestrial measurements
edge computing
resource management
quality of service
low-latency
measurement
optimization
Language
English
ISSN
2168-7161
2372-0018
Abstract
It has been claimed that the deployment of fog and edge computing infrastructure is a necessity to make high-performance cloud-based applications a possibility. However, there are a large number of middle-ground latency-sensitive applications such as online gaming, interactive photo editing and multimedia conferencing that require servers deployed closer to users than in globally centralised clouds but do not necessarily need the extreme low latency provided by a new infrastructure of micro data centres located at the network edge, e.g., in base stations and ISP Points of Presence. In this paper we analyse a snapshot of today's data centres and the distribution of users around the globe and conclude that existing infrastructure provides a sufficiently distributed platform for middle-ground applications requiring a response time of 20-200 ms. However, while placement and selection of edge servers for extreme low-latency applications is a relatively straightforward matter of choosing the closest, providing a high quality of experience for middle-ground latency applications that use the more widespread distribution of today's data centres, as we advocate in this paper, raises new management challenges: developing algorithms for optimising the placement of, and the per-request selection between, replicated service instances.