Journal Article

A Better Model for Job Redundancy: Decoupling Server Slowdown and Job Size
Document Type
Periodical
Source
IEEE/ACM Transactions on Networking, 25(6):3353-3367, Dec. 2017
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Signal Processing and Analysis
Servers
Redundancy
Time factors
Analytical models
Computational modeling
Runtime
Queueing analysis
Stochastic systems
Language
English
ISSN
1063-6692 (print)
1558-2566 (electronic)
Abstract
Recent computer systems research has proposed using redundant requests to reduce latency. The idea is to replicate a request so that it joins the queue at multiple servers; the request is considered complete as soon as any one of its copies completes. Redundancy allows us to overcome server-side variability (the fact that a server might be temporarily slow due to factors such as background load, network interrupts, or garbage collection) to reduce response time. In the past few years, queueing theorists have begun to study redundancy, first via approximations and, more recently, via exact analysis. Unfortunately, for analytical tractability, most existing theoretical analysis has assumed an Independent Runtimes (IR) model, wherein the replicas of a job each experience independent runtimes (service times) at different servers. The IR model is unrealistic and has led to theoretical results that can be at odds with computer systems implementation results. This paper introduces a much more realistic model of redundancy. Our model decouples the inherent job size ($X$) from the server-side slowdown ($S$), and we track both $S$ and $X$ for each job. Analysis within the $S\&X$ model is, of course, much more difficult. Nevertheless, we design a dispatching policy, Redundant-to-Idle-Queue, which is analytically tractable within the $S\&X$ model and has provably excellent performance.
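To make the abstract's model concrete, here is a minimal discrete-event sketch (not the authors' code) of the $S\&X$ model under a Redundant-to-Idle-Queue (RIQ) style policy: each job has an inherent size $X$, each copy at a server draws an independent slowdown $S$, and a copy's runtime is $S \cdot X$. The constants, the distributions chosen for $X$ and $S$, the poll count, the instant-cancellation assumption, and the fallback rule when no polled server is idle are all illustrative assumptions, not details taken from the paper.

```python
"""Sketch of the S&X model with an RIQ-style dispatcher (assumptions noted inline)."""
import random

random.seed(1)

NUM_SERVERS = 10      # assumed number of FCFS servers
POLL_D = 3            # servers polled per arriving job (assumption)
LAMBDA = 6.0          # assumed aggregate Poisson arrival rate
NUM_JOBS = 200_000


def sample_x():
    # Inherent job size X; exponential is an illustrative choice.
    return random.expovariate(1.0)


def sample_s():
    # Server slowdown S >= 1, drawn independently per (job, server) pair.
    # A two-point distribution stands in for transient slowness
    # (background load, GC pauses, network interrupts, ...).
    return 1.0 if random.random() < 0.9 else 5.0


free_time = [0.0] * NUM_SERVERS  # time at which each server next becomes idle
now = 0.0
total_response = 0.0

for _ in range(NUM_JOBS):
    now += random.expovariate(LAMBDA)  # next Poisson arrival
    x = sample_x()
    polled = random.sample(range(NUM_SERVERS), POLL_D)
    idle = [i for i in polled if free_time[i] <= now]
    if idle:
        # RIQ: replicate to every idle polled server. Each copy's runtime is
        # S * X with a fresh S per server but the same X (the S&X decoupling).
        # The job finishes when its fastest copy does; losing copies are
        # assumed to cancel instantly, freeing their servers at that moment.
        done = now + min(sample_s() * x for _ in idle)
        for i in idle:
            free_time[i] = done
        total_response += done - now
    else:
        # No idle server found: send a single copy to one polled queue
        # (uniform choice is an assumption, not the paper's specification).
        j = random.choice(polled)
        finish = free_time[j] + sample_s() * x
        free_time[j] = finish
        total_response += finish - now

print(f"mean response time: {total_response / NUM_JOBS:.3f}")
```

Because copies are placed only on idle servers, the workload-based bookkeeping above stays consistent without an event heap: under FCFS with no preemption, a job's departure time is fixed at dispatch and later arrivals cannot change it. Replacing `sample_s` with a constant 1.0 recovers a no-slowdown baseline, which makes it easy to see how much of the response time comes from server variability versus queueing.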