Academic Paper

Optimal Capacity Allocation for Executing MapReduce Jobs in Cloud Systems
Document Type
Conference
Source
2014 16th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), pp. 385-392, Sep. 2014
Subject
Computing and Processing
Resource management
Upper bound
Scalability
Mathematical model
Optimization
MapReduce
Capacity Allocation
Performance bounds
Cloud Computing
Language
English
Abstract
Nowadays, analyzing large amounts of data is of paramount importance for many companies. Big data and business intelligence applications are facilitated by the MapReduce programming model, while, at the infrastructure layer, cloud computing provides flexible and cost-effective solutions for allocating large clusters on demand. Capacity allocation in such systems is a key challenge in providing performance guarantees for MapReduce jobs while minimizing cloud resource costs. The contribution of this paper is twofold: (i) we formulate a linear programming model that minimizes cloud resource costs and job rejection penalties for the execution of jobs of multiple classes with (soft) deadline guarantees, and (ii) we provide new upper and lower bounds for MapReduce job execution times in shared Hadoop clusters. Our solutions are validated by a large set of experiments. We demonstrate that our method determines the globally optimal solution for systems with up to 1,000 user classes in less than 0.5 seconds, and that the execution times of MapReduce jobs are, on average, within 19% of our upper bounds.
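
To make the capacity-allocation idea concrete, below is a minimal sketch of a linear program that trades provisioned slot cost against job rejection penalties for multiple job classes. This is not the paper's actual formulation: all inputs (h, r, p, c, S_MAX) and the variable structure are illustrative assumptions, and the per-job slot requirement r is taken as given, whereas the paper derives it from its execution-time bounds and deadlines.

from scipy.optimize import linprog

# Illustrative inputs (assumptions, not values from the paper):
h = [10, 20, 5]       # jobs submitted per class
r = [4, 2, 8]         # slots each admitted job needs to meet its soft deadline
p = [9.0, 3.0, 30.0]  # penalty per rejected job of each class
c = 1.0               # cost per provisioned slot
S_MAX = 60            # hard cap on cluster size
K = len(h)

# Variables: x[0..K-1] = admitted jobs per class, x[K] = provisioned slots s.
# Minimizing c*s - sum_i p_i*x_i is equivalent to minimizing
# c*s + sum_i p_i*(h_i - x_i) up to the constant sum_i p_i*h_i.
cost = [-pi for pi in p] + [c]

# Admitted jobs must fit in the provisioned slots: sum_i r_i*x_i - s <= 0.
A_ub = [[float(ri) for ri in r] + [-1.0]]
b_ub = [0.0]
bounds = [(0, hi) for hi in h] + [(0, S_MAX)]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x, s = res.x[:K], res.x[K]
total = c * s + sum(pi * (hi - xi) for pi, hi, xi in zip(p, h, x))
print("admitted per class:", [round(v, 1) for v in x])
print("slots provisioned:", round(s, 1), "| total cost:", round(total, 2))

When capacity is scarce, a formulation of this shape admits classes in order of avoided penalty per slot, which is one intuition behind combining rejection penalties with resource costs in a single objective.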