Academic Paper

Evaluation of Graph Analytics Frameworks Using the GAP Benchmark Suite
Document Type
Conference
Source
2020 IEEE International Symposium on Workload Characterization (IISWC), pp. 216-227, Oct. 2020
Subject
Communication, Networking and Broadcast Technologies
Computing and Processing
Power, Energy and Industry Applications
Benchmark testing
Libraries
Kernel
Sparse matrices
Programming
Optimization
Software algorithms
graph algorithms
benchmarking
Language
Abstract
Graphs play a key role in data analytics. Graphs and the software systems used to work with them are highly diverse. Algorithms interact with hardware in different ways, and which graph solution works best on a given platform changes with the structure of the graph. This makes it difficult to decide which graph programming framework is best for a given situation. In this paper, we try to make sense of this diverse landscape. We evaluate five different frameworks for graph analytics: SuiteSparse GraphBLAS, Galois, the NWGraph library, the Graph Kernel Collection, and GraphIt. We use the GAP Benchmark Suite to evaluate each framework. GAP consists of 30 tests: six graph algorithms (breadth-first search, single-source shortest path, PageRank, betweenness centrality, connected components, and triangle counting) on five graphs. The GAP Benchmark Suite includes high-performance reference implementations to provide a performance baseline for comparison. Our results show the relative strengths of each framework, but also serve as a case study for the challenges of establishing objective measures for comparing graph frameworks.
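To give a concrete sense of what one GAP kernel computes, the following is a minimal sketch (not taken from the paper or any of the evaluated frameworks) of breadth-first search over an adjacency-list graph, producing the parent tree that BFS-style benchmark kernels typically report:

```python
from collections import deque

def bfs_parents(adj, source):
    """Breadth-first search over an adjacency-list graph.

    Returns a parent array: parent[v] is the vertex from which v was
    first reached, source for the root, and -1 for unreached vertices.
    """
    parent = [-1] * len(adj)
    parent[source] = source
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if parent[v] == -1:   # v not yet visited
                parent[v] = u
                queue.append(v)
    return parent

# Tiny example graph (undirected): edges 0-1, 0-2, 1-3
adj = [[1, 2], [0, 3], [0], [1]]
print(bfs_parents(adj, 0))  # -> [0, 0, 0, 1]
```

Each framework under evaluation expresses this same computation in its own programming model (e.g., linear algebra over semirings in GraphBLAS, or a compiled DSL in GraphIt), which is precisely what makes cross-framework comparison nontrivial.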