Academic Paper

Optimizing Deep Learning Recommender Systems Training on CPU Cluster Architectures
Document Type
Conference
Source
SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-15, Nov. 2020
Subject
Computing and Processing
Training
Deep learning
Sockets
Topology
Artificial intelligence
Recommender systems
Optimization
Language
English
Abstract
During the last two years, the goal of many researchers has been to squeeze the last bit of performance out of HPC systems for AI tasks. Often this discussion is held in the context of how fast ResNet50 can be trained. Unfortunately, ResNet50 is no longer a representative workload in 2020. We therefore focus on recommender systems, which account for most of the AI cycles in cloud computing centers, and more specifically on Facebook's DLRM benchmark. By enabling it to run on the latest CPU hardware and software tailored for HPC, we achieve up to a two-orders-of-magnitude performance improvement on a single socket compared to the reference CPU implementation, and high scaling efficiency up to 64 sockets, while fitting ultra-large datasets that cannot be held in a single node's memory. This paper discusses and analyzes the novel optimization and parallelization techniques for the various operators in DLRM. Several optimizations (e.g., tensor-contraction-accelerated MLPs, framework MPI progression, and BFLOAT16 training with up to 1.8× speed-up) are general and transferable to many other deep learning topologies.
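To illustrate the BFLOAT16 training optimization mentioned in the abstract, the sketch below shows a generic bfloat16 mixed-precision training step for a small MLP in PyTorch. This is not the authors' optimized DLRM implementation: the layer sizes, batch size, and use of torch.autocast on CPU are assumptions chosen purely for illustration.

```python
# Minimal sketch (assumed, not the paper's code): bfloat16 forward/backward for an
# MLP on CPU, with FP32 master weights kept by the optimizer.
import torch
import torch.nn as nn

# Hypothetical MLP sizes; DLRM's actual MLP dimensions differ.
mlp = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 1))
opt = torch.optim.SGD(mlp.parameters(), lr=0.01)

x = torch.randn(2048, 512)   # hypothetical dense-feature batch
y = torch.rand(2048, 1)      # hypothetical click labels in [0, 1]

# Run the MLP in bfloat16 on CPU; the loss and weight update stay in FP32.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    logits = mlp(x)
loss = nn.functional.binary_cross_entropy_with_logits(logits.float(), y)

loss.backward()
opt.step()
opt.zero_grad()
```

The point of the sketch is only the precision split: compute-heavy matrix multiplications run in bfloat16, while the loss and the optimizer's weight state remain in FP32, which is the general pattern behind the reported up-to-1.8× speed-up.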