Academic Paper

In-datacenter performance analysis of a tensor processing unit
Document Type
Conference
Source
2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), pp. 1-12, June 2017
Subject
Computing and Processing
Transmission line matrix methods
Graphics processing units
Artificial neural networks
Central Processing Unit
Tensile stress
Training
Hardware
DNN
MLP
CNN
RNN
LSTM
neural network
deep learning
domain-specific architecture
accelerator
TensorFlow
TPU
GPU
Language
English
Abstract
Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU) and deployed in datacenters since 2015, that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X–30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X–80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
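A quick sanity check of the peak-throughput figure (a sketch; the 256x256 array organization and the 700 MHz clock are taken from the paper's TPU description, not from the abstract itself): each MAC performs a multiply and an add, i.e. two operations per cycle, so

\[
256 \times 256 = 65{,}536\ \text{MACs}, \qquad
65{,}536\ \text{MACs} \times 2\ \tfrac{\text{ops}}{\text{MAC}\cdot\text{cycle}} \times 0.7\ \text{GHz} \approx 91.8 \times 10^{12}\ \tfrac{\text{ops}}{\text{s}} \approx 92\ \text{TOPS}.
\]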