Domain Specific Architectures for Deep Neural Networks: Three Generations of Tensor Processing Units (TPUs)

David Patterson (UC Berkeley/Google)

Distinguished Lecture Series

Tuesday, October 29, 2019, 3:30 pm

Abstract

The recent success of deep neural networks (DNNs) has inspired a resurgence in domain-specific architectures (DSAs) to run them, driven in part by the slowing of microprocessor performance improvement as Moore's Law ends.

DNNs have two phases: training, which constructs accurate models, and inference, which serves those models. Google's first-generation Tensor Processing Unit (TPUv1) offered a 50X improvement in performance per watt over conventional architectures for inference. We naturally asked whether a successor could do the same for training.

This talk reviews TPUv1 and explores how Google built the first production DSA supercomputer, deployed in 2017, for the much harder problem of training.

Google's TPUv2/TPUv3 supercomputers with up to 1024 chips train production DNNs at close to perfect linear speedup, delivering 10X-40X more floating-point operations per watt than general-purpose supercomputers running Linpack, the standard high-performance computing benchmark.

Bio

David Patterson is a UC Berkeley CS professor emeritus, a Google distinguished engineer, and Vice-Chair of the RISC-V Foundation. He received his BA, MS, and PhD degrees from UCLA. His Reduced Instruction Set Computer (RISC), Redundant Array of Inexpensive Disks (RAID), and Network of Workstations (NOW) projects helped lead to multi-billion-dollar industries. This work has earned 40 awards for research, teaching, and service, along with many papers and seven books. The best known book is Computer Architecture: A Quantitative Approach; the newest is The RISC-V Reader: An Open Architecture Atlas. In 2017 he and John Hennessy shared the ACM A.M. Turing Award.