On Scale-out Deep Learning Training for Cloud and HPC
Srinivas Sridharan, K. Vaidyanathan, +8 authors, P. Dubey
TLDR
Describes the philosophy, design, and implementation of the Intel Machine Learning Scalability Library (MLSL), and presents proof points demonstrating scaling of DL training on 100s to 1000s of nodes across Cloud and HPC systems.
Abstract
The exponential growth in the use of large deep neural networks has accelerated the need to train these networks in hours or even minutes. This can only be achieved through scalable and efficient distributed training, since a single node/card cannot satisfy the compute, memory, and I/O requirements of today's state-of-the-art deep neural networks. However, scaling synchronous Stochastic Gradient Descent (SGD) remains a challenging problem and requires continued research and development. This entails innovations spanning algorithms, frameworks, communication libraries, and system design. In this paper, we describe the philosophy, design, and implementation of the Intel Machine Learning Scalability Library (MLSL) and present proof points demonstrating scaling of DL training on 100s to 1000s of nodes across Cloud and HPC systems.
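
To make the synchronous-SGD data parallelism mentioned above concrete, the sketch below averages per-rank gradients with an MPI allreduce so that every worker applies the same update each step. This is a generic illustration only: mpi4py, the toy linear model, and all sizes are assumptions for illustration and do not reflect the MLSL API described in the paper.

```python
# Minimal sketch of one data-parallel synchronous SGD loop, assuming MPI via mpi4py.
# Run with e.g.: mpirun -n 4 python sync_sgd_sketch.py  (hypothetical launch command)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world = comm.Get_rank(), comm.Get_size()

# Toy linear model; weights start identical on every rank, data is sharded per rank.
rng = np.random.default_rng(seed=rank)
w = np.zeros(16)                    # model weights (same on all ranks)
X = rng.standard_normal((32, 16))   # this rank's mini-batch shard
y = rng.standard_normal(32)

def local_gradient(w, X, y):
    # Gradient of mean squared error on this rank's shard.
    r = X @ w - y
    return 2.0 * X.T @ r / len(y)

lr = 0.01
for step in range(10):
    g_local = local_gradient(w, X, y)
    g_global = np.empty_like(g_local)
    # Synchronous step: every rank contributes its gradient and receives the sum.
    comm.Allreduce(g_local, g_global, op=MPI.SUM)
    g_global /= world               # average the gradients across ranks
    w -= lr * g_global              # identical update keeps weights in sync everywhere
```

In practice, libraries such as MLSL replace the plain allreduce with communication that is overlapped with computation and tuned to the interconnect, but the synchronization semantics per step are the same as in this sketch.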
