Coarse grain parallelization of deep neural networks
Document type: Conference paper
Publication date: 2016
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Access conditions: Restricted access due to publisher policy
Abstract: Deep neural networks (DNNs) have recently achieved extraordinary results in domains such as computer vision and speech recognition. An essential element of this success has been the introduction of high performance computing (HPC) techniques in the critical step of training the neural network. This paper describes the implementation and analysis of a network-agnostic and convergence-invariant coarse-grain parallelization of the DNN training algorithm. The coarse-grain parallelization is achieved by exploiting batch-level parallelism. This strategy does not depend on specialized or optimized libraries, so the optimization is immediately available for accelerating DNN training. The proposal is compatible with multi-GPU execution without altering the convergence rate of the algorithm. The parallelization has been implemented in Caffe, a state-of-the-art DNN framework. The paper describes the code transformations required for the parallelization and identifies the limiting performance factors of the approach. We show competitive performance results for two state-of-the-art computer vision datasets, MNIST and CIFAR-10. In particular, on a 16-core Xeon E5-2667v2 at 3.30 GHz we observe speedups of 8x over sequential execution, at performance levels similar to those obtained by the GPU-optimized Caffe version on an NVIDIA K40 GPU.
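The batch-level scheme the abstract describes can be illustrated with a short sketch. The following C++/OpenMP code is hypothetical (it is not the authors' Caffe implementation, and it uses a toy one-parameter least-squares model); it shows why the approach is convergence-invariant: each thread accumulates the gradient over its shard of the minibatch, the partial gradients are reduced into one sum, and a single weight update is applied, exactly matching sequential minibatch SGD.

    // Minimal sketch of convergence-invariant batch-level parallelism.
    // Not the paper's actual code; model and data layout are illustrative.
    #include <omp.h>
    #include <vector>

    struct Sample { float x, y; };

    // Hypothetical per-sample gradient: d/dw (w*x - y)^2
    static float sample_grad(float w, const Sample& s) {
        return 2.0f * (w * s.x - s.y) * s.x;
    }

    void sgd_step(float& w, const std::vector<Sample>& batch, float lr) {
        float grad = 0.0f;
        // Batch-level parallelism: each thread processes a shard of the
        // minibatch; the reduction sums the partial gradients exactly as
        // sequential code would.
        #pragma omp parallel for reduction(+:grad)
        for (long i = 0; i < (long)batch.size(); ++i)
            grad += sample_grad(w, batch[i]);
        // One update per minibatch, so the convergence rate is unchanged.
        w -= lr * grad / (float)batch.size();
    }

    int main() {
        std::vector<Sample> batch = {{1, 2}, {2, 4}, {3, 6}, {4, 8}};
        float w = 0.0f;
        for (int step = 0; step < 100; ++step)
            sgd_step(w, batch, 0.05f);
        // w converges toward 2 regardless of the number of threads.
    }

Compiled with g++ -fopenmp, the result is the same for any thread count (up to floating-point summation order), which is the convergence-invariance property the paper exploits: parallelism changes only who computes each per-sample gradient, not the update itself.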
Citation: González, M. Coarse grain parallelization of deep neural networks. In: ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. "ACM SIGPLAN Notices (Vol. 51, Issue 8, August 2016, Article No. 1)". Barcelona: Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 1-12.
Publisher's version: http://dl.acm.org/citation.cfm?doid=2851141.2851158