Scaling Deep Learning on Multiple In-Memory Processors
Lifan Xu, D. Zhang, N. Jayasena
2015
19 Citations
TLDR
This paper selects three typical layers from common deep learning models — convolutional, pooling, and fully connected — and parallelizes each using different schemes across multiple processing-in-memory (PIM) devices. Preliminary results show that this approach reaches competitive or even better performance than traditional GPU parallelization.
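To make the idea concrete, here is a minimal sketch of one way a fully connected layer could be split across several memory-side processors. This is purely illustrative: the data-parallel partitioning scheme, the simulated "devices", and all function names are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def fc_layer(x, w, b):
    """Fully connected layer: y = x @ w + b."""
    return x @ w + b

def fc_data_parallel(x, w, b, n_devices):
    """Hypothetical data-parallel scheme: split the input batch across
    n_devices, let each 'device' compute its shard with a full copy of
    the weights, then concatenate the partial outputs."""
    shards = np.array_split(x, n_devices, axis=0)
    partials = [fc_layer(s, w, b) for s in shards]  # one result per device
    return np.concatenate(partials, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))   # batch of 8 input vectors
w = rng.standard_normal((16, 4))   # layer weights
b = rng.standard_normal(4)         # layer bias

# The partitioned computation must match the single-device result.
y_single = fc_layer(x, w, b)
y_multi = fc_data_parallel(x, w, b, n_devices=4)
assert np.allclose(y_single, y_multi)
```

In a real PIM setting the motivation for such partitioning is that each memory stack computes on the data it already holds, avoiding the data movement that dominates the cost on a conventional GPU.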
