ICLR 2018


Workshop

Spatially Parallel Convolutions

Peter Jin · Boris Ginsburg · Kurt Keutzer

East Meeting Level 8 + 15 #6

Training convolutional neural networks with large inputs is limited by the memory capacity of a single GPU. In this work, we describe spatially parallel convolutions, which sidestep this memory capacity limit by partitioning tensors along their spatial axes across multiple GPUs. On modern multi-GPU systems, we demonstrate that spatially parallel convolutions attain excellent scaling when applied to input tensors with large spatial dimensions.
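The partitioning scheme the abstract describes can be illustrated with a short sketch. The following is a minimal, single-device PyTorch sketch, not the authors' implementation: the names spatially_parallel_conv2d, num_parts, and halo are illustrative; halo exchange between neighboring partitions is emulated by slicing a padded copy of the input; and stride-1 convolutions with odd square kernels are assumed. On a real multi-GPU system each strip would reside on its own GPU and the halo rows would be transferred between devices.

# Sketch only: partitions the input along its height axis and emulates
# the inter-GPU halo exchange with slices of a zero-padded copy.
import torch
import torch.nn.functional as F

def spatially_parallel_conv2d(x, weight, num_parts=2):
    """Convolve x (N, C, H, W) by splitting it into num_parts strips
    along the height axis, giving each strip halo rows from its neighbors."""
    kh = weight.shape[2]
    halo = kh // 2  # rows a strip needs from each neighbor for 'same' output

    # Zero-pad the full tensor once so boundary strips see zero halos,
    # matching conv2d(..., padding=halo) on the unpartitioned input.
    x_padded = F.pad(x, (halo, halo, halo, halo))

    strips = torch.chunk(x, num_parts, dim=2)  # partition along H
    outputs, row = [], halo
    for s in strips:
        h = s.shape[2]
        # The strip plus `halo` rows above and below; on a multi-GPU
        # system this overlap would be a GPU-to-GPU halo exchange.
        region = x_padded[:, :, row - halo : row + h + halo, :]
        outputs.append(F.conv2d(region, weight))  # valid conv on the region
        row += h

    return torch.cat(outputs, dim=2)  # stitch partial outputs along H

if __name__ == "__main__":
    x = torch.randn(1, 3, 32, 32)
    w = torch.randn(8, 3, 3, 3)
    reference = F.conv2d(x, w, padding=1)
    parallel = spatially_parallel_conv2d(x, w, num_parts=4)
    print(torch.allclose(reference, parallel, atol=1e-5))  # True

Under these assumptions the stitched result matches the unpartitioned convolution exactly, while each of the P strips only ever materializes roughly 1/P of the activation tensor plus its halo rows, which is what lets the technique handle inputs that exceed a single GPU's memory.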
