In-Person Poster presentation / poster accept

DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training

Joya Chen · Kai Xu · Yuhui Wang · Yifei Cheng · Angela Yao

MH1-2-3-4 #49

Keywords: [ vision transformer ] [ activation compressed training ] [ top-k ] [ dropping activations ] [ dropping intermediate tensors ] [ cnn ] [ Applications ]


Abstract:

A standard hardware bottleneck when training deep neural networks is GPU memory. The bulk of memory is occupied by caching intermediate tensors for gradient computation in the backward pass. We propose a novel method to reduce this footprint: Dropping Intermediate Tensors (DropIT). DropIT drops the min-k elements of the intermediate tensors and approximates gradients from the sparsified tensors in the backward pass. Theoretically, DropIT reduces noise on the estimated gradients and therefore has a higher rate of convergence than vanilla SGD. Experiments show that we can drop up to 90% of the intermediate tensor elements in fully-connected and convolutional layers while achieving higher testing accuracy for Vision Transformers and Convolutional Neural Networks on various tasks (e.g., classification, object detection, instance segmentation). Our code and models are available at https://github.com/chenjoya/dropit.
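To make the core idea concrete, below is a minimal PyTorch sketch of the approach for a single fully-connected layer, written as a custom autograd function: the forward pass is exact, only the top-k (largest-magnitude) elements of the input activation are cached, and the weight gradient is approximated from that sparsified tensor in the backward pass. The class name, the `keep_ratio` parameter, and the restriction to 2D inputs are illustrative assumptions, not the authors' released API; see the repository linked above for the actual implementation, which also covers convolutional layers.

```python
import torch
from torch.autograd import Function


class DropITLinear(Function):
    """Illustrative sketch of the DropIT idea for a fully-connected layer.

    Forward is computed exactly; only the top-k elements of the input
    activation (by magnitude) are cached, i.e. the min-k are dropped.
    The weight gradient is then estimated from the sparsified activation.
    """

    @staticmethod
    def forward(ctx, x, weight, bias, keep_ratio=0.1):
        # Exact forward: y = x W^T + b, with x of shape (batch, in_features).
        out = x.matmul(weight.t())
        if bias is not None:
            out = out + bias

        # Drop the min-k elements of x; keep values and indices of the rest.
        flat = x.reshape(-1)
        k = max(1, int(keep_ratio * flat.numel()))
        _, idx = flat.abs().topk(k)
        ctx.save_for_backward(flat[idx], idx, weight)
        ctx.x_shape = x.shape
        ctx.has_bias = bias is not None
        return out

    @staticmethod
    def backward(ctx, grad_out):
        vals, idx, weight = ctx.saved_tensors
        # Rebuild a sparse approximation of the cached activation.
        x_approx = torch.zeros(
            ctx.x_shape.numel(), dtype=vals.dtype, device=vals.device
        )
        x_approx[idx] = vals
        x_approx = x_approx.reshape(ctx.x_shape)

        grad_x = grad_out.matmul(weight)          # exact: does not need x
        grad_w = grad_out.t().matmul(x_approx)    # approximated from sparse x
        grad_b = grad_out.sum(dim=0) if ctx.has_bias else None
        return grad_x, grad_w, grad_b, None


# Usage sketch: keep_ratio=0.1 corresponds to dropping 90% of the
# intermediate tensor elements, as in the abstract.
x = torch.randn(8, 64, requires_grad=True)
weight = torch.randn(32, 64, requires_grad=True)
bias = torch.randn(32, requires_grad=True)
y = DropITLinear.apply(x, weight, bias, 0.1)
y.sum().backward()
```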
