Poster

The Intrinsic Dimension of Images and Its Impact on Learning

Phil Pope · Chen Zhu · Ahmed Abdelkader · Micah Goldblum · Tom Goldstein

Virtual

Keywords: [ CIFAR ] [ imagenet ] [ dimension ] [ manifold ] [ generalization ]


Abstract:

It is widely believed that natural image data exhibits low-dimensional structure despite the high dimensionality of conventional pixel representations. This idea underlies a common intuition for the remarkable success of deep learning in computer vision. In this work, we apply dimension estimation tools to popular datasets and investigate the role of low-dimensional structure in deep learning. We find that common natural image datasets indeed have very low intrinsic dimension relative to the high number of pixels in the images. Additionally, we find that low-dimensional datasets are easier for neural networks to learn, and models solving these tasks generalize better from training to test data. Along the way, we develop a technique for validating our dimension estimation tools on synthetic data generated by GANs, allowing us to actively manipulate the intrinsic dimension by controlling the image generation process. Code for our experiments may be found at https://github.com/ppope/dimensions.
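A common tool in the family of dimension estimators the abstract refers to is the Levina–Bickel maximum-likelihood estimator, which infers dimension from the growth rate of k-nearest-neighbor distances. The sketch below is an illustrative implementation of that generic estimator, not necessarily the exact variant or hyperparameters used in the paper; the function name and defaults are assumptions.

```python
import numpy as np

def mle_intrinsic_dimension(X, k=10):
    """Levina-Bickel MLE estimate of the intrinsic dimension of a point cloud.

    X : (n, d) array of n points in ambient dimension d.
    k : number of nearest neighbors used per point (assumed default).
    """
    n = X.shape[0]
    # Pairwise Euclidean distances. Fine for small n; use a KD-tree at scale.
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Sort each row; column 0 is the zero distance to the point itself.
    knn = np.sort(dists, axis=1)[:, 1:k + 1]          # (n, k) neighbor dists
    # Per-point MLE: inverse mean log-ratio of the k-th to the j-th distance,
    # m_k(x) = [(1/(k-1)) * sum_j log(T_k(x) / T_j(x))]^{-1}.
    log_ratios = np.log(knn[:, -1:] / knn[:, :-1])    # (n, k-1)
    m = (k - 1) / log_ratios.sum(axis=1)
    # Average the per-point estimates over the dataset.
    return m.mean()
```

For example, points sampled from a 2-dimensional linear subspace embedded in a 10-dimensional ambient space should yield an estimate close to 2, mirroring the paper's observation that images occupy a manifold of far lower dimension than their pixel count.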
