word2ket: Space-efficient Word Embeddings inspired by Quantum Entanglement

Aliakbar Panahi, Seyran Saeedi, Tom Arodz

Keywords: memory, NLP, word embeddings

Monday: Sequence Representations

Abstract: Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words. A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors. Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors. However, storing and accessing embedding vectors for all words in a dictionary requires a large amount of space, and may strain systems with limited GPU memory. Here, we used approaches inspired by quantum computing to propose two related methods, word2ket and word2ketXS, for storing the word embedding matrix during training and inference in a highly efficient way. Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks.
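The abstract does not spell out the construction, but the quantum-entanglement analogy suggests representing each embedding vector as a sum of tensor (Kronecker) products of much smaller factor vectors, so that only the factors need to be stored. The NumPy sketch below illustrates this general idea under that assumption; the shape names (`rank`, `order`, `q`) and the function itself are illustrative, not the authors' API.

```python
import numpy as np

def tensor_product_embedding(factors):
    """Reconstruct a d-dimensional embedding from small factor vectors.

    factors: array of shape (rank, order, q). Each of the `rank` terms is
    the Kronecker product of `order` small vectors of length q, so the
    reconstructed embedding has dimension d = q ** order. Only
    rank * order * q numbers are stored instead of q ** order.
    (Illustrative sketch; not the paper's exact formulation.)
    """
    rank, order, q = factors.shape
    v = np.zeros(q ** order)
    for k in range(rank):
        term = factors[k, 0]
        for j in range(1, order):
            term = np.kron(term, factors[k, j])
        v += term
    return v

rng = np.random.default_rng(0)
# rank 2, order 4, q = 4: store 2*4*4 = 32 numbers for a 4**4 = 256-dim vector
factors = rng.standard_normal((2, 4, 4))
v = tensor_product_embedding(factors)
print(v.shape)  # (256,)
```

In this toy configuration the storage drops from 256 numbers to 32; the paper reports that analogous factorizations yield a hundred-fold or greater reduction on full embedding matrices.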

Similar Papers

Minimizing FLOPs to Learn Efficient Sparse Representations
Biswajit Paria, Chih-Kuan Yeh, Ian E.H. Yen, Ning Xu, Pradeep Ravikumar, Barnabás Póczos
Encoding word order in complex embeddings
Benyou Wang, Donghao Zhao, Christina Lioma, Qiuchi Li, Peng Zhang, Jakob Grue Simonsen
Low-dimensional statistical manifold embedding of directed graphs
Thorben Funke, Tian Guo, Alen Lancic, Nino Antulov-Fantulin