Poster
On the Relation between Trainability and Dequantization of Variational Quantum Learning Models
Elies Gil-Fuster · Casper Gyurik · Adrian Perez-Salinas · Vedran Dunjko
Quantum machine learning (QML) explores the potential advantages of quantum computers for machine learning tasks, with variational QML among the main current approaches. While quantum computers promise to solve problems that are classically intractable, it has repeatedly been shown that a quantum algorithm which outperforms all pre-existing classical algorithms can later be matched by a newly developed classical approach (often inspired by the quantum algorithm). We say such algorithms have been dequantized. For QML models to be effective, they must be trainable and non-dequantizable. The relationship between these properties is still not fully understood, and recent works have called into question to what extent we could ever have QML models that are both trainable and non-dequantizable, challenging the potential of QML altogether. In this work we answer open questions regarding when trainability and non-dequantization are compatible. We first formalize the key concepts and put them in the context of prior research. We then introduce the role of "variationalness" of QML models, using well-known quantum circuit architectures as leading examples. Our results provide recipes for variational QML models that are both trainable and non-dequantizable, paving the way toward practical relevance.
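For readers new to the setting, the sketch below shows what a generic variational QML model looks like in code: a data-encoding step, trainable circuit layers, and an expectation-value readout that serves as the model's prediction. This is a minimal illustration in PennyLane using a hardware-efficient ansatz; the architecture and parameter choices are ours for illustration and are not the specific constructions analyzed in the paper.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def model(x, weights):
    # Encode the classical input x as single-qubit rotation angles
    qml.AngleEmbedding(x, wires=range(n_qubits))
    # Trainable variational layers (a hardware-efficient ansatz)
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # The model output is a single expectation value
    return qml.expval(qml.PauliZ(0))

# Random input and trainable parameters of the required shape
shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape)
x = np.random.uniform(0, np.pi, size=n_qubits)
print(model(x, weights))  # scalar prediction in [-1, 1]
```

Trainability asks whether the gradients of such a model are informative enough to optimize the weights (e.g., whether barren plateaus are avoided), while dequantization asks whether the same input-output map can be matched efficiently by a purely classical model; the paper studies when both can hold at once.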