On Reducing the Amount of Samples Required for Training of QNNs: Constraints on the Linear Structure of the Training Data

arXiv (Cornell University), 2023

Abstract
Training classical neural networks generally requires a large number of training samples. By using entangled training samples, Quantum Neural Networks (QNNs) have the potential to significantly reduce the number of training samples required in the training process. However, to minimize the number of incorrect predictions made by the resulting QNN, the structure of the training samples must meet certain requirements: on the one hand, the exact degree of entanglement must be fixed across the whole set of training samples; on the other hand, the training samples must be linearly independent and non-orthogonal. How failing to meet these requirements affects the resulting QNN has not yet been fully studied. To address this, we extend the proof of the quantum no-free-lunch (QNFL) theorem to (i) provide a generalization of the theorem for varying degrees of entanglement. This generalization shows that the average degree of entanglement in the set of training samples can be used to predict the expected quality of the QNN. Furthermore, we (ii) introduce new estimates for the expected accuracy of QNNs trained with moderately entangled training samples that are linearly dependent or orthogonal. Our analytical results are (iii) experimentally validated by simulating QNN training and analyzing the quality of the resulting QNNs.
Keywords
QNNs
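
To make the structural requirements from the abstract concrete, the following is a minimal sketch (not taken from the paper) that, given a set of bipartite training states, checks linear independence and pairwise non-orthogonality, and reports the average Schmidt rank as one common proxy for the degree of entanglement. The dimensions, the helper names, and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def schmidt_rank(state, d_in, d_ref, tol=1e-10):
    """Proxy for the degree of entanglement of a bipartite state
    |psi> in C^(d_in * d_ref): the Schmidt rank, i.e. the number of
    nonzero singular values of the reshaped coefficient matrix."""
    coeffs = state.reshape(d_in, d_ref)
    singular_values = np.linalg.svd(coeffs, compute_uv=False)
    return int(np.sum(singular_values > tol))

def check_training_set(states, d_in, d_ref, tol=1e-10):
    """Check the structural requirements discussed in the abstract
    (linear independence, pairwise non-orthogonality) and compute the
    average Schmidt rank across the training set."""
    S = np.column_stack(states)
    lin_indep = np.linalg.matrix_rank(S, tol=tol) == len(states)
    gram = S.conj().T @ S  # pairwise inner products <psi_i|psi_j>
    off_diag = gram[~np.eye(len(states), dtype=bool)]
    non_orthogonal = bool(np.all(np.abs(off_diag) > tol))
    avg_rank = float(np.mean(
        [schmidt_rank(s, d_in, d_ref, tol) for s in states]))
    return lin_indep, non_orthogonal, avg_rank

# Example: two 2-qubit states (one input qubit entangled with one
# reference qubit), one maximally and one weakly entangled.
d_in = d_ref = 2
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
tilted = np.array([np.sqrt(0.9), 0, 0, np.sqrt(0.1)])
print(check_training_set([bell, tilted], d_in, d_ref))
# Both checks pass and the average Schmidt rank is 2.0, so this pair
# would satisfy the requirements while mixing degrees of entanglement.
```

Under these assumptions, a set that fails one of the two boolean checks corresponds to the linearly dependent or orthogonal cases for which the paper derives its new accuracy estimates.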