A Quantum Entanglement-Based Approach For Computing Sentence Similarity

IEEE Access (2020)

Citations: 3 | Views: 17
Abstract
Learning directly from original texts is important in natural language processing (NLP). Many deep learning (DL) models require large amounts of manually annotated data and therefore extract little information from corpora with few annotated labels, while existing methods that mine unlabeled language data for useful signals incur considerable time and cost. Our sentence representation based on quantum computation (called Model I) requires no prior knowledge beyond word2vec. To reduce the semantic noise introduced by the tensor product over the entangled word vectors, two improved models (called Model II and Model III) are proposed that reduce the dimensionality of the sentence embedding produced by Model I. The proposed models are evaluated on the STS tasks of 2012, 2014, 2015, and 2016, for a total of 21 corpora. Experimental results show that using quantum entanglement and dimensionality reduction in sentence embedding yields state-of-the-art performance on semantic relations and syntactic structures: measured by the Pearson correlation coefficient (Pcc) and mean squared error (MSE), the results on 16 out of 16 corpora surpass those of the comparative methods.
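The abstract's core idea can be illustrated with a minimal NumPy sketch: compose word2vec-style word vectors into a sentence representation via the tensor (outer) product, then reduce the dimensionality of the resulting embedding, and compare sentences by cosine similarity. This is a hypothetical illustration of the general technique, not the authors' exact Models I–III; the function names and the SVD-based reduction are assumptions.

```python
import numpy as np

def sentence_embedding(word_vectors):
    """Combine word vectors via iterated outer products, flattened after
    each step (a tensor-product composition, as in the abstract's Model I)."""
    emb = word_vectors[0]
    for v in word_vectors[1:]:
        emb = np.outer(emb, v).ravel()  # tensor product, then flatten
    return emb

def reduce_dim(emb, k):
    """Toy dimensionality reduction (an assumption, not the paper's method):
    fold the long embedding into a square matrix and keep a rank-k
    approximation via SVD."""
    n = int(np.sqrt(emb.size))
    m = emb[: n * n].reshape(n, n)          # truncate to a foldable length
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k]      # rank-k reconstruction

def cosine(a, b):
    """Cosine similarity between two (possibly matrix-shaped) embeddings."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In practice the word vectors would come from a pretrained word2vec model; here they are stand-ins, and the folding step in `reduce_dim` is only one of many possible ways to shrink a tensor-product embedding.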
Keywords
Computational modeling, Semantics, Quantum computing, Quantum entanglement, Tensile stress, Natural language processing, Quantum computation, text representation, sentence similarity, tensor product, dimensionality reduction