
Neural Recurrent Structure Search for Knowledge Graph Embedding

Abstract

Knowledge graph (KG) embedding is a fundamental problem in mining relational patterns. It aims to encode the entities and relations in a KG into a low-dimensional vector space that can be used by subsequent algorithms. Many KG embedding models have been proposed to learn the interactions between entities and relations, which contain mea...

Introduction
Highlights
  • Knowledge Graph (KG) (Auer et al 2007; Socher et al 2013; Wang et al 2017), as a special kind of graph with lots of relational facts, is important to data mining and machine learning
  • Inspired by the success of neural architecture search (NAS) (Elsken et al 2019), we propose S2E in this paper, which can automatically combine structural and semantic information for knowledge graph (KG) embeddings
  • We propose a search algorithm based on natural gradient (NG) (Amari 1998; Pascanu and Bengio 2013), which guides a more efficient search by generating pseudo gradients
  • We propose an NAS method, S2E, to search for RNNs that learn from relational paths in KGs, which contain both structural and semantic information
  • By designing a specific search space based on human-designed KG embedding models, S2E can adaptively distill structural information and combine it with semantic information for different KG tasks
  • Since the one-shot architecture framework is not applicable to this problem, we propose an NG-based search algorithm that is efficient compared with the other baselines (a minimal illustrative sketch follows this list)
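The exact natural-gradient update of S2E is not given on this page, so below is only a minimal sketch of how an NG-style search over the cell's operation slots could look, loosely in the spirit of the stochastic natural-gradient method of Akimoto et al. (2019) cited in the references. The per-slot categorical distributions, the `evaluate` function (assumed to return a validation score such as MRR for a sampled architecture), and all hyper-parameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ng_search(num_slots, num_candidates, evaluate, steps=100, lam=8, lr=0.1):
    """Hypothetical sketch of a natural-gradient (NG) architecture search.

    theta[i] is a categorical distribution over the candidate operations for
    slot i of the recurrent cell; `evaluate(arch)` is assumed to return a
    validation score (e.g. MRR) for the sampled architecture `arch`.
    """
    theta = np.full((num_slots, num_candidates), 1.0 / num_candidates)
    for _ in range(steps):
        archs = [np.array([np.random.choice(num_candidates, p=theta[i])
                           for i in range(num_slots)]) for _ in range(lam)]
        scores = np.array([evaluate(a) for a in archs])
        # Ranking-based utility: +1 for the best quarter, -1 for the worst,
        # which makes the update invariant to the scale of the scores.
        order = scores.argsort()[::-1]
        utility = np.zeros(lam)
        utility[order[: lam // 4]] = 1.0
        utility[order[-(lam // 4):]] = -1.0
        # Natural-gradient step in the expectation parameterization:
        # grad ~ E[u(a) * (onehot(a) - theta)].
        grad = np.zeros_like(theta)
        for arch, u in zip(archs, utility):
            grad += u * (np.eye(num_candidates)[arch] - theta)
        theta += lr * grad / lam
        theta = np.clip(theta, 1e-6, None)
        theta /= theta.sum(axis=1, keepdims=True)
    return theta.argmax(axis=1)  # most probable operation per slot
```

The ranking-based utility is a common choice in such search schemes because architectures are scored after only partial training, where raw validation scores are noisy and poorly scaled.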
Methods
  • 5.1 Experiment setup

    Following (Grover and Leskovec 2016; Guo, Sun, and Hu 2019), the authors sample the relational paths from biased random walks (an illustrative sampler is sketched after this list).
  • The paths make up the training set and will not be resampled for each dataset when evaluating different models.
  • S is the validation or testing set, and rank_i, i = 1, …, |S|, denotes the rank of the i-th triplet in S (standard rank-based metrics are sketched after this list).
  • Training details of each task are given in Appendix C.3.
  • The two KGs have a set of aligned entities Ealgn = {(e1, e2) | e1 ∈ E1, e2 ∈ E2}, and Ealgn is split into training/validation/testing sets. During training, the paths can walk through the aligned entity pairs in the training set Etra to enable interaction between G1 and G2.
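For concreteness, the sketch below shows one way to sample such paths with a biased random walk over a triplet list. It is an illustration rather than the authors' sampler; in particular, the single depth bias `alpha`, which discourages stepping straight back to the previous entity, is an assumption, and the biases actually used are listed in Table 9.

```python
import random
from collections import defaultdict

def sample_paths(triplets, num_paths, path_len, alpha=0.7, seed=0):
    """Illustrative biased random-walk sampler over a KG.

    `triplets` is a list of (subject, relation, object) tuples.  Each path
    alternates entities and relations, e.g. [e0, r0, e1, r1, e2, ...].
    """
    rng = random.Random(seed)
    out_edges = defaultdict(list)          # entity -> [(relation, next entity)]
    for s, r, o in triplets:
        out_edges[s].append((r, o))

    paths, entities = [], list(out_edges)
    for _ in range(num_paths):
        cur, prev = rng.choice(entities), None
        path = [cur]
        for _ in range(path_len):
            if not out_edges[cur]:
                break
            candidates = out_edges[cur]
            # Depth bias (assumed): with probability alpha, avoid walking
            # straight back to the entity we just came from.
            if prev is not None and rng.random() < alpha:
                forward = [(r, o) for r, o in candidates if o != prev]
                if forward:
                    candidates = forward
            r, nxt = rng.choice(candidates)
            path.extend([r, nxt])
            prev, cur = cur, nxt
        paths.append(path)
    return paths
```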
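The rank-based metrics referenced here and in Tables 4-6 (MRR and Hit@k) follow their standard definitions over the ranks rank_i of the evaluation set S; a small helper is sketched below.

```python
import numpy as np

def mrr_and_hits(ranks, ks=(1, 10)):
    """MRR = mean of 1/rank_i over the evaluation set; Hit@k = fraction of
    triplets whose rank_i is at most k."""
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MRR": float(np.mean(1.0 / ranks))}
    for k in ks:
        metrics[f"Hit@{k}"] = float(np.mean(ranks <= k))
    return metrics

# Example: ranks of five evaluation triplets
print(mrr_and_hits([1, 3, 10, 2, 50]))  # {'MRR': ..., 'Hit@1': 0.2, 'Hit@10': 0.8}
```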
Conclusion
  • The authors propose an NAS method, S2E, to search for RNNs that learn from relational paths in KGs, which contain both structural and semantic information.
  • Since the one-shot architecture framework is not applicable to this problem, the authors propose an NG-based search algorithm that is efficient compared with the other baselines
Tables
  • Table 1: Recurrent function of existing KG embedding models. “ ” is the element-wise multiplication, and “⊗” is the Hermitian product (Trouillon et al 2017) in complex space. σ is a non-linear activation function
  • Table 2: Candidates of operations and activation functions for OP1-3 (an illustrative searched cell is sketched after this list)
  • Table 3: Comparison of state-of-the-art NAS methods for general RNNs with the proposed S2E
  • Table 4: Performance comparison on entity alignment tasks. H@k is short for Hit@k. The results of TransD (Ji et al 2015), BootEA (Sun et al 2018), IPTransE (Zhu et al 2017) and RSN (Guo et al 2019) are copied from (Guo et al 2019)
  • Table 5: Performance comparison on link prediction tasks
  • Table 6: Performance on the Countries datasets
  • Table 7: Statistics of the datasets we used for entity alignment. Each single KG contains 15,000 entities. There are
  • Table 8: Statistics of the datasets we used for link prediction. “rels” is short for “relations”
  • Table 9: Parameters used for sampling paths
  • Table 10: Search ranges of hyper-parameters
  • Table 11: Percentage of the n-hop triplets in the validation and testing datasets
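Tables 1-2 describe a search space in which the operators OP1-3 and the activation σ of the recurrent function are chosen from candidate sets. The cell below is a hypothetical illustration of how one searched configuration might be instantiated; the candidate sets, the two weight matrices, and the composition pattern are assumptions for illustration, not the paper's actual recurrent function.

```python
import torch
import torch.nn as nn

# Assumed candidate sets; Tables 1-2 list the actual ones used in the paper.
OPS = {
    "mul": lambda a, b: a * b,          # element-wise multiplication
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
}
ACTS = {"identity": lambda x: x, "tanh": torch.tanh, "sigmoid": torch.sigmoid}

class SearchedCell(nn.Module):
    """Hypothetical recurrent cell fixed by a search result: op1-op3 and act
    name the chosen candidates for the OP1-3 slots and the activation."""

    def __init__(self, dim, op1, op2, op3, act):
        super().__init__()
        self.w_h = nn.Linear(dim, dim, bias=False)   # transform previous state
        self.w_x = nn.Linear(dim, dim, bias=False)   # transform current input
        self.op1, self.op2, self.op3 = OPS[op1], OPS[op2], OPS[op3]
        self.act = ACTS[act]

    def forward(self, h_prev, x):
        # One assumed composition: combine the transformed state and input
        # with the searched operators, then apply the searched activation.
        a = self.op1(self.w_h(h_prev), self.w_x(x))
        b = self.op2(h_prev, x)
        return self.act(self.op3(a, b))

# Usage: unroll the cell along a sampled relational path of embeddings.
cell = SearchedCell(dim=64, op1="add", op2="mul", op3="add", act="tanh")
h = torch.zeros(1, 64)
for step_emb in torch.randn(5, 1, 64):   # 5 path steps, batch size 1
    h = cell(h, step_emb)
```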
Related work
  • Before we present the related work, we first give the notations used in this paper. We denote vectors by lowercase boldface and matrices by uppercase boldface. A knowledge graph G = (E, R, S) is defined by the set of entities E, relations R and triplets S. A single triplet (s, r, o) ∈ S represents a relation r that links from the subject entity s to the object entity o. The embeddings in this paper are denoted as boldface letters of the indexes, e.g. s, r, o are the embeddings of s, r, o, respectively.
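To make the notation concrete, here is a minimal sketch of a triplet-to-embedding lookup; the embedding dimension and the number of relations are illustrative (the 15,000 entities match Table 7), and the class is not the paper's model.

```python
import torch
import torch.nn as nn

class KGEmbedding(nn.Module):
    """Minimal illustration of the notation: entities E, relations R, and
    triplets S = {(s, r, o)}; boldface s, r, o are the looked-up vectors."""

    def __init__(self, num_entities, num_relations, dim=64):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)   # one vector per entity
        self.rel = nn.Embedding(num_relations, dim)  # one vector per relation

    def lookup(self, s, r, o):
        # Map an index triplet (s, r, o) to its embedding vectors.
        return self.ent(s), self.rel(r), self.ent(o)

# Usage with index tensors for a single triplet
model = KGEmbedding(num_entities=15000, num_relations=500)
s, r, o = torch.tensor([0]), torch.tensor([3]), torch.tensor([42])
s_vec, r_vec, o_vec = model.lookup(s, r, o)
```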
Reference
  • Akimoto, Y.; Shirakawa, S.; Yoshinari, N.; Uchida, K.; Saito, S.; and Nishida, K. 2019. Adaptive stochastic natural gradient method for one-shot neural architecture search. In ICML, 171–180.
  • Amari, S. 1998. Natural gradient works efficiently in learning. Neural Computation 10(2):251–276.
  • Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; and Ives, Z. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web. Springer. 722–735.
  • Baker, B.; Gupta, O.; Naik, N.; and Raskar, R. 2017. Designing neural network architectures using reinforcement learning. In ICLR.
  • Bender, G.; Kindermans, P.-J.; Zoph, B.; Vasudevan, V.; and Le, Q. 2018. Understanding and simplifying one-shot architecture search. In ICML, 549–558.
  • Bergstra, J. S.; Bardenet, R.; Bengio, Y.; and Kegl, B. 2011. Algorithms for hyper-parameter optimization. In NeurIPS, 2546–2554.
  • Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; and Yakhnenko, O. 2013. Translating embeddings for modeling multi-relational data. In NeurIPS, 2787–2795.
  • Bouchard, G.; Singh, S.; and Trouillon, T. 2015. On approximate reasoning capabilities of low-rank vector spaces. In AAAI Spring Symposium Series.
  • Boyd, K.; Eng, K. H.; and Page, C. 2013. Area under the precision-recall curve: point estimates and confidence intervals. In Joint ECML-KDD, 451–466. Springer.
  • Chung, J.; Gulcehre, C.; Cho, K.; and Bengio, Y. 2015. Gated feedback recurrent neural networks. In ICML, 2067–2075.
  • Conn, A. R.; Scheinberg, K.; and Vicente, L. N. 2009. Introduction to Derivative-Free Optimization, volume 8. SIAM.
  • Dettmers, T.; Minervini, P.; Stenetorp, P.; and Riedel, S. 2017. Convolutional 2D knowledge graph embeddings. In AAAI.
  • Dong, X.; Gabrilovich, E.; Heitz, G.; Horn, W.; Lao, N.; Murphy, K.; Strohmann, T.; Sun, S.; and Zhang, W. 2014. Knowledge Vault: A web-scale approach to probabilistic knowledge fusion. In SIGKDD, 601–610.
  • Elsken, T.; Metzen, J. H.; and Hutter, F. 2019. Neural architecture search: A survey. JMLR 20(55):1–21.
  • Grover, A., and Leskovec, J. 2016. node2vec: Scalable feature learning for networks. In SIGKDD, 855–864. ACM.
  • Guo, L.; Sun, Z.; and Hu, W. 2019. Learning to exploit long-term relational dependencies in knowledge graphs. In ICML.
  • Guthrie, D.; Allison, B.; Liu, W.; Guthrie, L.; and Wilks, Y. 2006. A closer look at skip-gram modelling. In LREC, 1222–1225.
  • Hutter, F.; Kotthoff, L.; and Vanschoren, J., eds. 2018. Automated Machine Learning: Methods, Systems, Challenges. Springer. In press, available at http://automl.org/book.
  • Ji, G.; He, S.; Xu, L.; Liu, K.; and Zhao, J. 2015. Knowledge graph embedding via dynamic mapping matrix. In ACL, volume 1, 687–696.
  • Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv:1412.6980.
  • Kipf, T. N., and Welling, M. 2016. Semi-supervised classification with graph convolutional networks. arXiv:1609.02907.
  • Lacroix, T.; Usunier, N.; and Obozinski, G. 2018. Canonical tensor decomposition for knowledge base completion. In ICML.
  • Lin, Y.; Liu, Z.; Luan, H.; Sun, M.; Rao, S.; and Liu, S. 2015. Modeling relation paths for representation learning of knowledge bases. Technical report, arXiv:1506.00379.
  • Liu, H.; Simonyan, K.; and Yang, Y. 2019. DARTS: Differentiable architecture search. In ICLR.
  • Lukovnikov, D.; Fischer, A.; Lehmann, J.; and Auer, S. 2017. Neural network-based question answering over knowledge graphs on word and character level. In WWW, 1211–1220.
  • Nickel, M.; Murphy, K.; Tresp, V.; and Gabrilovich, E. 2016. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE 104(1):11–33.
  • Pascanu, R., and Bengio, Y. 2013. Revisiting natural gradient for deep networks. Technical report, arXiv:1301.3584.
  • Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic differentiation in PyTorch. In ICLR.
  • Perozzi, B.; Al-Rfou, R.; and Skiena, S. 2014. DeepWalk: Online learning of social representations. In SIGKDD, 701–710. ACM.
  • Socher, R.; Chen, D.; Manning, C.; and Ng, A. 2013. Reasoning with neural tensor networks for knowledge base completion. In NeurIPS, 926–934.
  • Sun, Z.; Hu, W.; Zhang, Q.; and Qu, Y. 2018. Bootstrapping entity alignment with knowledge graph embedding. In IJCAI, 4396–4402.
  • Sundermeyer, M.; Schluter, R.; and Ney, H. 2012. LSTM neural networks for language modeling. In CISCA.
  • Toutanova, K., and Chen, D. 2015. Observed versus latent features for knowledge base and text inference. In Workshop on CVSMC, 57–66.
  • Trouillon, T.; Dance, C.; Gaussier, E.; Welbl, J.; Riedel, S.; and Bouchard, G. 2017. Knowledge graph completion via complex tensor factorization. JMLR 18(1):4735–4772.
  • van der Maaten, L., and Hinton, G. E. 2008. Visualizing data using t-SNE. JMLR.
  • Wang, Q.; Mao, Z.; Wang, B.; and Guo, L. 2017. Knowledge graph embedding: A survey of approaches and applications. TKDE 29(12):2724–2743.
  • Wang, Z.; Lv, Q.; Lan, X.; and Zhang, Y. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In EMNLP, 349–357.
  • Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. MLJ 8(3-4):229–256.
  • Xie, S.; Zheng, H.; Liu, C.; and Lin, L. 2018. SNAS: Stochastic neural architecture search. In ICLR.
  • Zhang, F.; Yuan, N. J.; Lian, D.; Xie, X.; and Ma, W.-Y. 2016. Collaborative knowledge base embedding for recommender systems. In SIGKDD, 353–362.
  • Zhu, H.; Xie, R.; Liu, Z.; and Sun, M. 2017. Iterative entity alignment via joint knowledge embeddings. In IJCAI, 4258–4264.
  • Zoph, B., and Le, Q. 2017. Neural architecture search with reinforcement learning. In ICLR.