Deep Variational Network Embedding in Wasserstein Space.

KDD, pp. 2827–2836 (2018)

Cited by 79 | Views 2215 | EI

Abstract

Network embedding, aiming to embed a network into a low-dimensional vector space while preserving the inherent structural properties of the network, has attracted considerable attention recently. Most of the existing embedding methods embed nodes as point vectors in a low-dimensional continuous space. In this way, the formation of the edges is deterministic…

Introduction
  • Network embedding has attracted considerable research attention in the past few years.
  • Most existing network embedding methods represent each node by a single point in a low-dimensional vector space.
  • In this way, the formation of the whole network structure is deterministic.
  • In social networks, human behavior is multi-faceted, which makes the generation of edges uncertain [47].
  • In all of these cases, without considering the uncertainty of networks, the learned embeddings will be less effective in network analysis and inference tasks.
Highlights
  • Network embedding has attracted considerable research attention in the past few years.
  • We propose a novel method in this paper, named Deep Variational Network Embedding in Wasserstein Space (DVNE).
  • DVNE learns Gaussian embeddings in the Wasserstein space, which can well preserve the transitivity in networks and reflect the uncertainties of nodes.
  • We focus on the problem of network embedding with first-order and second-order proximity preserved.
  • We propose a method to learn the Gaussian embedding with a deep variational model, which can model the uncertainties of nodes.
  • DVNE uses the 2-Wasserstein distance as the similarity measure to better preserve the transitivity in the network, in linear time (a closed form is given after this list).
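For reference, the 2-Wasserstein (W2) distance between two Gaussians has a well-known closed form (cf. the Givens and Shortt entry in the references); the diagonal-covariance simplification written below in LaTeX is our own illustration rather than a quotation from the paper, and it is what makes a linear-time evaluation plausible:

```latex
% W2 distance between N(mu_1, Sigma_1) and N(mu_2, Sigma_2), general case:
W_2^2\bigl(\mathcal{N}(\mu_1,\Sigma_1),\,\mathcal{N}(\mu_2,\Sigma_2)\bigr)
  = \|\mu_1-\mu_2\|_2^2
  + \operatorname{Tr}\!\Bigl(\Sigma_1+\Sigma_2
      - 2\bigl(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2}\bigr)^{1/2}\Bigr)
% With diagonal covariances (a common choice in variational embeddings),
% the trace term collapses, giving an O(d) computation for d dimensions:
W_2^2 = \|\mu_1-\mu_2\|_2^2 + \bigl\|\Sigma_1^{1/2}-\Sigma_2^{1/2}\bigr\|_F^2
```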
Methods
  • The authors use the following five methods as baselines.

    DVNE_kl: a KL-divergence variant of DVNE, included to show the advantage of the W2 distance in undirected networks (a sketch of the KL asymmetry follows this list).
  • As the datasets have no attribute information, the authors compare against the one-hot encoding version of Graph2Gauss, as described in its paper.
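To see why a KL-based variant is a natural ablation: KL divergence between Gaussians is asymmetric (and violates the triangle inequality), which fits undirected edges poorly. A minimal NumPy sketch, with illustrative function names and values that are our own rather than the paper's code:

```python
import numpy as np

def kl_diag_gaussians(mu1, var1, mu2, var2):
    """Closed-form KL( N(mu1, diag(var1)) || N(mu2, diag(var2)) )."""
    return 0.5 * np.sum(np.log(var2 / var1)
                        + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

mu_a, var_a = np.zeros(4), np.full(4, 0.5)
mu_b, var_b = np.ones(4), np.full(4, 2.0)
# The two directions disagree: KL is not a metric, so scoring undirected
# node pairs with it depends on an arbitrary choice of direction.
print(kl_diag_gaussians(mu_a, var_a, mu_b, var_b))  # ~2.27
print(kl_diag_gaussians(mu_b, var_b, mu_a, var_a))  # ~7.23
```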
Conclusion
  • The authors propose a method to learn the Gaussian embedding with a deep variational model, namely DVNE, which can model the uncertainties of nodes.
  • The method preserves the first-order and second-order proximity between nodes to capture the local and global network structure.
  • DVNE uses the 2-Wasserstein distance as the similarity measure to better preserve the transitivity in the network, in linear time (see the sketch after this list).
  • The authors' future direction is to find a good Gaussian prior for each node to better capture the network structure and model the uncertainties of nodes.
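As a concrete illustration of the similarity measure, here is a minimal NumPy sketch of the W2 distance between diagonal Gaussian embeddings; the function name, toy dimensionality, and random inputs are illustrative assumptions, not code from the paper:

```python
import numpy as np

def w2_diag_gaussians(mu1, var1, mu2, var2):
    """2-Wasserstein distance between N(mu1, diag(var1)) and N(mu2, diag(var2)).

    For diagonal covariances this is the Euclidean distance between the
    concatenated (mean, standard-deviation) vectors, so it costs O(d).
    """
    return np.sqrt(np.sum((mu1 - mu2) ** 2)
                   + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2))

# Toy 3-node check: W2 is a genuine metric, so the triangle inequality
# holds -- the property DVNE relies on to preserve transitivity.
rng = np.random.default_rng(0)
mus = rng.normal(size=(3, 16))               # hypothetical 16-d means
vars_ = rng.uniform(0.1, 1.0, size=(3, 16))  # hypothetical variances
d01 = w2_diag_gaussians(mus[0], vars_[0], mus[1], vars_[1])
d12 = w2_diag_gaussians(mus[1], vars_[1], mus[2], vars_[2])
d02 = w2_diag_gaussians(mus[0], vars_[0], mus[2], vars_[2])
assert d02 <= d01 + d12  # metric property => transitivity-friendly
```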
Tables
  • Table 1: Statistics of datasets. |V| denotes the number of nodes, |E| the number of edges, and |C| the number of classes.
  • Table 2: AUC scores for Network Reconstruction.
  • Table 3: AUC scores for Link Prediction.
Related Work
  • Because of the popularity of networked data, network embedding has received increasing attention in recent years. We briefly review some network embedding methods; readers can refer to [13] for a comprehensive survey. Deepwalk [37] first uses the language modeling technique to learn the latent representations of a network by truncated random walks. LINE [39] embeds the network into a low-dimensional space where the first-order and second-order proximity between nodes are preserved. Node2vec [22] learns a mapping of nodes to a low-dimensional feature space that maximizes the likelihood of preserving network neighborhoods of nodes. HOPE [36] proposes a high-order proximity preserved embedding method. Furthermore, deep learning methods for network embedding have also been studied. SDNE [44] first considers the high nonlinearity in network embedding and proposes a deep autoencoder to preserve the first- and second-order proximities. The graph variational autoencoder (GAE) [27] learns node embeddings in an unsupervised manner with a variational autoencoder (VAE) [16].
Funding
  • This work was supported in part by the National Program on Key Basic Research Project (No. 2015CB352300), the National Natural Science Foundation of China (No. 61772304, No. 61521002, No. 61531006, No. 61702296), the National Natural Science Foundation of China Major Project (No. U1611461), the research fund of the Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology, and the Young Elite Scientist Sponsorship Program by CAST.
  • All opinions, findings, conclusions, and recommendations in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.
References
  • Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. 2008. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media.
  • Aleksandar Bojchevski and Stephan Günnemann. 2017. Deep gaussian embedding of attributed graphs: Unsupervised inductive learning via ranking. arXiv preprint arXiv:1707.03815 (2017).
  • A. Bojchevski and S. Günnemann. 2017. Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking. ArXiv e-prints (July 2017). arXiv:stat.ML/1707.03815
  • Nicolas Bonneel, Julien Rabin, Gabriel Peyré, and Hanspeter Pfister. 2015. Sliced and radon wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision 51, 1 (2015), 22–45.
  • Nicolas Bonneel, Michiel Van De Panne, Sylvain Paris, and Wolfgang Heidrich. 2011. Displacement interpolation using Lagrangian mass transport. In ACM Transactions on Graphics (TOG), Vol. 30. ACM, 158.
  • Victor Bryant. 1985. Metric spaces: iteration and application. Cambridge University Press.
  • Chen Chen and Hanghang Tong. 2015. Fast eigen-functions tracking on dynamic graphs. In Proceedings of the 2015 SIAM International Conference on Data Mining. SIAM, 559–567.
  • Siheng Chen, Sufeng Niu, Leman Akoglu, Jelena Kovačević, and Christos Faloutsos. 2017. Fast, Warped Graph Embedding: Unifying Framework and One-Click Algorithm. arXiv preprint arXiv:1702.05764 (2017).
  • Philippe Clement and Wolfgang Desch. 2008. An elementary proof of the triangle inequality for the Wasserstein metric. Proc. Amer. Math. Soc. 136, 1 (2008), 333–339.
  • Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289 (2015).
  • Nicolas Courty, Rémi Flamary, and Mélanie Ducoffe. 2017. Learning Wasserstein Embeddings. arXiv preprint arXiv:1710.07457 (2017).
  • Nicolas Courty, Rémi Flamary, Devis Tuia, and Alain Rakotomamonjy. 2017. Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 9 (2017), 1853–1865.
  • Peng Cui, Xiao Wang, Jian Pei, and Wenwu Zhu. 2017. A Survey on Network Embedding. arXiv preprint arXiv:1711.08752 (2017).
  • Marco Cuturi and Arnaud Doucet. 2014. Fast computation of Wasserstein barycenters. In International Conference on Machine Learning. 685–693.
  • Fernando De Goes, Katherine Breeden, Victor Ostromoukhov, and Mathieu Desbrun. 2012. Blue noise through optimal transport. ACM Transactions on Graphics (TOG) 31, 6 (2012), 171.
  • Carl Doersch. 2016. Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908 (2016).
  • Ludovic Dos Santos, Benjamin Piwowarski, and Patrick Gallinari. 2016. Multilabel classification on heterogeneous graphs with gaussian embeddings. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 606–622.
  • Tom Fawcett. 2006. An introduction to ROC analysis. Pattern Recognition Letters 27, 8 (2006), 861–874.
  • Bent Fuglede and Flemming Topsoe. 2004. Jensen-Shannon divergence and Hilbert space embedding. In Information Theory, 2004. ISIT 2004. Proceedings. International Symposium on. IEEE, 31.
  • Clark R Givens, Rae Michael Shortt, et al. 1984. A class of Wasserstein metrics for probability distributions. The Michigan Mathematical Journal 31, 2 (1984), 231–240.
  • Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. 249–256.
  • Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 855–864.
  • Steve R Gunn et al. 1998. Support vector machines for classification and regression. ISIS Technical Report 14, 1 (1998), 5–16.
  • Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. 2015. Learning to represent knowledge graphs with gaussian embedding. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. ACM, 623–632.
  • Paul W Holland and Samuel Leinhardt. 1972. Holland and Leinhardt reply: some evidence on the transitivity of positive interpersonal sentiment.
  • Zhipeng Huang and Nikos Mamoulis. 2017. Heterogeneous Information Network Embedding for Meta Path based Proximity. arXiv preprint arXiv:1701.05291 (2017).
  • Thomas N Kipf and Max Welling. 2016. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308 (2016).
  • Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The Annals of Mathematical Statistics 22, 1 (1951), 79–86.
  • Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. 2006. A tutorial on energy-based learning. Predicting Structured Data 1, 0 (2006).
  • Jure Leskovec and Julian J Mcauley. 2012. Learning to discover social circles in ego networks. In Advances in Neural Information Processing Systems. 539–547.
  • David Liben-Nowell and Jon Kleinberg. 2007. The link-prediction problem for social networks. Journal of the Association for Information Science and Technology 58, 7 (2007), 1019–1031.
  • Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9, Nov (2008), 2579–2605.
  • Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. 2000. Automating the construction of internet portals with machine learning. Information Retrieval 3, 2 (2000), 127–163.
  • Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10). 807–814.
  • Feiping Nie, Wei Zhu, and Xuelong Li. 2017. Unsupervised Large Graph Embedding. In AAAI. 2422–2428.
  • Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. 2016. Asymmetric transitivity preserving graph embedding. In Proc. of ACM SIGKDD. 1105–1114.
  • Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 701–710.
  • Reza Zafarani and Huan Liu. 2009. Social Computing Data Repository. (2009).
  • Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web. ACM, 1067–1077.
  • Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4, 2 (2012), 26–31.
  • Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. 2017. Wasserstein Auto-Encoders. arXiv preprint arXiv:1711.01558 (2017).
  • Ke Tu, Peng Cui, Xiao Wang, Fei Wang, and Wenwu Zhu. 2017. Structural Deep Embedding for Hyper-Networks. arXiv preprint arXiv:1711.10146 (2017).
  • Luke Vilnis and Andrew McCallum. 2014. Word representations via gaussian embedding. arXiv preprint arXiv:1412.6623 (2014).
  • Daixin Wang, Peng Cui, and Wenwu Zhu. 2016. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1225–1234.
  • Huahua Wang and Arindam Banerjee. 2014. Bregman alternating direction method of multipliers. In Advances in Neural Information Processing Systems. 2816–2824.
  • Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, and Shiqiang Yang. 2017. Community Preserving Network Embedding. In AAAI.
  • Chengxi Zang, Peng Cui, Christos Faloutsos, and Wenwu Zhu. 2017. Long Short Memory Process: Modeling Growth Dynamics of Microscopic Social Connectivity. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 565–574.